| column | type | min length | max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2306.10790
Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
Fine-tuning has been proven to be a simple and effective technique to transfer the learned knowledge of Pre-trained Language Models (PLMs) to downstream tasks. However, vanilla fine-tuning easily overfits the target data and degrades the generalization ability. Most existing studies attribute it to catastrophic forgetting, and they retain the pre-trained knowledge indiscriminately without identifying what knowledge is transferable. Motivated by this, we frame fine-tuning into a causal graph and discover that the crux of catastrophic forgetting lies in the missing causal effects from the pretrained data. Based on the causal view, we propose a unified objective for fine-tuning to retrieve the causality back. Intriguingly, the unified objective can be seen as the sum of the vanilla fine-tuning objective, which learns new knowledge from target data, and the causal objective, which preserves old knowledge from PLMs. Therefore, our method is flexible and can mitigate negative transfer while preserving knowledge. Since endowing models with commonsense is a long-standing challenge, we implement our method on commonsense QA with a proposed heuristic estimation to verify its effectiveness. In the experiments, our method outperforms state-of-the-art fine-tuning methods on all six commonsense QA datasets and can be implemented as a plug-in module to inflate the performance of existing QA models.
Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Huawen Feng, Xichen Shang, Haibin Chen
2023-06-19T09:06:44Z
http://arxiv.org/abs/2306.10790v1
# Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference

###### Abstract

Fine-tuning has been proven to be a simple and effective technique to transfer the learned knowledge of Pre-trained Language Models (PLMs) to downstream tasks. However, vanilla fine-tuning easily overfits the target data and degrades the generalization ability. Most existing studies attribute it to catastrophic forgetting, and they retain the pre-trained knowledge indiscriminately without identifying what knowledge is transferable. Motivated by this, we frame fine-tuning into a causal graph and discover that the crux of catastrophic forgetting lies in the missing causal effects from the pre-trained data. Based on the causal view, we propose a unified objective for fine-tuning to retrieve the causality back. Intriguingly, the unified objective can be seen as the sum of the vanilla fine-tuning objective, which learns new knowledge from target data, and the causal objective, which preserves old knowledge from PLMs. Therefore, our method is flexible and can mitigate negative transfer while preserving knowledge. Since endowing models with commonsense is a long-standing challenge, we implement our method on commonsense QA with a proposed heuristic estimation to verify its effectiveness. In the experiments, our method outperforms state-of-the-art fine-tuning methods on all six commonsense QA datasets and can be implemented as a plug-in module to inflate the performance of existing QA models.1

Footnote 1: Our codes are publicly available at [https://github.com/zzz47zzz/CET](https://github.com/zzz47zzz/CET) and [https://github.com/qianlima-lab/CET](https://github.com/qianlima-lab/CET)

## 1 Introduction

Deep Pre-trained Language Models (PLMs) such as RoBERTa Liu et al. (2019) and T5 Raffel et al. (2020) are inherently knowledge bases since they are exposed to a tremendous amount of data (e.g., the C4 dataset Raffel et al. (2020)) in the pre-training stage Petroni et al. (2019); AlKhamissi et al. (2022). Unfortunately, transferring the intrinsic knowledge in PLMs to downstream tasks is nontrivial. In practice, fine-tuning is adopted widely due to its flexibility Chen et al. (2020), and numerous improved methods Lee et al. (2019); Chen et al. (2020, 2019); Mosbach et al. (2020); Zhang et al. (2020); Xu et al. (2021); Aghajanyan et al. (2020); Wu et al. (2022) have been proposed in recent years. However, fine-tuning faces two challenges when adapting models to new domains Chen et al. (2019): catastrophic forgetting Kirkpatrick et al. (2017) and negative transfer Torrey and Shavlik (2010). More specifically, catastrophic forgetting refers to models losing previously learned knowledge and overfitting the target domain data. Negative transfer occurs because not all pre-trained knowledge is transferable across domains. Catastrophic forgetting and negative transfer thus constitute a dilemma whose crux lies in identifying and utilizing transferable knowledge.

A large body of previous work has been conducted to solve this problem. Existing fine-tuning methods for mitigating catastrophic forgetting can be summarized as preventing the fine-tuned models from deviating too far from the pre-trained weights. For example, _RecAdam_ Chen et al. (2020) and _Child-Tuning_ Xu et al. (2021) utilize the Fisher Information Matrix estimated by the pre-trained model to constrain the update of the fine-tuned model. _Mixout_ Lee et al. (2019) randomly replaces the model parameters with their pre-trained weights.
These methods constrain the update of models' parameters indiscriminately without identifying what knowledge is transferable and are thus susceptible to negative transfer. Chen et al. (2019) proposed _BSS_, which focuses on mitigating negative transfer by penalizing the small singular values of the feature matrix. However, when only negative transfer is concerned, _BSS_ may not fully utilize the pre-trained knowledge.

In this paper, we propose a novel method called _Causal Effect Tuning_ (**CET**) for mining the pre-trained knowledge in PLMs. Unlike previous fine-tuning methods, our method is rooted in the theory of causal inference: it delves into the causalities between data, models, and features instead of mere statistical associations. First, we frame vanilla fine-tuning into a causal graph (Glymour et al., 2016) and find that the cause of catastrophic forgetting is the vanishing causal effect of pre-trained data; preventing forgetting therefore amounts to maximizing this causal effect. Then, we approximate the causal effect with the likelihood of the joint prediction of K-Nearest-Neighbor (KNN) samples. Since equipping models with commonsense knowledge is still challenging, we implement the proposed causal graph with a heuristic approximation on commonsense QA. We measure the distance with the similarity between gold answers (i.e., ground-truth answers) instead of questions for retrieving KNNs. The rationale is that questions with the same gold answer share the same commonsense knowledge in PLMs. Finally, we apply our method to RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2020) and conduct extensive experiments on six commonsense datasets. The experimental results show that our method outperforms state-of-the-art fine-tuning methods and can be plugged into state-of-the-art QA models to improve performance. More importantly, our method is lightweight and flexible since it requires no learnable parameters except the PLM and has fewer hyper-parameters to tune. It is worth noting that our method readily controls the strength of knowledge preservation through a single hyper-parameter, enabling a good balance between preserving pre-trained knowledge and absorbing new knowledge from downstream tasks.

In summary, our contributions are three-fold:

* We present a causal graph for fine-tuning with less forgetting by identifying the root cause of catastrophic forgetting as the missing causal effects of pre-trained data.
* Based on the proposed causal graph, we design a lightweight and flexible fine-tuning method called _Causal Effect Tuning_ for preserving knowledge in PLMs.
* For commonsense QA, we estimate the causal effect with a heuristic approximation, and we verify the effectiveness and versatility of our method through extensive experiments on six commonsense QA datasets.

## 2 Related Work

### Fine-tuning Methods

Apart from the methods mentioned above, some approaches improve downstream performance from the perspective of robustness. Aghajanyan et al. (2020) proposed \(R3F\), which regularizes the symmetric KL divergence between the classifications of the original samples and the perturbed ones. Wu et al. (2022) proposed _NoisyTune_, which adds uniformly distributed noise to pre-trained parameters before fine-tuning to reduce the risk of overfitting the pre-training tasks and data. Besides, Mosbach et al. (2020); Zhang et al. (2020) increased the stability of fine-tuning BERT (Devlin et al., 2019) in the low-data regime. Mosbach et al.
(2020) advocated fine-tuning for longer and choosing good optimizers and hyper-parameters. Zhang et al. (2020) verified that re-initializing the top layers of BERT helps pre-trained knowledge transfer to downstream tasks.

### Causal Inference

Causal inference (Glymour et al., 2016; Scholkopf, 2022) has recently been introduced to various computer vision tasks such as image classification (Hu et al., 2021), semantic segmentation (Zhang et al., 2020), and long-tailed classification (Tang et al., 2020; Nan et al., 2021), as well as NLP tasks such as distantly supervised NER (Zhang et al., 2021), neural dialogue generation (Zhu et al., 2020), and continual named entity recognition (Zheng et al., 2022). To the best of our knowledge, we are the first to apply causal inference to fine-tuning.

### Continual Learning

Although catastrophic forgetting happens in both continual learning (Rebuffi et al., 2017; Hu et al., 2021) and fine-tuning, the targets of these two tasks are fundamentally different. Continual learning aims to learn a growing number of tasks sequentially and maximize the performance on all recognized tasks. In contrast, fine-tuning maximizes only the performance on target tasks. The recent advances in continual learning (Hu et al., 2021; Zheng et al., 2022) partially inspire this work.

## 3 Methodology

In this section, we first use causal graphs Pearl (2009) to analyze how pre-trained knowledge is forgotten during fine-tuning. Then, we present a causal graph for anti-forgetting based on this analysis. Next, we estimate the causal effect through derivations and propose a unified learning objective for fine-tuning with less forgetting. Finally, we provide a heuristic approximation for estimating the causal effect on a challenging downstream task, commonsense QA. Note that the proposed causal graph and the fine-tuning method are generic to all downstream tasks.

### Vanilla Fine-Tuning

In a causal graph, nodes represent variables, and directed edges are causalities between nodes. Fig.(1a) delineates the process of vanilla fine-tuning. We denote the pre-trained data (i.e., pre-trained knowledge) as \(P\); the data in target tasks as \(X\); the features of \(X\) extracted by the pre-trained model and the fine-tuned model as \(H_{0}\) and \(H\), respectively; and the prediction of the fine-tuned model on target tasks as \(\hat{Y}\) (i.e., the probability over categories). The causality between nodes (i.e., directed edges) is as follows: (1) \(X\to H\rightarrow\hat{Y}\): \(X\to H\) represents that the feature \(H\) is extracted by the backbone model such as RoBERTa, and \(H\rightarrow\hat{Y}\) represents that a classifier computes the prediction \(\hat{Y}\) according to the extracted feature \(H\); (2) \(X\to H_{0}\gets P\): \(H_{0}\) is determined by both \(P\) and \(X\) because \(H_{0}\) is extracted by the pre-trained model, which is trained on \(P\).2

Footnote 2: Here, we ignore the effect of the initial parameters initialized from the pre-trained model since its effect decays exponentially towards zero during fine-tuning Kirkpatrick et al. (2017).
Then, the effect of the pre-trained data \(P\) on the predictions \(\hat{Y}\) can be calculated as:

\[\textit{Effect}_{P}=\mathbb{P}(\hat{Y}=\hat{y}|\textit{do}(P=p))-\mathbb{P}(\hat{Y}=\hat{y}|\textit{do}(P=0)) \tag{1}\]
\[=\mathbb{P}(\hat{Y}=\hat{y}|P=p)-\mathbb{P}(\hat{Y}=\hat{y}|P=0) \tag{2}\]
\[=\mathbb{P}(\hat{Y}=\hat{y})-\mathbb{P}(\hat{Y}=\hat{y}) \tag{3}\]
\[=0. \tag{4}\]

In Eq. (1), \(\textit{do}(P=0)\) represents that no pre-trained data is used for pre-training, and \(\textit{do}(P=p)\) represents that standard pre-training is performed. Then, \(\mathbb{P}(\hat{Y}=\hat{y}|\textit{do}(P=p))\) is the prediction given by a **pre-trained-then-fine-tuned** model and \(\mathbb{P}(\hat{Y}=\hat{y}|\textit{do}(P=0))\) is the prediction given by a **randomly-initialized-then-fine-tuned** model. Eq. (1) defines \(\textit{Effect}_{P}\) as the difference between the two predictions. Eq. (2) holds because \(P\) has no parent nodes. Eq. (3) holds because the collider \(H_{0}\) blocks all causal paths from \(P\) to \(\hat{Y}\). Eqs. (1)-(4) show that a vanilla fine-tuned model will eventually forget all pre-trained knowledge when no constraints are imposed. In practice, fine-tuned models do not forget all learned knowledge because the learning rate and training time are considerably lower and shorter than those in pre-training. However, fine-tuned models likely forget part of the pre-trained knowledge, overfit the target data, and fall into sub-optimal states, since the amount of target data is usually considerably smaller than that of the pre-trained data.

### Fine-Tuning with Less Forgetting

The causal graph in Fig.(1a) necessitates retrieving the causality between \(P\) and \(\hat{Y}\). A straightforward solution is utilizing the pre-trained data to constrain model behaviors in new tasks. However, this is often obstructed by time, space, and financial constraints.

Figure 1: The causal graphs of vanilla fine-tuning and our method. (a): The knowledge is forgotten during vanilla fine-tuning since the causal effect of the pre-trained data is missing; (b): When conditioning on \(H_{0}^{T}\), the causal effect of the pre-trained data is retained through the causal path \(P\leftrightarrow X^{T}\to H\rightarrow\hat{Y}\). In addition, the model absorbs new knowledge from \(X^{NT}\) through the causal path \(X^{NT}\to H\rightarrow\hat{Y}\).

Thanks to causal inference, we can build a causal path between \(P\) and \(X\) without storing \(P\). In the causal graph of Fig.(1a), \(H_{0}\) is the joint outcome of the independent causes \(P\) and \(X\). Intriguingly, once the common effect \(H_{0}\) is observed, the causes \(P\) and \(X\) become dependent. This causal effect is called the **colliding effect** in Hu et al. (2021); Zheng et al. (2022).3 We'd like to provide a vivid example Pearl (2009) for understanding this pattern in causal inference: If the admission criteria of a certain school require either high grades or special musical talents, then these two attributes will be found to be correlated (negatively) in that school's student population, even if these attributes are uncorrelated in the population at large. By conditioning on \(H_{0}\), the causal effect of the pre-trained data is preserved during fine-tuning (i.e., \(\textit{Effect}_{P}>0\)), and thus the pre-trained knowledge is preserved.

Footnote 3: This phenomenon is also known as _Berkson's paradox_ and as the _explaining away effect_.

Besides preserving old knowledge, assimilating new knowledge from the target data is critical.
In addition, negative transfer may occur if we preserve pre-trained knowledge too aggressively. Motivated by this, we split the target data into two nodes, \(X^{T}\) and \(X^{NT}\). \(X^{T}\) represents the samples on which we calculate colliding effects; their knowledge should be transferred from PLMs. \(X^{NT}\) denotes the samples on which we do not calculate colliding effects; their knowledge is domain-specific and should be absorbed into the fine-tuned model. Consequently, the causal graph for our method is shown in Fig.(1b), and the rationale is as follows: the fine-tuned model preserves pre-trained knowledge by utilizing colliding effects (\(P\leftrightarrow X^{T}\)) while learning domain-specific knowledge (\(X^{NT}\)). The final prediction depends on both **pre-trained knowledge** and **domain-specific knowledge** through the causal paths \(P\leftrightarrow X^{T}\to H\rightarrow\hat{Y}\) and \(X^{NT}\to H\rightarrow\hat{Y}\), respectively.

### Estimating Colliding Effects

Next, we need to estimate the colliding effect between \(P\) and \(X^{T}\). When conditioning on \(H_{0}\), \(\textit{Effect}_{P}\) can be calculated as:

\[\textit{Effect}_{P}=\sum_{i=1}^{N}\textit{Effect}_{P}^{(i)} \tag{5}\]
\[\approx\sum_{i=1}^{N}\sum_{k=0}^{K}\mathbb{P}(\hat{Y}^{(i)}|X=x^{(i,k)})W_{P}(x^{(i)},x^{(i,k)}), \tag{6}\]

where \(\sum_{k=0}^{K}W_{P}(x^{(i)},x^{(i,k)})=1\). \(N\) is the number of samples in the target data and \(x^{(i)}\) is the \(i\)-th sample. \(\textit{Effect}_{P}^{(i)}\) is the colliding effect of \(P\) on the prediction \(\hat{Y}^{(i)}\). \(W_{P}(\cdot,\cdot)\) is a function determined by the pre-trained model that measures the similarity between two samples in the hidden space of the pre-trained model. Hereafter, we denote \(W_{P}(x^{(i)},x^{(i,k)})\) as \(W_{i,k}\) for brevity. \(x^{(i,k)}\) is the \(k\)-th nearest neighbor of \(x^{(i)}\) in the hidden space. Since \(x^{(i)}\) always has the largest similarity with itself, we let \(x^{(i,0)}=x^{(i)}\) and call \(x^{(i)}\) the anchor sample. Besides, we assume that the \(K\) Nearest Neighbours (KNNs) are sorted in descending order of similarity. Therefore, we have \(W_{i,0}\geq W_{i,1}\geq W_{i,2}\geq\cdots\geq W_{i,K}\). \(K\) is a hyper-parameter representing the number of neighbors used for estimating \(\hat{Y}^{(i)}\). We provide a detailed derivation and further explanation in Appendix A. Eq. (5) re-writes the total causal effect as the sum of the causal effects on the prediction of each target sample (i.e., \(\textit{Effect}_{P}^{(i)}\)). In Eq. (6), \(\mathbb{P}(\hat{Y}^{(i)}|X=x^{(i,k)})\) represents the likelihood of \(\hat{Y}^{(i)}\) when \(x^{(i,k)}\) is the model input. Eq. (6) shows that \(\textit{Effect}_{P}^{(i)}\) can be approximated by the weighted sum of the likelihoods when the model input is the anchor sample \(x^{(i)}\) and its KNNs. Since we expect to maximize \(\mathbb{P}(\hat{Y}^{(i)}=y^{(i)}|X=x^{(i)})\), maximizing \(\textit{Effect}_{P}^{(i)}\) is equivalent to maximizing the likelihood of the **joint prediction** on the ground-truth label \(y^{(i)}\).

### Overall Objective

In Eq. (6), the total causal effect \(\textit{Effect}_{P}\) is broken down into the causal effect of each sample, \(\textit{Effect}_{P}^{(i)}\). In this case, maximizing \(\textit{Effect}_{P}\) preserves the related knowledge of all samples. As mentioned before, indiscriminately preserving knowledge may lead to negative transfer.
To address this problem, we introduce a similarity threshold \(\theta\) to select the number of nearest neighbors for each sample automatically. Specifically, for the \(i\)-th sample, we keep the \(k_{i}\) (\(K\geq k_{i}\geq 0\)) nearest neighbors whose similarity is greater than or equal to \(\theta\). In this way, we differentiate the strength of knowledge preservation for each sample by selecting only the neighbors with small distances to their anchor sample. More interestingly, when \(k_{i}=0\), _i.e._, a sample has no neighbors, \(\textit{Effect}^{(i)}_{P}\) amounts to \(\mathbb{P}(\hat{Y}^{(i)}=y^{(i)}|X=x^{(i)})\), which is exactly the per-sample objective of vanilla fine-tuning. Fig. 2 provides an illustration of our method, where the samples with no neighbors can be seen as a special case of our method. Formally, we define the overall objective as follows:

\[\max\quad\textit{Effect}_{P}=\sum_{i=1}^{N}\textit{Effect}^{(i)}_{P} \tag{7}\]
\[=\underbrace{\sum_{i\in\mathcal{S}^{T}}\textit{Effect}^{(i)}_{P}}_{\text{Colliding Effects}}+\underbrace{\sum_{i\in\mathcal{S}^{NT}}\textit{Effect}^{(i)}_{P}}_{\text{Vanilla Fine-Tuning}} \tag{8}\]
\[=\underbrace{\sum_{i\in\mathcal{S}^{T}}\sum_{k=0}^{k_{i}}\mathbb{P}(\hat{Y}^{(i)}|X=x^{(i,k)})W_{i,k}}_{\text{Colliding Effects}}+\underbrace{\sum_{i\in\mathcal{S}^{NT}}\mathbb{P}(\hat{Y}^{(i)}|X=x^{(i)})}_{\text{Vanilla Fine-Tuning}}, \tag{9}\]

where \(\sum_{k}W_{i,k}=1,\mathcal{S}^{T}=\{i|k_{i}>0\},\mathcal{S}^{NT}=\{i|k_{i}=0\}\). Considering that the distances between KNNs and their anchor sample are approximate and thus inaccurate, we set \(W_{i,0}=W_{0}\) and \(W_{i,1}=W_{i,2}=\cdots=W_{i,k_{i}}=\frac{1-W_{0}}{k_{i}}\) when \(k_{i}>0\) in our implementation. \(W_{0}\) is a hyper-parameter for controlling the strength of colliding effects. When \(W_{0}=1\), the overall objective degenerates to the vanilla fine-tuning objective; when \(W_{0}=0\), the overall objective retains knowledge indiscriminately on all samples. In Eq. (9), the second term amounts to the vanilla fine-tuning objective since only the anchor sample's prediction is computed. In other words, we preserve knowledge for the samples with KNNs and learn new knowledge for the samples without KNNs. The rationale is that knowledge should be preserved when more samples require it to answer their questions. In the proposed causal graph in Fig.(1b), the first and the second terms of Eq. (9) correspond to the two causal paths through \(X^{T}\) and \(X^{NT}\), respectively. We summarize the proposed method in Fig. 2 and Alg. 1 in Appendix A.

Figure 2: An illustration of Causal Effect Tuning. \(x^{(i)}\) is the anchor sample and \(h^{(i)}_{0}\) is the hidden feature extracted by the pre-trained model. \(x^{(i,1)},x^{(i,2)},x^{(i,3)}\) are the KNNs of \(x^{(i)}\). We apply colliding effects on \(x^{(i)}\) to preserve the old knowledge.
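To make Eq. (9) concrete, the following is a minimal PyTorch sketch of the per-sample loss under the weight scheme above. It is an illustration rather than the authors' released code: `model` is assumed to be any classifier returning logits over answer candidates, and taking the negative log of the weighted joint likelihood is one natural reading of "maximizing the joint prediction".

```python
import torch
import torch.nn.functional as F

def cet_loss(model, anchor, neighbors, label, w0=0.9):
    """Per-sample objective of Eq. (9), sketched under stated assumptions.

    anchor:    input tensor for x^(i), shape (1, seq_len)
    neighbors: list of input tensors for the k_i KNNs x^(i,k) (may be empty)
    label:     index of the gold answer y^(i)
    w0:        hyper-parameter W_0 controlling colliding-effect strength
    """
    # P(Y^(i) = y^(i) | X = x^(i)): likelihood when the anchor is the input.
    p_anchor = F.softmax(model(anchor), dim=-1)[0, label]
    if len(neighbors) == 0:
        # S^NT branch (k_i = 0): reduces to the vanilla fine-tuning objective.
        return -torch.log(p_anchor)
    # S^T branch: W_{i,0} = W_0, remaining weights split equally.
    w_k = (1.0 - w0) / len(neighbors)
    joint = w0 * p_anchor
    for x_k in neighbors:
        joint = joint + w_k * F.softmax(model(x_k), dim=-1)[0, label]
    return -torch.log(joint)  # maximize the joint likelihood
```

Note that the weights sum to one (\(W_{0} + k_{i}\cdot\frac{1-W_{0}}{k_{i}} = 1\)), matching the constraint under Eq. (9).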
After fine-tuning, the "red" knowledge is preserved with colliding effects, and the "blue" knowledge is forgotten without colliding effects. A specific instance is as follows: \(x^{(i)}\) = "What is a fast but expensive way to send small cargo?" (answer: airplane); \(x^{(i,1)}\) = "Where could you find a seat that sometimes vibrates?" (answer: airplane); \(x^{(i,2)}\) = "What has metal wings?" (answer: airplane); \(x^{(i,3)}\) = "It was important precious cargo, so it was delivered as quickly as possible by means of what?" (answer: aeroplane). The "red" knowledge represents the commonsense about "**airplane**".

### An Implementation on Commonsense QA

In this subsection, we provide an implementation of the causal graph in Fig.(1b) on commonsense QA. We note that the overall objective in Eq. (9) is agnostic to specific downstream tasks and model architectures. The implementation can differ across tasks and model architectures, and the key is to find proper KNNs. This paper provides an implementation on commonsense QA since PLMs may be endowed with commonsense knowledge in pre-training Petroni et al. (2019); AlKhamissi et al. (2022), and it is still challenging for models to capitalize on commonsense (Talmor et al., 2018).

We first formulate commonsense QA as follows: given a dataset with \(N\) samples \(\{(q^{(i)},a^{(i)},\{o^{(i)}_{j}\}_{j})\}_{i}^{N}\), we train the best model for choosing the gold answer \(a^{(i)}\) among the options \(\{o^{(i)}_{j}\}\) given a question \(q^{(i)}\). More specifically, the input of the \(i\)-th sample can be \(x^{(i)}=q^{(i)}||o^{(i)}_{1}||\cdots||o^{(i)}_{j}\) or \(\{x^{(i)}\}_{j}=\{q^{(i)}||o^{(i)}_{j}\}_{j}\), where \(||\) is string-level concatenation.4

Footnote 4: Whether all options are concatenated at once or each option is concatenated separately depends on the model.

Then, we define a metric to search for KNNs. A simple solution is to compute the Euclidean distance or cosine similarity between the average last hidden states of PLMs. However, this method struggles to capture accurate semantic meanings, and measuring sentence similarity remains challenging. In this regard, we provide a simple heuristic approximation. In most cases, questions with the same gold answer share the same knowledge. For example, "airplane" is the gold answer to the following questions, and we can use the knowledge about "airplane" to answer them: "_What is a fast but expensive way to send small cargo?_"; "_Where could you find a seat that sometimes vibrates?_"; "_What has metal wings?_". Therefore, we estimate the similarity between gold answers to cope with the difficulty of evaluating sentence similarity. Since options are usually much shorter than questions, lightweight tools such as spaCy (Honnibal et al., 2020) can be used to retrieve gold answers with close semantic meanings (e.g., "airplane" and "aeroplane"). Finally, we define the input of the \(i\)-th sample's KNNs as \(x^{(i,k)}=q^{(i,k)}||o^{(i)}_{1}||\cdots||o^{(i)}_{j}\) or \(\{x^{(i,k)}\}_{j}=\{q^{(i,k)}||o^{(i)}_{j}\}_{j}\). This alleviates the over-fitting problem since the model needs to select the correct answer among the options of the anchor sample when the question comes from one of its KNNs. A minimal sketch of this retrieval procedure is shown below.
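The following sketch illustrates the gold-answer-based KNN retrieval, assuming spaCy with a vector-equipped pipeline (e.g., `en_core_web_md`) and samples stored as dictionaries with a hypothetical `gold_answer` field; the paper only states that spaCy is used to compare gold answers, so the exact data layout here is an assumption.

```python
import spacy

# A model with word vectors is required for .similarity(); the specific
# pipeline name is an assumption, not specified in the paper.
nlp = spacy.load("en_core_web_md")

def build_knn_index(samples, k=5, theta=1.0):
    """For each sample, return the indices of up to K neighbors whose gold
    answers have similarity >= theta; theta = 1.0 keeps only exact vector
    matches, mirroring the paper's 'matched precisely' setting."""
    answers = [nlp(s["gold_answer"]) for s in samples]
    knns = []
    for i, a_i in enumerate(answers):
        scored = [(a_i.similarity(a_j), j)
                  for j, a_j in enumerate(answers) if j != i]
        scored.sort(reverse=True)                    # most similar first
        knns.append([j for sim, j in scored[:k] if sim >= theta])
    return knns
```

This quadratic scan is tolerable for datasets of the size used here; a larger corpus would call for grouping samples by normalized answer strings or an approximate nearest-neighbor index instead.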
## 4 Experiments

### Settings

**Datasets.** We conduct experiments on six datasets: CommonsenseQA (CSQA) (Talmor et al., 2018), OpenBookQA (OBQA) (Mihaylov et al., 2018), ARC (Clark et al., 2018, 2016), QASC (Khot et al., 2020), SocialIQA (SIQA) (Sap et al., 2019), and PIQA (Bisk et al., 2020). Since the official test sets of CSQA, QASC, SIQA, and PIQA are not available, we follow Yasunaga et al. (2021), use the official dev sets as test sets, and split an in-house dev set from the original training sets. The dataset statistics are summarized in Table 6 in Appendix B.

**Training.** Given its popularity, we use RoBERTa-large (Liu et al., 2019) as the backbone model by default. We also explore T5-large (Raffel et al., 2020) since Khashabi et al. (2020) showed that it excels at answering questions in different formats. Other training details are specified in Appendix B.

**Competitive Methods.** We make comparisons with nine state-of-the-art fine-tuning methods: vanilla fine-tuning, BSS (Chen et al., 2019), ChildTune-F and ChildTune-D (Xu et al., 2021), Mixout (Lee et al., 2019), NoisyTune (Wu et al., 2022), R3F (Aghajanyan et al., 2020), RecAdam (Chen et al., 2020), and ReInit (Zhang et al., 2020). For each method, we use the hyper-parameters recommended in the paper and source code for a fair comparison. We discuss the implementation details of the fine-tuning methods in Appendix C.

**Hyper-Parameters.** As for the hyper-parameters of our method, we fix \(K=5\) and search for the best \(W_{0}\) in {0.5, 0.7, 0.9, 0.95, 0.97} for each dataset. We use spaCy to estimate the similarity between gold answers. We set \(\theta=0.99\) for PIQA and \(\theta=1.00\) for the other datasets (i.e., the gold answers must be matched precisely).

### Results and Analyses

**Comparisons with State-Of-The-Art.** To demonstrate the effectiveness of our method, we re-implement several strong baselines on the commonsense QA datasets using their officially released codes and hyper-parameters. The results are summarized in Table 1. They show that our method consistently outperforms all fine-tuning methods. On QASC and OBQA, our method achieves 57.57% and 70.76% accuracy, obtaining 3.53% and 2.64% improvements over vanilla fine-tuning.

_Why does our method better preserve commonsense knowledge from PLMs?_ The reasons are two-fold. The first is that our method utilizes the colliding effect for transferring the "colliding" commonsense knowledge, while other methods do not. For instance, in Fig. 2, our method encourages the model to update \(x^{(i)}\) and its KNNs \(x^{(i,1)},x^{(i,2)},x^{(i,3)}\) simultaneously. In this way, the commonsense knowledge about "airplane", namely that "airplanes deliver small and precious cargo", "airplanes have metal wings", and "airplanes have seats", can be transferred jointly, which reduces the risk of over-fitting. We provide more examples from each dataset in Table 3 and Tables 10 and 11 in Appendix F. The second reason is that our method does not directly constrain (e.g., ChildTune-D, Mixout, RecAdam) or modify (e.g., NoisyTune, ReInit) the parameters of fine-tuned models. Empirical results show that these methods encounter negative transfer on some of the datasets. Instead, our method builds upon causal inference theory and utilizes the joint prediction as a soft constraint to transfer related knowledge while mitigating negative transfer.

**Compared with Knowledge-Graph-Based Methods.** Utilizing knowledge graphs such as ConceptNet (Speer et al., 2017) is common practice for building commonsense QA systems. We compare our method with six knowledge-graph-based methods: Relation Network (Santoro et al., 2017), KagNet (Lin et al., 2019), RGCN (Schlichtkrull et al., 2018), MHGRN (Feng et al., 2020), QAGNN (Yasunaga et al., 2021), and SAFE (Jiang et al., 2022). Detailed descriptions and other related works are given in Appendix D. Note that these methods utilize knowledge graphs (KGs) as external knowledge resources, and most of them train graph neural networks (GNNs) to extract features from KGs. In contrast, our method does not introduce any additional learnable parameters except the PLM and the final fully-connected layer.
\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
Methods & CSQA & OBQA & ARC-Easy & ARC-Challenge & QASC & PIQA & SIQA \\
\hline
Fine-Tuning & 75.74 (0.47) & 68.12 (0.32) & 67.66 (0.45) & 45.98 (0.53) & 54.04 (1.05) & 78.62 (0.53) & 77.46 (0.33) \\
BSS & 76.21 (0.63) & 68.64 (1.23) & 68.24 (0.31) & 46.62 (0.80) & 53.82 (1.20) & 78.20 (0.96) & 77.35 (0.18) \\
ChildTune-F & 75.50 (0.44) & 69.84 (0.88) & 68.17 (0.77) & 46.30 (1.67) & 54.41 (1.63) & 77.61 (1.06) & 75.87 (0.64) \\
ChildTune-D & 76.76 (0.81) & 69.36 (0.60) & 67.86 (0.73) & 45.28 (0.67) & 55.77 (0.52) & 78.32 (0.38) & 78.20 (0.35) \\
Mixout & 76.09 (0.56) & 69.70 (0.71) & 67.85 (0.57) & 44.87 (0.72) & 57.34 (1.02) & 79.22 (0.31) & 77.89 (0.37) \\
NoisyTune & 76.01 (0.61) & 67.56 (0.52) & 67.61 (0.58) & 46.05 (0.65) & 54.43 (0.60) & 78.61 (0.31) & 76.59 (0.36) \\
R3F & 76.59 (0.48) & 68.47 (0.26) & 68.13 (0.68) & 47.01 (0.58) & 55.69 (0.78) & 79.38 (0.60) & 77.05 (0.44) \\
RecAdam & 75.43 (0.33) & 70.68 (0.89) & 68.07 (0.69) & 45.90 (0.59) & 54.62 (1.22) & 78.26 (1.25) & 76.71 (0.61) \\
ReInit & 75.51 (0.71) & 69.92 (1.14) & 67.63 (0.59) & 46.68 (0.39) & 52.12 (1.66) & 78.61 (0.37) & 77.79 (0.15) \\
CET(Ours) & **76.82 (0.33)** & **70.76 (0.33)** & **68.53 (0.53)** & **47.52 (0.38)** & **57.57 (0.44)** & **79.43 (0.27)** & **78.76 (0.31)** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparison with state-of-the-art methods. The average accuracy (%) and the standard deviation are reported.

\begin{table}
\begin{tabular}{l|c|c|c c c c c c}
\hline \hline
Methods & use GNN? & use KG? & 5\% & 10\% & 20\% & 50\% & 80\% & 100\% \\
\hline
RoBERTa-large & \(\bigstar\) & \(\bigstar\) & 29.66 & 42.84 & 58.47 & 66.13 & 68.47 & 68.69 \\
\hline
+RGCN (Schlichtkrull et al., 2018) & \(\bigstar\) & \(\bigstar\) & 24.41 & 43.75 & 59.44 & 66.07 & 68.33 & 68.41 \\
+KagNet (Lin et al., 2019) & \(\bigstar\) & \(\bigstar\) & 21.92 & 49.83 & 60.09 & 66.93 & 69.14 & 68.59 \\
+Relation Network (Santoro et al., 2017) & \(\bigstar\) & \(\bigstar\) & 23.77 & 34.09 & 59.90 & 65.62 & 67.37 & 69.08 \\
+MHGRN (Feng et al., 2020) & \(\bigstar\) & \(\bigstar\) & 29.01 & 32.02 & 50.23 & 68.09 & 70.83 & 71.11 \\
+QAGNN (Yasunaga et al., 2021) & \(\bigstar\) & \(\bigstar\) & 32.95 & 37.77 & 50.15 & 69.33 & 70.99 & 73.41 \\
+SAFE (Jiang et al., 2022) & \(\bigstar\) & \(\bigstar\) & 36.45 & 56.51 & 65.16 & 70.72 & 73.22 & 74.03 \\
\hline
+CET(Ours) & \(\bigstar\) & \(\bigstar\) & 56.24 & 59.55 & 65.19 & 67.93 & 70.02 & 70.99 \\
+CET+QAGNN & \(\bigstar\) & \(\bigstar\) & 58.78 & 60.35 & 65.59 & 70.43 & 72.04 & 73.81 \\
+CET+SAFE & \(\bigstar\) & \(\bigstar\) & **59.39** & **61.02** & **65.75** & **70.79** & **73.31** & **74.54** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparisons with knowledge-graph-based methods on CSQA with different proportions of training data. We use the train-dev-test split in Jiang et al. (2022), and thus the CSQA results are inconsistent with those in other experiments. The results of RoBERTa-large, RGCN, KagNet, Relation Network, MHGRN, QAGNN, and SAFE are reported in Jiang et al. (2022). We report the average accuracy (%).

The results in Table 2 show that our method outperforms RGCN, KagNet, and Relation Network by mining only the internal knowledge of PLMs.
\begin{table}
\begin{tabular}{l l l}
\hline \hline
 & Gold Answer & Question \\
\hline
Anchor & pet shops & Too many people want exotic snakes. The demand is driving what to carry them? \\
\hline
\multirow{5}{*}{KNNs} & pet shops & Where can a person buy a snake? \\
 & pet shop & Where might a blowfish be kept? \\
 & pet shop & Where can you take home a hermit crab? \\
 & pet store & Where would you get a dog if you do not have one? \\
 & pet store & John loves animals and he hates animal abuse. Because of this, John is very careful about the places he goes. Where might he avoid going? \\
\hline \hline
\end{tabular}
\end{table}
Table 3: A CSQA example and its KNNs in our method.

Furthermore, our method significantly outperforms all the knowledge-graph-based methods under low-resource conditions (\(\leq\) 20% of the training data is used), which shows that our method helps PLMs adapt to downstream tasks with less data. In addition, our method can be easily implemented as a plug-in module by simply substituting the causal-effect objective in Eq. (9) for the vanilla fine-tuning objective. We combine our method with QAGNN and SAFE, respectively. Table 2 shows that our approach consistently improves QAGNN and SAFE and achieves superior performance. Therefore, the pre-trained commonsense knowledge benefits downstream tasks even when KGs are introduced.

**Fine-tuning on a Cyclic Chain of Tasks.** To understand how our method preserves knowledge during fine-tuning, we follow Aghajanyan et al. (2020) and design a cyclic chain of tasks:

\[\underbrace{A\to B\to C}_{Cycle1}\rightarrow\underbrace{A\to B\to C}_{Cycle2}\rightarrow\cdots\]

In our experiment, we set A=CSQA, B=OBQA, and C=QASC for a demonstration. Specifically, we start from a PLM and fine-tune it on CSQA. Then, we use the model fine-tuned on CSQA to initialize the backbone model's parameters and continue fine-tuning it on OBQA, and so on. Table 4 shows that our method retains knowledge significantly better than vanilla fine-tuning. The performances on OBQA and QASC improve in every cycle, suggesting that our method effectively retains knowledge from the previous datasets. Unfortunately, the performance of both vanilla fine-tuning and our method on CSQA degrades slightly, showing that negative transfer occurs; in this case, vanilla fine-tuning leads to more serious performance degradation. This experiment is for demonstration, and a better combination of tasks that promote each other may be found.

**Ablation Study.** To verify the effectiveness of our method, we consider the following ablated versions of our method: (1) replacing the KNNs (_Large_, Ours) with randomly selected samples (_Rand_) or the samples with the smallest similarity (_Small_); (2) searching for the KNNs according to the similarity of the average last hidden states (_Avg_) instead of gold answers (_Gold_, Ours). The results in Table 5 show that the model learns commonsense knowledge better when the KNNs share gold answers with close meanings.

**Additional Experiments.** Due to space constraints, we present the experiments on T5, the hyper-parameter analysis, the experiments on Named Entity Recognition, and further discussions in Appendix E.

## 5 Conclusion

We propose a novel fine-tuning technique rooted in causal inference for preserving pre-trained knowledge from PLMs.
Although many fine-tuning methods have been proposed in recent years, most of them overlook one or both of the hidden issues of fine-tuning, catastrophic forgetting and negative transfer, which together constitute a dilemma. In this paper, we provide an answer to this dilemma from the causal lens. Impressively, we empirically find that the proposed method achieves the best performance on six commonsense QA datasets and can be flexibly applied to various QA systems and model architectures.

\begin{table}
\begin{tabular}{l|l|c c}
\hline \hline
 & Dataset & Fine-Tuning & CET(Ours) \\
\hline
\multirow{3}{*}{Cycle 1} & CSQA & 75.74 & **76.82** \\
 & OBQA & 68.80 & **70.89** \\
 & QASC & 54.31 & **57.49** \\
\hline
\multirow{3}{*}{Cycle 2} & CSQA & 75.52 & **76.69** \\
 & OBQA & 69.95 & **71.18** \\
 & QASC & 55.06 & **57.64** \\
\hline
\multirow{3}{*}{Cycle 3} & CSQA & 75.44 & **76.75** \\
 & OBQA & 70.28 & **71.45** \\
 & QASC & 55.12 & **57.78** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The results of cyclic sequential fine-tuning for three cycles. The average accuracy (%) is reported.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Methods & CSQA & OBQA & QASC \\
\hline
Gold+Large(Ours) & **76.82** & **70.76** & **57.57** \\
\hline
Gold+Rand & 74.61 & 68.53 & 55.77 \\
Gold+Small & 74.04 & 64.67 & 53.13 \\
Avg+Large & 76.17 & 69.64 & 55.62 \\
Avg+Rand & 74.12 & 68.54 & 54.54 \\
Avg+Small & 74.20 & 68.07 & 53.46 \\
\hline
Fine-Tuning & 75.74 & 68.12 & 54.04 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The ablation study of our method. _Gold/Avg_: searching for the KNNs according to the similarity of gold answers or of the average last hidden states. _Large/Small/Rand_: searching for the KNNs with the largest or smallest similarity, or randomly. The average accuracy (%) is reported.

### Limitations

There are three limitations of our method. First, we did not verify our method on more generic tasks, such as text classification, although it is not limited to commonsense QA; extending our method to other downstream tasks is future work. Second, our method requires a longer training time and more GPU memory since the KNNs require additional forward and backward propagation. Third, we do not consider the ambiguity of gold answers, which may affect the quality of the KNNs. For example, "apple" may refer to a kind of fruit or a technology company.

## Acknowledgements

The work described in this paper was partially funded by the National Natural Science Foundation of China (Grant Nos. 62272173, 61872148) and the Natural Science Foundation of Guangdong Province (Grant Nos. 2022A1515010179, 2019A1515010768).
2305.18844
Forward-backward correlations with the $Σ$ quantity in the wounded constituent framework at LHC energies
$\Sigma$ is a new correlation measure, quite recently introduced to heavy-ion physics. This measure, defined in the independent source model as a strongly intensive quantity, is expected to be free of the effects of system volume and volume fluctuations. This paper discusses the forward-backward correlation quantified with the $\Sigma$ observable calculated in the framework of the wounded nucleon model (WNM) and wounded quark model (WQM). Findings show that the wounded constituent approach outperforms the commonly used heavy-ion Monte Carlo generators, such as HIJING, AMPT or EPOS, by accurately describing the experimental data on FB correlations with $\Sigma$ measured by ALICE in Xe--Xe reactions at $\sqrt{s_{\rm{NN}}}$=5.44 TeV and in Pb--Pb collisions at $\sqrt{s_{\rm{NN}}}$= 2.76 and 5.02 TeV. This paper demonstrates that $\Sigma$ can be a unique tool for determining the fragmentation function of a wounded constituent in a symmetric nucleus-nucleus collision. However, in the wounded constituent framework, it is no longer a strongly intensive quantity.
Iwona Anna Sputowska
2023-05-30T08:36:36Z
http://arxiv.org/abs/2305.18844v1
# Forward-backward correlations with the \(\Sigma\) quantity in the wounded constituent framework at LHC energies

###### Abstract

\(\Sigma\) is a new correlation measure, quite recently introduced to heavy-ion physics. This measure, defined in the independent source model as a strongly intensive quantity, is expected to be free of the effects of system volume and volume fluctuations. This paper discusses the forward-backward correlation quantified with the \(\Sigma\) observable calculated in the framework of the wounded nucleon model (WNM) and the wounded quark model (WQM). Findings show that the wounded constituent approach outperforms the commonly used heavy-ion Monte Carlo generators, such as HIJING, AMPT or EPOS, by accurately describing the experimental data on FB correlations with \(\Sigma\) measured by ALICE in Xe-Xe reactions at \(\sqrt{s_{\rm NN}}\) = 5.44 TeV and in Pb-Pb collisions at \(\sqrt{s_{\rm NN}}\) = 2.76 and 5.02 TeV. This paper demonstrates that \(\Sigma\) can be a unique tool for determining the fragmentation function of a wounded constituent in a symmetric nucleus-nucleus collision. However, in the wounded constituent framework, it is no longer a strongly intensive quantity.

## I Introduction

Over the last few years, there has been growing interest in the analysis of particle multiplicity correlations and fluctuations in high-energy nucleus-nucleus collisions with so-called _strongly intensive quantities_. These observables were first introduced in heavy-ion physics in Ref. [1] as a remedy to the spurious effect of volume (centrality) fluctuations that contaminates measured physical variables such as the multiplicity correlation coefficient, Refs. [2; 3]. Strongly intensive quantities were defined in an _independent source model_ framework as observables that do not depend on the system volume or its fluctuations but rather carry direct information about a single (average) source producing particles. The term independent source model refers to a class of superposition models that assume that particles are emitted independently from a collection of statistically identical sources. A fundamental example of the independent source model is the _wounded nucleon model_, Ref. [4], which assumes that a nucleus-nucleus collision can be constructed as a superposition of elementary nucleon-nucleon interactions. Two sets of strongly intensive quantities, \(\Sigma\) and \(\Delta\), were constructed as combinations of second moments and dedicated to studying correlations and fluctuations. Of the two, only the \(\Sigma\) family contains a covariance term sensitive to particle correlations.

Recently, new results on forward-backward (FB) multiplicity correlations with the strongly intensive quantity \(\Sigma\) were measured in Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=\) 2.76 and 5.02 TeV as well as in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=\) 5.44 TeV by ALICE at the LHC, Refs. [5; 6]. Results were obtained for pairs of forward (F) and backward (B) pseudorapidity (\(\eta\)) intervals of width \(\delta\eta=\) 0.2, each located symmetrically around midrapidity (\(\eta=0\)), as illustrated in Fig. 1. The \(\Sigma\) observable values were studied for different centrality classes and as a function of the distance \(\Delta\eta\) between the forward (F) and backward (B) pseudorapidity intervals. In terms of this FB correlation analysis, the \(\Sigma\) quantity is defined by Eq.
(1) as a combination of scaled variances \(\omega_{F(B)}\), first moments \(\langle N_{F(B)}\rangle\), and the covariance of the multiplicity distributions:

\[\Sigma=\frac{\omega_{B}\langle N_{F}\rangle+\omega_{F}\langle N_{B}\rangle-2\mbox{Cov}(N_{B},N_{F})}{\langle N_{F}\rangle+\langle N_{B}\rangle}. \tag{1}\]

ALICE Collaboration studies have confirmed that \(\Sigma\) exhibits the properties of a strongly intensive quantity in heavy-ion collisions. Specifically, it has been observed to be independent of the centrality selection method in the experiment and free from the contribution of volume fluctuations within the studied centrality classes, Ref. [5]. It was found that \(\Sigma\) displays a dependence on the energy and centrality of the collision system. The observed trend with energy, namely the growth, is similar for all studied nucleus-nucleus collision types. On the other hand, the behavior of \(\Sigma\) with centrality has the opposite character for collisions of larger systems such as Pb-Pb and Xe-Xe versus that observed in pp interactions, Refs. [6; 7]. While theoretical predictions characterize the \(\Sigma\) energy and centrality dependence in pp collisions rea
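For concreteness, the following is a minimal NumPy sketch (not from the paper) of how \(\Sigma\) in Eq. (1) can be computed from event-by-event multiplicities in the two pseudorapidity windows; for independent Poisson-distributed \(N_{F}\) and \(N_{B}\) it returns \(\Sigma\approx 1\), the value expected for an ideal superposition of independent sources.

```python
import numpy as np

def sigma_fb(n_f, n_b):
    """Sigma of Eq. (1) from per-event multiplicities in the forward (F)
    and backward (B) windows; n_f, n_b are 1-D arrays of equal length
    (one entry per event in a given centrality class)."""
    mean_f, mean_b = n_f.mean(), n_b.mean()
    omega_f = n_f.var() / mean_f            # scaled variance omega_F
    omega_b = n_b.var() / mean_b            # scaled variance omega_B
    cov = np.cov(n_f, n_b, ddof=0)[0, 1]    # Cov(N_B, N_F)
    return (omega_b * mean_f + omega_f * mean_b - 2.0 * cov) / (mean_f + mean_b)

# Toy check: independent Poisson sources give Sigma = 1.
rng = np.random.default_rng(0)
n_f = rng.poisson(10.0, 100_000).astype(float)
n_b = rng.poisson(10.0, 100_000).astype(float)
print(sigma_fb(n_f, n_b))   # ~1.0
```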
2306.01845
Multi-View Multi-Task Representation Learning for Mispronunciation Detection
The disparity in phonology between a learner's native (L1) and target (L2) language poses a significant challenge for mispronunciation detection and diagnosis (MDD) systems. This challenge is further intensified by the lack of annotated L2 data. This paper proposes a novel MDD architecture that exploits multiple `views' of the same input data, assisted by auxiliary tasks, to learn more distinctive phonetic representations in a low-resource setting. Using mono- and multilingual encoders, the model learns multiple views of the input and captures the sound properties across diverse languages and accents. These encoded representations are further enriched by learning articulatory features in a multi-task setup. Our reported results using the L2-ARCTIC data outperform the SOTA models, with phoneme error rate reductions of 11.13% and 8.60% and absolute F1 score increases of 5.89% and 2.49% compared to the single-view mono- and multilingual systems, with a limited L2 dataset.
Yassine El Kheir, Shammur Absar Chowdhury, Ahmed Ali
2023-06-02T18:04:38Z
http://arxiv.org/abs/2306.01845v2
# Multi-View Multi-Task Representation Learning for Mispronunciation Detection

###### Abstract

The disparity in phonology between a learner's native (L1) and target (L2) language poses a significant challenge for mispronunciation detection and diagnosis (MDD) systems. This challenge is further intensified by the lack of annotated L2 data. This paper proposes a novel MDD architecture that exploits multiple 'views' of the same input data, assisted by auxiliary tasks, to learn more distinctive phonetic representations in a low-resource setting. Using mono- and multilingual encoders, the model learns multiple views of the input and captures the sound properties across diverse languages and accents. These encoded representations are further enriched by learning articulatory features in a multi-task setup. Our reported results using the L2-ARCTIC data outperform the SOTA models, with phoneme error rate reductions of \(11.13\%\) and \(8.60\%\) and absolute F1 score increases of \(5.89\%\) and \(2.49\%\) compared to the single-view mono- and multilingual systems, with a limited L2 dataset.

Yassine El Kheir, Shammur Absar Chowdhury\({}^{*}\), Ahmed Ali

Qatar Computing Research Institute, HBKU, Doha, Qatar

**Index Terms**: multi-view, multi-task, auxiliary tasks, mispronunciation detection and diagnosis, articulatory features.

## 1 Introduction

Non-native speakers are often influenced by their mother tongue (L1) when learning a target language (L2). The difference in phonology between the native and target language systems is one of the main factors behind pronunciation errors. Computer-aided Pronunciation Training (CAPT) provides personalized and interactive training to help non-native speakers overcome the negative language transfer effects [1] and improve their pronunciation skills. The mispronunciation detection and diagnosis (MDD) system is a crucial part of CAPT that detects such pronunciation errors in an L2 learner's speech and provides effective feedback.

Over the years, various approaches have been investigated, with the majority relying on pre-trained automatic speech recognition (ASR) systems. These methods either (i) identify discrepancies between the aligned ASR output and the reference sequence, or (ii) utilize the log-posterior probability from the ASR to calculate different measures of goodness of pronunciation (GOP) scores, as shown in [2, 3, 4]. Deep learning techniques have also been used to train the models, either through an end-to-end approach or via a cascaded pipeline, where the detection model is trained with GOP features from a pre-trained ASR [5, 6, 7]. Recently, end-to-end models trained with Connectionist Temporal Classification (CTC) loss have gained popularity due to their promising performance in mispronunciation detection tasks [8, 9, 10, 11, 12, 13]. These CTC-based methods eliminate the need for forced alignment and seamlessly integrate the training pipeline. Furthermore, many of these approaches capitalize on the success of acoustic self-supervised encoders such as wav2vec2.0 [14]. An effective methodology involves utilizing CTC decoding techniques to generate sequences of phonemes for MDD [15, 16, 17]; this approach demonstrates superior outcomes compared to prior work that relies solely on self-supervised pre-trained models.

In this study, we introduce a novel MDD framework that disambiguates learners' L1/L2 phonetic representations by utilizing multiple sources of information during training.
This information is injected into the model via a multi-view (MV) input representation and multi-task (MT) learning combining primary and auxiliary training objectives. The proposed framework leverages learned representations from monolingual and multilingual pre-trained speech encoders as input to the MDD model and fine-tunes them for phoneme sequence recognition along with auxiliary tasks such as classifying different articulatory feature (AF) sets. The underlying assumption is that this ensemble of encoders and auxiliary tasks together provides a unique 'view' of the learner's input signal and enriches the latent phonetic representation. The multi-view representation of the learner's speech enables the model to learn phonetic properties that are either distinct to the target language or shared across multiple languages and accents. Our multi-task setup enables the model to capture different dimensions of the input signal; it also allows further disentanglement of phonetic properties based on the properties of speech production. For instance, the English consonants /p/, /t/, and /k/ are all produced with the same manner of articulation (plosive), but they differ in their place of articulation (bilabial, alveolar, and dorsal).

Previous studies [18, 19, 20, 21] rely on manual frame-level phoneme annotation, canonical phoneme embedding, boundary annotation, pseudo-labeling, and data augmentation. In contrast, our framework leverages multiple views of the input along with additional learning signals from auxiliary tasks to capture a comprehensive latent phonetic representation that enhances the performance of MDD models in low-resource settings.

## 2 Proposed MDD Framework

Figure 1 shows the pipeline designed to train the E2E-MDD model using multiple sources of information. The model ingests these different views of the input by utilizing representations from different encoders and auxiliary tasks that add information and constraints to the MDD model.

### Multi-View Input Representation Learning

Given the input raw signal \(\mathcal{X}=(x_{1},x_{2},...,x_{n})\) of \(n\) samples, we first extract representations from two pre-trained encoders: a monolingual one (\(\mathcal{H}_{\text{mono}}\)) with feature dimension \(768\) and a multilingual one (\(\mathcal{H}_{\text{multi}}\)) with feature dimension \(1024\). We then apply average pooling to both \(\mathcal{H}_{\text{mono}}\) and \(\mathcal{H}_{\text{multi}}\) to downsample the feature dimensions to \(300\), resulting in \(\hat{\mathcal{H}}_{\text{mono}}\) and \(\hat{\mathcal{H}}_{\text{multi}}\), respectively. Next, we perform element-wise concatenation of \(\hat{\mathcal{H}}_{\text{mono}}\) and \(\hat{\mathcal{H}}_{\text{multi}}\), resulting in \(\mathcal{H}_{\text{res}}\) with feature dimension \((300,2)\):

\[\mathcal{H}_{\text{res}}=\begin{pmatrix}\hat{\mathcal{H}}_{\text{multi}}\\ \hat{\mathcal{H}}_{\text{mono}}\end{pmatrix}=\begin{pmatrix}\hat{h}_{\text{multi}_{1}}&\hat{h}_{\text{multi}_{2}}&...&\hat{h}_{\text{multi}_{300}}\\ \hat{h}_{\text{mono}_{1}}&\hat{h}_{\text{mono}_{2}}&...&\hat{h}_{\text{mono}_{300}}\end{pmatrix} \tag{1}\]

To further process \(\mathcal{H}_{\text{res}}\), we pass it through a simple CNN layer of kernel size \((16,2)\). The output multi-view representation is then passed to the phoneme output head (\(PR\)), which is trained using the CTC loss (\(\mathcal{L}_{PR}\)), calculated by comparing the predicted L2 speech phoneme sequence \(\hat{\mathcal{Y}}=(\hat{y}_{1},\hat{y}_{2},...,\hat{y}_{m})\) with the human-labeled phoneme sequence \(\mathcal{Y}=(y_{1},y_{2},...,y_{l})\).
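A minimal PyTorch sketch of this fusion step follows. The pooled size (300) and kernel size (16, 2) come from the paper, while the pooling operator, channel counts, stride, and padding are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewFusion(nn.Module):
    """Pool the two encoder views to 300 dims, stack them, and fuse with a
    single CNN of kernel size (16, 2), as described in Sec. 2.1."""

    def __init__(self, pooled_dim=300):
        super().__init__()
        self.pooled_dim = pooled_dim
        # One in/out channel over the (feature, view) grid is an assumption.
        self.fuse = nn.Conv2d(1, 1, kernel_size=(16, 2))

    def forward(self, h_mono, h_multi):
        # h_mono: (T, 768), h_multi: (T, 1024) frame-level features
        # for one utterance; T is the number of frames.
        h_mono = F.adaptive_avg_pool1d(h_mono.unsqueeze(1), self.pooled_dim).squeeze(1)
        h_multi = F.adaptive_avg_pool1d(h_multi.unsqueeze(1), self.pooled_dim).squeeze(1)
        # Stack the views as in Eq. (1): (T, 300, 2), plus a channel axis.
        h_res = torch.stack([h_multi, h_mono], dim=-1).unsqueeze(1)
        # Conv over (300, 2) with kernel (16, 2) -> (T, 1, 285, 1).
        return self.fuse(h_res).squeeze(1).squeeze(-1)   # (T, 285)
```

The fused frame sequence would then feed the \(PR\) head; the exact projection to the phoneme vocabulary is not specified in the paper and is omitted here.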
### Auxiliary Tasks for Multi-Task Learning

We enrich the MDD model by adding \(\tau\) auxiliary tasks in a multi-task setup. These auxiliary tasks add depth to the phonetic representation, disentangling it by encoding information about speech-production properties. We chose to classify quantized levels of four articulatory feature sets, as described in Section 3.3, using four individual auxiliary task heads, \(\tau = \{AF_{M},AF_{P},AF_{HL},AF_{FB}\}\). These auxiliary task heads (\(AF_{*}\)) are simple feed-forward layers, which take the phoneme embeddings from the \(PR\) head as input and output a sequence of quantized class labels for that task. We opt for the CTC loss \(\mathcal{L}_{AF_{*}}\) to train the auxiliary objectives. We combine the auxiliary loss \(\mathcal{L}_{\tau}\) with the main objective loss \(\mathcal{L}_{PR}\) according to the training strategy we follow. We opt for two different training strategies: (a) combining all the auxiliary loss functions from the beginning of training, \(\mathcal{L}_{All}=\mathcal{L}_{PR}+\mathcal{L}_{AF_{M}}+\mathcal{L}_{AF_{P}}+\mathcal{L}_{AF_{HL}}+\mathcal{L}_{AF_{FB}}\); (b) gradually adding one auxiliary loss at a time in a specific order, after a warm-up period (\(\mathcal{L}_{Seq}=\mathcal{L}_{PR}+\mathcal{L}_{AF_{*}}\)).

**Sequential Learning Strategy.** We design a scheduler that switches the auxiliary tasks at regular intervals during training, so that the model trains on a sequence of tasks following a curriculum [22]. During the initial warm-up steps, the model learns with only a single task, i.e., the main objective loss \(\mathcal{L}_{PR}\); we then add one scheduled auxiliary task at a time. These auxiliary tasks are switched at intervals of \(I\) steps,1 allowing the model to build upon previously learned representations. This strategy lets the model learn different properties of sounds without overfitting to any particular property.

Footnote 1: Note only one auxiliary task is jointly optimized with the main loss at a time.

## 3 Experimental Setup

### L2-ARCTIC corpus

L2-ARCTIC [23] is a non-native English corpus publicly available for research in voice conversion, accent conversion, and mispronunciation detection. It consists of utterances from non-native English speakers, including \(12\) males and \(12\) females, with L1 languages including Hindi, Korean, Spanish, Arabic, Vietnamese, and Chinese. The corpus is manually annotated by experts and includes instances of mispronunciation and non-native accents, providing a valuable resource for studying the characteristics of non-native speech. We used the common L2-ARCTIC dataset split, with \(6\) speakers for the test set, \(12\) speakers for training, and the remaining \(6\) for the dev set. For training purposes, we mapped the L2-ARCTIC phone set to the mapping table from [24].

### Pre-trained Speech Encoders

**Monolingual Encoder:** Wav2vec2-base2 comprises Convolutional Neural Network (CNN) and Transformer layers. The CNNs serve as a feature extractor that converts the input audio waveform \(X\) into a latent representation \(Z\). Prior to being fed into the Transformer layers, a certain portion of \(Z\) is randomly masked.
The Transformer layers contextualize \(Z\) and produce the contextualized representation \(C\). The unmasked latent representation \(Z\) is further discretized into \(Q\) using a learnable codebook. During pre-training, wav2vec2.0-base is trained with a contrastive loss on English data only, which aims to differentiate the true underlying discretized representation \(q^{+}\) for each masked time step \(t\) from those at other masked positions (\(q^{-}\)), based on the contextualized representation \(c_{t}\). The complete Self-Supervised Learning (SSL) loss is a weighted sum of the contrastive loss and a codebook diversity loss [14].

Footnote 2: [https://huggingface.co/facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)

**Multilingual Encoder:** The XLS-R-53 [25] model is a pre-trained wav2vec2.0 [14] model that covers 53 languages and uses 56,000 hours of speech data for pre-training. The multilingual pre-trained XLS-R-53 model follows the same architecture as the wav2vec2.0 model. It uses a CNN-based encoder network to encode the raw audio sample and a Transformer-based context network to build context representations over the entire latent speech representation. The encoder network consists of \(7\) blocks of temporal convolution layers with \(512\) channels, and the convolutions in each block have strides and kernel sizes that compress about \(25\) ms of \(16\) kHz audio every \(20\) ms. The context network consists of \(24\) blocks with model dimension \(1024\), inner dimension \(4096\), and \(16\) attention heads.

Figure 1: Proposed Multi-View Multi-Task Model (\(MV_{multi}-MT_{seq}\) Model), with multilingual and monolingual encoders, trained with auxiliary tasks using the sequential learning strategy. The weights of the monolingual encoder are frozen.

### Auxiliary Tasks

We used four auxiliary tasks that classify different articulatory properties into their quantized class labels. **Articulatory features** are the physical properties of speech sounds that can be used to distinguish close phoneme pairs from each other. The four articulatory feature sets we used are:

* **Place of articulation** refers to the position of the articulators (the tongue, lips, teeth, and palate) in the vocal tract when a speech sound is produced.
* **Manner of articulation** refers to the way in which the airstream is obstructed or modified in the vocal tract when a speech sound is produced.
* **High-low** refers to the vertical position of the tongue in the vocal tract when a speech sound (vowel) is produced.
* **Front-back** refers to the horizontal position of the tongue in the vocal tract when a speech sound (vowel) is produced.

Each articulatory feature is abstracted into several quantized classes, which indicate distinct behaviors [26]. These feature sets and their quantized class values (see Table 1) are obtained using the human-labeled phoneme sequences present in the dataset and the AF mapping proposed in [26].

### Model Training and Parameters

We jointly train the encoders and the task heads (both primary and auxiliary) with backpropagation by optimizing the total (multi-task) loss, computed by averaging the losses of the task heads. For simplicity, we use an unweighted average loss. All model components are optimized using the Adam optimizer [27] for \(10{,}000\) steps, with an initial learning rate of \(4\times 10^{-5}\) and a batch size of \(32\). For the sequential learning setup, we run the primary task (phoneme sequence recognition) alone for the first \(2{,}000\) steps, and then fine-tune the multi-view setup with the added auxiliary tasks switched at equal intervals (\(I=2{,}000\) steps); that is, every \(2{,}000\) steps we switch the auxiliary task. The sequence we follow is: phoneme recognition (\(PR\)) \(\rightarrow\) \(PR\) + Manner (\(AF_{M}\)) \(\rightarrow\) \(PR\) + Place (\(AF_{P}\)) \(\rightarrow\) \(PR\) + high-low (\(AF_{HL}\)) \(\rightarrow\) \(PR\) + front-back (\(AF_{FB}\)). A sketch of this schedule is given below.
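The switching logic can be sketched as follows; the task names are illustrative, but the constants and ordering mirror the schedule stated above.

```python
AUX_ORDER = ["manner", "place", "high_low", "front_back"]  # AF_M, AF_P, AF_HL, AF_FB
WARMUP, INTERVAL = 2_000, 2_000  # warm-up steps and switching interval I

def active_losses(step):
    """Return the task heads whose CTC losses are summed at this training
    step: only one auxiliary task is jointly optimized with PR at a time."""
    if step < WARMUP:
        return ["pr"]                                      # warm-up: L_PR only
    idx = min((step - WARMUP) // INTERVAL, len(AUX_ORDER) - 1)
    return ["pr", AUX_ORDER[idx]]

# e.g. step 1500 -> ['pr']; 2500 -> ['pr', 'manner']; 9500 -> ['pr', 'front_back']
```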
For the sequential learning setup, we ran the primary task (phoneme sequence recognition) only for the first \(2,000\) steps, and then fine-tuned the multi-view setup with the added auxiliary tasks switched at equal intervals (\(I=2,000\) steps). That means that every \(2,000\) steps we switched the auxiliary task. The sequence we followed: phoneme recognition (\(PR\)) \(\rightarrow\)\(PR\) + manner (\(AF_{M}\)) \(\rightarrow\)\(PR\) + place (\(AF_{P}\)) \(\rightarrow\)\(PR\) + high-low (\(AF_{HL}\)) \(\rightarrow\)\(PR\) + front-back (\(AF_{FB}\)). We tuned the hyper-parameters with an early stopping criterion using the development set. We used the phoneme error rate (PER) to choose the best model and report results on the test set. ### Evaluation We followed the hierarchical evaluation structure presented in [28]. We detected pronunciation errors using the predicted and reference phoneme sequences. For the canonical phonemes, we calculated true acceptance (TA) and false rejection (FR) to evaluate the model's efficacy in recognizing correct pronunciation, while to evaluate the mispronunciation detection capability, we opted for the false acceptance (FA) and true rejection (TR) measures. We then used these measures to calculate the overall recall (\(R=\frac{TR}{TR+FA}\)), precision (\(P=\frac{TR}{TR+FR}\)), F1-score, and phoneme error rate (PER).
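This error counting reduces to a few lines once the canonical, reference (human-labeled), and predicted phoneme sequences have been aligned one-to-one. A minimal sketch, assuming such pre-aligned sequences (the alignment itself, e.g. by edit distance, is outside this snippet):

```python
def mdd_metrics(canonical, reference, predicted):
    """TA/FR/FA/TR counts and P/R/F1 from pre-aligned phoneme sequences."""
    TA = FR = FA = TR = 0
    for can, ref, hyp in zip(canonical, reference, predicted):
        if ref == can:            # phoneme pronounced correctly by the speaker
            TA += hyp == can      # model accepts a correct phoneme
            FR += hyp != can      # model falsely rejects a correct phoneme
        else:                     # phoneme mispronounced by the speaker
            FA += hyp == can      # model misses the mispronunciation
            TR += hyp != can      # model detects the mispronunciation
    recall = TR / (TR + FA) if TR + FA else 0.0
    precision = TR / (TR + FR) if TR + FR else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"TA": TA, "FR": FR, "FA": FA, "TR": TR, "R": recall, "P": precision, "F1": f1}

# toy usage: canonical 'AE', speaker said 'AA', model predicted 'AA' -> one TR
print(mdd_metrics(["AE", "T"], ["AA", "T"], ["AA", "T"]))
```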
## 4 Results and Discussion Table 2 shows that the proposed multi-view multi-task MDD architecture, along with the sequential learning strategy (\(MV_{multi}-MT_{seq}\)), significantly outperformed the single-view (\(SV_{*}\)) models. The \(MV_{multi}-MT_{seq}\) achieved a phoneme error reduction of \(11.13\%\) and \(8.6\%\) compared to \(SV_{mono}\) and \(SV_{multi}\), respectively. Furthermore, the model exhibits noteworthy improvements in absolute F1-score, with an increase of \(5.89\%\) and \(2.49\%\) compared to \(SV_{mono}\) and \(SV_{multi}\), respectively. ### Model Ablation Study Table 2 reports the performance of the proposed architecture in the multi-view and multi-task (\(MV_{multi}-MT_{seq}\)) setting and shows its efficacy over different design variations. The model architecture variations we considered for the ablation study are: 1. Single-View (\(SV\)) setting, where we use either the monolingual (\(SV_{mono}\)) or multilingual (\(SV_{multi}\)) encoder representation; 2. Single-View with the auxiliary tasks in a multi-task (\(SV-MT\)) setting, for either the monolingual (\(SV_{mono}-MT\)) or multilingual (\(SV_{multi}-MT\)) encoder representation; 3. Multi-View (\(MV\)) setting, where we use both mono- and multilingual representations but freeze the weights of one of the encoders. Here we evaluate the monolingual (\(MV_{mono}\)) and multilingual (\(MV_{multi}\)) setups with frozen multilingual and monolingual encoders, respectively; 4. Multi-View along with the auxiliary tasks (\(MV-MT\)) setting, with either sequential (\(MV_{multi}-MT_{seq}\)) or all-at-once (\(MV_{multi}-MT_{all}\)) learning strategies. Single-View _vs_ Multi-View: When comparing \(SV_{*}\) to \(MV_{*}\), we observed a significant improvement in F1 score (mono: 1.01%, multi: 1.37%) and in PER reduction (mono: 3%, multi: 1%) using multi-view representation learning, indicating \(SV_{*}<MV_{*}\)4. Footnote 4: Here \(<\) represents the relative strength of the model architectures. Effect of Auxiliary Tasks: With the auxiliary tasks added, the multilingual representation consistently outperformed the monolingual counterpart. From the results, we observed the efficacy of \(MV_{multi}-MT_{seq}\) over \(MV_{multi}\), indicating \(MV_{multi}<MV_{multi}-MT\). Effect of Sequential Learning: We investigate the influence of two different training strategies - sequential or all-at-once. For brevity, we report the effectiveness of sequential learning only on the best architecture (i.e., \(MV_{multi}-MT_{seq}\)_vs_\(MV_{multi}-MT_{all}\)).5 We observed that, with sequential learning, the model outperforms the all-at-once setting by \(1.56\)% in F1 score and a \(4.84\)% PER reduction, indicating \(MV_{multi}-MT_{all}<MV_{multi}-MT_{seq}\). Footnote 5: We noticed a similar pattern in the other \(*-MT\) tasks. Note all the \(SV-MT_{\star}\) model architectures use sequential learning. ### Comparison with prior works Table 3 showcases the experimental outcomes of various methods for mispronunciation detection. These results highlight two key observations. First, it is evident that all the E2E DNN methods outperform the GOP approach significantly, particularly in F1 score. Additionally, while CNN-RNN-CTC achieves the highest recall value, it exhibits the drawback of lower precision. On the other hand, APL-2, Peppanet, Wav2vec2 with additional momentum labeling, and RNN-T demonstrate strong competitiveness among each other and showcase good performance in mispronunciation detection on the L2-ARCTIC benchmark. Second, our proposed \(MV_{multi}-MT_{Seq}\) MDD model outperforms the aforementioned models by a significant margin of **1.7%** in F1 score, along with a PER reduction of **1.6%**. These improvements are achieved using only the limited training data of \(12\) L2 speakers provided by L2-ARCTIC. We abstain from using additional datasets to pre-train (encoders) or fine-tune the model, to emulate low-resource settings. Our model demonstrates a balanced trade-off between recall and precision, in comparison to the current state-of-the-art approach. Unlike our model, Joint MDD-Align relies on manual frame-level phoneme annotation, canonical text-phoneme embedding, and boundary annotation. Moreover, the reported performance of Joint MDD-Align relies on fine-tuning the model with additional L1 data (TIMIT), whereas we achieve comparable results with only limited L2 data. ## 5 Conclusion In this study, we proposed a novel multi-view multi-task MDD architecture that (a) leverages different views of the input representation from mono- and multilingual encoders; (b) captures different properties of speech production through the auxiliary learning objectives; and (c) learns distinctive and rich phonetic representations in a low-resource setting. The proposed MDD model significantly outperformed single-view models with/without additional auxiliary signals for both mispronunciation detection and phoneme recognition tasks. Our empirical results suggest that looking at the input signal from different views can effectively capture both distinct and shared patterns in L1 and L2 phonetic representations with limited observations. In the future, we will explore how to harness these multi-view models for detecting supra-segmental errors such as intonation, among others.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Models** & **Encoders** & **Auxiliary-Tasks** & **R** & **P** & **F1** & **PER** \\ \hline \hline \(SV_{mono}\) & Wav2vec2-base (w2v-b) & ✗ & 53.90\% & 54.94\% & 54.42\% & 15.91\% \\ \hline \(SV_{multi}\) & XLS-R-53 (w2v-x) & ✗ & 58.93\% & 56.76\% & 57.82\% & 15.48\% \\ \hline \hline \(SV_{mono}-MT\) & w2v-b & ✓ & 54.05\% & 58.30\% & 56.10\% & 14.60\% \\ \(SV_{multi}-MT\) & w2v-x & ✓ & 59.17\% & 56.74\% & 57.89\% & 15.46\% \\ \hline \hline \(MV_{mono}\) & w2v-b + w2v-x\({}^{+}\) & ✗ & 54.06\% & 56.87\% & 55.43\% & 15.42\% \\ \(MV_{multi}\) & w2v-b\({}^{+}\) + w2v-x & ✗ & **59.54\%** & 58.84\% & 59.19\% & 15.32\% \\ \hline \hline \(MV_{multi}-MT_{all}\) & w2v-b\({}^{+}\) + w2v-x & ✓ & 56.63\% & 61.04\% & 58.75\% & 14.85\% \\ **Proposed \(MV_{multi}-MT_{seq}\)** & w2v-b\({}^{+}\) + w2v-x & ✓ & 59.23\% & **61.43\%** & **60.31\%** & **14.13\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Reported precision (P), recall (R), F1 and phoneme error rate (PER) for different experimental settings. ‘\(+\)’ indicates that the encoder weights are frozen. The rows highlighted in blue represent the best result in each setting, and the row in gray shows the overall best performance. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Models** & **Pre-training/Fine-tuning** & **L2-ARCTIC** (\#Train/\#Test) & **R** & **P** & **F1** & **PER** \\ \hline \hline GOP [29] & Librispeech [30] / - & -/ 6 & 69.97\% & 32.54\% & 44.42\% & - \\ CNN-RNN-CTC [8] & -/ TIMIT [31] & 12 / 6 & 74.78\% & 36.76\% & 49.29\% & - \\ APL-2 [32] & -/ TIMIT & 12 / 6 & 54.49\% & 52.79\% & 53.62\% & - \\ Wav2vec2+ & -/ UTD-4Accents [33] & 18 / 6 & 51.20\% & 60.39\% & 55.42\% & 14.36\% \\ Momentum labeling [19] & -/ TIMIT & 12 / 6 & 56.12\% & 56.04\% & 56.08\% & 15.58\% \\ Text-CTC-ATTN [12] & -/ TIMIT & 15 / 6 & 64.53\% & 51.38\% & 56.81\% & - \\ Peppanet [13] & -/ TIMIT & 12 / 6 & 54.16\% & 58.30\% & 56.16\% & 14.69\% \\ Wav2vec2-960h + momentum labeling [19] & Librispeech / UTD-4Accents & 18 / 6 & 54.16\% & 58.30\% & 56.16\% & 14.69\% \\ RNN-T [34] & -/ TIMIT & 12 / 6 & 57.2\% & 60.1\% & 58.6\% & 15.73\% \\ Joint MDD-Align [18] & -/ TIMIT & 18 / 6 & 53.31\% & 77.12\% & 63.04\% & - \\ \hline **Proposed \(MV_{multi}-MT_{Seq}\)** & -/- & 12 / 6 & 59.23\% & 61.43\% & 60.31\% & **14.13\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Reported precision (P), recall (R), F1 and phoneme error rate (PER) of different state-of-the-art models and our proposed \(MV_{multi}-MT_{Seq}\) model on mispronunciation detection and phoneme recognition tasks.
2310.02472
First-order phase-transition on dynamical Lorentz symmetry breaking system
A model of $N$ 4-component massless fermions with a quartic self-interaction based on ref. \cite{gomes2022} is investigated in the presence of chemical potential and temperature via optimized perturbation theory, which accesses finite-N contributions. We use the generating functional approach to calculate the corrections to the effective potential of the model. The model introduces an auxiliary pseudo-vector field with a nontrivial minimum and is influenced by temperature $(T)$ and chemical potential $(\mu)$. These thermodynamic quantities are introduced through the Matsubara formalism. Thereby, the integrals are modified, and via the principle of minimal sensitivity, we obtain the gap equations of the model. The corresponding finite-N solutions of these equations define the vacuum states of the model associated with the background pseudo-vector field. In particular, one focuses on its temporal component, which acts as an effective chiral chemical potential. We discuss the solutions of the four cases in which $(T = 0,\mu = 0)$, $(T \neq 0,\mu \neq 0)$, $(T \neq 0,\mu = 0)$ and $(T = 0,\mu \neq 0)$, where the effective potential is obtained as a function of the background vector field, the chemical potential, and the temperature. The model shows that the finite-N corrections generate first-order phase transitions for the self-interacting fermions in the case $N=1$ and the persistence of a second-order phase transition for $N \geq 2$.
Y. M. P. Gomes, M. J. Neves
2023-10-03T22:39:32Z
http://arxiv.org/abs/2310.02472v2
# Finite-N corrections to dynamical Lorentz symmetry breaking ###### Abstract A model of \(N\) 4-component massless fermions with a quartic self-interaction based on ref. [18] is investigated in the presence of chemical potential and temperature via optimized perturbation theory, which accesses finite-N contributions. We use the generating functional approach to calculate the one-loop corrections to the effective potential of the model. The model introduces an auxiliary vector field that breaks the Lorentz symmetry in the presence of a temperature (\(T\)) and a chemical potential (\(\mu\)). These thermodynamic quantities are introduced through the Matsubara formalism. Thereby, the one-loop integrals are modified, and via the principle of minimal sensitivity, we obtain the gap equations of the model. The corresponding solutions of these equations define the vacuum states of the model associated with the background vector field. We discuss the solutions of the four cases in which (\(T=0,\mu=0\)), (\(T\neq 0,\mu\neq 0\)), (\(T\neq 0,\mu=0\)) and (\(T=0,\mu\neq 0\)), where the effective potential is obtained as a function of the background vector field, the chemical potential, and the temperature. The work demonstrates the persistence of second-order phase transitions in the finite-\(N\) improvement of the self-interacting fermion model. ## I Introduction Neutrino physics is one of the keys to understanding the origin of elementary particles beyond the Standard Model (SM). It is known that neutrinos do not have mass in the SM approach; however, experimental results point to massive neutrinos through the transition probabilities in oscillation phenomena [1; 2; 3]. The most famous approaches to generate masses for the neutrinos are known as see-saw mechanisms [4; 5; 6; 7; 8; 9], which also motivate models with new particle content [10; 11; 12; 13]. Lorentz symmetry violation (LSV) also provides another framework beyond the SM, in which background 4-vectors are introduced and constrained via decay widths, scattering processes, or astrophysical tests [14; 15]. This is known as the Standard Model Extension (SME) [16; 17]. In connection with neutrino physics, an LSV scenario also emerges in effective models with quartic self-interacting couplings involving fermions, in which background fields are introduced as auxiliary fields that play a fundamental role in the understanding of the vacuum via the effective potential formalism [18; 31; 32; 33]. The measurements of neutrino oscillations impose interesting constraints on the LSV parameters associated with the vacuum of the fermion quartic models [18]. In this sense, fermionic models with quartic self-interactions are also motivated by the study of physics in low dimensions, such as the Thirring model [19], the Gross-Neveu model [20], and the Nambu-Jona-Lasinio (NJL) model for quantum chromodynamics (QCD) [21]. In all these models, dynamical symmetry breaking reveals the vacuum properties and the possible phase transitions in the presence of a temperature and chemical potential. Recently, finite-N effects on symmetry breaking have been analyzed in many systems, such as graphene-like systems [22; 23; 24], two-dimensional semimetals [27], and nuclear and quark matter [25; 26]. The nonperturbative calculations have shown new features that go unnoticed in the large-N approximations, such as first-order transitions.
Thus, inspired by those intriguing non-perturbative results, in this paper we investigate the vacuum phase transitions of \(N\) massless fermions in the presence of quartic self-interactions, based on ref. [18]. This self-interaction of the pseudo-current \(j_{\mu}^{5}=\bar{\psi}\gamma_{\mu}\gamma_{5}\psi\) generates a fermionic condensate that breaks parity (P) and CPT symmetries, along with the Lorentz symmetry, and therefore could be a source of matter-antimatter asymmetry. The generating functional approach from quantum field theory is used to obtain the effective potential at one loop in terms of one auxiliary 4-vector field, plus the optimized perturbation theory (OPT) method that brings the non-perturbative corrections to the model. The temperature (\(T\)) and the chemical potential (\(\mu\)) are introduced via the Matsubara formalism directly through the time-momentum integral. Consequently, the Lorentz symmetry is broken in the model. We choose a time-like 4-vector for our analysis, such that the principle of minimal sensitivity (PMS) and the gap equations are derived from the effective potential. Therefore, we obtain the solutions of these equations for the four cases: (i) (\(T=0,\mu=0\)), (ii) (\(T\neq 0,\mu\neq 0\)), (iii) (\(T\neq 0,\mu=0\)), and (iv) (\(T=0,\mu\neq 0\)). The solutions for the effective potential are discussed, and the corresponding vacuum states are shown numerically in cases (i) and (iii), and analytically in the second case (ii). The paper is organized as follows: In section (II), the \(N\) 4-component fermion model in the presence of a quartic self-interaction is presented, and the effective potential at one loop is obtained in terms of one-loop momentum integrals. In section (III), the PMS and gap equations are evaluated at the vacuum states, and the effective potential is studied for the case (\(T=0,\mu=0\)) in subsection (III.1). We introduce the temperature and chemical potential through the momentum integrals in section (IV). This section is subdivided into three subsections: the first subsection (IV.1) is for (\(T\neq 0,\mu=0\)), and the second one (IV.2) is for (\(T=0,\mu\neq 0\)). We discuss the phase diagrams in the third subsection (IV.3). Finally, the conclusions are presented in section (V). The useful integrals of the paper are shown in the appendix (VI). We use natural units \(\hbar=c=k_{B}=1\), and the Minkowski metric is \(\eta^{\mu\nu}=\mathrm{diag}(+1,-1,-1,-1)\) throughout this work. ## II The \(N\) fermions model in the presence of a quartic self-interaction The model is governed by the Lagrangian [18]: \[\mathcal{L}=\overline{\psi}_{k}\,i\not{\partial}\,\psi_{k}-\frac{g}{2N}\left( \,\overline{\psi}_{k}\gamma_{\mu}\gamma_{5}\psi_{k}\,\right)^{2}\;, \tag{1}\] where \(\psi_{k}\) is a massless four-component fermion, \(k=\left\{\,1,...,N\,\right\}\) is the index that specifies the flavor, the repeated \(k\)-index means an implicit sum over the flavors, \(g\) is a coupling constant with dimension of length squared in four dimensions, with \(g<0\) for an attractive interaction, \(\not{\partial}=\gamma^{\mu}\partial_{\mu}\) is the slashed partial operator, and \(\gamma^{\mu}\) are the usual Dirac matrices that satisfy the relation \(\gamma^{\mu}\,\gamma^{\nu}=\eta^{\mu\nu}\,\mathds{1}-2\,i\,\Sigma^{\mu\nu}\).
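As a quick numerical sanity check of these conventions (the symmetric part of the relation above is the Clifford algebra \(\{\gamma^{\mu},\gamma^{\nu}\}=2\,\eta^{\mu\nu}\,\mathds{1}\), and \(\gamma_{5}\), which enters the pseudo-current, anticommutes with every \(\gamma^{\mu}\)), the following sketch verifies both properties in the standard Dirac representation:

```python
import numpy as np

# Dirac-representation gamma matrices; verifies {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1
# and {gamma_5, gamma^mu} = 0, i.e. the conventions used throughout the paper.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]          # gamma_5 = i g0 g1 g2 g3

for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
    assert np.allclose(g5 @ gam[mu] + gam[mu] @ g5, 0)
print("Clifford algebra and gamma_5 anticommutation hold")
```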
The auxiliary vector field \(A_{\mu}\) is introduced as follows: \[\mathcal{L}=\overline{\psi}_{k}\left(i\not{\partial}+\not{A}\gamma_{5}\right) \psi_{k}+\frac{N}{2g}A_{\mu}A^{\mu}\;, \tag{2}\] where \(A_{\mu}=-\frac{g}{N}\,\overline{\psi}_{k}\gamma_{\mu}\gamma_{5}\psi_{k}\). Therefore, the effective Lagrangian (2) defines the model that we investigate beyond the large-N approach. In previous works [31; 32], the dynamical breaking of Lorentz symmetry in a massless fermionic system has been studied through the model represented by equation (2) by integrating out the fermionic fields and assuming the large-N limit to compute the effective potential. Corrections beyond the large-N limit are implemented via the optimized perturbation theory (OPT) deformation based on [22; 23; 28; 29], through the fictitious \(\delta\)-parameter: \[\mathcal{L}_{\delta}\ =\ (1-\delta)\mathcal{L}_{0}(\mathbbm{a})+\delta\mathcal{L}=\overline{\psi}_{k}\left[\,i\not{\partial}+\left(\not{\mathbbm{a}}-\left(\not{\mathbbm{a}}-\not{\mathbbm{A}}\right)\delta\right)\gamma_{5}\,\right]\psi_{k}+\frac{\delta N}{2g}A_{\mu}A^{\mu}\;, \tag{3}\] where \(\mathbbm{a}_{\mu}\) is a regulator vector field containing the finite-N correction information and \(\mathcal{L}_{0}\) is the kinetic term of the fermionic Lagrangian coupled to \(\mathbbm{a}_{\mu}\). The corresponding classical equations of motion for the auxiliary fields \(A^{\mu}\) and \(\mathbbm{a}^{\mu}\) read, respectively, \[\frac{\delta N}{g}\,A_{\mu}+\delta\left(\overline{\psi}_{k}\gamma_{\mu}\gamma_{5}\psi_{k}\right)=0\;, \tag{4a}\] \[(\delta-1)\,\overline{\psi}_{k}\gamma_{\mu}\gamma_{5}\psi_{k}=0\;, \tag{4b}\] in which the limit \(\delta\to 1\) recovers the original Lagrangian (2), and \(\mathbbm{a}_{\mu}\) disappears naturally in (3). As usual in quantum field theory, one can integrate out the fermionic degrees of freedom, resulting in an effective action for the bosonic field \(A_{\mu}\). Specifically, the generating functional associated with (3) yields the effective potential \[N^{-1}\,V_{eff}(A_{\mu},\mathbbm{a})=\frac{\delta}{2g}A_{\mu}A^{\mu}+\int\frac{d^{4}p}{(2\pi)^{4}}\,\mathrm{Tr}\ln\left(\not{p}+\not{\mathbbm{a}}\gamma_{5}\right)+\delta\int\frac{d^{4}p}{(2\pi)^{4}}\,\mathrm{Tr}\left[\frac{(\not{\mathbbm{a}}-\not{\mathbbm{A}})\gamma_{5}}{\not{p}+\not{\mathbbm{a}}\gamma_{5}}\right]+\frac{i}{2}\int\frac{d^{4}p}{(2\pi)^{4}}\,\mathrm{Tr}\left[\frac{\Sigma(\mathbbm{a})}{\not{p}+\not{\mathbbm{a}}\gamma_{5}}\,\right]+O(\delta^{2})\;, \tag{5}\] where we expand in the \(\delta\)-parameter up to \(O(\delta^{2})\), with \(\delta\ll 1\). The numerator of the third trace in (5) is \[\Sigma(\mathbbm{a})\approx\frac{\delta g}{N}\int\frac{d^{4}q}{(2\pi)^{4}}\frac{i}{\not{q}+\not{\mathbbm{a}}\gamma_{5}}\ +O(\delta^{2})\;, \tag{6}\] which is expanded up to first order in the \(\delta\)-parameter, and the trace (Tr) acts on the spinor space. The corresponding diagrams of these contributions are illustrated in figure (1). The common fraction of the integrals in (5) can be written as \[\frac{1}{\not{p}+\not{\bar{\bf a}}\gamma_{5}}=\frac{N_{0}(p)}{D(p)}\;, \tag{7}\] in which the numerator is \(N_{0}(p)=(\not{p}+\not{\bar{\bf a}}\gamma_{5})(p^{2}-\mathsf{a}^{2}-[\not{p}\,,\not{\bar{\bf a}}]\gamma_{5})\), and the denominator is \(D(p)=(p+\mathsf{a})^{2}(p-\mathsf{a})^{2}\).
Thereby, the effective potential can be written as \[N^{-1}\,V_{eff}(A^{\mu},\mathsf{a}^{\mu})\ =\ \frac{\delta}{2g}A_{\mu}A^{\mu}+\,\Omega(\mathsf{a})+\delta\,(\mathsf{a}^{\mu}-A^{\mu})\ \Pi_{\mu}(\mathsf{a})-\frac{\delta g}{2N}\,\Xi(\mathsf{a})\;, \tag{8}\] where \(\Omega(\mathsf{a})\), \(\Pi_{\mu}(\mathsf{a})\) and \(\Xi(\mathsf{a})\) are defined by the integrals \[\Omega(\mathsf{a}) = \sum_{\lambda=-1}^{+1}\int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\,\ln(p+\lambda\mathsf{a})^{2}\;, \tag{9a}\] \[\Pi^{\mu}(\mathsf{a}) = \sum_{\lambda=-1}^{+1}\lambda\int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\,\frac{p^{\mu}+\lambda\mathsf{a}^{\mu}}{(p+\lambda\mathsf{a})^{2}}\;, \tag{9b}\] \[\Xi(\mathsf{a}) = \int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\frac{d^{4}q}{(2\pi)^{4}}\frac{\operatorname{Tr}\big{[}\,N_{0}(p)\,N_{0}(q)\,\big{]}}{D(p)\,D(q)}\;, \tag{9c}\] and we have used the trace properties \(\operatorname{tr}\ln(A)=\ln\det(A)\) and \(\det(A^{-1})=(\det A)^{-1}\). We have also introduced the cutoff parameter (\(\Lambda\)) (which has energy dimension) to regularize the momentum integrals from (5) and (6). Thereby, the modified integrals are finite, and the original divergences are recovered in the ultraviolet regime when \(\Lambda\to\infty\). Consequently, the effective potential is also regularized by the UV cutoff (\(\Lambda\)). In the next section, we investigate the contributions of the finite parts of the effective potential in the case of time-like background fields. ## III PMS and gap equation beyond the large-N limit The principle of minimal sensitivity (PMS) applied to the effective potential is defined by the following identity: \[\left.\frac{\partial V_{eff}}{\partial\mathsf{a}_{\mu}}\right|_{\bar{\mathsf{a}},\delta=1}=0\;. \tag{10}\] This yields the PMS equation \[\left.\left[\ \frac{\partial\Omega}{\partial\mathsf{a}_{\mu}}+\Pi_{\mu}(\mathsf{a})+\left(\mathsf{a}-A\right)^{\nu}\left.\frac{\partial\Pi_{\nu}}{\partial\mathsf{a}_{\mu}}-\frac{g}{2N}\frac{\partial\Xi}{\partial\mathsf{a}_{\mu}}\right]\right|_{\bar{\mathsf{a}}^{\mu},\delta=1}=0\;. \tag{11}\] The solutions \(\bar{\mathsf{a}}\) of equation (11) depend on the original parameters of the model, including the coupling and the pseudo-vector field \(A_{\mu}\); this induces nonperturbative dependencies on the model's coupling and other parameters. The gap equation is defined by \[\left.\frac{\partial V_{eff}}{\partial A_{\mu}}\right|_{A^{\mu}=\bar{A}^{\mu},\mathsf{a}^{\mu}=\bar{\mathsf{a}}^{\mu},\delta=1}=0\;, \tag{12}\] which yields the relation \[\bar{A}^{\mu}-g\,\Pi^{\mu}(\bar{\mathsf{a}})=0\;. \tag{13}\] The equations (11) and (13) are evaluated at the states that we denote as \(\bar{A}^{\mu}\) and \(\bar{\mathsf{a}}^{\mu}\), respectively. Here \(\bar{A}^{\mu}\) denotes the vacuum state of the model, whereas \(\bar{\mathsf{a}}^{\mu}\) is an auxiliary state that is a function of \(\bar{A}^{\mu}\) through equation (13). Figure 1: The Feynman diagrams contributing up to \(\delta^{1}\) to the effective potential. The first sets the effective potential in the large-N limit. The black dot represents a \(\delta\)-insertion. The last diagram brings the first correction to the effective potential due to the self-energy (figure based on ref. [22]).
### The case \(T=0\) and \(\mu=0\) For the analysis of the phase transitions, we invoke the cosmological principle of spatial isotropy and consider time-like background fields, such that \(A^{\mu}=(A^{0},\mathbf{0})\) and \(\mathtt{a}^{\mu}=(\mathtt{a}^{0},\mathbf{0})\) in the integrals (9a)-(9c) and in the effective potential (15). Under these assumptions, the physical results of the integrals (9a)-(9c) reduce to \[\Omega(\mathtt{a}_{0}) = \frac{\Lambda^{4}}{8\pi^{2}}+\frac{\mathtt{a}_{0}^{4}}{12\pi^{2}}\;, \tag{14a}\] \[\Pi_{\mu}(\mathtt{a}_{0}) = \delta_{\mu 0}\,\frac{\mathtt{a}_{0}^{3}}{3\pi^{2}}\;, \tag{14b}\] \[\Xi(\mathtt{a}_{0}) = -\frac{\mathtt{a}_{0}^{6}}{9\pi^{4}}\;, \tag{14c}\] where the regulator parameter \(\Lambda\) remains in \(\Omega(\mathtt{a}_{0})\) and can be interpreted as an energy scale of the model. Therefore, the effective potential in this case is \[N^{-1}\,V_{eff}(A_{0},\mathtt{a}_{0})=\frac{\mathtt{a}_{0}^{3}\,A_{0}}{3\pi^{2}}+\frac{\mathtt{a}_{0}^{6}\,g}{18\pi^{4}N}-\frac{\mathtt{a}_{0}^{4}}{4\pi^{2}}+\frac{A_{0}^{2}}{2g}+\frac{\Lambda^{4}}{4\pi^{2}}\;. \tag{15}\] The PMS and gap equations are, respectively, given by \[\frac{\mathtt{a}_{0}^{3}\,g}{3\pi^{4}N}-\frac{\mathtt{a}_{0}}{\pi^{2}}+\frac{\bar{A}_{0}}{\pi^{2}}=0\;, \tag{16a}\] \[\bar{A}_{0}=-g\,\frac{\mathtt{a}_{0}^{3}}{3\pi^{2}}\;, \tag{16b}\] in which the vacuum state is set by the time-like vector \(\bar{A}^{\mu}=(\bar{A}^{0},\mathbf{0})\), and \(\bar{\mathtt{a}}^{\mu}=(\bar{\mathtt{a}}^{0},\mathbf{0})\) is the corresponding auxiliary state. In the large-N limit (LN), \(N\to\infty\), the PMS solution reduces to: \[\text{LN}\;:\;\bar{\mathtt{a}}_{0}=\bar{A}_{0}\quad,\quad\bar{A}_{0}^{2}+\frac{3\pi^{2}}{g}=0\;. \tag{17}\] Thereby, in the large-\(N\) limit the Lorentz symmetry is broken if \(g=-|g|\), and the gap solution is given by \(\bar{A}_{0}=\frac{\sqrt{3}\pi}{\sqrt{|g|}}\), confirming the result obtained in ref. [31]. In the limit \(N\to\infty\), the effective potential is given by \[V_{eff}^{LN}(A_{0})=\frac{A_{0}^{2}}{2g}+\frac{A_{0}^{4}}{12\pi^{2}}+\frac{\Lambda^{4}}{4\pi^{2}}\;. \tag{18}\] Clearly, the non-trivial minimum emerges when \(g<0\). Although the analytic form of the finite-N contributions to the effective potential has an intricate non-linear character, one can evaluate it numerically. The finite-N effective potential normalized by the factor \(V_{0}=\Lambda^{4}\) is plotted in Fig. (2) as a function of the dimensionless variable \(A_{0}/\Lambda\). We choose \(N=3\) in both plots. The gap equation can be solved, giving: \[\bar{A}_{0}=\frac{\sqrt{3}\pi}{\sqrt{|g|}}\frac{1}{(1-1/N)^{3/2}}\;, \tag{19}\] and for \(N=3\), the extra factor \((1-1/N)^{-3/2}\) amounts to a multiplicative enhancement of approximately \(1.837\). It is important to highlight that this extra factor reflects the non-perturbative nature of the OPT approach. Next, we introduce the temperature (\(T\)) and chemical potential (\(\mu\)) via the Matsubara formalism, modifying the \(p^{0}\)-component by \(p^{0}\to i\omega_{n}-\mu\) and summing over all the Matsubara frequencies \(\omega_{n}\). This modification is clarified in the appendix (VI). Thus, the formalism opens the three possible cases (\(T\neq 0,\mu\neq 0\)), (\(T\neq 0,\mu=0\)), and (\(T=0,\mu\neq 0\)) that will be discussed in the next subsections.
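Before doing so, the vacuum solution just obtained can be cross-checked numerically. The following sketch (parameter values are illustrative, matching Fig. 2; units of \(\Lambda=1\)) solves the coupled PMS and gap equations (16a)-(16b) and compares the result with the closed form (19):

```python
import numpy as np
from scipy.optimize import brentq

# Solve the T = mu = 0 PMS equation (16a) with the gap relation (16b)
# substituted in, then compare with the closed-form solution (19).
g, N = -20.0, 3                       # g * Lambda^2 = -20, as in Fig. 2

def pms(a0):
    A0 = -g * a0**3 / (3 * np.pi**2)                  # gap equation (16b)
    return g * a0**3 / (3 * np.pi**4 * N) - a0 / np.pi**2 + A0 / np.pi**2

a0_bar = brentq(pms, 1e-6, 10.0)                       # nontrivial PMS root
A0_bar = -g * a0_bar**3 / (3 * np.pi**2)
A0_closed = np.sqrt(3) * np.pi / np.sqrt(abs(g)) / (1 - 1/N)**1.5  # Eq. (19)
print(A0_bar, A0_closed)                               # the two values agree
```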
Figure 2: The plot of the effective potential over \(V_{0}=\Lambda^{4}\) as a function of the dimensionless variable \(A_{0}/\Lambda\). In this figure, we choose \(N=3\) and \(g\Lambda^{2}=-20\) (blue line). The black line sets the limit \(N\to\infty\). ## IV Introducing the temperature and chemical potential Whenever the thermodynamic quantities \(T\) and \(\mu\) are present, the translational symmetry in time is broken, but it is kept for the space coordinates. Thereby, we consider \({\sf a}^{\mu}\) as a time-like 4-vector, \({\sf a}^{\mu}=({\sf a}^{0},{\bf 0})\), without any loss of generality. This choice is also justified here in the case of non-null temperature, in which the analysis of the phase transition emerges from a time-like background vector; see [31]. The \(\Omega\)-integral (9a) in the presence of a temperature (\(T\)) and a chemical potential (\(\mu\)) is written as \[\Omega({\sf a}_{0})=\int^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\,T\sum_{\lambda=-1}^{+1}\sum_{n=-\infty}^{\infty}\ln\left[(i\omega_{n}-\mu+\lambda{\sf a}_{0})^{2}-|{\bf p}|^{2}\right]\;, \tag{20}\] which yields the result \[\Omega({\sf a}_{0})=-\frac{T^{4}}{\pi^{2}}\sum_{\lambda=-1}^{+1}\left[{\rm Li}_{4}\left(-e^{-\beta\mu_{\lambda}}\right)+{\rm Li}_{4}\left(-e^{\beta\mu_{\lambda}}\right)\right]+\frac{\Lambda^{4}}{4\pi^{2}}\;, \tag{21}\] where \(\beta=T^{-1}\) is the inverse of the temperature, \(\mu_{\lambda}=\mu-\lambda\,{\sf a}_{0}\) \((\lambda=\pm 1)\) is the chemical potential shifted by the time component \({\sf a}_{0}\), and \({\rm Li}_{4}\) is the polylogarithm function of fourth order. In this result, \(\Lambda\) is a cut-off parameter with energy dimension. The integral (9b) for \(T\neq 0\) is \[\Pi_{\mu}({\sf a}_{0})=2\,\delta_{\mu 0}\int^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\sum_{\lambda=-1}^{+1}\lambda\sum_{n=-\infty}^{\infty}\frac{(i\omega_{n}-\mu+\lambda{\sf a}_{0})}{(i\omega_{n}-\mu+\lambda{\sf a}_{0})^{2}-|{\bf p}|^{2}}\;, \tag{22}\] which yields \[\Pi_{\mu}({\sf a}_{0})=\frac{T^{3}}{\pi^{2}}\,\delta_{\mu 0}\sum_{\lambda=-1}^{+1}\lambda\left[{\rm Li}_{3}\left(-e^{\beta\mu_{\lambda}}\right)-{\rm Li}_{3}\left(-e^{-\beta\mu_{\lambda}}\right)\right]\;. \tag{23}\] The \(\Xi\)-integral (9c), for \(T\neq 0\), reads \[\Xi({\sf a}_{0})=4{\sf a}_{0}\left[f_{1}(\mu_{+})f_{2}(\mu_{-})-f_{1}(\mu_{-})f_{2}(\mu_{+})\right]+4f_{2}(\mu_{-})f_{2}(\mu_{+})-4{\sf a}_{0}^{2}f_{1}(\mu_{-})f_{1}(\mu_{+})\;, \tag{24}\] where \[f_{1}(\mu_{\pm})=\int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{(p\pm{\sf a}_{0})^{2}}=\frac{T^{2}}{4\pi^{2}}\left[\,{\rm Li}_{2}\left(-e^{-\beta\mu_{\pm}}\right)+{\rm Li}_{2}\left(-e^{\beta\mu_{\pm}}\right)\,\right]+\frac{\Lambda^{2}}{8\pi^{2}}\;, \tag{25}\] and \[f_{2}(\mu_{\pm})=\int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\frac{p_{0}}{(p\pm{\sf a}_{0})^{2}}=\frac{T^{3}}{2\pi^{2}}\left[\,{\rm Li}_{3}\left(-e^{\beta\mu_{\pm}}\right)-{\rm Li}_{3}\left(-e^{-\beta\mu_{\pm}}\right)\,\right]\mp{\sf a}_{0}\,f_{1}(\mu_{\pm})\;, \tag{26}\] where \({\rm Li}_{2}\) and \({\rm Li}_{3}\) are polylogarithm functions of second and third order, respectively. Using these results, we can discuss the particular cases of (\(T\neq 0,\mu=0\)) and (\(T=0,\mu\neq 0\)). ### The case \(T\neq 0\) and \(\mu=0\) Taking the limit \(\mu\to 0\), the gap equation reads: \[\bar{A}_{0}=-\frac{2gT^{3}}{\pi^{2}}\left[{\rm Li}_{3}\left(-e^{-\bar{{\sf a}}_{0}/T}\right)-{\rm Li}_{3}\left(-e^{\bar{{\sf a}}_{0}/T}\right)\right]\;. \tag{27}\] The PMS equation in the limit \(\mu\to 0\) is \[\left[{\rm Li}_{2}\left(-e^{\bar{{\sf a}}_{0}/T}\right)+{\rm Li}_{2}\left(-e^{-\bar{{\sf a}}_{0}/T}\right)\right]\times\left[\bar{{\sf a}}_{0}-\bar{A}_{0}+\frac{2gT^{3}}{\pi^{2}N}{\rm Li}_{3}\left(-e^{\bar{{\sf a}}_{0}/T}\right)-\frac{2gT^{3}}{\pi^{2}N}\,{\rm Li}_{3}\left(-e^{-\bar{{\sf a}}_{0}/T}\right)\right]=0\;, \tag{28}\] in which we can check that \(\bar{{\sf a}}_{0}=0\) implies \(\bar{A}_{0}=0\). One can find the numerical solution for \(\bar{A}_{0}(T)\) by combining both equations. The solution is illustrated in Fig. 3 for representative values of \(N\) and \(g\). Substituting the gap equation into the PMS equation and taking the limit \(\bar{{\sf a}}_{0}\to 0\), we obtain the solution for the critical temperature \[T_{c}=\frac{\sqrt{3}}{\sqrt{|g|}}\frac{1}{\sqrt{1-1/N}}\;. \tag{29}\] The large-\(N\) limit fixes the critical temperature at \(T_{c}=\sqrt{3/|g|}\). ### The case \(T=0\) and \(\mu\neq 0\) In the case of \(T\to 0\), we need to use the limits \[\lim_{T\to 0}\left[\frac{1}{1+e^{\frac{|{\bf p}|\pm\mu_{\lambda}}{T}}}\right]=\Theta(-|{\bf p}|\mp\mu_{\lambda})\;, \tag{30a}\] \[\lim_{T\to 0}T\ln\left[1+e^{-\frac{|{\bf p}|+\mu_{\lambda}}{T}}\right]=-\left(|{\bf p}|+\mu_{\lambda}\right)\Theta(-|{\bf p}|-\mu_{\lambda})\;, \tag{30b}\] \[\lim_{T\to 0}T\ln\left[1+e^{-\frac{|{\bf p}|-\mu_{\lambda}}{T}}\right]=\left(-|{\bf p}|+\mu_{\lambda}\right)\Theta(-|{\bf p}|+\mu_{\lambda})\;, \tag{30c}\] in which the \(\Theta\)-Heaviside function satisfies the conditions \(\Theta(x)=0\) for \(x<0\), and \(\Theta(x)=1\) for \(x>0\). Using these properties, the \(\Omega\)-integral is \[\lim_{T\to 0}\Omega(\bar{\bf a}_{0})=\frac{1}{12\pi^{2}}(6\,\bar{\bf a}_{0}^{2}\,\mu^{2}+\bar{\bf a}_{0}^{4}+3\pi^{4}\Lambda^{4}+\mu^{4})\;. \tag{31}\] Analogously, the result for \(\Pi^{\mu}\) is \[\Pi_{\mu}(\bar{\bf a}_{0})=-\delta_{\mu 0}\;\frac{\bar{\bf a}_{0}}{3\pi^{2}}\left(\bar{\bf a}_{0}^{2}+3\mu^{2}\right)\;. \tag{32}\] The limits (30a)-(30c), when used in the functions \(f_{1}\) and \(f_{2}\) from (25) and (26), give \[\lim_{T\to 0}f_{1}(\mu_{\pm})=-\frac{(\mu_{\pm})^{2}-\Lambda^{2}}{8\pi^{2}}\;, \tag{33a}\] \[\lim_{T\to 0}f_{2}(\mu_{\pm})=-\frac{\mu_{\pm}^{3}}{6\pi^{2}}\pm{\bf a}_{0}\frac{\mu_{\pm}^{2}-\Lambda^{2}}{8\pi^{2}}\;, \tag{33b}\] and the \(\Xi\)-integral reduces to \[\Xi(\bar{\bf a}_{0})=-\frac{(\bar{\bf a}_{0}^{2}-\mu^{2})^{3}}{9\pi^{4}}\;. \tag{34}\] Thereby, the PMS and gap equations are, respectively, given by \[\bar{\bf a}_{0}^{2}\,\bar{A}_{0}-\bar{\bf a}_{0}^{3}\left(1+\frac{2g\mu^{2}}{3\pi^{2}N}\right)+\frac{\bar{\bf a}_{0}^{5}g}{3\pi^{2}N}+\mu^{2}\,\bar{\bf a}_{0}\left(\frac{3\,g\,\mu^{2}}{\pi^{2}N}-1\right)+\bar{A}_{0}\,\mu^{2}=0\;, \tag{35a}\] \[\frac{\bar{A}_{0}}{g}+\frac{\bar{\bf a}_{0}}{3\pi^{2}}\left(\bar{\bf a}_{0}^{2}+3\mu^{2}\right)=0\;. \tag{35b}\] By combining both equations, one can numerically find the solution for \(\bar{A}_{0}(\mu)\); it is shown in Fig. 4 for representative values of \(N\) and \(g\). Once again, we can check that \(\bar{\bf a}_{0}=0\) implies \(\bar{A}_{0}=0\). From the analysis of the PMS equation in the limit \(\bar{\bf a}_{0}\to 0\), after applying the gap equation, the critical value of the chemical potential (\(\mu_{c}\)) satisfies the equation \[g\,\mu_{c}^{4}\left(\frac{1}{N}-3\right)-3\pi^{2}\mu_{c}^{2}=0\;, \tag{36}\] whose solution marks the point where the symmetry is restored. The critical chemical potential is constrained by the conditions \(g<0\) and \(N>1\). The solutions are given by \[\mu_{c}=\frac{\pi}{\sqrt{|g|}}\frac{1}{\sqrt{1-\frac{1}{3N}}}\;, \tag{37}\] and the large-\(N\) limit gives the result \(\mu_{c}\approx\frac{\pi}{\sqrt{|g|}}\).
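Both critical values can be cross-checked numerically. The sketch below (illustrative parameters, in units of \(\Lambda=1\); mpmath supplies the polylogarithms) traces the nontrivial solution of the combined gap-PMS condition at \(\mu=0\), \(\bar{\mathsf{a}}_{0}=(1-1/N)F(\bar{\mathsf{a}}_{0})\) with \(F\) read off from (27)-(28), confirms that it disappears at the \(T_{c}\) of (29), and prints the closed-form \(\mu_{c}\) of (37):

```python
import numpy as np
from scipy.optimize import brentq
from mpmath import polylog, exp as mpexp

g, N = -1.0, 3                        # g * Lambda^2 = -1, units of Lambda = 1

def F(a0, T):
    # F(a0) = (2 g T^3 / pi^2) [Li3(-e^{a0/T}) - Li3(-e^{-a0/T})], cf. (27)-(28)
    s = polylog(3, -mpexp(a0 / T)) - polylog(3, -mpexp(-a0 / T))
    return 2 * g * T**3 / np.pi**2 * float(s.real)

def a0_bar(T):
    cond = lambda a: a - (1 - 1/N) * F(a, T)    # combined gap-PMS condition
    try:
        return brentq(cond, 1e-9, 50.0)          # nontrivial root, if present
    except ValueError:
        return 0.0                               # symmetric phase: only a0 = 0

Tc = np.sqrt(3 / abs(g)) / np.sqrt(1 - 1/N)              # Eq. (29)
mu_c = np.pi / np.sqrt(abs(g)) / np.sqrt(1 - 1/(3*N))    # Eq. (37)
print(a0_bar(0.95 * Tc), a0_bar(1.05 * Tc), mu_c)        # root > 0 only below T_c
```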
Finally, the density of the symmetric phase at \(T=0\) can be written as \[n(T=0,\mu)=\frac{\mu^{3}}{3\pi^{2}}\left(1+\frac{|g|\mu^{2}}{\pi^{2}N}\right)\;. \tag{38}\] From this result, one can write the density at \(\mu_{c}\), where the symmetry breaks, given by \[n(T=0,\mu_{c})=\frac{\sqrt{3}\pi(3N+2)}{(3N-1)^{5/2}\left(\frac{|g|}{N}\right)^{3/2}}\;, \tag{39}\] and in the large-N limit one finds \(n(T=0,\mu_{c})\to\frac{\pi}{3}|g|^{-3/2}\). For \(N=3\) one has \(n(T=0,\mu_{c})\approx 1.64\times\frac{\pi}{3}|g|^{-3/2}\), which gives a correction of 64% to the critical density. Figure 3: The plot of the gap solution as a function of the dimensionless variable \(T/\Lambda\) at \(\mu=0\). In this figure, we choose \(N=3\) and \(g\Lambda^{2}=-1\) (blue line). The black line sets \(g\Lambda^{2}=-1\) and the limit \(N\to\infty\). ### Phase portrait The phase diagram is shown in Fig. (6), and one can see that the critical curve is modified in the finite-\(N\) approach. Besides the quantitative modification, there is no qualitative difference, _i.e._, the phase transition still has a continuous behavior and thus is a second-order phase transition. In order to access the information about the density, one needs to start from the thermodynamic potential defined as \(\Phi=V_{eff}(\bar{A}_{0},\bar{\bf a}_{0})\). Now, the density can be properly and consistently defined as \[-\frac{\partial\Phi}{\partial\mu}=n-\frac{\partial\bar{A}_{0}}{\partial\mu}\frac{\partial\Phi}{\partial\bar{A}_{0}}-\frac{\partial\bar{\bf a}_{0}}{\partial\mu}\frac{\partial\Phi}{\partial\bar{\bf a}_{0}}\;, \tag{40}\] and by use of the PMS and gap equations, one shows that \(n=-\frac{\partial\Phi}{\partial\mu}\). This expression is used to construct the phase portrait in Fig. 6.
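These numbers follow directly from (37) and (38); a short sketch (illustrative \(g\), units of \(\Lambda=1\)) that evaluates the critical density and its ratio to the large-\(N\) value \(\frac{\pi}{3}|g|^{-3/2}\), recovering the \(\approx 1.64\) enhancement at \(N=3\):

```python
import numpy as np

g = -1.0
for N in (2, 3, 10, 100):
    mu_c = np.pi / np.sqrt(abs(g)) / np.sqrt(1 - 1/(3*N))             # Eq. (37)
    n_c = mu_c**3 / (3*np.pi**2) * (1 + abs(g)*mu_c**2/(np.pi**2*N))  # Eq. (38)
    ratio = n_c / (np.pi/3 * abs(g)**-1.5)       # -> 1 in the large-N limit
    print(N, round(mu_c, 4), round(ratio, 4))    # N = 3 gives ratio ~ 1.64
```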
## V Conclusions In this paper, we propose the study of \(N\) massless fermions based on ref. [18] in the presence of temperature and chemical potential. The model has a fermionic quartic self-interaction that introduces an auxiliary background 4-vector \(A^{\mu}\) and another background 4-vector \(\mathsf{a}^{\mu}\), and that, in the presence of a temperature and chemical potential, breaks the Lorentz symmetry. Using the path integral approach of quantum field theory, we obtain the effective potential of the model at one loop, in which the temperature is introduced via the Matsubara formalism directly in the momentum integrals. Thereby, we obtain the principle of minimal sensitivity (PMS) and the gap equations in terms of the 4-vector fields \(A^{\mu}\) and \(\mathsf{a}^{\mu}\). The vacuum solution of the gap equation, \(\bar{A}_{0}=\frac{\sqrt{3}\pi}{\sqrt{|g|}}\frac{1}{(1-1/N)^{3/2}}\), is representative of the non-perturbative character of the OPT approach, given the non-polynomial form of the correction in comparison with the large-\(N\) result. The non-perturbative results also show up in the critical temperature at zero chemical potential, \(T_{c}=\frac{\sqrt{3}}{\sqrt{|g|}}\left(1-1/N\right)^{-1/2}\), and in the critical chemical potential at zero temperature, \(\mu_{c}=\frac{\pi}{\sqrt{|g|}}\left(1-1/3N\right)^{-1/2}\). These non-perturbative results are one of our main results. Going further, the phase portrait is shown in Fig. 6, and one can see that the finite-\(N\) corrections generate a region in the phase space \(T\times\mbox{density}\) which is larger than its large-\(N\) counterpart. Furthermore, one does not find any first-order phase transition, and the finite-\(N\) correction maintains the second-order phase transition between the symmetric and LSV phases. In this work, the mass of the particles was neglected in order to extract analytical results. The Lagrangian (1) may exhibit interesting effects if a mass term is present. In particular, this could lead to the occurrence of a first-order phase transition, which warrants further investigation. This type of transition could be important and could generate LSV bubbles in the metastable region. Effects from \(\delta^{2}\) corrections can also bring new information about the phase transition and will be a target of forthcoming work. ## VI Appendix The fermion propagator can be rewritten as follows: \[S(p)=\frac{1}{\not{p}+\not{\mathsf{a}}\gamma_{5}}=\frac{(\not{p}+\not{\mathsf{a}}\gamma_{5})(p^{2}-\mathsf{a}^{2}-[\not{p}\,,\,\not{\mathsf{a}}]\,\gamma_{5})}{(p^{2}-\mathsf{a}^{2})^{2}+4[\,p^{2}\,\mathsf{a}^{2}-(p\cdot\mathsf{a})^{2}]}\;, \tag{41}\] and through reorganization of the terms, the denominator can be rewritten as \(D(p)=(p+\mathsf{a})^{2}(p-\mathsf{a})^{2}\). For the study of the thermodynamic properties, one uses the Matsubara formalism, in which the \(p^{0}\)-momentum integration is replaced by the sum \[\int^{\Lambda}\frac{d^{4}p}{(2\pi)^{4}}\to T\sum_{n=-\infty}^{\infty}\int^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\;, \tag{42}\] where \(p_{0}=i\omega_{n}-\mu=(2n-1)\,i\,\pi\,T-\mu\). Therefore, one has the useful identities \[I_{0}(\mu,\omega_{p})=T\sum_{n=-\infty}^{\infty}\ln\left[\left(\omega_{n}+i\mu\right)^{2}+\omega_{p}^{2}\right]=\omega_{p}+T\ln\left(1+e^{-\frac{\omega_{p}+\mu}{T}}\right)+T\ln\left(1+e^{-\frac{\omega_{p}-\mu}{T}}\right)\;, \tag{43a}\] \[I_{1}(\mu,\omega_{p})=T\sum_{n=-\infty}^{\infty}\frac{1}{\left(\omega_{n}+i\mu\right)^{2}+\omega_{p}^{2}}=\frac{1}{2\omega_{p}}\left[1-\frac{1}{1+e^{\left(\omega_{p}+\mu\right)/T}}-\frac{1}{1+e^{\left(\omega_{p}-\mu\right)/T}}\right]\;, \tag{43b}\] \[I_{2}(\mu,\omega_{p})=T\sum_{n=-\infty}^{\infty}\frac{\omega_{n}+i\mu}{\left(\omega_{n}+i\mu\right)^{2}+\omega_{p}^{2}}=\frac{i}{2}\left[\frac{1}{1+e^{\left(\omega_{p}+\mu\right)/T}}-\frac{1}{1+e^{\left(\omega_{p}-\mu\right)/T}}\right]\;, \tag{43c}\] where \(\omega_{p}=|\mathbf{p}|\) for a massless fermion. ## Acknowledgments Y.M.P.G. would like to thank R.O.R., M.B.P., and G.P.B. for the insightful discussions. Y.M.P.G. is supported by a postdoctoral grant from Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), grant No. E26/201.937/2020.
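As a numerical footnote to the appendix, identity (43b) can be verified directly by comparing a truncated Matsubara sum with its closed Fermi-Dirac form (the test values below are arbitrary):

```python
import numpy as np

T, mu, wp = 0.7, 0.3, 1.1            # arbitrary test values, units of Lambda = 1
n = np.arange(-200000, 200001)
wn = (2*n - 1) * np.pi * T           # fermionic Matsubara frequencies
lhs = (T * np.sum(1.0 / ((wn + 1j*mu)**2 + wp**2))).real
rhs = (1 - 1/(1 + np.exp((wp + mu)/T)) - 1/(1 + np.exp((wp - mu)/T))) / (2*wp)
print(lhs, rhs)                      # agree up to the truncation error
```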
2303.05225
Active Learning Based Domain Adaptation for Tissue Segmentation of Histopathological Images
Accurate segmentation of tissue in histopathological images can be very beneficial for defining regions of interest (ROI) and streamlining diagnostic and prognostic tasks. Still, adapting to different domains is essential for histopathology image analysis, as the visual characteristics of tissues can vary significantly across datasets. Yet, acquiring sufficient annotated data in the medical domain is cumbersome and time-consuming. The labeling effort can be significantly reduced by leveraging active learning, which enables the selective annotation of the most informative samples. Our proposed method allows for fine-tuning a pre-trained deep neural network using a small set of labeled data from the target domain, while also actively selecting the most informative samples to label next. We demonstrate that our approach reaches F1-scores similar to those of traditional supervised learning with significantly fewer labeled samples, using barely 59\% of the training set. We also investigate the distribution of class balance to establish annotation guidelines.
Saul Fuster, Farbod Khoraminia, Trygve Eftestøl, Tahlita C. M. Zuiverloon, Kjersti Engan
2023-03-09T13:03:01Z
http://arxiv.org/abs/2303.05225v1
# Active Learning Based Domain Adaptation for Tissue Segmentation of Histopathological Images ###### Abstract Accurate segmentation of tissue in histopathological images can be very beneficial for defining regions of interest (ROI) and streamlining diagnostic and prognostic tasks. Still, adapting to different domains is essential for histopathology image analysis, as the visual characteristics of tissues can vary significantly across datasets. Yet, acquiring sufficient annotated data in the medical domain is cumbersome and time-consuming. The labeling effort can be significantly reduced by leveraging active learning, which enables the selective annotation of the most informative samples. Our proposed method allows for fine-tuning a pre-trained deep neural network using a small set of labeled data from the target domain, while also actively selecting the most informative samples to label next. We demonstrate that our approach reaches F1-scores similar to those of traditional supervised learning with significantly fewer labeled samples, using barely 59% of the training set. We also investigate the distribution of class balance to establish annotation guidelines. Computational Pathology, Bladder Cancer, Multiscale Segmentation, Active Learning, Domain Adaptation ## I Introduction Computer-aided diagnosis (CAD) systems that utilize machine learning techniques for medical imaging analysis have been shown to be an effective way to reduce subjectivity and speed up the diagnostic process [1]. Digital microscopy scanners are capable of generating high-resolution digital images from scanned tissue sections, also known as Whole Slide Images (WSI). These images are pre-stored at various magnification levels, allowing pathologists to adjust the zoom level as they would with physical microscopes. Lower magnification is typically used to view tissue-level morphology, while higher magnification is useful for examining cell-level features. Bladder cancer is among the most commonly diagnosed cancer types. According to the World Health Organization, over 573,000 new cases and 213,000 deaths were registered in 2020 [2]. WSIs from bladder cancer are highly disorganized scanned tissue sections, for several reasons. Urothelial carcinomas often present papillary structures, elongated finger-like bundles of tissue that alter the normal appearance of the urothelial lining. Also, transurethral resection of a bladder tumour (TURBT) is a complicated operation that hampers a clean tumour extraction, as the cauterization process leaves damaged tissue areas. As a result, a significant amount of artifacts and other non-diagnostically-relevant tissue is present within the slides. Using all the regions of a WSI as input for model training or inference would often add unnecessary noise. Manual annotation of potential regions of interest (ROIs) is an arduous, time-consuming, and labor-intensive task; hence, automatic methods have emerged to reduce this burden. Tissue segmentation in computational pathology (CPATH) enables the analysis of specific ROIs within a WSI and can also improve the performance of the model by actively selecting the most informative tissue types [3, 4]. A multiscale approach for whole-slide image segmentation of five tissue classes in urothelial carcinoma slides [5] proposes a multiscale convolutional neural network (CNN) that can effectively segment five different tissue classes in non-muscle invasive bladder urothelial carcinoma slides.
The model classifies all input areas into blood, damaged, muscle, stroma and urothelium tissue. They demonstrate that their approach outperforms existing methods and is able to handle the large size and variability of WSIs within their private cohort. The presence of a domain shift between images obtained from different laboratories impedes the performance of deep learning models on out-of-distribution samples [6]. Thus, implementing the algorithm on a new dataset may require adaptation. However, the cost of labeling resources is critical, especially in the clinic. When dealing with limited data, it is frequently necessary to resort to more cost-efficient deep learning methodologies, such as active learning (AL). AL is a variant of supervised learning (SL) involving human interaction during training, also referred to as having a human in the loop during training [7]. The goal of an AL setup is to actively intervene during the training procedure, extracting new data points from the classes the model struggles to comprehend and appending them to the training set [8]. Therefore, a human in the loop is expected to spend time labeling data points from the class the model demands. Recent works within the field of CPATH have demonstrated that AL techniques achieve performance similar to SL in constrained data settings [9, 10, 11, 12, 13, 14, 15]. Analogously, AL can enhance the tissue segmentation model from [5] by selecting informative samples from an undetermined dataset with limited annotations. Despite the publication of protocols for critical bias assessment of clinical models, no official guidelines have been established regarding the required numbers of annotations, images, and laboratories to capture the variation present in real-world data [16, 17]. This raises the need for more well-defined class sampling strategies within the field of histopathology. Due to privacy concerns, medical data cohorts often cannot be made publicly available. This leads to limited model predictive generalization, as medical applications are developed for a target dataset. Consequently, domain adaptation of deep learning models to unexplored data domains is needed. In this work, we propose a domain adaptation framework for deep learning models within the field of histopathology. We choose to adapt an algorithm that segments bladder cancer WSIs into different tissue types. The model architecture uses a multiscale CNN backbone that incorporates information from different magnification levels, and it has been developed using another dataset from a different hospital. In order to adapt the model to a new, unseen domain, we adopt an AL strategy for a more efficient labeling effort. We proactively select samples to be included in the training data based on preemptive results on the validation set. We show that the proposed AL approach is more profitable and can be integrated to reduce labeling costs. On top of that, we also aim to guide pathologists on which annotations to provide to deep learning models before investing substantial effort. We estimate a balance between class distribution and model performance, validated using a small initial subset of annotations. By assessing the model's initial performance, pathologists can then proceed with further annotations guided by this intuition.
## II Material and Methods ### _Dataset_ We have collected a set of high-risk non-muscle invasive bladder cancer (HR-NMIBC) WSIs from the first TURBT, from a multi-centre cohort provided by Erasmus MC, Rotterdam, The Netherlands. WSIs were stained with Haematoxylin and Eosin (H&E) and scanned using a 3DHistech P1000 scanner at 80x magnification, stored as MRXS files. The total number of slides is 155, for which a pathologist has annotated the slides with tissue types. Data heterogeneity produces more generalizable models than using higher amounts of data from the same slides [12]. Thus, a pathologist was asked to annotate areas in a rough, imprecise manner in order to obtain several annotated regions per WSI within a time limit. The time usage was limited to a maximum of one hour per WSI, including diagnostic labels not used in this work. As a result, some regions were annotated in every WSI, but not the entirety of the present tissue was annotated. The aim of this annotation protocol was to collect diverse scenarios, hence capturing the tissue heterogeneity characteristic of bladder cancer. Moreover, to avoid incorporating a human in the loop during training, we preemptively collected available tissue type annotations and defined pools of data to draw from. In total, 127, 16, and 12 WSIs were annotated and used for training, validation and test, respectively, where the split is done on the WSI level to avoid cross-contamination. Tiles from annotated regions of urothelium, stroma, muscle, blood and damaged tissue were extracted from the EMC cohort, using the strategy proposed in [5], where more details can be found. In short, tiles were extracted at 2.5x, 10x and 40x magnifications, thus forming a triplet, and a maximum of 500 triplets were extracted per WSI. A tile size of 128\(\times\)128 was used, and all three tiles in a triplet share a common physical point as the center pixel. Therefore, tiles at a lower magnification cover a larger physical area. Fig. 1: Active learning framework TRI-AL. Pathologists annotate ROIs, from which tile triplets are extracted at different magnification levels (2.5x, 10x, 40x). During the training stage, an initial training set is defined for training a multiscale model. The model performance is evaluated on the validation set. Then, a stopping criterion decides whether to resume with another training iteration or conclude the learning. If the criterion is not met, new samples from the pools of data are drawn and appended to the current version of the train set. The total number of tiles across sets is stated in Table I. ### _Model Architecture_ The tissue segmentation model TRI-CNN proposed in [5] is adopted into our pipeline. This model was trained using WSIs of NMIBC patients from Stavanger University Hospital (SUS), Stavanger, Norway. The model architecture consists of a multiscale CNN setup that aggregates local and global information. Triplets are fed through three weight-independent VGG16 backbones, one trained for each of the magnifications, and the output feature vectors are concatenated before classification. Then, the formed feature vector is fed through the classifier to predict a tissue class, using softmax activation. A representation of the model architecture is presented in Fig. 1.
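A minimal sketch of this multiscale fusion, assuming ImageNet-initialized backbones as a stand-in for the original TRI-CNN weights (pooling details and variable names are illustrative, not the original implementation):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class TriCNN(nn.Module):
    """Three weight-independent VGG16 backbones, one per magnification
    (2.5x, 10x, 40x); feature vectors are concatenated and classified
    into the 5 tissue types."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbones = nn.ModuleList(
            [vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features for _ in range(3)])
        self.pool = nn.AdaptiveAvgPool2d(1)      # 128x128 tile -> 512-dim vector
        self.classifier = nn.Linear(3 * 512, n_classes)

    def forward(self, t25, t10, t40):            # one 128x128 tile per scale
        feats = [self.pool(bb(t)).flatten(1)
                 for bb, t in zip(self.backbones, (t25, t10, t40))]
        return self.classifier(torch.cat(feats, dim=1))  # softmax applied in the loss

model = TriCNN()
logits = model(*[torch.randn(2, 3, 128, 128) for _ in range(3)])
print(logits.shape)                              # torch.Size([2, 5])
```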
### _Active Learning Procedure_ Active learning (AL) can be a powerful tool for improving the performance of deep learning models when labeled data is limited [8]. In AL, the model is initially trained on a small labeled dataset. Thereafter, based on intermediate performance metrics and a query strategy for requesting additional samples of the most informative class, a human annotator is asked to label a small number of additional samples. The model then uses this newly labeled data to update its parameters and improve its performance. This process is repeated until the model reaches a satisfactory level of performance, the resources are exhausted, or a set number of iterations has been conducted. One of the main advantages of AL is that it can significantly reduce the amount of labeled data required to train a deep learning model to achieve acceptable performance. Additionally, active learning can also improve the model's performance by allowing it to adapt to changing conditions or concepts over time. However, AL also has some limitations. It requires human intervention, which can be time-consuming or cumbersome to organize, and it may introduce bias into the training process. Overall, the choice between SL and AL will depend on the specific needs of the application and the availability of labeled data. AL strategies, or query strategies, involve analyzing the information value of unlabeled instances. The most commonly used query strategy is uncertainty sampling [7, 18]. This framework revolves around the model's uncertainty in labeling certain instances, relying on probabilistic measures such as entropy and prediction confidence. Predictions on unlabeled instances presenting a high degree of entropy are selected for inclusion in the training data. These are later transferred to a domain expert for labeling. Inspired by these methodologies, but with the goal of domain adaptation, our sampling strategy falls under the uncertainty query strategies. For the AL procedure, we further divided the training set, \(\mathcal{X}\), into an initial training subset \(\mathcal{X}_{0}\) and pools of training data \(\mathcal{P}\). The algorithm selects new samples based on model performance. Considering the false negative ratio (FNR) on the validation set \(\mathcal{V}\) and the total sample size \(S\), \(N_{j}^{i}\) samples from the pools \(\mathcal{P}\) are appended to the current training set, for every class \(i\) at iteration \(j\). Furthermore, we define a relative class balance \(\delta_{i}^{j}\) to represent the percentage of data points for class \(i\) in the current training set \(\mathcal{X}_{j}\). These steps are iteratively repeated until \(\mathcal{P}\) cannot provide more samples of the requested class or \(J\) iterations have passed; see Algorithm 1 for more details. ```
1: procedure ALtrain(\(\mathcal{X},\mathcal{V},J,S\))
2:   \(\mathcal{X}_{0},\mathcal{P}\leftarrow\mathcal{X}\)  \(\triangleright\) Split train set into initial subset and pools
3:   while \(j<J\) and \(len(\mathcal{P}_{i})>N_{j}^{i}\) do  \(\triangleright\) Stop at budget \(J\) or pool exhaustion
4:     \(MODEL_{TRAIN}(\mathcal{X}_{j})\)
5:     \(MODEL_{EVAL}(\mathcal{V})\)
6:     for \(i\gets 1\) to \(I\) do  \(\triangleright\) Per-class new data points
7:       \(N_{j}^{i}=\frac{FNR_{j-1}^{i}\cdot S}{\sum_{i}FNR_{j-1}^{i}}\)
8:       \(\mathcal{X}_{j}^{i}\leftarrow sample(\mathcal{P},N_{j}^{i})\)
9:     end for
10:  end while
11: end procedure
``` **Algorithm 1** Active Learning Train Procedure
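A sketch of the per-class sampling step (lines 6-9 of Algorithm 1); the class names, pool contents, and rounding rule below are illustrative assumptions:

```python
import numpy as np

def sample_step(fnr, pools, S, rng):
    """Draw N_j^i = FNR_i * S / sum_i FNR_i new samples per class i from the
    class pools, capping at the pool size when a pool is nearly exhausted."""
    total = sum(fnr.values())
    drawn = {}
    for c, pool in pools.items():
        n = min(int(round(fnr[c] * S / total)), len(pool))
        idx = rng.choice(len(pool), size=n, replace=False)
        drawn[c] = [pool[i] for i in idx]   # in practice, also remove from the pool
    return drawn

rng = np.random.default_rng(0)
fnr = {"blood": 0.30, "damaged": 0.10, "muscle": 0.05,
       "stroma": 0.45, "urothelium": 0.10}          # validation FNR per class
pools = {c: list(range(50000)) for c in fnr}        # unlabeled triplet ids
print({c: len(v) for c, v in sample_step(fnr, pools, S=20000, rng=rng).items()})
```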
## III Experiments

All experiments are done using transfer learning from TRI-CNN, using the same hyperparameters as in the original algorithm [5]. All models were trained using unfrozen VGG16s, a multi-class cross-entropy loss function, a learning rate of 1.5e-4 and the SGD optimizer. Early stopping was enabled for interrupting the training if there was no improvement in the validation loss for five consecutive epochs. The models were trained five times in order to determine the mean and standard deviation of the results. We evaluate the performance of TRI-CNN on our test set; then, we compare it to the proposed SL and AL strategies using data from Erasmus MC, Rotterdam, The Netherlands. For our experiments, we compare the performance of standard supervised learning (TRI-SL) on different samplings of the training set, in order to gauge the performance at various thresholds of dataset sizes. Based on the entire available training set, we trained models using a fraction of the data, namely from 20 to 100%, in intervals of 20%. As for the AL methods, we define stopping criteria based on resource exhaustion and on a fixed number of iterations. The initial training set \(\mathcal{X}_{0}\) is made of 25000 triplets per class. Every iteration \(j\), \(S=20000\) new samples are added to the training data \(\mathcal{X}_{j}\). We defined a maximum of 5 iterations for training the model TRI-AL\({}_{\text{ITER}}\), while the model TRI-AL\({}_{\text{OOD}}\) refers to the model trained until a class pool runs out of data points. For comparison, we trained a model TRI-AL\({}_{\text{ENT}}\) based on entropy uncertainty. In this case, 30000 triplets from the pools \(\mathcal{P}\) are sampled based on the class distribution \(\delta\) of the entire training set \(\mathcal{X}\). From these, the top 20000 triplets with the highest entropy are selected.

\begin{table} \begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & **Train** & **Val** & **Test** \\ \hline **WSI** & 127 & 16 & 12 \\ \hline **Blood** & 31505 & 2500 & 4106 \\ **Damaged** & 69520 & 2500 & 57296 \\ **Muscle** & 67007 & 2500 & 50347 \\ **Stroma** & 86978 & 2500 & 79338 \\ **Urothelium** & 91006 & 2500 & 38813 \\ \hline **Total** & 346016 & 12500 & 229900 \\ \hline \end{tabular} \end{table} TABLE I: Number of WSIs and tiles per tissue type (class) in the train/val/test split of the available dataset of high-risk non-muscle invasive bladder cancer.

## IV Results & Discussion

Results in terms of F1 scores are presented in Table II. First, we observe that TRI-CNN, although it achieves respectable performance, falls short in comparison to the models trained on the new cohort, thus showing that domain adaptation is needed. TRI-CNN achieved a micro F1 score of 73.06. Regarding the TRI-SL models, the performance improved along with the size of the dataset, the best SL model being the one using the entire training set. Nevertheless, AL strategies achieve higher performance using data more efficiently. The query strategy in TRI-AL\({}_{\text{ITER}}\) reaches a micro average F1 score of 90.34, compared to 90.23 for SL using 100% of the training samples. The AL data accounts for 59% of the available training samples. It is also worth mentioning the disparity in performance for the blood tissue class in regard to other tissue types. One of the reasons refers to regular misclassifications. Most of the blood annotations are those of vessels present within the stroma, so it is only natural that blood can be misinterpreted as stroma. However, the main reason is that the number of blood samples in the test set is heavily limited in comparison to the other classes. Naturally, a small number of blood false positives results in a significant downgrade in terms of metrics.
Focusing on the TRI-AL\({}_{\text{ITER}}\) model performance results, blood has a recall of 86.31 while for stroma it is 86.77, barely a 0.46 difference. The development of the training set class distribution for TRI-AL\({}_{\text{ITER}}\), in comparison to the ones used for TRI-SL, is shown in Fig. 2. Looking at the relative data balance per class, we identify a tendency over the iterations where the damaged, muscle and urothelium class distributions \(\delta\) reach an asymptote. Also, we observe that the algorithm suggests that stroma is significantly harder to learn and requires a more extensive number of data points in comparison to other classes. Even then, the growth of \(\delta\) progressively slows down, and we would expect it to stagnate. Regarding the comparison of the iteration-based TRI-AL\({}_{\text{ITER}}\) and data-exhaustion TRI-AL\({}_{\text{OOD}}\) query strategies, we noticed that allowing the algorithm to add samples until a class pool \(\mathcal{P}_{i}\) is exhausted does not translate into overall better performance, although that might differ if the pool were larger. To further evaluate the performance of our model TRI-AL\({}_{\text{ITER}}\), we compare it to the entropy-based class instance selection strategy TRI-AL\({}_{\text{ENT}}\). Our results indicate that, on average, TRI-AL\({}_{\text{ITER}}\) outperforms the entropy-based approach TRI-AL\({}_{\text{ENT}}\), both in terms of per-class and total aggregated metrics, indicating that it is better at distinguishing between different tissue types. However, it should be noted that we also observed lower variation in the performance of the entropy-based models between runs, as indicated by the standard deviation metrics. This suggests that the entropy-based approach may be more consistent and less susceptible to fluctuations in performance across different training iterations. Nevertheless, it is important to note that the entropy-based query strategy is more computationally expensive, as it requires running inference on the sampled data points to compute their entropies. In contrast, our proposed method does not require any additional computation beyond the training of the model. Pathologists also visually inspected the model's segmentation results by overlaying the predicted masks over the raw WSIs. Illustrative examples of the model's segmentation results for a representative WSI of the test set are shown in Fig. 3 for a ROI and the entire WSI, respectively. Upon visual inspection of the results by experts, according to the TRI-CNN segmentation, we have identified four main observations: staining effects leading to false positives of blood in regions with high levels of eosin stain, non-cauterized damaged areas such as blur or folding, the risk of misinterpreting infiltrative immune cells as urothelial cells, and the potential for the model to predict urothelium with significant cytoplasm as stroma. As for the models trained on the new cohort (TRI-SL, TRI-AL), it was confirmed that these models accurately segmented and classified different tissue types.
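For reference, the entropy-based selection behind the TRI-AL\({}_{\text{ENT}}\) baseline can be sketched as follows (our illustration: `candidates` is assumed to be a NumPy array of pooled triplets, the class-proportional draw of the 30000 candidates is simplified to a uniform random draw, and `model.predict` is a hypothetical stand-in returning softmax outputs):

```
import numpy as np

def entropy_select(model, candidates, n_sampled=30000, n_selected=20000):
    """Score a random draw of candidate triplets by prediction entropy and
    keep the most uncertain ones for the next training iteration."""
    idx = np.random.choice(len(candidates), size=min(n_sampled, len(candidates)),
                           replace=False)
    probs = model.predict(candidates[idx])               # (n, num_classes) softmax
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return idx[np.argsort(entropy)[::-1][:n_selected]]   # highest-entropy triplets
```

The extra forward passes over the candidate set are precisely the additional computation noted above.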
\begin{table} \begin{tabular}{|l|c||c|c|c|c|c|c|c|c|} \hline & TRI-CNN [5] & TRI-SL(20\%) & TRI-SL(40\%) & TRI-SL(60\%) & TRI-SL(80\%) & TRI-SL(100\%) & TRI-AL\({}_{\text{ITER}}\) & TRI-AL\({}_{\text{OOD}}\) & TRI-AL\({}_{\text{ENT}}\) \\ \hline **Blood** & 29.06(-) & 53.05(23.3) & **58.46(2.50)** & 55.21(3.24) & 53.31(3.06) & 56.22(2.34) & 55.36(3.50) & 54.44(2.15) & 51.84(2.94) \\ **Damaged** & 75.65(-) & 89.80(3.99) & 91.82(2.03) & **92.70(2.07)** & 91.55(0.66) & 92.44(3.01) & 92.65(2.02) & 92.12(1.07) & 92.63(0.31) \\ **Muscle** & 71.54(-) & 89.67(0.49) & 89.37(0.56) & 91.21(0.71) & 92.51(0.64) & 92.04(0.70) & **92.61(1.15)** & 92.97(0.15) & 92.11(0.57) \\ **Stroma** & 70.73(-) & 87.59(0.29) & 86.93(0.76) & 88.31(0.92) & 88.03(1.06) & **88.47(0.64)** & 88.40(1.24) & 88.07(0.49) & 87.41(0.58) \\ **Urothelium** & 82.45(-) & 90.48(0.75) & 91.80(1.8) & 92.66(0.30) & 92.20(0.64) & 93.41(0.28) & **93.45(0.46)** & 93.19(0.09) & 93.93(0.27) \\ \hline **Total (micro)** & 73.06(-) & 88.01(0.30) & 88.92(5.04) & 90.08(0.41) & 89.64(0.53) & 90.23(0.30) & **90.34(0.46)** & 89.95(0.23) & 89.69(0.41) \\ **Total (macro)** & 65.89(-) & 82.01(0.63) & 83.70(0.71) & 84.21(0.68) & 85.2(0.78) & **84.52(0.46)** & 84.51(0.97) & 84.04(0.42) & 83.48(0.74) \\ \hline \end{tabular} \end{table} TABLE II: F1 scores per tissue type. We compare the model TRI-CNN, pretrained on another dataset, to supervised learning (TRI-SL) approaches with varying training data sizes, to entropy-based active learning (TRI-AL\({}_{\text{ENT}}\)), and to our proposed active learning models (TRI-AL\({}_{\text{ITER}}\), TRI-AL\({}_{\text{OOD}}\)).

Fig. 2: Relative class size \(\delta_{i}\) per class over the iterations for the TRI-AL\({}_{\text{ITER}}\) model. We observe that the algorithm prioritizes harder classes over simpler ones as the iterations pass. As \(\mathcal{X}_{j}\) increases, \(\delta_{i}\) reaches an asymptote for the damaged, muscle and urothelium tissue classes, while penalizing blood in favor of stroma. The class distribution does not match that annotated by a pathologist, as per the TRI-SL section.

## V Conclusion & Future Work

In this work, we proposed an active learning framework with a multiscale CNN for domain adaptation of a tissue segmentation model of bladder cancer histopathological images. Our proposed method achieved an F1 score of 90.34 using 59% of the training data, and outperformed supervised learning strategies that used all available samples. Our results suggest that active learning can be an effective strategy for reducing the labeling effort in histopathological image analysis. Regarding domain adaptation, we observed that we were able to adapt the model to the new domain using a small labeled set. Moreover, we also presented a suggested per-class annotation budget for tissue segmentation of bladder cancer WSIs. Furthermore, this model can be introduced as a pre-processing step for other applications that require tissue segmentation for ROI extraction.

## Acknowledgment

This research has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement 860627 (CLARIFY).
2308.08473
DataRaceBench V1.4.1 and DataRaceBench-ML V0.1: Benchmark Suites for Data Race Detection
Data races pose a significant threat in multi-threaded parallel applications due to their negative impact on program correctness. DataRaceBench, an open-source benchmark suite, is specifically crafted to assess these data race detection tools in a systematic and measurable manner. Machine learning techniques have recently demonstrated considerable potential in high-performance computing (HPC) program analysis and optimization. However, these techniques require specialized data formats for training and refinement. This paper presents the latest update to DataRaceBench, incorporating new data race contributions from Wu et al. \cite{wu2023model}, and introduces a derived dataset named DataRaceBench-ML (DRB-ML) \cite{drbml}. DRB-ML aligns with the emerging trend of machine learning and large language models. Originating from DataRaceBench, this dataset includes detailed labels that denote the presence of a data race and provides comprehensive details of associated variables, such as variable names, line numbers, and the operation (read/write). Unique to DRB-ML, we have also integrated a series of tailored prompt-response pairs specifically designed for LLM fine-tuning.
Le Chen, Wenhao Wu, Stephen F. Siegel, Pei-Hung Lin, Chunhua Liao
2023-08-16T16:23:13Z
http://arxiv.org/abs/2308.08473v1
# DataRaceBench V1.4.1 and DataRaceBench-ML V0.1: Benchmark Suites for Data Race Detection

###### Abstract.

Data races pose a significant threat in multi-threaded parallel applications due to their negative impact on program correctness. DataRaceBench, an open-source benchmark suite, is specifically crafted to assess these data race detection tools in a systematic and measurable manner. Machine learning techniques have recently demonstrated considerable potential in high-performance computing (HPC) program analysis and optimization. However, these techniques require specialized data formats for training and refinement. This paper presents the latest update to DataRaceBench, incorporating new data race contributions from Wu et al. (Wu et al., 2019), and introduces a derived dataset named DataRaceBench-ML (DRB-ML) (Li et al., 2019). DRB-ML aligns with the emerging trend of machine learning and large language models. Originating from DataRaceBench, this dataset includes detailed labels that denote the presence of a data race and provides comprehensive details of associated variables, such as variable names, line numbers, and the operation (read/write). Unique to DRB-ML, we have also integrated a series of tailored prompt-response pairs specifically designed for LLM fine-tuning.

Keywords: Data race, Large Language Model, Machine Learning

In the following sections, we detail these inclusions and their implications in the pursuit of optimized data race detection.

### Origins of Newly Added Microbenchmarks

Wu et al. (Wu et al., 2019) proposed a straightforward model-checking technique that validates that a program is free of data races. To evaluate the efficacy of the proposed verifier, they employed DataRaceBench in conjunction with several supplemental examples. We selected twenty C-language programs from these supplemental instances, processed them thoroughly, and ultimately incorporated them into DataRaceBench.

### Data Processing

Adhering to the design principles of DataRaceBench, we thoroughly processed the newly incorporated microbenchmarks. Our processing steps encompassed the following:

* We examined the length of each microbenchmark, ensuring it is as small as possible while still representing typical data race detection cases. The twenty new microbenchmarks did not significantly alter the overall size of the DataRaceBench suite. The average size of microbenchmarks in DRB v1.4.1, measured by string length, increased from 29.4 to 30.9 compared to DRB v1.4.0.
* We confirmed that each program was self-contained and included a main function, thereby ensuring its independence and usability for both static and dynamic analysis tools.
* We categorized each program into either the race-yes or race-no set based on its characteristics.
* For programs categorized in the race-yes set, we identified and marked the pair of source locations responsible for causing data races.
* Finally, we assigned appropriate data race property labels to the collected microbenchmarks following the DRB design. These labels are used to categorize different types of code patterns, such as unresolvable dependencies, missing data sharing clauses, missing synchronizations, and so on.

### New Microbenchmark Example

Listing 1 shows an example of the new microbenchmarks (DRB193) in DRB v1.4.1. Following the processing steps discussed in Section 2.2, we transform the data race-related code into a DRB microbenchmark.
DRB193 is a positive benchmark with a known data race pair due to the different critical section names used.

```
/*
 * Copyright (c) 2017-20, Lawrence Livermore National Security, LLC
 * and DataRaceBench project contributors.
 * See the DataRaceBench/COPYRIGHT file for details.
 *
 * SPDX-License-Identifier: (BSD-3-Clause)
 */

/*
 * This is a program based on a dataset contributed by
 * Wenhao Wu and Stephen F. Siegel @ Univ. of Delaware.
 *
 * Race due to different critical section names.
 * Data race pair: x@26:7:W vs. x@43:7:W
 */
#include <stdio.h>
int main()
{
  int x = 0, s = 0;
#pragma omp parallel sections shared(x, s) num_threads(2)
  {
#pragma omp section
    {
      x = 1;
#pragma omp critical(A)
      {
        s = 1;
      }
    }
#pragma omp section
    {
      int done = 0;
      while (!done) {
#pragma omp critical(B)
        {
          if (s)
            done = 1;
        }
      }
      x = 2; /* second write to x: races with the write in the first section */
    }
  }
  printf("%d\n", x);
}
```
Listing 1: DRB193-critical-section3-yes.c

## 3. DataRaceBench-ML V0.1

DataRaceBench has been proven to be a valuable resource for data race study. However, a gap exists between its current offerings and the specific needs of machine learning approaches designed for data race detection. To bridge this gap, we introduce DataRaceBench-ML (or DRB-ML) V0.1 in this section. DRB-ML is a derivative of DRB specifically crafted to support machine learning methodologies in the realm of data race detection.

### Label Extraction

The efficacy of machine learning models, particularly supervised learning, hinges substantially on the quality and detail of the labels associated with the training data. In the context of data race detection, labels convey critical information, such as whether a data race exists within a given code segment and, if so, the precise location and nature of the race. A dataset with diverse labels can cater to many tasks for machine learning models, enhancing their training and ultimate performance. To facilitate the application of machine learning to DataRaceBench, we executed a comprehensive label extraction process. Figure 1 shows the information obtained from DRB to generate labels in DRB-ML. Each data entry in DRB-ML is formatted in JSON and includes the following keys:

* "ID": A unique index number starting from 1. It preserves the original ordering in DRB and is stored as an integer.
* "name": The original filename of the DRB file. This is directly copied from the DRB filenames and stored as a string.
* "DRB_code": The original code present in DRB microbenchmarks. It is copied verbatim from the DRB benchmarks and stored as a string.
* "trimmed_code": The DRB_code with all comments removed. We employed a script to process each microbenchmark in DRB and store the comment-free code as a string.
* "code_len": An integer value representing the string length of the trimmed code. This label aids in thresholding data to match the input size constraints of various machine learning models, ensuring compatibility and adaptability.
* "data_race": This is a boolean value representing the data race condition in DRB. The presence of a data race is indicated by 1, and its absence by 0.
* "data_race_label": This label indicates the type of data race condition (race-yes or race-no) that DRB marks. We used DRB metadata to generate this label and stored it as a string variable.
* "var_pairs": The variable pairs involved in a data race, if any. Each pair records the following fields:
  - "name": Variable names stored as a string variable.
  - "line": The line number indicating the variable's location, stored as an integer variable.
  - "col": The column number showing the variable's location, stored as an integer variable.
  - "operation": The operation performed on the variable, stored as a string variable. The value is either "w" (representing a write operation) or "r" (representing a read operation).
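As an informal illustration of the extraction, the sketch below (ours: the naive comment stripper, the file path and the invocation are assumptions, not the paper's actual script) assembles one DRB-ML record with these keys:

```
import json
import re

def strip_c_comments(code):
    """Naive removal of /* ... */ and // comments from C source."""
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)
    return re.sub(r"//[^\n]*", "", code)

def build_entry(idx, filename, code, race_label, var_pairs):
    trimmed = strip_c_comments(code)
    return {
        "ID": idx,
        "name": filename,
        "DRB_code": code,
        "trimmed_code": trimmed,
        "code_len": len(trimmed),
        "data_race": 1 if var_pairs else 0,
        "data_race_label": race_label,
        "var_pairs": var_pairs,
    }

pair0 = {"name": ["x", "x"], "line": [9, 26], "col": [7, 7], "operation": ["w", "w"]}
entry = build_entry(193, "DRB193-critical-section3-yes.c",
                    open("DRB193-critical-section3-yes.c").read(), "y3",
                    {"pair0": pair0})
print(json.dumps(entry, indent=2))
```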
We automated the extraction process to ensure consistency and scalability. Leveraging a custom-developed script, we systematically traversed each microbenchmark in DRB, extracting the pertinent information and transforming it into an effective format for machine learning. This process yielded a comprehensive and richly labeled dataset, making it a valuable resource for developing and evaluating machine learning models focused on data race analysis.

### Prompt-Response Pairs For Supervised Fine-Tuning LLMs

In the context of fine-tuning Large Language Models (LLMs), using prompt-response pairs plays a pivotal role. These pairs provide the model with clear instances of a problem (the prompt) and its desired solution (the response), essentially guiding the model's learning trajectory toward understanding and generating the expected outputs. In our work, we curated a unique set of prompt-response pairs specifically tailored for data race detection tasks. The "prompt" encapsulates the code segment under consideration, while the "response" comprises the corresponding label information obtained during our label extraction process. This strategy taps into the enormous capacity of LLMs to comprehend intricate text patterns, linking these patterns to labels that signify the presence or absence of a data race. We systematically produced these pairs using an automated procedure, ensuring extensive coverage of potential data race scenarios and providing a comprehensive learning experience for the LLM. These prompt-response pairs constitute a crucial component of the DataRaceBench-ML dataset, enhancing its effectiveness for fine-tuning LLMs and bolstering its relevance in real-world data race detection tasks. Incorporating such pairs in our dataset underscores the potential of LLMs to boost the performance of traditional data race detection tools, paving the way for a new era of more efficient, robust, and scalable solutions.

### DRB-ML Example

Listing 2 shows an example in DRB-ML derived from the microbenchmark in Listing 1. We omit the code content for readability. It is worth mentioning that the "line" value in DRB-ML is based on the code without comments.

```
{
  "ID": 193,
  "name": "DRB193-critical-section3-yes.c",
  "DRB_code": "...",
  "trimmed_code": "...",
  "code_len": 425,
  "data_race": 1,
  "data_race_label": "y3",
  "var_pairs": {
    "pair0": {
      "name": ["x", "x"],
      "line": [9, 26],
      "col": [7, 7],
      "operation": ["w", "w"]
    }
  }
}
```

Listing 3 presents a pair of prompts and responses for the DRB-ML labels in Listing 2. Again, we omit the code content for readability.

Figure 1: Transformation Process from DRB to DRB-ML Dataset: 1. Data Race Status Extraction; 2. Retrieval of Data Race Variables and Lines; 3. DRB Code Comment Trimming; 4. DRB-ML Template Integration.
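A rough sketch (ours) of how such pairs could be emitted from a DRB-ML entry is shown below; the prompt and response wording is illustrative, not the paper's exact template:

```
def to_prompt_response(entry):
    """Turn one DRB-ML entry into an LLM fine-tuning pair."""
    prompt = ("Does the following OpenMP program contain a data race? If so, "
              "report each access of the racing pair as name@line:col:operation.\n\n"
              + entry["trimmed_code"])
    if entry["data_race"]:
        accesses = ", ".join(
            f'{p["name"][k]}@{p["line"][k]}:{p["col"][k]}:{p["operation"][k].upper()}'
            for p in entry["var_pairs"].values() for k in range(2))
        response = "Yes, there is a data race between " + accesses + "."
    else:
        response = "No, this program is free of data races."
    return {"prompt": prompt, "response": response}
```

Applied to the entry in Listing 2, the response would read "Yes, there is a data race between x@9:7:W, x@26:7:W."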
## 4. Conclusion

This paper introduced two datasets: the latest version of DataRaceBench and the new DataRaceBench-ML (DRB-ML). While DataRaceBench is designed to evaluate traditional data race detection tools, DRB-ML is a unique dataset engineered to evaluate and enhance Large Language Models (LLMs) applied to data race detection. As the intersection of high-performance computing and machine learning expands, DRB-ML offers a beacon for future research, promising improved precision and real-world applicability in code analysis.

###### Acknowledgements.

Prepared by LLNL under Contract DE-AC52-07NA27344 (LLNL-PROC-853240) and supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research.
2303.12759
Using word embeddings to analyse audience effects and individual differences in parenting Subreddits
Human beings adapt their language to the audience they interact with. To study the impact of audience and gender in a natural setting, we choose a domain where gender plays a particularly salient role: parenting. We collect posts from the three popular parenting Subreddits (i.e., topical communities on Reddit) r/Daddit, r/Mommit, and r/Parenting. These three Subreddits gather different audiences, respectively, self-identifying as fathers and mothers (ostensibly single-gender), and parents (explicitly mixed-gender). By selecting a sample of users who have published on both a single-gender and a mixed-gender Subreddit, we are able to explore both audience and gender effects. We analyse posts with word embeddings by adding the username as a token in the corpus. This way, we are able to compare user-tokens to word-tokens and measure their similarity. We also investigate individual differences in this context by comparing users who exhibit significant changes in their behaviour (high self-monitors) with those who show less variation (low self-monitors). Results show that r/Parenting users generally discuss a great diversity of topics while fathers focus more on advising others on educational and family matters. Mothers in r/Mommit distinguish themselves from other groups by primarily discussing topics such as medical care, sleep and potty training, and food. Both mothers and fathers celebrate parenting events and describe or comment on the physical appearance of their children with a single-gender audience. In terms of individual differences, we find that, especially on r/Parenting, high self-monitors tend to conform more to the norms of the Subreddit by discussing more of the topics associated with the Subreddit. In conclusion, this study shows how mothers and fathers express different concerns and change their behaviour for different group-based audiences.
Melody Sepahpour-Fard, Michael Quayle, Maria Schuld, Taha Yasseri
2023-03-22T17:10:26Z
http://arxiv.org/abs/2303.12759v3
# How does the audience affect the way we express our gender roles?

###### Abstract

In this paper, we explore the language of gender-specific groups of "mothers" and "fathers" in comparison to the same individuals interacting just as "parents." Human beings adapt their language to the audience they interact with. This may be for self-presentation reasons (to influence how they are seen by others) or because being an authentic member of a group requires alignment with its norms and culture. While audience effects have been studied theoretically and in small-scale research designs, large-scale studies of naturally-occurring audience effects are rare. To study the impact of audience and gender in a natural setting, we choose a domain where gender plays a particularly salient role: parenting. We look at audience effects in interaction with gendered contexts emphasizing different social identities (i.e., mother, father, and parent). As a natural setting, we choose to study interactions on Reddit. We collect posts from the three popular parenting subreddits (i.e., topical communities on Reddit) r/Daddit, r/Mommit, and r/Parenting. These three parenting subreddits gather different audiences, respectively, self-identifying as fathers and mothers (ostensibly single-gender), and parents (explicitly mixed-gender). By selecting a sample of users who have published on both a single-gender and a mixed-gender subreddit, we are able to explore both audience and gender effects. We analyse posts with word embeddings by adding the user as a token in the corpus. This way, we are able to compare user-tokens to word-tokens and measure their similarity. Results show that mothers and fathers, when interacting in a mixed-gender context, behave similarly and discuss a great diversity of topics, while focusing more on advising each other on educational and family matters. In single-gender subreddits, mothers and fathers are more focused on specific topics. Mothers in r/Mommit distinguish themselves from other groups by primarily discussing topics such as medical care, sleep and potty training, and food. Both mothers and fathers celebrate parenting events and describe or comment on the physical appearance of their children with a single-gender audience. In conclusion, this study shows how mothers and fathers express different concerns and change their behaviour for different group-based audiences. It also advocates for the use of Reddit and word embeddings to better understand the dynamics of audience and gender in a natural setting.

Keywords: gender stereotypes; parenting; audience effects; word embeddings; user embeddings; Reddit; natural language processing; computational social science

## Content

Individuals identify with multiple social groups, but only certain social identities are contextually salient. The way we behave in a given context depends on the audience present. The effect of an audience on speakers' behaviour has been extensively documented since Goffman's concept of presentation of self or identity management [1]. He uses the metaphor of a theatre with a backstage and a front stage, arguing that individuals are like actors when interacting, performing to present a desirable image of themselves [1]. Individuals are very cautious about conforming to norms on the front stage, whereas they prepare their presentation backstage [1].
The concept of social identity performance [2] shows how the audience acts as a contextual identity cue, making a social identity salient and therefore making a set of behaviours relevant and appropriate [2]. These behaviours would position individuals in relation to the social identity in question in a way that allows them to achieve their social objectives [2]. In addition to influencing individual behaviour, the audience's feedback shapes the definition of social identity too [2]. In contrast to the concept of self-presentation, which focuses on being favourably seen as a personal goal, social identity performance considers the situations where the actor defines themselves as a group member and performs as such when they think they are visible to an audience [2]. The present work studies audience effects in parenting communities on Reddit. We use a novel method [3] of word embedding using the agent as a token in the text corpus. This method allows us not only to locate agents from the way they talk but also to compare them to meaningful word tokens. We use this method to focus on individuals and the influence gender and audience play in their interactions. To do so, we analyse a dataset of Reddit posts on the three popular parenting subreddits r/Parenting, r/Daddit, and r/Mommit. We select a sample of Reddit users who have published on these subreddits and study how the audience, either single-gender or mixed-gender, can influence the expression of concerns related to parenting. Given the salience of gender stereotypes in the parenting domain [4, 5, 6], the gender configuration of the audience can push users towards specific behaviours, in conformity or not with social expectations regarding gender norms [6]. The salience of social identities related to gender can facilitate gender-stereotypical behaviours [7, 8, 9, 10]. According to gender stereotypes, women have communal characteristics (e.g., care and selflessness) while men have agentic characteristics (e.g., motivation to master and assertiveness) [11]. In practice, these stereotypes are reflected in gender roles as standards of 'good' behaviour for men and women [12]. These roles are associated with fields of expected expertise such as work for men and family for women [10]. For instance, although fathers want to be more involved in caring [13], they are positioned as "helpers" rather than as primary caregivers [14, 15]. Audience and gender stereotypes are therefore frameworks that we will use to understand behaviours. Do mothers and fathers adapt their expression of concerns to the audience they interact with? On which topics specifically? Is one gender more likely to change behaviour? These are questions we will address in the present study. Reddit has several very popular parenting communities that make it possible to study these phenomena in a natural setting and at a large scale. We will use Reddit as a means through which we can observe how a change in audience can influence behaviour. Switching between different topical communities (i.e., subreddits) and potentially social identities, with either a single- or mixed-gender audience, mothers and fathers may adapt their behaviour accordingly. On Reddit, users' interactions on parenting have been described through several studies. Using a quantitative approach, Ammari et al. [16] looked at the three popular parenting subreddits r/Parenting, r/Mommit, and r/Daddit with Latent Dirichlet Allocation topic modelling [17].
They collected all the comments published on these subreddits between 2008 and 2016, from which they were able to identify 47 topics. The authors found that users on r/Parenting talk more about discipline and teen talk. r/Mommit users most frequently discussed topics such as sleep training, breastfeeding, milestones, child weight gain, pregnancy recovery, and housework. On r/Daddit, the most frequent topics were congratulations, Neonatal Intensive Care Unit (NICU) experiences, legal questions for custody battles, and Halloween costumes. Therefore, mothers discussed more issues related to health and raising children, whereas fathers' discussions focused more on events such as birth, Halloween, or divorce. Using qualitative grounded theory, Feldman [18] analyzed posts published on r/Daddit and r/Mommit. They found that while fathers merge work and family and consider work as part of family duties, mothers see work and family as two distinct domains and emphasise the double burden. Mothers' posts are also more centred around _how_ to care for a child, whereas fathers' are about _whether_ they care [18]. Additionally, in family relationships, fathers seek advice about how to provide support for the mother, but mothers focus on a dyadic mother-child relationship [18]. Sepahpour-Fard and Quayle [19] showed that the audience might play a role in the interactions parents have with each other. They selected a set of Reddit users who had published on both a single-gender subreddit (r/Mommit or r/Daddit) and a mixed-gender subreddit (r/Parenting) and looked at how users change their expression of concerns depending on the audience. They assumed, based on the subreddits' descriptions, that r/Mommit is for mothers, r/Daddit for fathers, and r/Parenting for parents. Using Latent Dirichlet Allocation Topic Modelling [17], they measured the prevalence of topics in comments and averaged the prevalence by gender and audience to isolate potential effects. Their results showed that mothers generally express more concerns about children's basic needs (sleep, food, and medical care) than fathers. Interestingly, they express those concerns more between themselves (i.e., in r/Mommit) than with fathers (i.e., in r/Parenting). In contrast, fathers focus on giving educational advice in r/Parenting while celebrating events (e.g., pregnancies and births) and sharing pictures with other fathers in r/Daddit. In addition to showing the conformity of Reddit parents with gender roles regarding parenting, this study adds a contextual approach which allows a better understanding of how users behave with different audiences. However, results are aggregated by groups of users and therefore do not show what happens at the individual level. For example, users who publish a lot will have a larger weight in the model, and results might reflect what they are concerned with, rather than what all parents express. In the present study, instead of looking at averages of posts as in previous studies, we use word embeddings (see Methods) and add the user as a token in the text to locate each user from the way they speak [3]. This way, we can give each user the same weight and analyse the expression of concerns at the individual level in parenting subreddits. From this embedding where users are located, forming a speaker-landscape [3], we can then see how focused users are (i.e., discussing only specific topics), how users change behaviour in front of different audiences, and how close they are to a selection of topical keywords.
From comments published in 2020 on the three most popular parenting subreddits (i.e., r/Daddit, r/Mommit, and r/Parenting), we selected only users who have published on both a single-gender subreddit and a mixed-gender subreddit, identifying mothers (i.e., users who publish on r/Mommit and r/Parenting) and fathers (users who publish on r/Daddit and r/Parenting). Their gender is inferred based on the subreddits' descriptions (see Methods for more detail). This setting allows us to capture both gender differences and audience effects. We can see if mothers and fathers differently adjust their social identity performance when interacting with different audiences and if the location of users in the embedding shows significant insight regarding the change of behaviour.

## Results

### Speaker-landscape

We use the term 'speaker-landscape' to refer to the embedding created by users from their linguistic use. We begin by visualising the vector representation of Reddit users and qualitatively study the structure of the landscape. In Figure 1, we can see the two-dimensional representation of users coloured by their self-identified status as mothers or fathers, according to the subreddit on which they posted. Each user has two data points representing them, one when interacting with a single-gender audience (posting on r/Mommit or r/Daddit) and one when interacting with a mixed-gender audience (posting on r/Parenting). Fathers on r/Daddit are clustered at the right of the landscape. In contrast, mothers on r/Mommit are mostly on the bottom half of the landscape and are more spread out. In the presence of a mixed-gender audience (r/Parenting), both fathers and mothers seem to discuss a wide range of topics, as their data points are spread out around the landscape.

Figure 1: Speaker-landscape of mothers and fathers interacting with different audiences. Each dot represents an author posting on one subreddit (either r/Daddit, r/Mommit, or r/Parenting).

To quantify how spatially concentrated or dispersed each group is in the embedding space, we measure the average cosine similarity between the vector representations of each pair of users in a group (mothers on r/Mommit, mothers on r/Parenting, fathers on r/Daddit, and fathers on r/Parenting). We then measure the differences between the concentration of groups using t-tests and Kolmogorov-Smirnov tests (see Methods for more detail). Results show that all the differences between the different groups are significant (see Additional Files, Table 2). Fathers, when interacting with other fathers, are the most concentrated group, meaning they tend to gather more around specific topics than other groups. Mothers in r/Mommit are also more concentrated than users on r/Parenting. When in a mixed-gender context on r/Parenting, parents interact about a greater set of topics than when in single-gender contexts and are therefore less spatially concentrated. Looking at mothers and fathers within the context of r/Parenting, mothers are the most spatially spread group. These results confirm the qualitative analysis of the speaker-landscape, with fathers in r/Daddit being the most concentrated group, followed by mothers in r/Mommit, fathers in r/Parenting, and mothers in r/Parenting (Figure 1).
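The distributional comparisons reported here follow a standard recipe; a minimal sketch (ours, assuming the 200-dimensional user vectors are stacked into NumPy arrays per group):

```
import numpy as np
from scipy import stats
from sklearn.metrics.pairwise import cosine_similarity

def within_group_similarities(vectors):
    """All pairwise cosine similarities among one group's user vectors."""
    sim = cosine_similarity(vectors)
    iu = np.triu_indices_from(sim, k=1)   # upper triangle, no self-pairs
    return sim[iu]

# e.g., daddit_vecs and mommit_vecs are (n_users, 200) arrays of user embeddings
# sims_d = within_group_similarities(daddit_vecs)
# sims_m = within_group_similarities(mommit_vecs)
# print(stats.ttest_ind(sims_d, sims_m, equal_var=False))
# print(stats.ks_2samp(sims_d, sims_m))
```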
Figure 2 shows the probability density function of the cosine similarity. In addition to all differences appearing to be significant (see Additional Files, Table 2), we can see two distinct types of behaviour among both mothers and fathers: they are more focused on specific topics of discussion when interacting with a single-gender audience, while they diversify topics when gathered in a mixed-gender environment.

### Similarity between two versions of the same author

Are there gender differences between mothers and fathers? That is, does one gender change their behaviour more when switching from one subreddit to the other? We use the cosine similarity between two vectors (associated with the two data points representing each user in the two subreddits) to measure the extent to which users change their discourse when interacting with different audiences (Additional Files, Figure 5). The Kolmogorov-Smirnov test comparing the two distributions shows that the difference between mothers and fathers is not significant. Therefore, there is a similar tendency for mothers and fathers to be more focused on specific topics within gender-specified contexts, but more diverse when gathered in the more pluralistic r/Parenting subreddit.

### Topics of discussion

Previous work by Sepahpour-Fard and Quayle [19] found, using Latent Dirichlet Allocation Topic Modelling, twelve topics in users' discourse on r/Mommit, r/Daddit, and r/Parenting: _Food_, _Sleep training_, _Medical care_, _Change/Potty training_, _Birth/Pregnancy_, _Thank you/Appreciation_, _Physical appearance/Picture_, _Education/Family advice_, _Work/Raise children_, _School/Teaching_, _Leisure activities_, and _Furniture/Design_. Each topic was associated with ten keywords representing it (more details available in [19]). To validate these keywords and topics, we first visualise the topic modelling keywords found in this study [19] (Figure 3). The keywords are coloured by the topic they are most associated with. As expected, and validating our speaker-landscape, each topic's keywords are clustered together in the embedding.

Figure 3: Topic modelling keywords. Each colour represents a topic found through LDA topic modelling on the same dataset.

Figure 2: Distribution of similarity between users within each group of gender and audience. The higher the cosine similarity value, the more similar agents are to each other.

To identify which topics are the most discussed by each group of users, we measure the distance between topics and users interacting in the different subreddits by first calculating the centroid of each topic and the centroid of each group, and then measuring the cosine similarity between the centroids of topics and groups. Figure 4 shows the cosine similarity between the centroids of each set of users and the centroids of each set of keywords in a topic. Results show that for some of the topics, such as _Education/Family advice_, _School/Teaching_ and _Physical appearance/Picture_, there seems to be a clear distinction between users' behaviour in single-gender subreddits and mixed-gender subreddits. Users discuss _Education/Family advice_, _Work/Raise children_, and _School/Teaching_ more in r/Parenting than in r/Daddit or r/Mommit, while the topic _Physical appearance/Picture_ is the most prevalent among parents interacting in single-gender subreddits. Several topics are discussed the most by mothers on r/Mommit: _Birth/Pregnancy_, _Change/Potty training_, _Food_, and _Sleep training_. The topics _Medical care_ and _Furniture/Design_ seem to distinguish mothers from fathers, regardless of the audience.
## Discussion

We explored the effects of audience and gender on interactions in parenting subreddits. We validated the speaker-landscape as a method for the study of individuals, groups, and group interactions. The results are consistent with previous results, which analyzed the same dataset using topic modelling [19], and therefore gave us additional confidence about the position of users. However, going beyond topic modelling results, the speaker-landscape approach could capture how individual users change their behaviour when switching subreddits without having to rely on averages biased towards those who post a lot.

Figure 4: Cosine similarity between the centroids of each group of users and the centroids of each topic's keywords.

Consistent with previous work on the effect of an audience on behaviour [1, 2, 9], we found that both mothers and fathers change their topics of discussion when interacting with different audiences. We looked at the differences between groups (i.e., mothers on r/Mommit, fathers on r/Daddit, mothers on r/Parenting, and fathers on r/Parenting), and saw two clusters of behaviour appearing, with users in a single-gender subreddit being more focused (i.e., a higher cosine similarity value) than users in a mixed-gender subreddit. Results also showed that parents, when interacting in a mixed-gender subreddit, were more diverse linguistically. In contrast, in a single-gender subreddit, mothers and fathers were more clustered in the landscape, which indicates fewer topics of discussion. Previous research [20] showed that teenage girls and boys (aged 13 to 20) tend to converge to more similar behaviour when in a mixed-gender group, with girls and boys adopting less gender-specific writing styles. We observe the same behaviour, with mothers and fathers tending to be less 'specialized' in specific topics (more spread in the landscape) when gathered in a mixed-gender subreddit. This phenomenon is also explained by the Communication Accommodation Theory [21], which describes how individuals adjust their language to each other to get approval, communicate more efficiently, and maintain a positive social identity. The latter might explain why users differ more when in mixed-gender groups than when gathered in a single-gender group. However, at an even more abstract level, groups and social identities are founded in language; and language is a primary way for people to position themselves as group members (especially in online contexts where it is the only medium for social interaction and group definition). Moreover, for mothers specifically, we see that the topics they discuss the most overall (_Food_, _Sleep training_, _Change/Potty training_, and _Physical appearance/Picture_) are topics that they discuss less in a mixed-gender setting than in a single-gender setting. These are topics particularly associated with the gender role of mothers [10], as they are about the basic needs of children and the practical aspects of raising a child. By discussing them the most, they position themselves as more concerned with these areas. In a single-gender subreddit, fathers discussed more topics related to announcing births and pregnancies, appreciation, sharing pictures, and commenting on physical appearance. In a mixed-gender environment, parents interacted more about general educational or family advice.
Previous studies on the same parenting subreddits [16, 18, 19] found similar results, with users discussing more sleep training, breastfeeding, and potty training on r/Mommit, congratulations on r/Daddit, and discipline on r/Parenting. Although it allows a deep analysis of parents' interactions on Reddit, our analysis has some limitations related to the method used. Future work should focus on untangling the potential biases caused by the method. As word embeddings are initially made for representing words as vectors and highlighting relationships between words, adding a non-word token (i.e., the user token) might misrepresent other word tokens. For instance, an important part of the word tokens is at the periphery of the landscape. This might be explained by the fact that the ngrams surrounding user tokens might be more diverse than those surrounding other word tokens, therefore making user tokens more central in the landscape. In other words, while the word "bed" will most of the time be surrounded by words such as "night" or "sleep", a user token can at one time be surrounded by words such as "night" or "sleep" when talking about sleep training and by "school" and "teaching" when discussing education. More generally, the speaker-landscape distorts language by adding a non-word token in the text. The consequences of the latter should be studied to prevent biases in the representations of users and words. In our work, however, the speaker-landscape seems to make intuitive sense and is validated by the clustered topics' keywords in the landscape. Additionally, although we improved on previous research by looking at individual differences and giving the same weight to each user, we kept looking at the global structure and averaged measures for the different groups. However, there might be some differences within the groups that should be more carefully analysed; e.g., there seems to be a cluster of fathers in r/Daddit gathered in a corner of the landscape. Future research should look closer at the landscape and analyse the dynamics within each group. Considering audience effects, there might be some users with large differences in behaviour between subreddits, but some others not changing much at all. Using word embeddings, we represented users spatially to analyse audience effects in different parenting subreddits. Our results show that mothers and fathers are specialized in specific topics when in r/Mommit and r/Daddit but converge to more similar topics when gathered in r/Parenting. In a single-gender subreddit, mothers preferentially discuss matters regarding the basic needs of children (food, sleep, health) among themselves, while fathers interact the most about physical appearance and photos of children. Our study demonstrates how Reddit can be used as a means through which researchers can study audience effects and behaviours at a large scale while taking into account individual behaviour.

## Methods

### Data

We collect comments published in 2020 on r/Daddit, r/Mommit, and r/Parenting, three communities on Reddit. Reddit is an American content aggregation platform. Users from any country can participate, although English-speaking countries are largely over-represented [22, 23]. Pseudonymous users publish content (e.g., links, text, images, or videos) on topical communities or 'subreddits', which are defined by descriptions and community rules, and governed by volunteer moderators. Subreddits can gather users around a large variety of topics such as politics, science, or video games.
The broad theme of parenting has three popular subreddits: r/Parenting with 4,431,185 members, r/Mommit with 538,017 members, and r/Daddit with 389,765 members as of June 2022. r/Parenting is a mixed-gender subreddit. It was created in April 2008, and its full name is "Reddit Parenting - For those with kids of any age!". It is described as "the place to discuss the ins and outs as well as ups and downs of child-rearing. From the early stages of pregnancy to when your teenagers are finally ready to leave the nest (even if they don't want to), we're here to help you through this crazy thing called parenting. You can get advice on potty training, talk about breastfeeding, discuss how to get your baby to sleep or ask if that one weird thing your kid does is normal." r/Mommit is a mother-centric single-gender subreddit. It was created in September 2010, and its full name is "Mommit - Come for the support, stay for the details.". It is described as "We are people. Mucking through the ickier parts of child-raising. It may not always be pretty, fun and awesome, but we do it. And we want to be here for others who are going through the same experiences and offer a helping hand." r/Mommit explicitly indicates that posters should be mothers. r/Daddit is a father-centric single-gender subreddit. It was created in August 2010. It is described as "This is a subreddit for Dads. Single Dads, new Dads, Step-Dads, tall Dads, short Dads, and any other kind of Dad. If you've got kids in your life that you love and provide for, come join us as we discuss everything from birth announcements to code browns in the shower.". Similarly to r/Mommit, r/Daddit is explicitly described as being exclusively for fathers. We collected comments using the Pushshift Reddit API [24]. Pushshift is a platform collecting social media data in real time. Then, we selected only users who have published on both a single-gender subreddit (r/Mommit or r/Daddit) and a mixed-gender subreddit (r/Parenting), on the assumption that posting on "mommit" or "daddit" indicates identification as a mother or father, in accordance with the subreddit descriptions. Before further preprocessing, our dataset contains 8,361 unique Reddit users who published 194,497 comments.

Footnote 1: Code in Python by Mikolaj Biesaga, The Robert Zajonc Institute for Social Studies, University of Warsaw. GitHub repository: [https://github.com/MikoBie/reddit](https://github.com/MikoBie/reddit)

#### Preprocessing

We first remove from the dataset 502 authors who have published comments on both r/Mommit and r/Daddit, as we cannot know whether they primarily identify as "mother" or "father". We remove comments written by the auto-moderator and deleted comments. We tokenize the documents, create ngrams, and remove a list of stop words, i.e., words occurring in abundance but providing little or no information about the content of text documents, using Scikit-learn [25], to which we add "ai", "im", "m", "s", "ve", "w", "d", "ive", "id", "itll", "shes", "hes", "theyre", "youre", "dont", "got", "havent", "didn". We finally lemmatize the text tokens and remove comments that became empty because of the cleaning process. The dataset now includes 148,304 posts, with 24,035 posts on r/Mommit, 16,388 posts on r/Daddit, 78,450 posts on r/Parenting by mothers, and 29,431 posts on r/Parenting by fathers.
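A minimal sketch of this preprocessing (ours; the paper specifies Scikit-learn's stop-word list but not the lemmatizer, so WordNet is an assumption here, and the n-gram construction is omitted):

```
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from nltk.stem import WordNetLemmatizer   # assumes nltk wordnet data is installed

EXTRA_STOPS = {"ai", "im", "m", "s", "ve", "w", "d", "ive", "id", "itll",
               "shes", "hes", "theyre", "youre", "dont", "got", "havent", "didn"}
STOPS = set(ENGLISH_STOP_WORDS) | EXTRA_STOPS
lemmatizer = WordNetLemmatizer()

def preprocess(comment):
    """Tokenize a comment, drop stop words, and lemmatize the rest."""
    tokens = [t for t in comment.lower().split() if t.isalpha() and t not in STOPS]
    return [lemmatizer.lemmatize(t) for t in tokens]
```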
Before running the word embedding algorithm, we added the user as a token in the text. The position of the agent token was defined randomly, i.e., for each comment, a random number is chosen within the range of its length (i.e., number of words) and the user token is added at that position. We believe this position to be the closest to the natural behaviour of other tokens and therefore to minimize the impact of adding a non-word token in the text.

### Word embeddings

#### Definition

A word embedding is a Natural Language Processing technique representing words as vectors. Words which share common linguistic characteristics (e.g., semantics and syntax) are located close to each other spatially. In other words, if two words having the same context (i.e., the same surrounding words) are passed as inputs, the outputs for these two words should be similar. To create the embedding, we use Word2Vec in the Gensim library [26]. Word2Vec is one of the algorithms which produce vectors for words in textual documents. It can be used through two models: the Continuous Bag-Of-Words model or the Skip-Gram model. While the Continuous Bag-Of-Words model predicts a target word from its context (i.e., surrounding words), the Skip-Gram model predicts the context from target words and is better for larger datasets. Parameters include the choice of model (i.e., Continuous Bag-Of-Words or Skip-Gram), subsampling (e.g., removing too frequent or rare words), dimensionality (i.e., the size of the vector), and window size (i.e., the number of tokens to be considered as the context of the target word). To train the word embedding algorithm, we used the Skip-Gram model, kept all the words (the stop-word removal in the preprocessing stage had already removed the frequent and non-content words), set the size of the vectors to 200 dimensions, and used a window of 15 tokens, as 77% of posts are less than 30 tokens long (a window of 15 means the algorithm will look at the 15 words before and the 15 words after the target word).

#### User embeddings and the concept of speaker-landscapes

User embeddings represent users as vectors in an embedding space. By spatially projecting users, we can analyze the relationship between users, their concentration or spread in the embedding, and group dynamics. User embeddings can be used to detect sarcasm [27], build recommendation systems [28, 29], predict gender and depression [30], etc. In the present study, we create a 'speaker-landscape', a novel method developed by Schuld et al. [3], where the username is added to the text document as a token. As word embeddings create a vector representation for a target word from its surrounding words, the target word vector represents its context. When adding the user as a token in a document, the embedding algorithm will locate the user spatially, relying on the words used in the document. To create a 'speaker-landscape' [3], we add the username at a random position in the Reddit post they published. This method allows us to create a vector representation of users from the words they use, and, as the user is a word token in the text, it can be directly compared to other word tokens (e.g., how close are r/Daddit fathers to words such as 'food', 'sleep', or 'health'?).
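Putting the two steps together, a minimal Gensim sketch of the speaker-landscape construction (ours; the `user_` prefix for user tokens is our convention, while the Skip-Gram model, 200 dimensions, window of 15 and no word pruning follow the settings above):

```
import random
from gensim.models import Word2Vec

def add_user_token(tokens, username):
    """Insert the user token at a random position in the tokenized comment."""
    pos = random.randint(0, len(tokens))
    return tokens[:pos] + [f"user_{username}"] + tokens[pos:]

# corpus: list of (username, tokenized_comment) pairs after preprocessing
def train_speaker_landscape(corpus):
    sentences = [add_user_token(toks, user) for user, toks in corpus]
    model = Word2Vec(sentences, sg=1, vector_size=200, window=15,
                     min_count=1, workers=4)
    return model   # model.wv["user_<name>"] is that user's embedding
```

Because user tokens live in the same vector space as word tokens, a user vector can be compared directly to topical keywords with the cosine similarity defined below.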
#### Visualizing the embedding

As the output of the embedding algorithm gives us 200-dimension vectors, we need to reduce the vectors to two-dimension vectors to be able to visualise them. For reducing vectors, we use UMAP [31]. UMAP uses graph layout algorithms to represent the data as a high-dimensional graph, from which it builds a low-dimensional graph with a similar structure. Two important parameters, n_neighbors and min_dist, are used to control the balance between local and global structure. A low value of n_neighbors will make UMAP consider more the local structure, focusing on the closest points to analyse the high-dimensional data, whereas a high value will represent more of the bigger picture. min_dist corresponds to the minimal distance between the points represented in the low-dimensional space, i.e., low values will make the embeddings appear tightly packed together. It should be noted that, as UMAP has to warp the high-dimensional space to project the data to lower dimensions, the axes or distances in the low-dimensional representation are not directly interpretable. For visualising our results, we keep users who posted at least five comments on each subreddit to have a more robust position of users in the word embedding. This leaves us with 1,436 users to be visualised (956 mothers and 480 fathers), i.e., 2,872 data points. Each user will be represented by two dots, one for each audience with which they interact (either single-gender or mixed-gender). Additionally, as users are tokens along with other word tokens in our corpus, we can directly compare their position with the position of other words. To have a meaningful set of word tokens, we use the results of topic modelling from a previous study conducted on the same dataset [19]. [19] found twelve topics to best represent discussions in the r/Daddit, r/Mommit, and r/Parenting subreddits: _Thank you/Appreciation_ (i.e., thanking other users for their advice), _Medical care_, _Education/Family advice_ (i.e., giving advice to other parents), _Furniture/Design_ (i.e., discussing furniture for children in the house), _Birth/Pregnancy_ (i.e., announcements or congratulations related to a child's arrival), _Change/Potty training_, _Physical appearance/Picture_ (i.e., commenting on the physical appearance of children and sharing pictures), _Work/Raise children_ (i.e., discussing the balance between work and family), _Food_, _Leisure activities_, _School/Teaching_, and _Sleep training_. For each topic, we select six keywords among the topic's ten most associated keywords and add them to the visualisation.

#### Measures used to analyze the output

To measure the relationship between two tokens, we measure the similarity between their vector representations. For this, we use cosine similarity (i.e., the dot product of the vectors divided by the product of their lengths), which is defined as follows:

\[\cos(\theta)=\frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n}\mathbf{A}_{i}\mathbf{B}_{i}}{\sqrt{\sum_{i=1}^{n}\mathbf{A}_{i}^{2}}\sqrt{\sum_{i=1}^{n}\mathbf{B}_{i}^{2}}} \tag{1}\]

The cosine values are between -1 and 1, and the greater the cosine value, the more similar the two tokens are. To compare the similarity measures between two groups of users, or the same users in two different contexts, we used the Kolmogorov-Smirnov test [32] and Student's t-test [33]. The two-sample Kolmogorov-Smirnov test assesses whether the two samples were drawn from the same distribution. The Student's t-test is used to compare the means of two samples. Additionally, to measure the spread or concentration of a group and the distance between two groups, we used the centroid of the set of vectors. The centroid corresponds to the arithmetic mean position of a set of vectors.
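A small sketch (ours) of the centroid-based group-topic comparison used in the Results section:

```
import numpy as np

def centroid(vectors):
    """Arithmetic mean position of a set of embedding vectors."""
    return vectors.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# group_vecs: (n_users, 200) user vectors, e.g. mothers on r/Mommit
# topic_vecs: (n_keywords, 200) vectors of one topic's keywords
# similarity = cosine(centroid(group_vecs), centroid(topic_vecs))
```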
## Acknowledgements

This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6049 and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 802421).

## Author details

\({}^{1}\)SFI Centre for Research Training in Foundations of Data Science, Ireland. \({}^{2}\)Department of Mathematics and Statistics, University of Limerick, Castletroy, Limerick, Ireland. \({}^{3}\)Centre for Social Issues Research, University of Limerick, Castletroy, Limerick, Ireland. \({}^{4}\)Department of Psychology, University of Limerick, Castletroy, Limerick, Ireland. \({}^{5}\)Department of Psychology, School of Applied Human Sciences, University of KwaZulu-Natal, Durban, KwaZulu-Natal, South Africa. \({}^{6}\)Department of Psychology, University of Johannesburg, Johannesburg, South Africa. \({}^{7}\)School of Sociology, University College Dublin, Dublin, Ireland. \({}^{8}\)Geary Institute for Public Policy, University College Dublin, Dublin, Ireland.
2301.04742
HADA: A Graph-based Amalgamation Framework in Image-text Retrieval
Many models have been proposed for vision and language tasks, especially the image-text retrieval task. All state-of-the-art (SOTA) models in this challenge contain hundreds of millions of parameters. They were also pretrained on a large external dataset, which has been proven to yield a big improvement in overall performance. It is not easy to propose a new model with a novel architecture and intensively train it on a massive dataset with many GPUs to surpass the many SOTA models that are already available to use on the Internet. In this paper, we propose a compact graph-based framework, named HADA, which can combine pretrained models to produce a better result, rather than building from scratch. First, we create a graph structure in which the nodes are the features extracted from the pretrained models and the edges connect them. The graph structure is employed to capture and fuse the information from every pretrained model. Then a graph neural network is applied to update the connections between the nodes to get the representative embedding vectors for an image and a text. Finally, we use cosine similarity to match images with their relevant texts and vice versa, to ensure a low inference time. Our experiments show that, although HADA contains a tiny number of trainable parameters, it can increase baseline performance by more than 3.6% in terms of evaluation metrics on the Flickr30k dataset. Additionally, the proposed model was not trained on any external dataset and required only one GPU to train due to its small number of parameters. The source code is available at https://github.com/m2man/HADA.
Manh-Duy Nguyen, Binh T. Nguyen, Cathal Gurrin
2023-01-11T22:25:20Z
http://arxiv.org/abs/2301.04742v1
# HADA: A Graph-based Amalgamation Framework in Image-text Retrieval

###### Abstract

Many models have been proposed for vision and language tasks, especially the image-text retrieval task. All state-of-the-art (SOTA) models in this challenge contain hundreds of millions of parameters. They were also pretrained on a large external dataset, which has been proven to yield a big improvement in overall performance. It is not easy to propose a new model with a novel architecture and intensively train it on a massive dataset with many GPUs to surpass the many SOTA models that are already available to use on the Internet. In this paper, we propose a compact graph-based framework, named HADA, which can combine pretrained models to produce a better result, rather than building from scratch. First, we create a graph structure in which the nodes are the features extracted from the pretrained models and the edges connect them. The graph structure is employed to capture and fuse the information from every pretrained model. Then a graph neural network is applied to update the connections between the nodes to get the representative embedding vectors for an image and a text. Finally, we use cosine similarity to match images with their relevant texts and vice versa, to ensure a low inference time. Our experiments show that, although HADA contains a tiny number of trainable parameters, it can increase baseline performance by more than 3.6% in terms of evaluation metrics on the Flickr30k dataset. Additionally, the proposed model was not trained on any external dataset and required only one GPU to train due to its small number of parameters. The source code is available at [https://github.com/m2man/HADA](https://github.com/m2man/HADA).

Keywords: image-text retrieval, graph neural network, fusion model.

## 1 Introduction

Image-text retrieval is one of the most popular challenges in vision and language tasks, with many state-of-the-art (SOTA) models recently introduced [19, 3, 18, 28, 10, 25, 17]. This challenge includes 2 subtasks: image-to-text retrieval and text-to-image retrieval. In the former subtask, an image query is given to retrieve relevant texts in a multimodal dataset, while the latter is the reverse. Most of the SOTA models in this research field share 2 things in common: (1) they are built on transformer-based cross-modality attention architectures [3, 19] and (2) they are pretrained on large-scale multimodal data crawled from the Internet [28, 18, 19, 17, 13]. However, both features have their own disadvantages. The attention structure between 2 modalities can achieve an accurate result, but it incurs a large inference-time cost due to the massive computation. For instance, UNITER [3] contains roughly 303 million parameters, and it takes a considerable amount of time to perform retrieval in real time [31]. Much recent work has addressed this model-related problem by introducing joint-encoding learning methods. These methods can learn visual and semantic information from both modalities without using any cross-attention modules, which can instead be applied later to rerank the initial result [18, 25, 31]. Figure 1 illustrates the architecture of these pipelines. Regarding the data perspective, the large collected datasets usually come with noisy annotations, and hence can be harmful to the models trained on them. Several techniques have been proposed to mitigate this issue [19, 18, 17].
However, training on a massive dataset still creates a burden on computational facilities, such as the number of GPUs required to train the model successfully and efficiently [28]. This motivated us to answer the question: _Can we combine many SOTA models, which are currently available to use, to get a better unified model without intensively training with many GPUs?_ In this paper, we introduce a graph-based amalgamation framework, called **HADA**, which forms a graph-based structure to fuse the features produced by other pretrained models. We do not use any time-consuming cross-modality attention network, in order to ensure a fast retrieval speed. A graph neural network is employed to extract visual and textual embedded vectors from the fused graph-based structures of images and texts, between which we can measure cosine similarity.

Figure 1: The 2 most popular pipelines of the SOTA for the image-text retrieval challenge. (a) A cross-modality transformer network is applied to measure the similarity between an image and a text based on their features. (b) Each modality uses its own transformer network to get its global embedding.

The graph structure has been widely applied in the image-text retrieval challenge [26, 7, 27, 35, 21]. Nevertheless, it has been utilized to capture the interaction between objects or to align local and global information within images. To the best of our knowledge, HADA is the first approach that applies this data structure to combine SOTA pretrained models by fusing their features in each modality. We trained HADA only on the Flickr30k dataset, without using any large-scale datasets. We applied the momentum distillation technique [18], which has been shown to not only mitigate the harmful effect of noisy annotations but also improve accuracy on a clean dataset. Our experiments showed that HADA, with a tiny extra number of trainable parameters, could improve total recall by 3.64% compared to the input SOTA models, without training on millions of additional image-text pairs as other models do. This is a crucial point, since it is not easy to possess multiple GPUs, especially for small and medium businesses or start-up companies. Therefore, we believe that HADA can be applied not only in the academic field, but also in industry. Our main contributions can be summarised as follows: (1) We introduced HADA, a compact pipeline that can combine 2 or more SOTA pretrained models to address the image-text retrieval challenge. (2) We proposed a way to fuse the information between input pretrained models by using graph structures. (3) We evaluated the performance of HADA on the well-known Flickr30k dataset [37] and MSCOCO dataset [20] without using any other large-scale dataset, and still improved the accuracy compared to the baseline input models.

## 2 Related Work

A typical vision-and-language model, including for the image-text retrieval task, is built with transformer-based encoders. Specifically, OSCAR [19], UNITER [3], and VILLA [10] first employed Faster-RCNN [29] and BERT [6] to extract visual and textual features from images and texts. These features were then fed into a cross-modality transformer block to learn contextualized embeddings that captured the relations between regional features from images and word pieces from texts. An additional fully connected layer was used to classify whether the images and texts were relevant to each other or not, based on the embedding vectors.
Although achieving superior results, these approaches have a drawback when applied to real-time use cases. They require a huge amount of time to perform retrieval online, since the models have to run the intensive cross-attention transformer architecture many times for every single query [31]. Recently, some works have proposed approaches to resolve this problem by utilising 2 distinct encoders for images and texts. The data from each modality can now be embedded offline, which improves the retrieval speed [31, 18, 17, 25, 13, 28]. In terms of architecture, all of these approaches used a similar BERT-based encoder for semantic data but different image encoders. While LightningDOT [31] encoded images with detected objects extracted by the Faster-RCNN model, FastnSlow [25] applied a conventional ResNet network to embed images. On the other side, ALBEF [18] and BLIP [17] employed the Vision Transformer backbone [8] to get the visual features corresponding to image patches. Because these SOTA models did not use the cross-attention structure, which was a critical point for achieving high accuracy, they applied different strategies to increase performance. Specifically, pretraining a model on a large dataset can significantly improve the result [18, 19, 13]. For instance, CLIP [28] and ALIGN [13] were pretrained on 400 million and 1.8 billion image-text pairs, respectively. Another way was to run another cross-modality image-text retrieval model to rerank the initial output and get a more accurate result [18, 31]. Regarding graph structures, SGM [35] introduced a visual graph encoder and a textual graph encoder to capture the interactions between objects appearing in images and between the entities in text. LGSGM [26] proposed a graph embedding network on top of SGM to learn both local and global information about the graphs. Similarly, GSMN [21] presented a novel technique to assess the correspondence of nodes and edges of graphs extracted from images and texts separately. SGRAF [7] built a reasoning and filtration graph network to refine and remove irrelevant interactions between objects in both modalities. Although there are many SOTA models with different approaches to the image-text retrieval problem, no work has tried to combine these models; instead, new architectures are introduced and pretrained on massive datasets. Training an entirely new model from scratch is not easy, since it creates a burden on computational facilities such as GPUs. In this paper, we introduce a simple method that combines the features extracted from pretrained SOTA models by applying graph structures. Unlike other methods that also used this data structure, we employ graphs to fuse the information between the input features, which are then fed into a conventional graph neural network to obtain the embedding for each modality. Our HADA consists of a small number of trainable parameters, and hence can be easily trained on a small dataset while still obtaining better results than the input models.

## 3 Methodology

This section describes how our HADA addresses the retrieval challenge by combining available pretrained models. Figure 2 depicts the workflow of HADA. We start with only 2 models (\(N_{models}=2\)), as illustrated in Figure 2, for simplicity. Nevertheless, HADA can be extended with a larger \(N_{models}\). HADA begins by using pretrained models to extract the features from each modality.
We then build a graph structure to connect the extracted features, which are later fed into a graph neural network (GNN) to update them. The outputs of the GNN are concatenated with the original global features produced by the pretrained models. Finally, simple linear layers are employed at the end to get the final representative embedding features for images and texts, which can be used to measure similarity and perform retrieval. For evaluation, we can extract our representative features offline to guarantee a high inference speed.

### Revisit State-of-the-art Models

We only use pretrained models that do not rely on a cross-modality transformer structure to extract the features, as depicted in Figure 1, to reduce computation and ensure a high inference speed. Basically, they use a unimodal encoder to get the features of an image or a text, followed by a transformer network to embed them and obtain the _[CLS]_ embedding. This _[CLS]_ token is updated by 1 or many fully connected layers to become a representative global feature that can be compared with that of the remaining modality to get the similarity score. HADA begins with the output of the transformer layer from the pretrained models. In detail, for an input image \(I\), we obtain the sequence of patch tokens from each model \(i\), denoted as \(\mathbf{v^{(i)}}=\{v^{(i)}_{cls},v^{(i)}_{1},v^{(i)}_{2},...,v^{(i)}_{N_{i}}\}\), where \(v^{(i)}_{j}\in\mathbb{R}^{d^{(i)}_{v}}\) and \(N_{i}\) is the length of the sequence. This length depends on the architecture of the image encoder network employed in the pretrained model. For example, it could be the number of patches if the image encoder is a Vision Transformer (ViT) network [8], or the number of detected objects or regions of interest if the encoder is a Faster-RCNN model [29]. Additionally, we also extract the global visual representation feature \(h^{(i)}_{v}\in\mathbb{R}^{d^{(i)}_{h}}\) from \(v^{(i)}_{cls}\), as illustrated in Figure 1. Regarding the semantic modality, we use the same process as for the visual modality. Specifically, we extract the sequence of word tokens \(\mathbf{w^{(i)}}=\{w^{(i)}_{cls},w^{(i)}_{1},w^{(i)}_{2},...,w^{(i)}_{L}\}\), where \(w^{(i)}_{j}\in\mathbb{R}^{d^{(i)}_{w}}\) and \(L\) is the length of the text, and the global textual representation embedding \(h_{w}^{(i)}\in\mathbb{R}^{d_{h}^{(i)}}\) for an input text \(T\), using the pretrained model \(i\).

Figure 2: The pipeline of the proposed HADA. The red borders indicate trainable components. ITM and ITC refer to the training tasks, which are discussed later.

The input model \(i\) matches a pair of an image \(I\) and a text \(T\) by calculating the dot product \(\langle h_{v}^{(i)},h_{w}^{(i)}\rangle\) of their global features. However, HADA uses not only the global embeddings but also the intermediate transformer tokens to make the prediction. We use our learned _[CLS]_ tokens to improve the global features, while retaining the original global features to preserve the high performance of the pretrained models and mitigate the effect of unhelpful tokens.

### Create Graph Structure

Each pretrained model \(i\) produces different _[CLS]_ features \(v_{cls}^{(i)}\) and \(w_{cls}^{(i)}\) for an image and a text, respectively. Since our purpose is to combine the models, we need to fuse these _[CLS]_ tokens to obtain unified ones for each modality separately.
In each modality, for example the visual one, HADA updates \(v_{cls}^{(i)}\) based not only on \(\mathbf{v}^{(i)}\) but also on the tokens of the remaining pretrained models \(\{\mathbf{v}^{(j)}\ |\ j\neq i\}\). Because these \(\mathbf{v}\) come from different models, their dimensions may differ from each other. Therefore, we apply a list of linear layers \(f_{v}^{(i)}:\mathbb{R}^{d_{v}^{(i)}}\rightarrow\mathbb{R}^{d_{p}}\) to map them into the same dimensional space:

\[\mathbf{p}^{(\mathbf{i})}=\{f_{v}^{(i)}(x)|x\in\mathbf{v}^{(\mathbf{i})}\}=\{ p_{cls}^{(i)},p_{1}^{(i)},p_{2}^{(i)},...,p_{N_{i}}^{(i)}\}\]

We perform a similar process for the textual modality to obtain:

\[\mathbf{s}^{(\mathbf{i})}=\{f_{w}^{(i)}(x)|x\in\mathbf{w}^{(\mathbf{i})}\}=\{ s_{cls}^{(i)},s_{1}^{(i)},s_{2}^{(i)},...,s_{L}^{(i)}\},\text{where}\ f_{w}^{(i)}:\mathbb{R}^{d_{w}^{(i)}} \rightarrow\mathbb{R}^{d_{s}}\]

We then use graph structures \(\mathcal{G}_{p}=\{\mathcal{V}_{p},\mathcal{E}_{p}\}\) and \(\mathcal{G}_{s}=\{\mathcal{V}_{s},\mathcal{E}_{s}\}\) to connect these mapped features together, where \(\mathcal{V}\) and \(\mathcal{E}\) denote the lists of nodes and edges in the graph \(\mathcal{G}\), respectively. In HADA, nodes represent the mapped features. Specifically, \(\mathcal{V}_{p}=\{\mathbf{p}^{(\mathbf{i})}\}\) and \(\mathcal{V}_{s}=\{\mathbf{s}^{(\mathbf{i})}\}\) for all \(i\in[1,N_{models}]\). Regarding edges, we denote \(e_{a\to b}\) as a directed edge from node \(a\) to node \(b\) in the graph; thus the sets of edges of the visual graph \(\mathcal{E}_{p}\) and the textual graph \(\mathcal{E}_{s}\) are:

\[\mathcal{E}_{p}=\{e_{x\to p_{cls}^{(j)}}\ |\ x\in\mathbf{p}^{( \mathbf{i})}\ \text{and}\ i,j\in[1,N_{models}]\}\]

\[\mathcal{E}_{s}=\{e_{x\to s_{cls}^{(j)}}\ |\ x\in\mathbf{s}^{( \mathbf{i})}\ \text{and}\ i,j\in[1,N_{models}]\}\]

In more detail, we create directed edges that go from every patch feature to the _[CLS]_ feature, including from the _[CLS]_ itself, for all pretrained models, but not in the reverse direction, as shown in Figure 2. The reason is that _[CLS]_ was originally introduced as a representation of all the input data, so it should summarize all patch tokens [8, 2, 6]. Therefore, it is the node that receives information from the other nodes in the graph. This connection structure ensures that HADA can update the _[CLS]_ tokens based on the patch tokens from all pretrained models in a fine-grained manner.

### Graph Neural Network

Graph neural networks (GNN) have witnessed an increase in popularity over the past few years, with many GNN structures having been introduced recently [15, 5, 34, 11, 30, 1]. HADA applies the modified Graph Attention Network (GATv2), which is recommended as a baseline whenever employing a GNN [1], to fuse the patch features from different pretrained models together and get the unified _[CLS]_ features. Let \(\mathcal{N}_{k}=\{x\in\mathcal{V}\mid e_{x\to k}\in\mathcal{E}\}\) be the set of neighbor nodes from which there is an edge connecting to node \(k\) in the graph \(\mathcal{G}\).
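As an illustration, the following is a minimal sketch, assuming PyTorch Geometric is available, of how this patch-to-_[CLS]_ edge structure could be assembled and passed to a GATv2 layer; `p_alb` and `p_dot` are placeholder tensors of mapped features from two models, and the attention update that the layer implements is detailed next.

```python
import torch
from torch_geometric.nn import GATv2Conv

def build_patch_to_cls_edges(token_counts):
    """Directed edges from every token (of all models) to each model's [CLS].
    token_counts: tokens per pretrained model, [CLS] first, e.g. [1+N_1, 1+N_2]."""
    offsets = [0]
    for n in token_counts:
        offsets.append(offsets[-1] + n)
    num_nodes = offsets[-1]
    sources, targets = [], []
    for i in range(len(token_counts)):
        cls_index = offsets[i]                   # [CLS] is the first token of model i
        sources.extend(range(num_nodes))         # every node of every model...
        targets.extend([cls_index] * num_nodes)  # ...sends a message to this [CLS]
    return torch.tensor([sources, targets], dtype=torch.long)

# Placeholder mapped features for one image: (1 + N_i) tokens of d_p = 512 dims
p_alb, p_dot = torch.randn(577, 512), torch.randn(37, 512)  # [CLS] rows come first
nodes = torch.cat([p_alb, p_dot], dim=0)
edge_index = build_patch_to_cls_edges([p_alb.size(0), p_dot.size(0)])
gat = GATv2Conv(in_channels=512, out_channels=512 // 4, heads=4)  # H = 4 heads
updated = gat(nodes, edge_index)  # only the [CLS] rows (indices 0 and 577) are used
```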
GATv2 uses a scoring function \(se\) to weight every edge, indicating the importance of the neighbor nodes \(x\in\mathcal{N}_{k}\) before updating the node \(k\in\mathbb{R}^{d}\):

\[se(e_{x\to k})=\mathbf{A}^{\top}\text{LeakyReLU}(\mathbf{W}_{1}x+\mathbf{W}_{2}k)\]

where \(\mathbf{A}\in\mathbb{R}^{d^{\prime}}\), \(\mathbf{W}_{1}\in\mathbb{R}^{d^{\prime}\times d}\), and \(\mathbf{W}_{2}\in\mathbb{R}^{d^{\prime}\times d}\) are learnable parameters. These weights are then normalized across all neighbor nodes in \(\mathcal{N}_{k}\) by using a softmax function to get the attention scores:

\[\alpha_{e_{x\to k}}=\frac{\exp(se(e_{x\to k}))}{\sum_{y\in \mathcal{N}_{k}}\exp(se(e_{y\to k}))}\]

The updated node \(k^{\prime}\in\mathbb{R}^{d^{\prime}}\) is then calculated based on its neighbors in \(\mathcal{N}_{k}\), including \(k\) itself if we add a self-loop edge:

\[k^{\prime}=\sigma(\sum_{x\in\mathcal{N}_{k}}\alpha_{e_{x\to k}} \cdot\mathbf{W}_{1}x)\]

where \(\sigma\) is a nonlinear activation function. Furthermore, this GATv2 network can be enlarged by applying a multi-head attention structure to improve performance [34]. The output is then a concatenation of each head's output, similar to the Transformer architecture [33]. An extra linear layer is used at the end to convert these concatenated nodes to the desired dimensions. We use distinct GATv2 structures with \(H\) attention heads for each modality at this stage, as illustrated in Figure 2. HADA takes the input graphs \(\mathcal{G}_{p}\) and \(\mathcal{G}_{s}\) with nodes \(\mathcal{V}_{p}\) and \(\mathcal{V}_{s}\) in vector spaces of \(d_{p}\) and \(d_{s}\) dimensions and updates them to \(\mathcal{V^{\prime}}_{p}=\{\mathbf{p^{\prime}}^{(i)}\}\) and \(\mathcal{V^{\prime}}_{s}=\{\mathbf{s^{\prime}}^{(i)}\}\) with dimensions \(d^{\prime}_{p}\) and \(d^{\prime}_{s}\). We then concatenate the updated _[CLS]_ nodes \(p^{\prime}_{cls}\) and \(s^{\prime}_{cls}\) from all pretrained models with their corresponding original global embeddings \(h_{v}\) and \(h_{w}\). Finally, we feed them into a list of linear layers to get our normalized global representations \(h_{p}\in\mathbb{R}^{d_{h}}\) and \(h_{s}\in\mathbb{R}^{d_{h}}\).

### Training Tasks

#### Image-Text Contrastive Learning

HADA encodes the input image \(\boldsymbol{I}\) and text \(\boldsymbol{T}\) into \(h_{p}\) and \(h_{s}\), respectively. We use a similarity function that is a dot product \(S(\boldsymbol{I},\boldsymbol{T})=\langle h_{p},h_{s}\rangle=h_{p}^{\top}h_{s}\) to ensure that a pair of relevant image and text (positive pair) has a higher similarity score than irrelevant pairs (negative pairs). The contrastive losses for image-to-text (i2t) retrieval and text-to-image (t2i) retrieval for a mini-batch of \(M\) relevant pairs \((\textbf{{I}}_{m},\textbf{{T}}_{m})\) are:

\[\mathcal{L}_{i2t}(\textbf{{I}}_{m})=-\text{log}\frac{\text{exp}(S(\textbf{{I}}_ {m},\textbf{{T}}_{m})/\tau)}{\sum_{i=1}^{M}\text{exp}(S(\textbf{{I}}_{m}, \textbf{{T}}_{i})/\tau)}\]

\[\mathcal{L}_{t2i}(\textbf{{T}}_{m})=-\text{log}\frac{\text{exp}(S(\textbf{{T}}_ {m},\textbf{{I}}_{m})/\tau)}{\sum_{i=1}^{M}\text{exp}(S(\textbf{{T}}_{m}, \textbf{{I}}_{i})/\tau)}\]

where \(\tau\) is a temperature parameter that can be learned during training. This contrastive learning has been used in many vision-and-language models and has been proven to be effective [18, 31, 17, 28].
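For clarity, a minimal PyTorch sketch of this symmetric contrastive objective follows; it omits the momentum distillation and MoCo queue mentioned just below, and assumes the embeddings and temperature are prepared as described above.

```python
import torch
import torch.nn.functional as F

def itc_loss(h_p, h_s, tau):
    """Image-text contrastive loss over a mini-batch of M positive pairs.
    h_p, h_s: (M, d_h) image/text embeddings; row m of each is a positive pair.
    tau: learnable temperature (scalar tensor)."""
    logits = h_p @ h_s.t() / tau                    # (M, M) similarity matrix
    targets = torch.arange(h_p.size(0), device=h_p.device)
    loss_i2t = F.cross_entropy(logits, targets)     # L_i2t averaged over the batch
    loss_t2i = F.cross_entropy(logits.t(), targets) # L_t2i averaged over the batch
    return loss_i2t + loss_t2i                      # equals L_ITC defined below
```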
In our experiment, we trained HADA with a loss that optimizes both subtasks:

\[\mathcal{L}_{ITC}=\frac{1}{M}\sum_{m=1}^{M}(\mathcal{L}_{i2t}(\textbf{{I}}_{m })+\mathcal{L}_{t2i}(\textbf{{T}}_{m}))\]

Inspired by ALBEF [18], we also applied momentum contrast (MoCo) [12] and its momentum distillation strategy for this unsupervised representation learning, to cope with the problem of noisy information in the dataset and improve accuracy.

#### Image-Text Matching

This objective is a binary classification task to distinguish irrelevant image-text pairs that nevertheless have similar representations. It ensures that such pairs can be told apart by fine-grained details. We implement an additional discriminator layer \(dc:\mathbb{R}^{4d_{h}}\rightarrow\mathbb{R}\) on top of the final embedding features \(h_{p}\) and \(h_{s}\) to classify whether the image \(I\) and the text \(T\) are a positive pair or not:

\[dc(h_{p},h_{s})=\text{sigmoid}(\textbf{W}^{\top}[h_{p}\|h_{s}\|\text{abs}(h_{ p}-h_{s})\|h_{p}\odot h_{s}])\]

where \(\textbf{W}\in\mathbb{R}^{4d_{h}}\) are trainable parameters, \(\|\) indicates concatenation, \(\text{abs}(.)\) is the absolute value, and \(\odot\) denotes elementwise multiplication. We use the binary cross-entropy loss for this ordinary classification task:

\[\mathcal{L}_{itm}(\textbf{{I}},\textbf{{T}})=-[y\,\text{log}(dc(h_{p},h_{s}))+(1- y)\,\text{log}(1-dc(h_{p},h_{s}))]\]

where \(y\) is the ground-truth label of the pair (1 if positive, 0 otherwise). For each positive pair in the mini-batch of \(M\) positive pairs, we sample 1 hard negative text for the image and 1 hard negative image for the text. These negative samples are chosen from the current mini-batch: they are not relevant based on the ground-truth labels, but have the highest similarity score. Therefore, the objective for this task is:

\[\mathcal{L}_{ITM}=\frac{1}{3M}\sum_{m=1}^{M}(\mathcal{L}_{itm}(\textbf{{I}}_{ m},\textbf{{T}}_{m})+\mathcal{L}_{itm}(\textbf{{I}}_{m},\textbf{{T}}^{\prime}_{m})+ \mathcal{L}_{itm}(\textbf{{I}}^{\prime}_{m},\textbf{{T}}_{m}))\]

where \(\textbf{{T}}^{\prime}_{m}\) and \(\textbf{{I}}^{\prime}_{m}\) are the hard negative text and image samples in the mini-batch corresponding to \(\textbf{{I}}_{m}\) and \(\textbf{{T}}_{m}\), respectively. The final loss function in HADA is:

\[\mathcal{L}=\mathcal{L}_{ITC}+\mathcal{L}_{ITM}\]

## 4 Experiment

### Dataset and Evaluation Metrics

We trained and evaluated HADA on 2 common datasets in the image-text retrieval task: Flickr30k [37] and MSCOCO [20]. The Flickr30k dataset consists of 31K images collected from the Flickr website, while MSCOCO comprises 123K images. Each image comes with 5 relevant texts or captions that describe it. We used the Karpathy split [14], which has been widely applied by models in the image-text retrieval task, to split each dataset into train/validation/test sets of 29K/1K/1K and 113K/5K/5K images for Flickr30k and MSCOCO, respectively. The common evaluation metric in this task is Recall at K (\(R@K\)), which many SOTA models use [18, 31, 17, 28, 13, 19, 3, 10]. This metric is defined as the proportion of queries for which the correct relevant output is found in the top K of the retrieved ranked list:

\[R@K=\frac{1}{N_{q}}\sum_{q=1}^{N_{q}}\textbf{1}(q,K)\]

where \(N_{q}\) is the number of queries and \(\textbf{1}(q,K)\) is a binary function returning 1 if the model finds the correct answer for query \(q\) in the top \(K\) of the retrieved output.
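As a small illustration, \(R@K\) can be computed from a similarity matrix as follows (a sketch, not the paper's evaluation code; `relevant` is a placeholder for the dataset's ground-truth annotations):

```python
import numpy as np

def recall_at_k(sim, relevant, k):
    """R@K from a (num_queries, num_candidates) similarity matrix.
    relevant[q] is the set of candidate indices that are correct for query q
    (e.g., the five captions of an image in Flickr30k)."""
    hits = 0
    for q in range(sim.shape[0]):
        top_k = np.argsort(-sim[q])[:k]  # indices of the k most similar candidates
        hits += bool(relevant[q] & set(top_k.tolist()))
    return hits / sim.shape[0]
```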
In particular, for the image-to-text subtask, \(R@K\) is the percentage of images for which a relevant text is found in the top K of the output result. In our experiments, we used R@1, R@5, R@10, and RSum, which is the sum of them.

### Implementation Details

In our experiment, we combined 2 SOTA models with available pretrained weights fine-tuned on the Flickr30k dataset: ALBEF and LightningDOT. Neither of them uses the cross-modality transformer structure at retrieval time, which ensures a fast inference speed. Although they use the same BERT architecture to encode a text, the former employs a ViT network to encode an image, while the latter applies a Faster-RCNN model. We chose these 2 models because we wanted to combine models with distinct embedding backbones to utilize the advantages of each of them. Regarding ALBEF, its ViT network encodes an image into 577 patch tokens including the _[CLS]_ one (\(N_{ALB}=576\) and \(d_{v}^{(ALB)}=768\)). This _[CLS]_ token is projected to a lower dimension to obtain the global feature (\(d_{h}^{(ALB)}=256\)). Because LightningDOT encodes an image based on the detected objects produced by the Faster-RCNN model, its \(N_{DOT}\) varies depending on the number of objects in the image. The graph neural network, unlike a conventional CNN, can handle this inconsistent number of inputs due to the flexible graph structure of nodes and edges. Unlike ALBEF, the dimensions of the image features and global features from LightningDOT are the same, with \(d_{v}^{(DOT)}=d_{h}^{(DOT)}=768\). In terms of the text encoder, the outputs of both models are similar since they use the same BERT network: \(d_{w}^{(ALB)}=d_{w}^{(DOT)}=768\). We projected these features to a latent space with \(d_{p}=d_{s}=512\), the average of their original dimensions. We used a 1-layer GATv2 network with \(H=4\) multi-head attentions to update the graph features while keeping the input dimensions \(d_{p}^{\prime}=d_{s}^{\prime}=512\). We also applied Dropout with \(p=0.7\) in the linear layers and graph neural networks. In total, our HADA contained roughly 10M trainable parameters. The input pretrained models were pretrained on several large external datasets; for example, ALBEF was pretrained on 14M images, compared to the only 29K Flickr30k images that we used to train HADA. We exploited this advantage in our prediction instead of training HADA on millions of samples: we modified the similarity score to be a weighted sum of our prediction and the original prediction of the input models. Therefore, the weighted similarity score that we used is:

\[S(\textbf{{I}},\textbf{{T}})=(1-\alpha)\langle h_{p},h_{s}\rangle+\alpha\langle h _{v}^{(ALB)},h_{w}^{(ALB)}\rangle\]

where \(\alpha\) is a trainable parameter. We did not include the original result of the LightningDOT model, since its result was lower than ALBEF's by a large margin and therefore could have a negative impact on overall performance (we tried including LightningDOT in the weighted similarity score, but the result was lower than using only ALBEF). We trained HADA for 50 epochs (with early stopping) using a batch size of 20 on 1 NVIDIA RTX 3080Ti GPU. We used the AdamW [23] optimizer with a weight decay of 0.02. The learning rate was set at \(1e^{-4}\) and decayed to \(5e^{-6}\) following cosine annealing [22]. Similarly to ALBEF, we also applied RandAugment [4] for data augmentation.
The initial temperature parameter was 0.07 [36], and we kept it in the range \([0.001,0.5]\) during training. To mitigate the dominant effect of the ALBEF global features on our weighted similarity score, we first trained HADA with \(\alpha=0\). After the model had converged, we continued to train, but initialized \(\alpha=0.5\) and kept it in the range \([0.1,0.9]\).

### Baselines

We built 2 baselines that also integrate ALBEF and LightningDOT as inputs, to show the advantage of using graph structures to fuse these input models.

#### Baseline B1

We calculated the average of the original ranking results obtained from ALBEF and LightningDOT and considered it as the distance between images and texts. This means that relevant pairs should be ranked at the top, whilst irrelevant pairs get lower places.

#### Baseline B2

Instead of using a graph structure to fuse the features extracted from the pretrained models, we only concatenated their global embeddings and fed them into the last linear layers to obtain the unified features. We trained this baseline B2 following the same strategy as described in Section 4.2, using the weighted similarity score.

### Comparison to Baselines

Table 1 shows the evaluation metrics of the different models on the Flickr30k dataset. Similarly to LightningDOT, our main target was to introduce an image-text retrieval model that does not implement a cross-modality transformer module, to ensure that it can perform in real time without any delay. Thus, we only report the results from LightningDOT and ALBEF that do not use the time-consuming component to rerank in a subsequent step. If a model has a better initial result, it can obtain a better reranked result by using the cross-modality transformer later. We also added UNITER [3] and VILLA [10], which both use a cross-modality transformer architecture to make predictions, to the comparison.

\begin{table} \begin{tabular}{c|c c c c|c c c c|c|c} \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c|}{**Image-to-Text**} & \multicolumn{4}{c|}{**Text-to-Image**} & \multicolumn{2}{c}{**Total**} \\ \cline{2-11} & R@1 & R@5 & R@10 & RSum & R@1 & R@5 & R@10 & RSum & RSum & \(\triangle\)**R** \\ \hline UNITER\({}^{\dagger}\) & 87.3 & 98 & 99.2 & 284.5 & 75.56 & 94.08 & 96.76 & 266.4 & 550.9 & \(\downarrow\)13.68 \\ VILLA\({}^{\dagger}\) & 87.9 & 97.2 & 98.8 & 283.9 & 76.26 & 94.24 & 96.84 & 267.34 & 551.24 & \(\downarrow\)13.34 \\ \hline LightningDOT & 83.6 & 96 & 98.2 & 277.8 & 69.2 & 90.72 & 94.54 & 254.46 & 532.26 & \(\downarrow\)32.32 \\ LightningDOT\({}^{\dagger}\) & 83.9 & 97.2 & 98.6 & 279.7 & 69.9 & 91.1 & 95.2 & 256.2 & 535.9 & \(\downarrow\)28.68 \\ ALBEF & 92.6 & 99.3 & 99.9 & 291.8 & 79.76 & 95.3 & 97.72 & 272.78 & 564.58 & 0 \\ \hline B1 & 90.7 & 99 & 99.6 & 289.3 & 79.08 & 94.5 & 96.94 & 270.52 & 559.82 & \(\downarrow\)4.76 \\ B2 & 91.4 & 99.5 & 99.7 & 290.6 & 79.64 & 95.34 & 97.46 & 272.44 & 563.04 & \(\downarrow\)1.54 \\ HADA & **93.3** & **99.6** & **100** & **292.9** & **81.36** & **95.94** & **98.02** & **275.32** & **568.22** & \(\uparrow\)3.64 \\ \hline \end{tabular} \end{table}

Table 1: Performance of models on the Flickr30k dataset. The symbol \({}^{\dagger}\) indicates results originally reported in the corresponding papers, while the others come from our re-implementation using the public pretrained checkpoints. The column \(\triangle\)R shows the difference in RSum compared to ALBEF.
Clearly, our HADA obtained the highest values on all metrics compared to the others. HADA achieved slightly better R@5 and R@10 in the Image-to-Text (I2T) and Text-to-Image (T2I) subtasks than ALBEF, but the gap became more significant at R@1. We improved the R@1 of I2T by 0.7% (\(92.6\to 93.3\)) and the R@1 of T2I by 1.6% (\(79.76\to 81.36\)). In total, our RSum was 3.64% higher than that of ALBEF (\(564.58\to 568.22\)). The experiment also showed that LightningDOT, which encodes images using Faster-RCNN, lagged far behind ALBEF: its total RSum was lower than that of ALBEF by approximately 30%. The reason might be that the object detector is not as powerful as the ViT network, and LightningDOT was pretrained on 4M images compared to the 14M images used to train ALBEF. Although also using object detectors as the backbone, but additionally applying a cross-modality network, UNITER and VILLA surpassed LightningDOT by a large margin of about 15%. This shows that the intensive architecture has a large impact on multimodal retrieval. Regarding our 2 baselines B1 and B2, both failed to obtain better results than the input model ALBEF. Model B1, with the simple strategy of averaging the ranking results and no learnable parameters, performed worse than model B2, which used a trainable linear layer to fuse the pretrained features. Nevertheless, the RSum of B2 was lower than that of HADA by 5.18%. This shows the advantage of using a graph structure to fuse the information between models to obtain a better result.

### Ablation Study

To show the stable performance of HADA, we used it to combine 2 other pretrained models: BLIP [17] and CLIP [28]. While CLIP is well known for its application in many retrieval challenges [24, 32, 9, 31], BLIP is an enhanced version of ALBEF with a bootstrapping technique in the training process. We used the same configuration as described in Section 4.2 to train and evaluate HADA on the Flickr30k and MSCOCO datasets. We used the pretrained BLIP and CLIP from the LAVIS library [16]. Note that the CLIP used in this experiment was the zero-shot model, since a fine-tuned CLIP for these datasets was not available yet. Table 2 shows the comparison between HADA and the input models. CLIP performed worst on both Flickr30k and MSCOCO, with huge differences compared to BLIP and HADA, because CLIP was not fine-tuned for these datasets. Regarding the Flickr30k dataset, HADA managed to improve the RSum by more than 3.9% compared to that of BLIP, and obtained the highest scores in all metrics for both subtasks. Our proposed framework also increased the RSum of BLIP by 1.49% on the MSCOCO dataset. However, BLIP performed slightly better than HADA in the I2T subtask, while HADA achieved higher performance in the T2I subtask.

## 5 Conclusion

In this research, we proposed a simple graph-based framework, called HADA, which combines 2 pretrained models to address the image-text retrieval problem. We created a graph structure to fuse the extracted features obtained from the pretrained models, followed by a GATv2 network to update them. Our proposed HADA contains only roughly 10M learnable parameters, making it easy to train using only 1 GPU. Our experiments showed the promise of the proposed method: compared to the input models, we managed to increase total recall by more than 3.6%. Additionally, we implemented 2 simple baselines to show the advantage of using graph structures.
These results address 2 goals: (1) increasing the performance of SOTA models in the image-text retrieval task and (2) not requiring many GPUs or training on any large-scale external dataset. This opens the possibility of applying HADA in industry, where many small and medium start-ups do not possess many GPUs. Although we achieved better results than the baselines, there is still room to improve the performance of HADA. Firstly, it can be extended to combine not only the 2 pretrained models used in this research, but a larger number of them. Secondly, the use of different graph neural networks, such as the graph transformer [30], can be investigated in future work. Thirdly, edge features in the graph could also be considered: HADA did not implement edge features in our experiments, but they can be learnable parameters in graph neural networks. Last but not least, pretraining HADA on a large-scale external dataset, as other SOTA models have done, might enhance its performance.

## 6 Acknowledgement

This publication has emanated from research supported in part by research grants from Science Foundation Ireland under grant numbers SFI/12/RC/2289, SFI/13/RC/2106, and 18/CRT/6223.

\begin{table} \begin{tabular}{l|c|c c c c|c c c c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Methods**} & \multicolumn{4}{c|}{**Image-to-Text**} & \multicolumn{4}{c|}{**Text-to-Image**} & \multicolumn{2}{c}{**Total**} \\ \cline{3-12} & & R@1 & R@5 & R@10 & RSum & R@1 & R@5 & R@10 & RSum & RSum & \(\triangle\)**R** \\ \hline \multirow{3}{*}{Flickr30k} & BLIP & 94.3 & 99.5 & 99.9 & 293.7 & 83.54 & 96.66 & 98.32 & 278.52 & 572.22 & 0 \\ & CLIP & 88 & 98.7 & 99.4 & 286.1 & 68.7 & 90.6 & 95.2 & 254.5 & 540.6 & \(\downarrow\)31.62 \\ & HADA & **95.2** & **99.7** & **100** & **294.9** & **85.3** & **97.24** & **98.72** & **281.26** & **576.16** & \(\uparrow\)3.94 \\ \hline \multirow{3}{*}{MSCOCO} & BLIP & **75.76** & **93.8** & **96.62** & **266.18** & 57.32 & 81.84 & 88.92 & 228.08 & 494.26 & 0 \\ & CLIP & 57.84 & 81.22 & 87.78 & 226.84 & 37.02 & 61.66 & 71.5 & 170.18 & 397.02 & \(\downarrow\)97.24 \\ & HADA & 75.36 & 92.98 & 96.44 & 264.78 & **58.46** & **82.85** & **89.66** & **230.97** & **495.75** & \(\uparrow\)1.49 \\ \hline \end{tabular} \end{table}

Table 2: Performance of models on the test sets of the Flickr30k and MSCOCO datasets. The column \(\triangle\)R shows the difference in RSum compared to BLIP on the same dataset.
2302.12316
Model for 1/f Flux noise in Superconducting Aluminum Devices: Impact of External Magnetic Fields
Superconducting quantum interference devices (SQUIDs) and related circuits made of aluminum are known to display $1/\omega$ flux noise, where $\omega$ is frequency. A recent experiment showed that the application of an external magnetic field in the $10-100$~G range changed the noise to a single Lorentzian peaked at $\omega=0$. Here it is shown that a model based on independent impurity spin flips with coexisting cross and direct mechanisms of spin relaxation may explain these experiments. The model shows that application of an external magnetic field can be used to reduce the impact of flux noise in qubits.
José Alberto Nava Aquino, Rogério de Sousa
2023-02-23T20:26:56Z
http://arxiv.org/abs/2302.12316v2
# Model for 1/f Flux noise in Superconducting Aluminum Devices: Impact of External Magnetic Fields ###### Abstract Superconducting quantum interference devices (SQUIDs) and related circuits made of aluminum are known to display \(1/\omega\) flux noise, where \(\omega\) is frequency. A recent experiment showed that the application of an external magnetic field in the \(10-100\) G range changed the noise to a single Lorentzian peaked at \(\omega=0\). Here it is shown that a model based on independent impurity spin flips with coexisting cross and direct mechanisms of spin relaxation may explain these experiments. The model shows that application of an external magnetic field can be used to reduce the impact of flux noise in qubits. Superconducting qubits are promising candidates for building practical quantum computers due to their relatively low fabrication costs, low decoherence and scalability compared to other qubit technologies.[1, 2] Frequency tunability is required to scale up to thousands of qubits and prevent errors associated with frequency crowding. This requires the addition of SQUIDs to the circuits, making them sensitive to flux noise.[3] Flux noise can alter the energy levels of the qubits and lead to errors in quantum operations, making it a crucial factor in the development of practical quantum computers.[4, 5] Despite extensive research, the underlying mechanism of flux noise remains unclear, making it difficult to mitigate its impact. However, flux noise is widely believed to arise from the dynamics of magnetic impurities near the superconducting wires.[6, 7, 8, 9, 10, 11, 12] Several experiments[13, 14, 15, 16] show that the frequency dependence of flux noise follows a power law, \(\tilde{S}_{\Phi}(\omega)\propto 1/\omega^{\alpha}\), with exponent \(\alpha\) independent of frequency \(\omega\) over several decades. Experimental measurements on Nb devices show \(\alpha(T)\) going from \(0.8\) to \(0.4\) with increasing temperature \(T\).[14] On the other hand, in Al devices, \(\alpha=0.96-1.05\).[12] Recently, we showed that the \(\alpha(T)\) measured in Nb devices can be explained by a model that assumes the impurity spins interact via ferromagnetic exchange interactions.[17] In contrast, the exponent \(\alpha\) measured in Al devices could not be explained by spin-spin exchange; it required instead the assumption of an extra individual spin relaxation channel, presumably due to the interaction between each spin and other non-spin degrees of freedom such as phonons, the electron gas, or amorphous two-level systems.[17] A recent experiment, focused on Al superconducting qubits, presented for the first time measurements of flux noise in the presence of an external magnetic field \(B\).[18] While at \(B=0\) the flux noise showed the expected \(1/\omega\) frequency dependence[12] consistent with our model for spin relaxation disorder,[17] the measurements at weak fields (\(B=10-100\) G) were quite puzzling. At \(B>0\) the noise transitioned smoothly into a Lorentzian in frequency, suggesting a dramatic reduction in spin disorder. Here we propose a model for these observations, and show that fitting it to experimental data will shed light on the microscopic mechanism responsible for flux noise in Al devices. Consider a set of spin impurities spatially distributed on the surfaces and interfaces of the wires forming the superconducting device. Each impurity is located at position \(\mathbf{R}_{j}\) and has spin described by the dimensionless operator \(\hat{\mathbf{s}}_{j}\).
Their magnetic moment \(-g\mu_{B}\hat{\mathbf{s}}_{j}\) imprints a flux on the device[19] \[\hat{\Phi}=-\sum_{j}\mathbf{F}_{j}\cdot\hat{\mathbf{s}}_{j}, \tag{1}\] where the flux vector \(\mathbf{F}_{j}=g\mu_{B}\mathbf{B}_{I}(\mathbf{R}_{j})/I\) accounts for the dependence of the impurity-generated flux on different spin orientations. Here \(g\) is the impurity's \(g\)-factor, \(\mu_{B}\) is the Bohr magneton, \(\mathbf{B}_{I}(\mathbf{R}_{j})\) is the magnetic field generated by the wire at the spin site, and \(I\) is the total current flowing through the wire. The spins themselves are coupled to the wire's current according to the Hamiltonian, \[\mathcal{H}=g\mu_{B}\sum_{j}\mathbf{B}_{j}\cdot\hat{\mathbf{s}}_{j}, \tag{2}\] where \(\mathbf{B}_{j}=\mathbf{B}_{I}(\mathbf{R}_{j})+\mathbf{B}_{\text{ext}}\) is the spin's local field, including an externally applied \(\mathbf{B}_{\text{ext}}\). For typical SQUIDs, \(I\lesssim 1\)\(\mu\)A, leading to peak \(B_{I}<\mu_{0}I/b\sim 0.1\) G where \(b\sim 0.1\)\(\mu\)m is the thickness of the superconducting wire. As a result it is safe to approximate \(\mathbf{B}_{j}\approx\mathbf{B}_{\text{ext}}\) when \(B_{\text{ext}}>1\) G. Flux noise arises from time-dependent correlations of the flux fluctuation operator \(\delta\hat{\Phi}(t)=\hat{\Phi}(t)-\langle\hat{\Phi}\rangle\), \[\tilde{S}_{\Phi}(\omega) = \int_{-\infty}^{\infty}dte^{i\omega t}\left\langle\delta\hat{\Phi }(t)\delta\hat{\Phi}(0)\right\rangle \tag{3}\] \[= \sum_{j,k,a,b}F_{j}^{a}\tilde{S}_{jk}^{ab}(\omega)F_{k}^{b},\] where the superscripts \(a,b=x,y,z\) denote the components of the flux vector, and the spin noise is defined as \[\tilde{S}_{jk}^{ab}(\omega)=\int_{-\infty}^{\infty}dt\,e^{i\omega t}\langle[ \hat{s}_{j}^{a}(t)-\langle\hat{s}_{j}^{a}\rangle][\hat{s}_{k}^{b}(0)- \langle\hat{s}_{k}^{b}\rangle]\rangle. \tag{4}\] In Ref. [17] a numerical method to compute Eq. (4) for a general Hamiltonian of interacting spins was described. While it was shown that spin-spin interactions lead to the dominant mechanism of flux noise in Nb devices, the so-called spin diffusion mechanism, comparison with experiments in Al devices suggested a quite different picture: the spins near Al wires appear to fluctuate independently, with spin-spin interactions and spin diffusion playing a minor role. In the current letter, it is assumed that independent spin flips dominate the \(\omega>0\) noise, so that \(\tilde{S}_{jk}(\omega)\ll\tilde{S}_{jj}(\omega)\) when \(j\neq k\). Together with \(\tilde{S}_{jj}^{ab}(\omega)=-\tilde{S}_{jj}^{ba}(\omega)\) when \(a\neq b\),[20] the flux noise simplifies to \[\tilde{S}_{\Phi}(\omega)\approx\sum_{j}\left[\left|\mathbf{F}_{j} \cdot\hat{\mathbf{B}}_{j}\right|^{2}\tilde{S}_{jj}^{\parallel}(\omega )+\left|\mathbf{F}_{j}\times\hat{\mathbf{B}}_{j}\right|^{2} \tilde{S}_{jj}^{\perp}(\omega)\right]. \tag{5}\] Here \(\hat{\mathbf{B}}_{j}\) is the unit vector pointing along the local magnetic field acting on the spin, \(\tilde{S}_{jj}^{\parallel}(\omega)\equiv\tilde{S}_{jj}^{\hat{B}_{j}\hat{B}_{j}}(\omega)\), and \(\tilde{S}_{jj}^{\perp}(\omega)\equiv\tilde{S}_{jj}^{\hat{P}_{j}\hat{P}_{j}}(\omega)\), where \(\hat{\mathbf{P}}_{j}\) is any direction perpendicular to \(\mathbf{B}_{j}\).
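As a side note, once the single-spin spectra \(\tilde{S}^{\parallel}\) and \(\tilde{S}^{\perp}\) are known, Eq. (5) reduces to a weighted sum over spins. Below is a minimal numerical sketch (not from the paper; array names are illustrative):

```python
import numpy as np

def flux_noise(F, B_hat, S_par, S_perp):
    """Eq. (5): flux noise from independent spins at one frequency.

    F:      (N, 3) flux vectors, one per impurity spin
    B_hat:  (3,) unit vector along the local (external) field
    S_par, S_perp: single-spin noise spectra evaluated at that frequency"""
    F_par_sq = (F @ B_hat) ** 2                          # |F_j . B_hat|^2
    F_perp_sq = np.sum(np.cross(F, B_hat) ** 2, axis=1)  # |F_j x B_hat|^2
    return np.sum(F_par_sq * S_par + F_perp_sq * S_perp)
```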
It is assumed that each spin evolves independently of the others according to the equation of motion \[\frac{d\mathbf{s}}{dt}=\frac{g\mu_{B}}{\hbar}\mathbf{B}\times \mathbf{s}-\Gamma(\mathbf{s}-\mathbf{s}^{\rm inst~{} eq}), \tag{6}\] where \(\mathbf{s}=\langle\hat{\mathbf{s}}_{j}\rangle\) represents a quantum average (the subscript \(j\) is now omitted for simplicity). In addition to spin precession, this equation includes an isotropic _spin energy relaxation_\(\Gamma\), which drives \(s\) towards its "instantaneous equilibrium" value \(\mathbf{s}^{\rm inst~{}eq}=\mathbf{s}^{\rm eq}-\tilde{ \chi}(0)g\mu_{B}\delta\mathbf{B}(t)\), where \(\tilde{\chi}(0)\) is the \(\omega=0\) susceptibility. For small deviations from equilibrium we write \(\mathbf{B}\to\mathbf{B}+\delta\mathbf{B}(t)\), and \(\mathbf{s}\to\mathbf{s}^{\rm eq}+\delta\mathbf{s}(t)\), where both \(\delta\mathbf{B}(t)\) and \(\delta\mathbf{s}(t)\) are small time-dependent perturbations. Plugging these into Eq. (6) and dropping non-linear terms enables the calculation of the dynamical susceptibility and the spin noise through the fluctuation-dissipation theorem (See [17] for details). In the current independent spin model the spin noise can be calculated analytically: \[\tilde{S}^{\parallel}(\omega) = \frac{2\hbar\omega}{1-e^{-\hbar\omega/k_{B}T}}\frac{\Gamma\tilde{ \chi}(0)}{\omega^{2}+\Gamma^{2}}, \tag{7a}\] \[\tilde{S}^{\perp}(\omega) = \frac{\Gamma}{1-e^{-\hbar\omega/k_{B}T}}\left[\frac{\tilde{\chi}(0 )(\hbar\omega-g\mu_{B}B)-s^{\rm eq}}{(\omega-g\mu_{B}B/\hbar)^{2}+\Gamma^{2}}\right.\] (7b) \[+ \left.\frac{\tilde{\chi}(0)(\hbar\omega+g\mu_{B}B)+s^{\rm eq}}{( \omega+g\mu_{B}B/\hbar)^{2}+\Gamma^{2}}\right],\] where \(s^{\rm eq}=-\frac{1}{2}{\rm tanh}(g\mu_{B}B/2k_{B}T)\) and \(\tilde{\chi}(0)=1/[4k_{B}T\cosh^{2}(g\mu_{B}B/2k_{B}T)]\). In the presence of an external magnetic field, \(\tilde{S}^{\parallel}(\omega)\) is drastically different from \(\tilde{S}^{\perp}(\omega)\), as illustrated in Fig. 1. From Eq. (5) it follows that at each spin location, the flux vector's component parallel to the external field will produce Lorentzian flux noise peaked at \(\omega=0\). In contrast, the component of the flux vector perpendicular to \(B\) will produce instead two Lorentzians peaked at \(\omega=\pm g\mu_{B}B/\hbar=\pm 2\pi(B/1~{}{\rm G})\times 2.8~{}{\rm MHz}\). This shows that even a \(\mathbf{B}_{\rm ext}\) of a few Gauss has a huge impact on flux noise: it shifts the perpendicular flux vector contribution to a much higher frequency range. Figure 1: Flux noise produced by a single spin in a magnetic field \(B\). Spin fluctuations along \(B\) (\(\tilde{S}^{\parallel}(\omega)\)) remain peaked at \(\omega=0\) for all \(B\). In contrast, spin fluctuations perpendicular to \(B\) (\(\tilde{S}^{\perp}(\omega)\)) give rise to “spin precession peaks” centered at \(\omega=g\mu_{B}B/\hbar\). As a result the flux noise contribution due to the component of the flux vector perpendicular to \(B\) is expelled from the low frequency range (See Eq. (5)). Both contributions are Lorentzian with linewidth set by the spin relaxation rate \(\Gamma\). 
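For orientation, Eqs. (7a)-(7b) are straightforward to evaluate numerically; the following is a minimal sketch (not from the paper) with illustrative, unfitted values for \(g\), \(T\), and \(\Gamma\):

```python
import numpy as np

# SI constants; g, T, and Gamma below are illustrative assumptions, not fits
hbar, kB, muB = 1.054571817e-34, 1.380649e-23, 9.2740100783e-24
g, T, Gamma = 2.0, 0.05, 2 * np.pi * 1e3  # g-factor, temperature (K), rate (rad/s)

def spin_noise(omega, B):
    """Evaluate Eqs. (7a)-(7b) for a single spin in a field B (tesla).

    omega is the angular frequency (rad/s); use omega != 0 to avoid the
    singular thermal prefactor at zero frequency."""
    x = g * muB * B / (2 * kB * T)
    chi0 = 1.0 / (4 * kB * T * np.cosh(x) ** 2)   # zero-frequency susceptibility
    s_eq = -0.5 * np.tanh(x)                      # equilibrium spin polarization
    prefac = 1.0 / (1.0 - np.exp(-hbar * omega / (kB * T)))
    S_par = 2 * hbar * omega * prefac * Gamma * chi0 / (omega**2 + Gamma**2)
    wL = g * muB * B / hbar                       # Larmor (precession) frequency
    S_perp = Gamma * prefac * (
        (chi0 * (hbar * omega - g * muB * B) - s_eq) / ((omega - wL) ** 2 + Gamma**2)
        + (chi0 * (hbar * omega + g * muB * B) + s_eq) / ((omega + wL) ** 2 + Gamma**2)
    )
    return S_par, S_perp
```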
While a single spin produces Lorentzian noise, a system of spins with a wide distribution of relaxation rates \(\Gamma_{j}\) can give rise to \(1/\omega\) noise in a large interval of frequencies.[6, 17, 21] Motivated by recent flux noise measurements in Al devices under weak magnetic fields,[18] we propose the following relaxation rate \(\Gamma_{j}\) for a spin \(\mathbf{s}_{j}\) placed on the surface or interface of a superconducting wire, \[\Gamma_{j}=\Gamma_{0}e^{-\lambda}+\Gamma_{1}(k_{B}T)(g\mu_{B}B)^{n}. \tag{8}\] The first contribution to Eq. (8) models _cross spin relaxation_ due to the impurity spin interaction with one or more amorphous two-level systems (TLSs), where \(\lambda\) is a random variable uniformly distributed in the interval \([0,\lambda_{\rm max}]\).[6, 21] Cross relaxation occurs when a TLS switches due to a thermal fluctuation, causing a simultaneous impurity spin flip. Since the majority of these processes occur for TLSs with energy splitting of the order of \(k_{B}T\), cross relaxation is independent of magnetic field when \(g\mu_{B}B<k_{B}T\).[6] The second contribution to Eq. (8) models _direct spin relaxation_, whereby the spin emits a phonon, either due to modulation of its hyperfine interaction (\(\Gamma_{j}\propto T\;B^{2}\)), or due to the spin-orbit admixture mechanism (\(\Gamma_{j}\propto T\;B^{4}\)). In both cases the rates scale linearly in \(T\) because they are proportional to the phonon occupation at the Zeeman frequency; they also scale as \(B^{n}\) because the direct mechanism requires the breaking of time-reversal symmetry.[22, 23] The constants \(\Gamma_{0}\), \(\Gamma_{1}\), and \(n\) are to be determined by fitting to experiments. The \(\Gamma_{j}\) are distributed according to the probability density \(p(\Gamma)=1/(\lambda_{\rm max}|\frac{d\Gamma}{d\lambda}|)\), leading to the disorder-averaged single spin noise, \[\tilde{S}^{a}_{\rm avg}(\omega) = \int_{\Gamma_{\rm min}}^{\Gamma_{\rm max}}\tilde{S}^{a}(\omega)p( \Gamma)d\Gamma=\frac{\hbar\omega\tilde{\chi}(0)/\lambda_{\rm max}}{1-e^{- \hbar\omega/k_{B}T}}\sum_{\gamma^{a}}\left\{\frac{\arctan(\Gamma_{\rm max}/ \gamma^{a})-\arctan(\Gamma_{\rm min}/\gamma^{a})}{\gamma^{a}}\right. \tag{9}\] \[+\left.A_{\Gamma}(\gamma^{a})\frac{\Gamma_{1}(k_{B}T)(g\mu_{B}B) ^{n}}{(\gamma^{a})^{2}+[\Gamma_{1}k_{B}T(g\mu_{B}B)^{n}]^{2}}\right\},\] where \(a=\parallel,\perp\) and \[A_{\Gamma}(\omega)=\lambda_{\rm max}-\frac{1}{2}{\rm log}\left(\frac{\Gamma_ {\rm max}^{2}+\omega^{2}}{\Gamma_{\rm min}^{2}+\omega^{2}}\right)-\frac{\Gamma _{1}(k_{B}T)(g\mu_{B}B)^{n}}{\omega}\left[\arctan\left(\frac{\Gamma_{\rm max} }{\omega}\right)-\arctan\left(\frac{\Gamma_{\rm min}}{\omega}\right)\right], \tag{10}\] with \(\gamma^{\parallel}=\omega\), \(\gamma^{\perp}=|\omega\pm g\mu_{B}B/\hbar|\). The minimum and maximum rates are \(\Gamma_{\rm min}=\Gamma_{0}e^{-\lambda_{\rm max}}+\Gamma_{1}(k_{B}T)(g\mu_{B} B)^{n}\) and \(\Gamma_{\rm max}=\Gamma_{0}+\Gamma_{1}(k_{B}T)(g\mu_{B}B)^{n}\), and \(s^{\rm eq}\) was expanded to first order in \(g\mu_{B}B/k_{B}T\). When \(B=0\), the curly bracket of Eq. (9) gives rise to \(1/|\omega|\) noise in a wide range of frequencies \(\Gamma_{\rm min}\ll|\omega|\ll\Gamma_{\rm max}\). As \(B\) is increased from zero, disordered spin precession peaks scaling as \(1/|\omega\pm g\mu_{B}B/\hbar|\) are added to the low frequency range due to the nonzero components of \(\mathbf{F}_{j}\perp\mathbf{B}\).
However, as \(B\) increases further there will be a range of frequencies satisfying \(|\omega|,|\omega\pm g\mu_{B}B/\hbar|<\Gamma_{1}(k_{B}T)(g\mu_{B}B)^{n}\approx \Gamma_{\rm min}\). In this range the \(1/|\gamma^{a}|\) contribution to flux noise is exponentially suppressed, and the second term in the curly bracket of Eq. (9) becomes dominant, with \(A_{\Gamma}(\gamma^{a})\approx\lambda_{\rm max}-{\rm log}(\Gamma_{0}/\Gamma_{ \rm min})-1\), independent of \(\omega\). As a result, both the low frequency noise and the spin precession peaks are converted into simple "reduced disorder" Lorentzian noises, centered at \(\omega=0\) for \(a=\parallel\) and \(\omega=\pm g\mu_{B}B/\hbar\) for \(a=\perp\). Figure 2 shows the predicted transition from \(1/\omega\) to Lorentzian flux noise with increasing magnetic field. The behavior is in qualitative agreement with recent experiments in Al devices (Fig. 3(a) of Ref. [18]). In conclusion, a model for spin noise is proposed to explain flux noise experiments in superconducting Al devices. The model is based on the assumption that the interaction between each impurity spin and the vibrational modes of the lattice, formed by amorphous TLSs and phonons, dominates finite frequency fluctuations. Within this scenario, the impact of spin-spin interactions such as dipolar and exchange can still be observed as a Curie-Weiss temperature \(T_{\rm CW}\) in the zero-frequency susceptibility, because spin relaxation \(\Gamma\) does not impact \(\tilde{\chi}(0)\). Thus one can use \(\tilde{\chi}(0)=1/[4k_{B}(T-T_{\rm CW})\cosh^{2}(g\mu_{B}B/2k_{B}T)]\) in Eq. (9) to model spin-spin interactions. An analytical expression for the flux noise in the presence of an external magnetic field \(B\) is obtained, revealing a transition from \(1/\omega\) flux noise at \(B=0\) to Lorentzian flux noise in the frequency range where direct spin relaxation by phonon emission is stronger than cross relaxation due to amorphous TLSs. The transition is similar to what is observed in recent experiments.[18] Fitting the current theory to experimental data will yield the characteristic exponent \(n\) for direct phonon emission, elucidating whether the mechanism is mediated by spin-orbit coupling or hyperfine interaction. This will in turn yield valuable information on the identity of the spins causing flux noise. The model predicts that application of a \(B\) field reduces the low frequency noise arising from \(\mathbf{F}_{j}\parallel\mathbf{B}\) by a factor of \(1/\cosh^{2}\left(g\mu_{B}B/2k_{B}T\right)\). It also shifts the contribution of \(\mathbf{F}_{j}\perp\mathbf{B}\) out of the low frequency range, transferring noise to a spin precession peak centered at \(\omega=\pm g\mu_{B}B/\hbar\). Thus, an external \(B\) can be used to reduce the impact of flux noise in qubits. ###### Acknowledgements. The authors thank M. Amin, R. Harris, and T. Lanting for several insights into flux noise, and D. A. Rower for discussions and sharing experimental results before publication. This work was supported by NSERC (Canada) through its Discovery program (Grant number RGPIN-2020-04328).
2303.05543
Advanced Techniques in Automated High Resolution Scanning Transmission Electron Microscopy
Scanning transmission electron microscopy is a common tool used to study the atomic structure of materials. It is an inherently multimodal tool allowing for the simultaneous acquisition of multiple information channels. Despite its versatility, however, experimental workflows currently rely heavily on experienced human operators and can only acquire data from small regions of a sample at a time. Here, we demonstrate a flexible pipeline-based system for high-throughput acquisition of atomic-resolution structural data using a custom-built sample stage and automation program. The program is capable of operating over many hours without human intervention, improving the statistics of high-resolution experiments.
Alexander Pattison, Cassio C. S. Pedroso, Bruce E. Cohen, Justin C. Ondry, A. Paul Alivisatos, Wolfgang Theis, Peter Ercius
2023-03-09T19:05:03Z
http://arxiv.org/abs/2303.05543v2
# Advanced Techniques in Automated High Resolution Scanning Transmission Electron Microscopy ###### Abstract Scanning transmission electron microscopy is a common tool used to study the atomic structure of materials. It is an inherently multimodal tool allowing for the simultaneous acquisition of multiple information channels. Despite its versatility, however, experimental workflows currently rely heavily on experienced human operators and can only acquire data from small regions of a sample at a time. Here, we demonstrate a flexible pipeline-based system for high-throughput acquisition of atomic-resolution structural data using a custom-built sample stage and automation program. The program is capable of operating over many hours without human intervention, improving the statistics of high-resolution experiments. ## 1 Introduction Scanning transmission electron microscopy (STEM) is an important technique in the materials science community, capable of providing sub-Angstrom resolution mapping of the positions of atomic columns and distributions of elements within a material [1]. The addition of complementary modalities such as electron energy loss spectroscopy (EELS) and X-ray energy dispersive spectroscopy (EDS) has also served to increase the range and quantity of information that can be acquired through STEM. More recently, the development of high-speed pixelated direct electron detectors (DED) has made it possible to record a full convergent-beam electron diffraction pattern for every probe position in a single STEM scan, generating four-dimensional datasets (called 4D-STEM) [2]. These datasets contain information about local crystallinity and can be used to determine crystal orientation [3], locate defects [4] and perform phase contrast imaging [5]. Despite these and other developments continuing to expand the technical capabilities of STEM, the process of acquiring data has changed little since the technique's inception, at least from the operator's perspective. Although modern microscopes are (mostly) computer controlled and provide access to powerful scripting libraries, STEM experiments still generally require trained operators to be physically present to control the instrument, decide how to move the sample and choose what data to acquire. Aside from the tedium involved in such a process, this requirement a) limits access to this technique to those able to procure the services of an experienced microscopist, b) limits the duration of any single microscope session to the working hours of the microscopist, c) introduces variation in the data acquisition parameters and quality, and d) introduces subjective biases into resultant datasets. Additionally, human microscopists tend to expend extra electron dose when searching a sample area for features of interest by using a "live feed" consisting of repeated consecutive images. This is of particular concern when looking at beam-sensitive samples whose structures break down under repeated imaging. Automation is an obvious solution to this problem. Unlike humans, computers can work around the clock, perform a repetitive task consistently and operate without subjective bias (notwithstanding those programmed into them). They can also make decisions based on single images rather than using the "live feed" that many human operators resort to, thereby reducing the electron dose and limiting sample damage. 
These features are of particular interest to those working in the fields of cryo-EM and/or electron tomography, which is why many of the automation packages created for electron microscopy have been developed with one or the other technique in mind. These include AutoEM and its successor AutoEMation [6], JADAS [7], JAMES [8], SAM [9], TOM [10], SerialEM [11] and Leginon [12]. SerialEM and Leginon are especially popular as they are cross-compatible with Thermo Fisher (formerly FEI) and JEOL microscopes and are both open source [13], with active developer communities dedicated to expanding their functionality in response to users [14, 15]. However, these programs are primarily focused on the application of conventional TEM for the biological sciences rather than STEM and the material sciences and, as such, have been optimised for a different set of experiments. While biological samples are typically far more sensitive to electron irradiation than samples studied by material scientists, the samples and set of tasks are usually also more homogeneous in nature. In cryo-EM, for example, it is quite common to combine the signal from tens of thousands of separate biomolecules to create an averaged three-dimensional model of the structure, a process known as single-particle reconstruction [16]. Here, automation is used to acquire many images of well-separated macromolecules suspended in ice with a well-established set of imaging parameters common across different macromolecules. Materials science samples are instead highly heterogeneous, where the features of interest are defects and grain boundaries whose natures are rarely reproducible or controllable. This heterogeneity has dissuaded broad adoption of automation due to the difficulty of making generalized programs. Secondly, the nature of STEM experiments (scanning a focused probe) leads to several key differences between STEM and TEM automation, especially at high resolution, such as focusing, longer data acquisition times, and the need for higher dose in general. In this paper, we demonstrate an automated control system for STEM using a custom-built, all piezo-electric stage with a flexible control system that can be customised for specific samples and experiments. This system was designed for the TEAM 0.5 at the National Center for Electron Microscopy (NCEM) facility of the Molecular Foundry at LBNL and takes full advantage of the unique capabilities of its TEAM stage. We also show integration with a fast DED, showing the power of automated multimodal experimental data acquisition. ## 2 Sample stage capabilities The TEAM Stage is a custom-built, all-piezo driven stage with an ultra-low drift rate (\(\sim 11\) pm/min) and five-axis movement: \(x\), \(y\), \(z\), \(\alpha\) and \(\gamma\). \(\alpha\) tilt and in-plane \(\gamma\) rotation provide full \(\pm 180^{\circ}\) range of motion [17], meaning that crystalline samples can be precisely manipulated for full rotation tomography [18], crystal zone-axis tilting [19] and atomic electron tomography [20, 21]. The stage combines piezo motion with an accurate measurement system to implement open- and closed-loop computer control with a scripting interface. Other stages rely on sets of gears with backlash that makes accurate, repeatable motion for high resolution experiments difficult to implement. The unique properties of the TEAM Stage make it well-suited to automated and remote operation [22] via computer control. 
In this paper, we exploit the capabilities of this unique stage for high resolution experiments. Even so, inaccuracies in the stage motion and position measurement system limit its full potential for completing autonomous high resolution experiments. Further development in stage technology to allow for accurate motion, repeatable motion, high stability, large range of tilts, _in-situ_ capabilities, and exchangeable sample magazines is required to realize the full potential of autonomous electron microscopy experiments. ## 3 Automation implementation A fully automated microscopy system needs to be able to perform, at minimum, five basic tasks to function effectively: 1) decide where to move the stage, 2) tell the stage to move, 3) check that the stage has moved to the right location (registration), 4) find the optimal focal conditions and 5) acquire data. We implemented a server/client model based on Python. A server is loaded onto the microscope PC that accepts general control commands through a networking interface, completes each assigned task (move the stage, acquire an image, change the focus, etc.) and sends results back to a local or remote client. The server is general by design and exposes all necessary microscope systems (lenses, detectors, stages, etc.) from a single program, allowing for orchestration of common tasks as a queue of short sub-tasks to recover easily in case of task failure. Multiple clients can be connected from a local and/or remote PC to implement capabilities specific to a given experiment. Stage movement (task 2) and imaging (task 5) are achieved by communication with the microscope's and stage's scripting interfaces. Registration (task 3) is achieved by cross-correlation, a standard operation found in most image processing libraries that can be performed quickly on modern desktop computers. More complex analysis is implemented in the client described later. Figure 1: a) HAADF-STEM image of AuNPs with b) accompanying stigmation tableau. c) Line scans and d) normalized FFTs (right) of line scans of sharp edge (orange, \(q\) = 54.2) and blurred edge (blue, \(q\) = 3.0). ## 4 Beyond Hand Panels: Assistive Functionality Towards Full Automation Computer control of the microscope can be used to develop new techniques that assist the user during an experiment short of full automation. A typical microscope operator station is designed around the hand panels with knobs and buttons for commonly completed tasks. Such hand panels could be considered somewhat antiquated in that they are meant for analog operation of microscope parameters by an operator with physical access to the machine. For remote operation, hand panels must be shipped to the remote site or an inefficient on-screen version is made available. Implementation of common tasks as an automated function can both improve local operation and enable remote operation. Common procedures such as object centering, focusing, and lens alignment could all be made into discrete functions for this purpose. As an example, stigmating images is a very common practice. The user typically watches a live update of the image as the stigmator value is manually changed to achieve the best image possible. This requires significant dose, constant operator attention, and direct, analog control of the stigmator knobs. Figure 1a-b) shows an alternative method where the computer acquires an image from a sub-region with a discrete set of stigmation values. The user (or an image quality metric) can then simply choose the best image. 
Such a "stigmation tableau" provides a defined task with control over how accurate the stigmation steps should be and provides for remote/automated interaction for a very common task. This concept can be applied to almost any operational task (removal of the hand panel control) to take full advantage of computer-aided microscope control. ## 5 Autofocusing Autofocusing (task 4) is one of the more difficult tasks in high resolution STEM imaging due to several complications. Firstly, the complexity and heterogeneous nature of materials science samples makes development of a general algorithm difficult. Secondly, the electron beam is highly converged and very sensitive to the focus value especially for atomic resolution imaging. Accuracy of a few nanometers is needed for the ideal imaging conditions. Optimized autofocusing routines that are fast and accurate are highly desirable for automated STEM experiments. We have developed two autofocusing routines that improve upon the traditional techniques. ### One-dimensional intensity line cuts A common method in STEM is to use 10 to 15 scanned images of approximately 1 second each at different foci where the image quality is determined by auto-correlation. This method is fairly slow, requires high quality images to reduce noise, and adds large amounts of dose to the entire sample area of interest. We instead implemented a more efficient approach for STEM by analysing the sharpness of intensity data from a focal series of one-dimensional line scans taken at the same position. One-dimensional scans reduce the time needed by the square of the number of image pixels and only doses a very small region of the sample. The sharpness (quality factor) is determined for each line scan by summing the magnitudes of their fast Fourier transforms (FFT) and dividing by the magnitude of the zero frequency. The FFT of a sharp edge will have uniform magnitude at all frequencies, but a blurred edge will present suppressed higher frequencies and a lower sum, as demonstrated in Figure 1c-d). This normalized sum of frequencies, \(q\), is a useful metric of focus. In practice, the system only requires about 5 to 10 line cuts in real experiments to accurately focus significantly reducing the time and dose required. ### Bayesian optimization The accuracy of line-cuts is limited by the number of one-dimensional line scans according to pre-selected parameters, and it can fail if they are taken at a weakly scattering position on the sample. We have found that for atomic resolution STEM images the image quality tends to be highly peaked requiring several line-cut iterations with reduced step size at each stage to achieve the best focus. Thus, the line scan method works best on samples with very high contrast like well separated nanoparticles and less well on continuous thin films. Lastly, parallel beam TEM can only be acquired as full images and this method can not be used. Thus, in some cases it is better to use full images to achieve the best focus, and an algorithm that minimizes the number of full images is desirable. We have implemented a Bayesian optimization technique to auto-focusing. Bayesian optimization is an efficient method of estimating expensive to-compute unknown functions by minimizing uncertainty [23] and is being used for autonomous experiments in synchrotrons [24], fMRI studies [25] and scanning-probe microscopy [26]. It allows us to use the confidence in the peak value of a scalar image quality metric to determine when the optimal focus has been found. 
We can also adapt the algorithm to achieve higher accuracy (at the cost of more images/dose) on radiation-hard materials or lower accuracy (reducing dose) on dose-sensitive materials. Bayesian optimization requires a scalar metric of focus quality. Kirkland demonstrated in simulation that normalized image variance is a good, simple indicator of the focal quality of an atomic resolution high angle annular dark field (HAADF) STEM image [27]. Normalized image variance is calculated as \[F(df)=\frac{\sigma^{2}}{\mu^{2}}\rightarrow\max, \tag{1}\] where \(\sigma^{2}\) is the variance of the pixel intensities of an image and \(\mu\) is the mean pixel intensity, such that \[\mu=\frac{1}{N}\sum_{ij}z_{ij} \tag{2}\] \[\sigma^{2}=\frac{1}{N}\sum_{ij}(z_{ij}-\mu)^{2} \tag{3}\] where \(z_{ij}\) is the pixel intensity at position \(i,j\) and \(N\) is the total number of pixels [28]. In STEM, while acquiring an image and calculating the variance are simple operations, each image taken increases the applied dose, and the time required (\(\sim\)1-3 seconds) for high quality scanned images at atomic resolution is non-trivial. Thus, taking images from which to evaluate focal quality can be considered expensive in the context of a long-running automation experiment where auto-focusing is used in multiple steps. In Bayesian optimization, a surrogate model is generated as a proxy for the black-box function, usually using Gaussian processes (GP) [23]. Bayes' Theorem is then used to incorporate prior (and subsequent posterior) knowledge in order to select the most beneficial measurement value (i.e., focus) to acquire next. This selection is determined by an acquisition function, the choice of which determines the behaviour of the optimization routine [23]. A common choice for determining the maximum of a function is the upper confidence bound (UCB) method, where the choice of the next measurement point is determined by the maximum of the UCB. This process can be mathematically represented by: \[x^{*}=\arg\max\left[f(x)+A\times\sigma(x)\right] \tag{4}\] where \(x^{*}\) is the next sample point to be chosen, \(f(x)\) is the posterior focus quality at defocus setting \(x\), \(\sigma(x)\) is the posterior uncertainty of the focus quality at defocus setting \(x\), and \(A\) is a hyperparameter that determines the size of the confidence bound [23]. Increasing \(A\) favors exploration (searching new regions to try to find the maximum) while decreasing \(A\) favors exploitation (searching close to the previously discovered maximum). In this work, a Bayesian optimization routine using Gaussian processes and the UCB acquisition function was implemented using the Gaussian process module in _scikit-learn_ version 1.0.2 [29]. Since the relationship between defocus and image variance of a high resolution image approximates a smooth Gaussian with a single peak, \(A\) was set to 1.5 to more strongly favor exploitation rather than exploration, thereby decreasing the number of acquisitions required to find the maximum. Gaussian processes are designed to explore a function space and to accurately predict the function everywhere, which is incompatible with the goal of only finding the maximum image variance (i.e., best focus). Still, the Bayesian optimization method allows us to use the confidence in the peak to define our desired accuracy in the image focus, which might require fewer or more images than what was pre-determined for, e.g., a focal series. 
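A minimal sketch of this UCB loop follows, using scikit-learn's Gaussian process module as the paper does; the `measure_variance` function is a simulated stand-in (our assumption) for acquiring an image at a given defocus and computing Eq. (1), and the hyperparameter values mirror those reported in the text.

```python
# Sketch of UCB-based Bayesian-optimization autofocus with a simulated metric.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def measure_variance(df):              # stand-in for a real STEM acquisition
    return np.exp(-(df - 12.0)**2 / (2 * 8.0**2)) + 0.01 * rng.standard_normal()

A = 1.5                                # confidence-bound width (exploitation-leaning)
grid = np.linspace(-50, 50, 501).reshape(-1, 1)   # +/- 50 nm search bounds
X = [[-50.0], [50.0]] + rng.uniform(-50, 50, (3, 1)).tolist()  # 2 bounds + 3 random
y = [measure_variance(df[0]) for df in X]

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-4),
                              normalize_y=True)
for _ in range(10):                    # each iteration costs one more image
    gp.fit(np.array(X), np.array(y))
    mean, std = gp.predict(grid, return_std=True)
    df_next = grid[np.argmax(mean + A * std), 0]   # maximize the UCB, Eq. (4)
    X.append([df_next]); y.append(measure_variance(df_next))

print(f"estimated optimal defocus: {X[np.argmax(y)][0]:.1f} nm")
```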
Thus, we developed an early stopping algorithm to further reduce the number of scans according to these conditions: 1. Check that the maximum of the lower confidence bound, which should always be close to the global maximum of the upper confidence bound, is greater than all the local maxima of the upper confidence bound except for the global maximum. This indicates that the current peak of the surrogate function is the main peak of the surrogate function to a high degree of confidence. 2. Check that the average distance between the last \(n\) sampled defocus values is smaller than a threshold \(t\). In this work, \(n\) was set to 3 and \(t\) to 1 nm. 3 is a good minimum for \(n\), since this, in principle, corresponds to testing one value in the centre of the peak and one value on either side of it. The choice of \(t\) is sample-dependent, with thinner samples having narrower defocus tolerances than thicker samples. This optimization algorithm was tested on a HfO\({}_{2}\) thin film sample using a 7.8 nm by 7.8 nm region. For early stopping condition 2, \(n\) was set to 3 and \(t\) was set to 1 nm. The defocus was changed by 20 nm from the visually determined optimum, and the optimizer was instructed to search with a boundary of \(\pm 50\) nm defocus. The optimizer was initialised with 5 starting points: 3 chosen randomly and 2 placed at the bounds. An example of this optimization routine is shown in Figure 2, which shows how the routine quickly and accurately determines the optimal defocus value in just five more scans. Figure 2: Auto-focusing on a HfO\({}_{2}\) thin film. The HAADF-STEM images on the left correspond to the defocus values denoted by the yellow stars on the right. The graphs on the right demonstrate the Bayesian optimization process, with the blue lines representing the surrogate model, the shaded bars representing the confidence bounds (\(1.5\sigma\)), the red dots denoting previously sampled defocus values, the yellow stars denoting the most recently sampled defocus value (corresponding to the image on the left), and the blue cross denoting the next defocus value to be sampled. The red sample points in the first row were all randomly chosen as part of the initial data set, while all subsequent sample points were determined by the program. The exact number of scans needed fluctuates between runs due to the randomness of the initialization and noise. This will also vary depending on the sample and the values of the hyperparameters, such as the number of starting points and the choice of \(A\) in the UCB function. We also found that this technique works at low resolution on discrete nanoparticles, making this a generalizable auto-focus routine. The ability to efficiently and accurately auto-focus on any arbitrary sample is vital to obtaining high-resolution data from automated STEM. Additionally, normalized image variance can also be used to correct non-round aberrations (astigmatism, coma, etc.), making Bayesian optimization a general tool for electron microscope alignment [28]. Future work will explore the use of Bayesian optimization to compensate for higher order lens aberrations (such as astigmatism) in multiple dimensions simultaneously and line cuts to further reduce applied dose. ## 6 Automated Workflows Once the common steps of STEM operation become discrete functions, an experiment can be expressed as a set of repeatable steps integrated into a workflow. Many processes (like defocus and stigmation) can be written as self-contained elements that can be reused in different parts of the workflow or in different workflows. Thus, in order to be as broadly usable as possible, our automation software program is highly modular. The workflow for each experiment consists of a pipeline of elements that can be customised and chained together to accomplish almost any task. This means that any new function developed for specific workflows can be easily reused in future experiments. Each element consists of one of three main methods: * Tiling: This creates a grid of target sample positions at which to acquire data. The shape of
the grid, spacing or overlap between adjacent tiles, and field-of-view of these tiles can all be specified by the user. This method is typically used to study continuous thin films or samples with a dense packing of nanoparticles. * Point of Interest: This identifies points of interest in an overview image for the microscope to investigate. Points of interest are typically identified automatically in an initial low resolution overview image. This method is typically used for samples with a sparse dispersion of nanoparticles. * Record: This contains all the parameters needed for recording data and is typically the final element in any pipeline. Available data streams are the HAADF detector and a direct electron detector. Any number of elements can be chained together into a pipeline to perform complex tasks and then saved as a text file for reuse later. A pipeline is implemented in a workflow client that steps through each stage of the pipeline adapting to the current microscope and sample state at each step. The automation program with an example of a pipeline (left panel) that acquires a 3 by 3 grid is shown in Fig 3 (see appendix for full pipeline text). The pipeline is organized as a hierarchical set of elements with the associated parameters, which are described below. The first element _1_start_ generates a large-field-of-view reference image to be used for subsequent registration. This is done by using the tiling method and setting 'nrperdim' (number of tiles per dimension) to 1 in each dimension. The field-of-view of these tiles is determined by the magnification setting (320k\(\times\) in this case). The second element _2_tiling_ generates the grid of positions for the microscope to step between and take images. In this case, the size of the grid is calculated from the specified magnification, stride and edge tile clearance rather than being explicitly stated by the user. Stride determines the spacing between adjacent tiles, with a stride of 1 meaning that the spacing between the tile centres equals the size of the tiles (i.e., the edges of each tile touch perfectly). Edge tile clearance determines how many tiles to remove around the edge of the grid. This is done because the registration process for tiles around the edge is typically more prone to errors due to the smaller overlap between the registration image and the reference image. With a stride of 1, a 5 by 5 grid of 1.8Mx tiles would fit inside a single 320kx reference image, but the edge tile clearance of 1 reduces this to 3 by 3. Overlapping of images is desirable to create a montage with no breaks between regions but sacrifices efficiency by imaging the same region many times. Further, stitching high resolution images becomes problematic due to drift at the atomic scale. 
Alternatively, a sufficient gap can be used to ensure that every image only contains new sample regions, which is desirable when trying to determine morphological statistics from many objects. The third element _3_record_ contains the necessary recording parameters: dwell time, magnification, number of scan positions and camera length. After the high-resolution data acquisition, it lowers the magnification to acquire an overview image that can be more easily registered against the original reference image. We also implement the ability to acquire multimodal data, which in this case is a 4D-STEM data set. Each high-level element contains additional parameters pertaining to lower-level functions. * Focus: contains all parameters connected to auto-focusing including repeats to check accuracy, number of defocus steps and other advanced parameters. Figure 3: High throughput automation client shows the pipeline (left panel) of steps to be completed and the associated parameters. The current status of the machine, its location, and the relevant images/data are also displayed to the operator. * Registration: contains all parameters that handle registration, including the final magnification at which to perform registration for this step (registration_end_mag), the maximum permitted value of the microscope defocus (dfmax), and the tolerance for deviation from the target position (xytol). * Display: contains all parameters that determine how information is displayed in the workflow display, including data stitching and in which panel the output appears. * Output: contains the image parameters for the output from all elements (except for the final record operation, which uses 'record'). The pipeline is a hierarchical set of tasks that takes the output from a previous pipeline element and applies the next set of tasks, taking into account the current state of the microscope. For example, the tiling element takes the initial overview image and generates a sequence of outputs that are passed down to the next element, which in turn generates its own set of outputs. This makes the system more robust and allows it to restart at different steps in the full process, avoiding the need to completely restart a pipeline if one failure occurs. There are also high-level parameters, such as the number of attempts to reach a position (stepper_attempts), whether or not the stage returns to its original position after the pipeline has been completed (return_to_start), how many errors to tolerate before aborting (errorskips), and the ability to send updates as chat messages during operation (chatmessageevery). ## 7 Multimodal automated data acquisition Fig 3 shows the program used to acquire a 3 by 3 grid of 4D-STEM datasets with 512 by 512 scan positions taken at 1.8Mx STEM magnification and a camera length of 170 mm (see Appendix A for the full pipeline). The total time was about 10 minutes. The sample is a dense array of core-shell SrYbF\({}_{5}\)@CaF\({}_{2}\) metal fluoride nanoparticles used as upconverting optical probes in biological imaging [30, 31]. We show the ability to measure the atomic structure of several of these dose-sensitive nanoparticles using differential phase contrast STEM by 4D-STEM. Figure 4 shows the initial overview image used for position generation and registration, where the colored dots indicate the positions and order of acquisition (black to yellow). Each data acquisition includes a HAADF-STEM image (Figure 5) and a 4D-STEM scan acquired simultaneously. 
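The center-of-mass (DPC) analysis described next amounts to taking the first moment of each diffraction pattern; a minimal sketch follows (our own illustration, with assumed array shapes, not the authors' analysis code).

```python
# Sketch of center-of-mass DPC from a 4D-STEM dataset: for every probe
# position, the first moment of the diffraction pattern gives the beam
# deflection, which is the differential phase contrast signal.
import numpy as np

def center_of_mass_dpc(data4d: np.ndarray) -> np.ndarray:
    """data4d: (scan_y, scan_x, det_y, det_x) -> CoM shifts (scan_y, scan_x, 2)."""
    det_y, det_x = data4d.shape[2:]
    ky, kx = np.mgrid[0:det_y, 0:det_x]
    total = data4d.sum(axis=(2, 3))
    com_y = (data4d * ky).sum(axis=(2, 3)) / total
    com_x = (data4d * kx).sum(axis=(2, 3)) / total
    # Subtract the mean so the result is the field-induced deflection only
    return np.stack([com_y - com_y.mean(), com_x - com_x.mean()], axis=-1)

# Random data standing in for a (512, 512, k, k) acquisition:
dpc = center_of_mass_dpc(np.random.rand(64, 64, 32, 32))
print(dpc.shape)   # (64, 64, 2)
```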
The 4D-STEM data was analyzed using the center of mass technique to create a differential phase contrast measurement [32]. The core/shell structure is clearly visible in the HAADF-STEM data due to the Z-contrast between the Ca shell and the SrYb core; however, the weakly scattering F atoms are not resolved. DPC-STEM is sensitive to these weakly scattering atomic columns, which are imaged in the center nanoparticle lying on the [100] zone axis. Automation allowed us to image several nanoparticles with advanced methods and minimal dose. Figure 4: Overview HAADF-STEM image of tiled 4D-STEM data acquisition. The colored dots indicate the positions of each data acquisition determined by cross-correlation using the post-acquisition zoom-out image. Figure 5: Simultaneously acquired HAADF-STEM images for each stage position. The core@shell morphology and atomic structure are clearly visible due to the Z-contrast of this technique. Figure 6: The differential phase contrast STEM signal extracted from each 4D-STEM data set acquired at each stage position. The lightly and heavily scattering atom columns are resolved in the on-axis nanoparticle in the center image. The center image is enlarged to show the available high resolution structure allowing analysis of the core@shell interface. ## 8 Summary We have implemented a pipeline-based system that utilizes the capabilities of the custom-built all-piezo TEAM stage for high-throughput acquisition of atomic-resolution HAADF and phase contrast STEM images on the TEAM 0.5 microscope. We have demonstrated the application of this system for the investigation of core-shell SrYbF\({}_{5}\)@CaF\({}_{2}\) nanoparticles. Alongside this, we have developed multiple techniques for STEM autofocusing and implemented them as part of this pipeline system. This system is available through the user program at the Molecular Foundry user center. ## 9 Acknowledgements This work was partially funded by the US Department of Energy in the program "4D Camera Distillery: From Massive Electron Microscopy Scattering Data to Useful Information with AI/ML." Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We would like to thank Gatan, Inc. as well as P Denes, A Minor, J Ciston, C Ophus, J Joseph, and I Johnson who contributed to the development of the 4D Camera. ## Appendix A: Pipeline The pipeline used for Figs. 4, 5 and 6:
```yaml
1_start:
  tiling:
    nrperdim: (1, 1)
    mag: 320000
  focus:
    repeats: 1
  display:
    stitch: True
    panel: 0
  output:
    mag: 320000
    pixels: 1024
  stepper_attempts: 4
  return_to_start: True
  next_pipeline: 2_tiling
2_tiling:
  edge_tile_clearance: 1
  mag: 1800000
  stride: 1
  focus:
    repeats: 1
    nrslices: 9
    extra_focus_steps: [5.0e-09]
  registration:
    registration_end_mag: 1300000
    dfmax: 5e-07
    xytol: 5e-09
  display:
    panel: 1
  output:
    pixels: 512
    mag: 1800000
  errorskips: 10000
  chatmessageevery: 10
  next_pipeline: 3_record
3_record:
  record:
    dwell: 3e-06
    mag: 1800000
    pixels: 512
    zoomoutmag: 320000
    STEM camera length index: 10
    4D-STEM: True
  display:
    panel: 2
```
2304.10857
SequeL: A Continual Learning Library in PyTorch and JAX
Continual Learning is an important and challenging problem in machine learning, where models must adapt to a continuous stream of new data without forgetting previously acquired knowledge. While existing frameworks are built on PyTorch, the rising popularity of JAX might lead to divergent codebases, ultimately hindering reproducibility and progress. To address this problem, we introduce SequeL, a flexible and extensible library for Continual Learning that supports both PyTorch and JAX frameworks. SequeL provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches. The library is designed for modularity and simplicity, making the API suitable for both researchers and practitioners. We release SequeL\footnote{\url{https://github.com/nik-dim/sequel}} as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
Nikolaos Dimitriadis, Francois Fleuret, Pascal Frossard
2023-04-21T10:00:22Z
http://arxiv.org/abs/2304.10857v1
# SequeL: A Continual Learning Library in PyTorch and JAX ###### Abstract Continual Learning is an important and challenging problem in machine learning, where models must adapt to a continuous stream of new data without forgetting previously acquired knowledge. While existing frameworks are built on PyTorch, the rising popularity of JAX might lead to divergent codebases, ultimately hindering reproducibility and progress. To address this problem, we introduce SequeL, a flexible and extensible library for Continual Learning that supports both PyTorch and JAX frameworks. SequeL provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches. The library is designed for modularity and simplicity, making the API suitable for both researchers and practitioners. We release SequeL1 as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes. Footnote 1: [https://github.com/nik-dim/sequel](https://github.com/nik-dim/sequel) ## 1 Introduction The field of Continual Learning (CL), also known as Lifelong Learning [31], Incremental Learning [33], or Sequential Learning, has seen fast growth in recent years. Continual Learning addresses the important setting of incrementally learning from a stream of data sources, dispensing with the long-standing i.i.d. assumption of traditional machine learning. However, the pace of innovation has led to diverging settings in terms of datasets, assumptions, and requirements. As a consequence, several works have attempted to unify the Continual Learning paradigms [24, 3]. The plethora of Continual Learning settings is accompanied by a variety of Deep Learning libraries, such as PyTorch, JAX, and TensorFlow, leading to further division. Each Deep Learning library has different advantages, and researchers opt for the one that best suits their needs and prior experience. Over time, the influx of new methods results in disconnected repositories, stagnating progress due to limited reusability and lack of reproducibility. In this work, we propose _SequeL_, i.e., Sequential Learning, a Continual Learning framework written in both PyTorch and JAX. _SequeL_ aims to unite the divergent codebases while allowing researchers to prototype quickly without delving into engineering code, e.g., training loops and metric tracking. Users can develop in the framework of their choosing while accessing the already implemented baselines. For example, consider the case where one researcher wants to implement a novel algorithm in JAX, but all the baselines are in PyTorch. Reimplementing everything from scratch is time-consuming and prone to mistakes. Instead, they can use our proposed framework to integrate their method and compare with baselines on an equal footing. Overall, _SequeL_ offers a unified and flexible framework for Continual Learning research, which is easily extensible and accessible in order to foster reproducibility. We believe _SequeL_ can help researchers to better compare methods and scale up to more complex Continual Learning settings. ## 2 Framework The framework contains the following modules: Benchmarks (Section 2.1), Backbones (Section 2.2), Callbacks (Section 2.3), Loggers (Section 2.4), and Algorithms (Section 2.5). 
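Since Code 1 itself is not reproduced here, the following is a purely hypothetical sketch of how these modules might be wired together, in the spirit of the interplay described below; every name (import path, class names, keyword arguments) is illustrative rather than taken from the codebase, whose authoritative API lives in the SequeL repository.

```python
# Hypothetical sketch of a SequeL experiment; names are illustrative only.
import sequel  # assumed import path

benchmark = sequel.benchmarks.SplitMNIST(num_tasks=5)          # Benchmarks
backbone = sequel.backbones.MLP(width=256, num_classes=10)     # Models/Backbones
logger = sequel.loggers.ConsoleLogger()                        # Loggers
algo = sequel.algorithms.EWC(                                  # Algorithms
    backbone=backbone, benchmark=benchmark,
    loggers=[logger], callbacks=[sequel.callbacks.TqdmCallback()],
    ewc_lambda=1.0,
)
algo.fit(epochs=1)   # trains task-by-task and validates preceding tasks
```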
Code 1 shows the interplay between the different modules; a benchmark and a backbone along with loggers are fed into the algorithm instance that is endowed with additional custom functionalities via callbacks. The algorithm serves as a trainer module and also houses the conceptual details of CL methodologies, such as Averaged-Gradient Episodic Memory [8]. _SequeL_ provides flexibility by allowing users to develop their algorithm in either JAX or PyTorch. The framework has been geared towards ease of use and the separation of engineering and algorithmic details. ### Benchmarks The _Benchmarks_ module provides several widely-used Continual Learning benchmarks, both in the New Instance (NI) and New Class (NC) scenarios. The currently supported benchmarks are Split/Permuted/Rotated MNIST, Split CIFAR10/100 and Split TinyImageNet. The _Benchmarks_ module is implemented in PyTorch, since it is better suited for dynamically generating and handling data streams. During training, the input and targets are transformed to the appropriate format to ensure compatibility with both PyTorch and JAX. All supported benchmarks are based on the BaseBenchmark class that handles most use cases, such as loading training and validation streams for task \(t\) or for all tasks up to \(t\), i.e., \(\{1,2,\dots,t\}\). Similar functionalities are provided for memory capabilities, such as loading memory streams for one or more tasks, augmenting the current task dataset with the memories of all previous tasks, etc. The benchmark module also handles data-loader construction, and the input to the algorithm is a tuple \((x,y,t)\) of inputs \(x\), targets \(y\) and task IDs \(t\). **Implementing new benchmarks** Let \(T\) be the number of tasks. Each Benchmark must implement the method prepare_datasets that returns two dictionaries of \(T\) key-value pairs that contain the dataset for the corresponding task \(t\in[T]\), for training and validation. For New Class scenarios, such as SplitMNIST or SplitCIFAR100, the method creates disjoint datasets, and for New Instance scenarios, such as PermutedMNIST or RotatedMNIST, each dataset is coupled with a specific torchvision transformation applying the fixed permutation or rotation. ### Models/Backbones The _Models_ module contains neural networks widely used in the literature, such as Multilayer Perceptrons, Convolutional Neural Networks and Residual Networks [18]. For both JAX and PyTorch, a BaseBackbone class is defined that inherits from flax.nn.Module and torch.nn.Module, respectively, and endows the model with the functionality of selecting the output head for NC benchmarks. The user can easily extend the module with models from the literature or custom-made ones by changing the base class to BaseBackbone. A utility model is also provided that receives as input a torch/flax.nn.Module and wraps it with the appropriate BaseBackbone to facilitate importability. ### Callbacks A callback provides hooks for any point in training and validation, similar to PyTorch Lightning [15]. It offers the ability to extend or probe the algorithm and/or model during fitting. Metric callbacks have been implemented for both JAX and PyTorch and handle the monitoring of metrics, e.g., accuracy and forgetting, and the ad hoc tracking via the _Loggers_ module. Utility callbacks can be implemented, such as the TqdmCallback that provides additional information during training via a progress bar. ### Loggers Experiment tracking has become an indispensable part of the ML pipeline. 
Hence, _SequeL_ includes five different loggers: LocalLogger, ConsoleLogger, WandbLogger2, TensorBoardLogger and CometLogger, allowing users to track their runs with their preferred service. Specifically, LocalLogger saves the evolution and final metrics in a local file, while ConsoleLogger prints information as a table in the console. WandbLogger, TensorBoardLogger and CometLogger use the APIs of the eponymous services, allowing the tracking of images, tables, etc., and the integration with powerful visualization tools and dashboards. Footnote 2: Wandb refers to Weights & Biases [4] ### Algorithms The _Algorithms_ module controls the program flow and incorporates all the aforementioned modules. By calling the fit method, training with validation occurs for the selected backbone for the given benchmark, tracking metrics via the corresponding callback and logging them to the desired service via a logger. The parent class BaseAlgorithm handles the engineering code, while the algorithmic parts are implemented by the children classes. This design choice is motivated by the desire to have access to all internal variables, such as the input \(x\) and task ID \(t\) of the current batch, without using a separate training module. As a result, the engineering logic is kept separate from research code via inheritance. Figure 1 shows a simplified version of the program flow. Each event is surrounded by the corresponding callback hooks. The BaseAlgorithm class is framework agnostic and primarily sets the control flow of the program, such as training for one task and then validating current and preceding tasks. The peculiarities and design constraints imposed by the PyTorch and JAX philosophy are handled by the corresponding base classes, PyTorchBaseAlgorithm and JaxBaseAlgorithm. For instance, for PyTorch the current batch is moved to the appropriate CUDA device, while for JAX it is converted to the NumPy format. The BaseAlgorithm offers basic functionality and uses callbacks for specific and custom functionalities. It also inherits from BaseCallback and provides the same hooks outlined in Section 2.3. Overall, each event E in training, e.g., training_epoch(), is surrounded by four hooks in the following sequence: on_before_E, on_before_E_callbacks, E, on_after_E and on_after_E_callbacks. Hence, the user can choose to implement an algorithm via specific callbacks or in child classes so that the research code is concentrated in a single file. For improved readability, the currently supported methods opt for the latter. Calculation of metrics, utilities for printing to console, etc., are reserved for callbacks and the corresponding hooks. The framework includes implementations for Naive Finetuning, Elastic Weight Consolidation (EWC) [22], Synaptic Intelligence (SI) [38], Memory Aware Synapses (MAS) [3], Averaged-Gradient Episodic Memory (A-GEM) [8], Less-Forgetting Learning (LFL) [21], Experience Replay (ER) [9], Dark Experience Replay (DER and DER++) [7], Stable SGD [27], Kernel Continual Learning (KCL) [11], Look Ahead Model Agnostic Meta Learning (LaMAML) [17], and Mode Connectivity Stochastic Gradient Descent (MC-SGD) [26]. **Implementing new algorithms** _SequeL_ supports regularization and replay algorithms via out-of-the-box components. Parent classes are implemented for the specific realizations of regularization-based algorithms such as Elastic Weight Consolidation. For replay methods, the MemoryMechanism class and the corresponding callback handle saving samples to memory and their selection process. 
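As a hedged illustration of the hook system just described, the sketch below shows what a child algorithm that accumulates an online, Fisher-style importance after every training step might look like (in the spirit of SI's online \(\omega_{i}\)); the import path, base-class name, hook names and backbone attribute are assumptions inferred from the paper's description, not copied from the actual codebase.

```python
# Hypothetical sketch; names inferred from the paper, not the real SequeL API.
from sequel.algorithms import PyTorchBaseAlgorithm  # assumed import path

class MyOnlineImportance(PyTorchBaseAlgorithm):
    def on_before_training_task(self, *args, **kwargs):
        # reset the online importance accumulator at the start of each task
        self.omega = {n: p.detach().clone().zero_()
                      for n, p in self.backbone.named_parameters()}

    def on_after_training_step(self, *args, **kwargs):
        # accumulate squared gradients as a Fisher-style online importance
        for n, p in self.backbone.named_parameters():
            if p.grad is not None:
                self.omega[n] += p.grad.detach() ** 2
```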
For regularization algorithms, the overall loss for sample \((\mathbf{x},y,t)\) of a classification problem is \(\mathcal{L}(\mathbf{x},y)=\mathcal{L}_{CE}(f_{\mathbf{\theta}}(\mathbf{x}),y)+\lambda\sum _{i}\Omega_{i}(\theta_{i}-\theta_{i,\text{old}})^{2}\) where \(f\) is a neural network parameterized by \(\mathbf{\theta}\), \(\mathbf{\theta}_{\text{old}}\) are the parameters at the end of training of the previous task, \(\Omega_{i}\) refers to the importance of parameter \(i\) and \(\lambda\) is the regularization coefficient. To add a new regularization method, the user needs only to implement the calculate_parameter_importance method to calculate \(\Omega_{i}\), while the storing of the old parameters and the calculation of the regularization loss are handled by the parent class. In the case of algorithms such as Synaptic Intelligence [38] that keep an online internal parameter \(\omega_{i}\) that is later used to compute \(\Omega_{i}\), the method on_after_training_step houses the corresponding algorithmic details. **Reproducibility** To encourage transparency, _SequeL_ uses Hydra [37] configuration files to formalize experiments. While an experiment can be constructed as in Code 1, an alternative lies in defining a configuration file, as in Code 2. Instead of obfuscating hyperparameters and impeding reproducible results, an experiment defined with Hydra can be easily shared and reported. This feature is enabled by a series of routers that select the correct benchmark or model, and the implementation of the from_config method for all related module classes. _SequeL_ includes such configuration files reproducing Continual Learning baselines reported in various papers. Example configuration files along with reproducibility runs tracked via Weights&Biases are provided; the experiments focus on RotatedMNIST and include classic algorithms, such as EWC [22] and Naive SGD, as well as more involved baselines in MCSGD [26] and LaMAML [17]. The list will be expanded to ensure correctness. See Appendix A for more details. Figure 1: Control flow of the fitting process. Every point is surrounded by callback hooks. For instance, training_step() is preceded by on_before_training_step() and on_before_training_step_callbacks() and followed by on_after_training_step() and on_after_training_step_callbacks(). **Hyperparameter Tuning** Another benefit of the Hydra-based [37] setup is its out-of-the-box hyperparameter tuning capabilities, allowing for the quick setup of ablation studies. Specifically, the user picks as a basis the aforementioned config file and defines the settings of a grid search, such as \(\texttt{batch\_size}\in\{10,20,30\}\) and \(\texttt{lr}\in\{0.01,0.1\}\). ## 3 Related Work The progress of the Machine Learning community can be attributed to a large extent to the development of Deep Learning libraries, such as PyTorch [32], TensorFlow [2] and JAX [5], which abstract low-level engineering code and provide a high-level API to the user. Thus, researchers and practitioners can reliably develop new methodologies by focusing on the algorithmic inner workings. The progress of the field, in conjunction with the fact that the majority of ML pipelines are similar, has pushed for the creation of frameworks that provide further abstractions. PyTorch Lightning [15] and fastai [19] are general ML libraries that extend flexibility via a wide range of callbacks and loggers, while minimizing the engineering overhead. 
The progressive increase in abstraction has led to the development of libraries specialized for specific ML sub-fields that use the aforementioned software as building blocks. For instance, the HuggingFace [36] library for Transformers [34] includes pretrained models for Natural Language Processing and, more recently, Computer Vision [13]. PyTorch Geometric [16] offers a comprehensive suite of tools geared towards Graph Neural Networks (GNNs). Deep Graph Library [35] likewise focuses on deep learning for graphs and is framework agnostic, i.e., it supports PyTorch, Apache MXNet and TensorFlow. _SequeL_ shares this trait and offers the user the flexibility of two ecosystems in PyTorch and JAX. Multiple packages exist in the Reinforcement Learning literature, e.g., OpenAI Gym [6] and OpenAI baselines [12]. MMSegmentation [29] and Segmentation-Models-PyTorch [20] are semantic segmentation toolboxes. _SequeL_'s Hydra integration shares the design philosophy of the configuration files used in the former. An important aspect of the ML toolbox focuses on experiment tracking and monitoring and is becoming more important given the increasing complexity of models and methods and the need for rigorous ablation studies and hyperparameter tuning. Several frameworks address these imperatives, such as MLflow [10], Weights and Biases [4], Comet and TensorBoard [2]. _SequeL_ incorporates the logging capabilities of such libraries and allows users to track and visualize their experiments with the service of their choosing. The overarching effort to create easy-to-use and reliable tools has also been observed in the Continual Learning realm. Several libraries have been proposed, such as Avalanche [23], CL-Gym [28] and Sequoia [30]. Avalanche and CL-Gym share a similar design in terms of module structure and focus on the supervised setting. The algorithm selection in CL-Gym is limited and, while Avalanche offers a wide range of algorithms, the focus lies on more classical algorithms. For instance, Kernel Continual Learning [11] and Dark Experience Replay [7] are not implemented in either framework. AvalancheRL [25] extends Avalanche with functionalities for Reinforcement Learning. Sequoia [30] focuses on the Reinforcement Learning perspective of Continual Learning and uses components of OpenAI Gym [6], Avalanche [1] and Continuum [14]. Compared to the aforementioned libraries, _SequeL_ supports both PyTorch and JAX, simplifying the comparison and importability of novel methods and implementations of existing approaches irrespective of framework. ## 4 Conclusion In conclusion, we have presented _SequeL_, a novel Continual Learning framework written in both PyTorch and JAX, aimed at unifying divergent codebases and facilitating reproducible research in the field of Continual Learning. Our library provides a convenient and flexible platform for researchers to prototype and test their novel algorithms, as well as compare them to existing state-of-the-art methods. We believe that our library will contribute to the growth of Continual Learning research and provide a valuable resource for the community.
2301.04024
Geometry on real projective Cayley-Klein spaces
We investigate several topics of the geometry on real Cayley-Klein spaces. An important concern for us is to define a distance function on the projective space in such a way that the distance between two anisotropic subspaces of the same dimension can be easily calculated and case distinctions are avoided as far as possible.
Manfred Evers
2023-01-10T15:15:03Z
http://arxiv.org/abs/2301.04024v2
# Geometry on real projective Cayley-Klein spaces ###### Abstract. We investigate several topics of the geometry on real Cayley-Klein spaces. An important concern for us is to define a distance function on the projective space in such a way that the distance between two anisotropic subspaces of the same dimension can be easily calculated and case distinctions are avoided as far as possible. ## Introduction We introduce a real semi Cayley-Klein space as a pair \((\text{P}\boldsymbol{V},\,\beta)\) consisting of a projective space \(\text{P}\boldsymbol{V}\) over a real vector space \(\boldsymbol{V}\) and a symmetric bilinear form \(\beta\colon\boldsymbol{V}\times\boldsymbol{V}\to\mathbb{R}\) different from the zero function. A point \(P\in\text{P}\boldsymbol{V}\) is an isotropic point of \((\text{P}\boldsymbol{V},\,\beta)\) if for any nonzero vector \(\boldsymbol{p}\in\boldsymbol{V}\) representing \(P\) (we write \(P=\mathbb{R}\boldsymbol{p}\)) the equation \(\beta(\boldsymbol{p},\boldsymbol{p})=0\) applies, otherwise \(P\) is anisotropic. Let \(Q\) be the set of isotropic points of \((\text{P}\boldsymbol{V},\,\beta)\). A reflection \(\boldsymbol{\phi}\) in an anisotropic point \(P=\mathbb{R}\boldsymbol{p}\) of \((\text{P}\boldsymbol{V},\,\beta)\) is an automorphism of \(\text{P}\boldsymbol{V}\) which is different from the identical mapping and maps any point \(Q=\mathbb{R}\boldsymbol{q}\) of \(\text{P}\boldsymbol{V}\) to a point \(Q^{\prime}=\mathbb{R}\big{(}\beta(\boldsymbol{p},\boldsymbol{p})\boldsymbol{q }-2\beta(\boldsymbol{p},\boldsymbol{q})\boldsymbol{p}\big{)}\). These point reflections are involutory mappings. Points that stay fixed under all point reflections are called singular points in \((\text{P}\boldsymbol{V},\,\beta)\). All singular points are isotropic, but not all isotropic points have to be singular. The singular points form the radical of \(\beta\), denoted by \(\text{rad}(\beta)\). This is a projective subspace of \(\text{P}\boldsymbol{V}\), which might be empty, but on the other hand it can also be a hyperplane in \(\text{P}\boldsymbol{V}\). If \(P=\mathbb{R}\boldsymbol{p}\) and \(Q=\mathbb{R}\boldsymbol{q}\) are two anisotropic points in \((\text{P}\boldsymbol{V},\,\beta)\), then \(\xi(P,Q):=1-\beta(\boldsymbol{p},\boldsymbol{q})^{2}/(\beta(\boldsymbol{p},\boldsymbol{p})\beta(\boldsymbol{q},\boldsymbol{q}))\) is a real number in the interval \([0,1]\), which can be interpreted as a distance between the points \(P\) and \(Q\) and is called _quadrance_ of the points \(P\) and \(Q\).1 If \(\boldsymbol{\phi}\) is an arbitrary reflection in an anisotropic point, then \(\xi(P,Q)=\xi(\boldsymbol{\phi}(P),\boldsymbol{\phi}(Q))\) for all anisotropic points \(P\) and \(Q\). Footnote 1: We adopt the definition of the function and the name _quadrance_ from N. Wildberger [30, 31]. The quadrance function \(\xi\colon(\text{P}\boldsymbol{V}\smallsetminus Q)\times(\text{P}\boldsymbol{V} \smallsetminus Q)\to\mathbb{R}\) can strongly degenerate. For example, if the set \(\text{rad}(\beta)\) of singular points is a hyperplane in \(\text{P}\boldsymbol{V}\), then \(\xi\) is the zero function. Of course, in this case the distance function is too coarse to allow meaningful distance measurement. But if we define on \(\text{rad}(\beta)\) another bilinear form \(\beta^{\prime}\), we can significantly improve the situation: We can now introduce reflections in points which are anisotropic in the semi Cayley-Klein space \((\text{rad}(\beta),\,\beta^{\prime})\). 
But more importantly, with the help of \(\beta^{\prime}\) the function \(\xi\) can be modified to a function \(\xi^{\prime}\colon(\text{P}\boldsymbol{V}\smallsetminus Q)\times(\text{P} \boldsymbol{V}\smallsetminus Q)\to\mathbb{R}\) such that \(\xi^{\prime}(P,Q)\neq 0\) for almost all distinct anisotropic points \(P\), \(Q\) and \(\xi^{\prime}(P,Q)=\xi(P,Q)\) for all anisotropic points \(P,Q\in\text{P}\boldsymbol{V}\) with \(\xi(P,Q)\neq 0\). And if \(\boldsymbol{\phi}\) is a reflection in an anisotropic point of \((\text{P}\boldsymbol{V},\,\beta)\), then \(\xi^{\prime}(P,Q)=\xi^{\prime}(\boldsymbol{\phi}(P),\boldsymbol{\phi}(Q))\) for all anisotropic points \(P\) and \(Q\). We now come to the notion of a real Cayley-Klein space. A Cayley-Klein space with underlying projective space \(\mathrm{P}\boldsymbol{V}\) is a nested sequence of finitely many, say \(\rho+1\), semi Cayley-Klein spaces \((A_{i},\beta_{i})\), such that \(A_{0}=\mathrm{P}\boldsymbol{V}\), \(A_{i+1}\) is a nonempty radical \(\mathrm{rad}(\beta_{i})\), \(0\leq i\leq\rho-1\), and \(A_{\rho+1}:=\mathrm{rad}(\beta_{\rho})=\emptyset\). The first section gives a short introduction to the subject. We assume that the reader is familiar with projective geometry, but in order to introduce the terminology and fix notations, we give some basic definitions, rules and theorems. The book _Vorlesungen über höhere Geometrie_ [10] by O. Giering, published in 1982, serves as a main basis; it deals with the topic of Cayley-Klein spaces in detail. There are a number of more recent works on this subject; we cite [15, 17, 21, 22, 27, 28]. An outline of the historical development is given in [1]. In the second section, we study the geometry on semi Cayley-Klein spaces. This includes the determination of distances between projective subspaces. Another focus is on the investigation of geometric figures such as circles and simplices. In the third section, the geometry on Cayley-Klein spaces is examined. In contrast to the calculations in the second section, distance measurement exhibits a fine structure. In Section 4 it is shown how geometric algebra (GA) can be used to calculate reflections and their compositions. ## 1. Fundamentals / terminology and notation ### Real and complex projective spaces Let \(\mathbb{F}\) be the field of real or complex numbers and \(\boldsymbol{V}\) be a vector space of finite dimension \(n{+}1\) over \(\mathbb{F}\). We introduce the projective space of \(\boldsymbol{V}\) by \(\mathrm{P}\boldsymbol{V}=\{\,\mathbb{F}\boldsymbol{v}\mid\,\boldsymbol{v} \in\boldsymbol{V}\smallsetminus\{\boldsymbol{0}\}\,\}\). The dimension \(n\) of this projective space is defined by \(n=\dim(\boldsymbol{V})-1\). A subset \(U\) of \(\mathrm{P}\boldsymbol{V}\) is called a _subspace_ if there exists a linear subspace \(\boldsymbol{U}\) of \(\boldsymbol{V}\) with \(U=\mathrm{P}\boldsymbol{U}\); we write \(U\leq\mathrm{P}\boldsymbol{V}\). In particular, the empty set is a projective subspace of \(\mathrm{P}\boldsymbol{V}\). It is quite common to denote this subspace by \(0\); the dimension of \(0\) is \(-1\). If \(U_{1}=\mathrm{P}\boldsymbol{U}_{1}\) and \(U_{2}=\mathrm{P}\boldsymbol{U}_{2}\) are subspaces of \(\mathrm{P}\boldsymbol{V}\), their _join_ is the projective subspace \(\mathrm{P}(\boldsymbol{U}_{1}+\boldsymbol{U}_{2})\); we denote it by \(U_{1}\sqcup U_{2}\). 
The intersection of two subspaces \(U_{1}\), \(U_{2}\) of \(\mathrm{P}\boldsymbol{V}\) is always a subspace, which is called the _meet_ of \(U_{1}\) and \(U_{2}\) and which we denote by \(U_{1}\sqcap U_{2}\).2 The elements of \(\mathrm{P}\boldsymbol{V}\) are called _points_. Let \(\boldsymbol{B}:=(\boldsymbol{b}_{1},\ldots,\boldsymbol{b}_{n+1})\) be an ordered basis of \(\boldsymbol{V}\). Given a vector \(\boldsymbol{v}\in\boldsymbol{V}\) with coordinates \((v_{1},\ldots,v_{n+1})\) with respect to this basis, we denote the vector \(\boldsymbol{v}\) by \([v_{1},\ldots,v_{n+1}]_{\boldsymbol{B}}\) and the point \(\mathbb{F}\boldsymbol{v}\) by \([v_{1}:\cdots:v_{n+1}]_{\boldsymbol{B}}\).

Footnote 2: The symbols \(\sqcap\) and \(\sqcup\) are preferred to \(\wedge\) and \(\vee\), since the latter are often used for exterior products (in Grassmann algebras, see Section 4).

If a subspace contains just one point \(P\), then we follow a convention and denote this subspace by \(P\) instead of \(\{P\}\). A \(k\)-dimensional subspace of \(\mathrm{P}\boldsymbol{V}\) is also called a \(k\)-plane (in \(\mathrm{P}\boldsymbol{V}\)). There are separate names for two special cases: A _hyperplane_ of \(\mathrm{P}\boldsymbol{V}\) is a subspace (a plane) of dimension \(n-1\), and if \(P\) and \(Q\) are different points in \(\mathrm{P}\boldsymbol{V}\), the projective one-dimensional subspace \(P\sqcup Q\) is called a _line_ passing through \(P\) and \(Q\). If \(U\) is a subspace of \(\mathrm{P}\boldsymbol{V}\), then a subspace \(U^{\prime}\leq\mathrm{P}\boldsymbol{V}\) with \(U\sqcup U^{\prime}=\mathrm{P}\boldsymbol{V}\) and \(U\sqcap U^{\prime}=\emptyset\) is called a _projective complement_ of \(U\). Given a subset \(S\) of \(\mathrm{P}\boldsymbol{V}\), the set \(\mathrm{span}(S):=\bigcap\{\,U\mid U\text{ is a subspace of }\mathrm{P}\boldsymbol{V}\text{ and }S\subseteq U\,\}\) is a projective subspace of \(\mathrm{P}\boldsymbol{V}\), called the _span_ of \(S\). Put \(\dim(S):=\dim(\mathrm{span}(S))\). If \(U\leq\mathrm{P}\boldsymbol{V}\) is a \(k\)-plane, a set of \(k{+}1\) points spanning \(U\) is called a _minimal generating set_ of \(U\). We say that the points of a set \(S\subseteq\mathrm{P}\boldsymbol{V}\) are _in general position_ if \(\dim(\tilde{S})=k-1\) for each subset \(\tilde{S}\) of \(S\) with \(\#\tilde{S}=k\leq n{+}1\). An \((n{+}2)\)-tuple \((P_{0},\ldots,P_{n+1})\) of points is called a _projective frame_ of \(\mathrm{P}\boldsymbol{V}\) if these points are in general position. We give an example: Suppose \(\boldsymbol{V}=\mathbb{R}^{n+1}\) with canonical basis \(\boldsymbol{B}=(\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{n+1})\). Put \(E_{i}:=\mathbb{R}\boldsymbol{e}_{i}\), \(1\leq i\leq n+1\), and \(E_{0}:=\mathbb{R}(\boldsymbol{e}_{1}+\cdots+\boldsymbol{e}_{n+1})\). Then \((E_{0},E_{1},\ldots,E_{n+1})\) is a projective frame. It is called _the canonical frame_ of \(\mathbb{P}\mathbb{R}^{n+1}\). The point \(E_{0}=[1:1:\cdots:1]_{\boldsymbol{B}}\) is called the _unit point_ of the frame. Each \((n+1)\)-element subset of \(\{E_{0},E_{1},\ldots,E_{n+1}\}\) is a minimal generating set of \(\mathrm{P}\boldsymbol{V}\).
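The general-position condition can be tested mechanically: points are in general position iff every \(k\)-element subset of representative vectors has rank \(k\). A small sketch (our addition, plain NumPy):

```python
# A sketch testing "general position": points are in general position if
# every k-element subset (k <= n+1) of representative vectors has rank k.
import numpy as np
from itertools import combinations

def in_general_position(points, tol=1e-10):
    n1 = len(points[0])                       # n + 1
    for k in range(2, min(len(points), n1) + 1):
        for sub in combinations(points, k):
            if np.linalg.matrix_rank(np.array(sub), tol=tol) < k:
                return False
    return True

e = np.eye(3)
frame = [e[0] + e[1] + e[2], e[0], e[1], e[2]]   # canonical frame of PR^3
print(in_general_position(frame))                 # True
print(in_general_position([e[0], e[1], e[0] + e[1]]))  # False: collinear
```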
### The cross ratio

Given four vectors \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d}\in\mathbb{F}^{2}\smallsetminus\{\boldsymbol{0}\}\), the cross ratio of the four points \(\mathbb{F}\boldsymbol{a},\mathbb{F}\boldsymbol{b},\mathbb{F}\boldsymbol{c},\mathbb{F}\boldsymbol{d}\in\mathbb{P}\mathbb{F}^{2}\) is given by the point

\[P=(\mathbb{F}\boldsymbol{a},\mathbb{F}\boldsymbol{b};\mathbb{F}\boldsymbol{c},\mathbb{F}\boldsymbol{d}):=\Big{(}\det\left(\begin{array}{cc}a_{1}&a_{2}\\ d_{1}&d_{2}\end{array}\right)\det\left(\begin{array}{cc}c_{1}&c_{2}\\ b_{1}&b_{2}\end{array}\right)\ :\ \det\left(\begin{array}{cc}a_{1}&a_{2}\\ c_{1}&c_{2}\end{array}\right)\det\left(\begin{array}{cc}d_{1}&d_{2}\\ b_{1}&b_{2}\end{array}\right)\Big{)}\in\mathrm{P}\mathbb{F}^{2}\,,\]

cf. [25, Ch. 13] for the case \(\mathbb{F}=\mathbb{R}\). If \(\varphi\colon\mathbb{F}^{2}\to\mathbb{F}^{2}\) is an automorphism, then

\[(\mathbb{F}\boldsymbol{a},\mathbb{F}\boldsymbol{b};\mathbb{F}\boldsymbol{c},\mathbb{F}\boldsymbol{d})=(\mathbb{F}\varphi(\boldsymbol{a}),\mathbb{F}\varphi(\boldsymbol{b});\mathbb{F}\varphi(\boldsymbol{c}),\mathbb{F}\varphi(\boldsymbol{d})).\]

_Remark_: Usually the cross ratio is represented as the ratio

\[\big{(}\det\left(\begin{array}{cc}a_{1}&a_{2}\\ c_{1}&c_{2}\end{array}\right)\det\left(\begin{array}{cc}d_{1}&d_{2}\\ b_{1}&b_{2}\end{array}\right)\big{)}\ :\ \big{(}\det\left(\begin{array}{cc}a_{1}&a_{2}\\ d_{1}&d_{2}\end{array}\right)\det\left(\begin{array}{cc}c_{1}&c_{2}\\ b_{1}&b_{2}\end{array}\right)\big{)}\]

of two numbers. It should be noted that the divisor can be zero.

### Projective collineations

Let \(\varphi\colon\boldsymbol{V}\to\boldsymbol{V}^{\prime}\) be an injective linear map between two \(\mathbb{F}\)-vector spaces \(\boldsymbol{V}\) and \(\boldsymbol{V}^{\prime}\). Then \(\varphi\) induces a map \(\phi\colon\mathrm{P}\boldsymbol{V}\to\mathrm{P}\boldsymbol{V}^{\prime}\) between the projective spaces \(\mathrm{P}\boldsymbol{V}\) and \(\mathrm{P}\boldsymbol{V}^{\prime}\) by \(\phi(\mathbb{F}\boldsymbol{v})=\mathbb{F}\varphi(\boldsymbol{v})\). Since \(\phi\colon\mathrm{P}\boldsymbol{V}\to\mathrm{P}\boldsymbol{V}^{\prime}\) maps a set of collinear points of \(\mathrm{P}\boldsymbol{V}\) to a set of collinear points of \(\mathrm{P}\boldsymbol{V}^{\prime}\), \(\phi\) is called a _projective collineation_. If \(\varphi\colon\boldsymbol{V}\to\boldsymbol{V}^{\prime}\) is bijective, \(\phi\colon\mathrm{P}\boldsymbol{V}\to\mathrm{P}\boldsymbol{V}^{\prime}\) is an _isomorphism_. If \(A,B,C,D\) are four points in a \(1\)-dimensional projective space \(\mathrm{P}\boldsymbol{V}\) and \(\phi\colon\mathrm{P}\boldsymbol{V}\to\mathrm{P}\mathbb{F}^{2}\) is an isomorphism, then we put \((A,B;C,D):=(\phi(A),\phi(B);\phi(C),\phi(D))\). It can be easily checked that the point \((A,B;C,D)\) (the cross ratio of the four points) does not depend on the special choice of the isomorphism. Four collinear points \(A\), \(B,C,D\) in a projective space _form a harmonic range_ if \((A,B;C,D)=(1:-1)\). The automorphism group \(\mathrm{Aut}(\mathrm{P}\boldsymbol{V})\) of \(\mathrm{P}\boldsymbol{V}\) can be identified with the projective linear group \(\mathrm{PGL}(\boldsymbol{V})=\mathrm{GL}(\boldsymbol{V})/Z\), where the center \(Z\) of \(\mathrm{GL}(\boldsymbol{V})\) consists of all nonzero multiples of the identity.
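A short computational sketch (our addition; NumPy floats stand in for \(\mathbb{F}=\mathbb{R}\)) of the cross ratio as the homogeneous pair displayed above, together with its invariance under automorphisms of \(\mathbb{F}^{2}\):

```python
# A sketch of the cross ratio of four points Fa, Fb, Fc, Fd in PF^2,
# returned as a pair of homogeneous coordinates, exactly as in the
# displayed formula.
import numpy as np

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def cross_ratio(a, b, c, d):
    """Homogeneous pair (det(a,d)det(c,b) : det(a,c)det(d,b))."""
    return (det2(a, d) * det2(c, b), det2(a, c) * det2(d, b))

a, b, c, d = map(np.array, ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]))
x, y = cross_ratio(a, b, c, d)
print(x, y)        # (-1.0, 1.0): a harmonic range, (A,B;C,D) = (1 : -1)

# Invariance under an automorphism of F^2:
M = np.array([[2.0, 1.0], [1.0, 1.0]])   # any invertible matrix
x2, y2 = cross_ratio(M @ a, M @ b, M @ c, M @ d)
print(np.isclose(x * y2, y * x2))        # same point of PF^2: True
```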
Given a basis \(\boldsymbol{B}\) of \(\boldsymbol{V}\), an element \(\phi\) of \(\mathrm{Aut}(\mathrm{P}\boldsymbol{V})\) can be represented by an invertible \((n+1)\times(n+1)\)-matrix \(\mathfrak{A}_{\phi}=(\mathfrak{a}_{ij})_{1\leq i,j\leq n+1}\) which acts on a row \((x_{1},\ldots,x_{n+1})\) of \(n+1\) entries by matrix multiplication from the right: \(\phi([x_{1}:\cdots:x_{n+1}]_{\boldsymbol{B}})=[y_{1}:\cdots:y_{n+1}]_{\boldsymbol{B}}\), where \((x_{1},\ldots,x_{n+1})\mathfrak{A}_{\phi}=(y_{1},\ldots,y_{n+1})\). All these automorphisms are collineations. Furthermore, these mappings preserve the cross ratio of quadruples of collinear points. Two subsets of \(\mathrm{P}\boldsymbol{V}\) are called _projectively equivalent_ if one can be mapped onto the other by an automorphism. Given two projective frames \((P_{0},\ldots,P_{n+1})\), \((Q_{0},\ldots,Q_{n+1})\), there exists precisely one automorphism \(\phi\in\mathrm{Aut}(\mathrm{P}\boldsymbol{V})\) with \(\phi(P_{i})=Q_{i}\), \(0\leq i\leq n+1\). The only automorphism that fixes a projective frame is the identity; any other automorphism has at most \(n+1\) independent fixed points.

An automorphism \(\phi\neq\mathrm{id}\) is called a _biaxial collineation with axes \(A\) and \(B\)_ if \(A\) and \(B\) are nonempty complementary subspaces of \(\mathrm{P}\boldsymbol{V}\) and \(\phi\) fixes all points of \(A\) and of \(B\). Such a collineation also fixes all lines incident with \(A\) and \(B\). If \(A\) is a hyperplane and \(B\) consists of one point, \(\phi\) is called a _dilation_ with axis \(A\) and center \(B\). A dilation \(\phi\) is a special case of a _central collineation_ (also called _perspectivity_); all the hyperplanes through one point, its center, are invariant under \(\phi\). There is another kind of central collineation which is called an _elation_. As a dilation, an elation also leaves a hyperplane, its axis, pointwise invariant and fixes all hyperplanes through its center; but the center of an elation is a point on its axis. Given any central collineation \(\phi\) with center \(Z=\mathbb{R}\boldsymbol{z}\) and axis \(U=\mathrm{P}\boldsymbol{U}\), we can find a linear form \(m\colon\boldsymbol{V}\to\mathbb{R}\) with \(\boldsymbol{U}=\ker(m)\) and \(m(\boldsymbol{z})\neq-1\) such that

\[\phi(\mathbb{R}\boldsymbol{p})=\mathbb{R}(\boldsymbol{p}+m(\boldsymbol{p})\,\boldsymbol{z})\text{ for all }\boldsymbol{p}\in\boldsymbol{V}.\]

The points \(P=\mathbb{R}\boldsymbol{p}\), \(Z\), \(\phi(P)\) are collinear. And \(\phi(P)=P\) precisely when \(P=Z\) or \(m(\boldsymbol{p})=0\). \(\operatorname{Aut}(\mathbb{P}\boldsymbol{V})\) is generated by central collineations; each automorphism is the product of at most \(n{+}1\) central collineations. We denote the set of all projective subspaces of \(\mathbb{P}\boldsymbol{V}\) by \(\operatorname{sub}(\mathbb{P}\boldsymbol{V})\). An automorphism \(\phi\in\operatorname{Aut}(\mathbb{P}\boldsymbol{V})\) induces a mapping \(\operatorname{sub}(\mathbb{P}\boldsymbol{V})\to\operatorname{sub}(\mathbb{P}\boldsymbol{V})\), which we also denote by \(\phi\), with \(\phi(U_{1}\sqcup U_{2})=\phi(U_{1})\sqcup\phi(U_{2})\) and \(\phi(U_{1}\sqcap U_{2})=\phi(U_{1})\sqcap\phi(U_{2})\).
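The representation \(\phi(\mathbb{R}\boldsymbol{p})=\mathbb{R}(\boldsymbol{p}+m(\boldsymbol{p})\,\boldsymbol{z})\) of a central collineation is easy to experiment with; the following sketch (our addition) encodes the linear form \(m\) by a covector and checks that \(P\), \(Z\), \(\phi(P)\) are collinear.

```python
# A sketch of a central collineation p -> p + m(p) z, with the linear form
# m given by a covector: m(p) = mvec @ p, axis U = P(ker(m)), center Z = Rz.
import numpy as np

def central_collineation(mvec, z):
    assert abs(mvec @ z + 1.0) > 1e-12        # the condition m(z) != -1
    return lambda p: p + (mvec @ p) * z

mvec = np.array([0.0, 0.0, 1.0])              # axis: the line x_3 = 0 in PR^3
z = np.array([1.0, 0.0, 0.0])                 # m(z) = 0: center on the axis,
phi = central_collineation(mvec, z)           # so phi is an elation

p = np.array([1.0, 2.0, 3.0])
q = phi(p)                                    # q = p + 3 z = (4, 2, 3)
# P, Z, phi(P) are collinear: the 3 x 3 determinant vanishes
print(np.isclose(np.linalg.det(np.array([p, z, q])), 0.0))  # True
```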
A mapping \(\kappa\colon\operatorname{sub}(\mathbb{P}\boldsymbol{V})\to\operatorname{sub}(\mathbb{P}\boldsymbol{V})\) is called a _correlation_ if it maps subspaces of dimension \(k\) to subspaces of dimension \(n{-}k{-}1\) such that \(\kappa(U_{1}\sqcup U_{2})=\kappa(U_{1})\sqcap\kappa(U_{2})\) and \(\kappa(U_{1}\sqcap U_{2})=\kappa(U_{1})\sqcup\kappa(U_{2})\).

### Quadrics

We already introduced \(\boldsymbol{V}\) as an \(\mathbb{F}\)-vector space of dimension \(n{+}1\). Let \(\beta\colon\boldsymbol{V}\times\boldsymbol{V}\to\mathbb{F}\) be a symmetric bilinear form. Given a basis \(\boldsymbol{B}=(\boldsymbol{b}_{1},\ldots,\boldsymbol{b}_{n+1})\), we define a matrix \(\mathfrak{A}=(\mathfrak{a}_{ij})\in\mathbb{F}^{(n+1)\times(n+1)}\) by \(\mathfrak{a}_{ij}:=\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{j})\). The rank of this matrix does not depend on the special choice of the basis \(\boldsymbol{B}\), so we can assign this number to the bilinear form \(\beta\). It is always possible to find an orthogonal basis \((\boldsymbol{b}_{1},\ldots,\boldsymbol{b}_{n+1})\) for \(\beta\), i.e. a basis of \(\boldsymbol{V}\) with \(\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{j})=0\) for \(i\neq j\). If \(\boldsymbol{B}\) is an orthogonal basis, the associated matrix is diagonal. In the following we always assume that \(\operatorname{rank}(\beta)=\operatorname{rank}(\mathfrak{A})\) is not zero. The set \(Q_{\beta}:=\{P=\mathbb{F}\boldsymbol{b}\in\mathbb{P}\boldsymbol{V}\mid\beta(\boldsymbol{b},\boldsymbol{b})=0\}\) is called the _quadric_ associated with \(\beta\). In case of \(\mathbb{F}=\mathbb{R}\) this quadric can be the empty set, but if \(\mathbb{F}=\mathbb{C}\) this quadric is always nonempty. Two points \(P=\mathbb{F}\boldsymbol{p},Q=\mathbb{F}\boldsymbol{q}\in\mathbb{P}\boldsymbol{V}\) are called _conjugate_ with respect to \(\beta\), if \(\beta(\boldsymbol{p},\boldsymbol{q})=0\). Two sets \(S_{1},S_{2}\subset\mathbb{P}\boldsymbol{V}\) are _conjugate_ with respect to \(\beta\), if \(P_{1}\) is conjugate to \(P_{2}\) for all points \(P_{1}\in S_{1}\) and \(P_{2}\in S_{2}\). If \(S\) is a subset of \(\mathbb{P}\boldsymbol{V}\), the set \(\operatorname{polar}_{\beta}(S):=\{P\in\mathbb{P}\boldsymbol{V}\mid P\text{ is conjugate to }S\}\) is a projective subspace of \(\mathbb{P}\boldsymbol{V}\) and is called _the polar_ of \(S\) with respect to \(\beta\). Obviously, \(S\subseteq\operatorname{polar}_{\beta}(\operatorname{polar}_{\beta}(S))\). The mapping \(\operatorname{polar}_{\beta}\colon\operatorname{sub}(\mathbb{P}\boldsymbol{V})\to\operatorname{sub}(\mathbb{P}\boldsymbol{V})\) is a correlation iff \(\operatorname{rank}(\beta)=n{+}1\).

### Real semi Cayley-Klein spaces

We are especially interested in the case \(\mathbb{F}=\mathbb{R}\). We choose a symmetric bilinear form \(\beta\colon\boldsymbol{V}\times\boldsymbol{V}\to\mathbb{R}\) with \(\operatorname{rank}(\beta)>0\) and call it the _absolute bilinear form_ of \(\boldsymbol{V}\). If \(\boldsymbol{B}=(\boldsymbol{b}_{1},\ldots,\boldsymbol{b}_{n+1})\) is an orthogonal basis for \(\beta\), we put \(\beta_{+}:=\#\{\boldsymbol{b}_{i}\mid\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})>0\}\), \(\beta_{-}:=\#\{\boldsymbol{b}_{i}\mid\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})<0\}\), \(\beta_{0}:=\#\{\boldsymbol{b}_{i}\mid\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})=0\}\). These three numbers do not change if we exchange the basis for another orthogonal basis (Sylvester's law of inertia).
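These three invariant numbers can be read off from the eigenvalues of any representing matrix; a small sketch (our addition) illustrating Sylvester's law of inertia:

```python
# A sketch computing (beta_+, beta_-, beta_0) from the eigenvalues of a
# symmetric matrix; by Sylvester's law of inertia the result does not
# depend on the chosen basis.
import numpy as np

def signature(A, tol=1e-12):
    eig = np.linalg.eigvalsh(A)           # A symmetric => real eigenvalues
    plus = int(np.sum(eig > tol))
    minus = int(np.sum(eig < -tol))
    return plus, minus, len(eig) - plus - minus

A = np.diag([1.0, 1.0, -1.0, 0.0])
print(signature(A))                        # (2, 1, 1)

# The signature is invariant under a change of basis A -> S^T A S:
S = np.random.default_rng(1).standard_normal((4, 4))  # invertible a.s.
print(signature(S.T @ A @ S))              # (2, 1, 1) again
```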
The triple \((\beta_{+},\beta_{-},\beta_{0})\) is called the _signature_ of \(\beta\). Now it is obvious that we can even find an orthogonal basis \(\boldsymbol{B}=(\boldsymbol{b}_{1},\ldots,\boldsymbol{b}_{n+1})\) such that \(\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})=1\) for all \(i\in\{1,\ldots,\beta_{+}\}\), \(\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})=-1\) for all \(i\in\{\beta_{+}{+}1,\ldots,\beta_{+}{+}\beta_{-}\}\) and \(\beta(\boldsymbol{b}_{i},\boldsymbol{b}_{i})=0\) for all \(i>\operatorname{rank}(\beta)\). \(\mathbb{P}\boldsymbol{V}\) can now be decomposed into three disjoint subsets: \(\mathbb{P}\boldsymbol{V}=Q_{\beta}\dot{\cup}Q_{\beta}^{+}\dot{\cup}Q_{\beta}^{-}\), where \(Q_{\beta}\) is the quadric associated with \(\beta\), \(Q_{\beta}^{+}:=\{\mathbb{R}\boldsymbol{v}\mid\beta(\boldsymbol{v},\boldsymbol{v})>0\}\) and \(Q_{\beta}^{-}:=\{\mathbb{R}\boldsymbol{v}\mid\beta(\boldsymbol{v},\boldsymbol{v})<0\}\). Since \(\beta\colon\boldsymbol{V}\times\boldsymbol{V}\to\mathbb{R}\) is a nonzero function, the set of anisotropic points \(Q_{\beta}^{\times}:=Q_{\beta}^{+}\cup Q_{\beta}^{-}\) is a dense subset of \(\mathsf{P}\boldsymbol{V}\) (with respect to the topology on \(\mathsf{P}\boldsymbol{V}\), which is the quotient topology of the canonical topology on \(\boldsymbol{V}\)). We call \((\mathsf{P}\boldsymbol{V},\,\beta)\) a _semi Cayley-Klein space_.

Let \(\boldsymbol{U}\) be a linear subspace of \(\boldsymbol{V}\). We assume that the restriction \(\beta^{\prime}:=\beta|\boldsymbol{U}\times\boldsymbol{U}\) is a nonzero function. Then \((\mathsf{P}\boldsymbol{U},\,\beta^{\prime})\) is a _semi CK subspace_ of \((\mathsf{P}\boldsymbol{V},\,\beta)\).

Let \(U=\mathsf{P}\boldsymbol{U}\leq\mathsf{P}\boldsymbol{V}\) be a plane of dimension \(s\leq n\) and let \(\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{s+1}\) be a basis of \(\boldsymbol{U}\). Define a matrix \(\mathfrak{N}=(\mathfrak{n}_{ij})\) by \(\mathfrak{n}_{ij}:=\beta(\boldsymbol{u}_{i},\boldsymbol{u}_{j})\). The rank of this matrix \(\mathfrak{N}\) does not depend on the special choice of the basis. If \(\det(\mathfrak{N})=0\), we call \(U\) _isotropic_, otherwise _anisotropic_. As a special case, a point \(P\in\mathsf{P}\boldsymbol{V}\) is isotropic iff \(P\in Q_{\beta}\). If \(P\) is an anisotropic point, then \(P\cap\operatorname{polar}_{\beta}(P)=\emptyset\). \(P\) is called a _double point of a plane \(U\)_ if \(U\subseteq\operatorname{polar}_{\beta}(P)\). A plane \(U\) is anisotropic iff it does not contain any double point.

_Proof_: We assign a matrix \(\mathfrak{N}\) to \(U\) as described above. If there is a double point \(P\) in \(U\), we may assume that it is \(\mathbb{R}\boldsymbol{u}_{1}\). In this case, the first row of the matrix \(\mathfrak{N}\) is a zero row and \(\det(\mathfrak{N})=0\). Now we assume that \(U\) is isotropic, i.e. \(\det(\mathfrak{N})=0\). We may also assume that the matrix \(\mathfrak{N}\) is a diagonal matrix. Then there exists a vector \(\boldsymbol{u}_{i}\) with \(\beta(\boldsymbol{u}_{i},\boldsymbol{u}_{j})=0\) for all \(j\), and \(\mathbb{R}\boldsymbol{u}_{i}\) must be a double point of \(U\).

A point \(P=\mathbb{R}\boldsymbol{p}\in\mathsf{P}\boldsymbol{V}\) is called _singular_ if it is a double point of \(\mathsf{P}\boldsymbol{V}\). The singular points form a plane in \(\mathsf{P}\boldsymbol{V}\) which is called the _vertex_ of \(Q_{\beta}\) or the _radical_ \(\operatorname{rad}(\beta)\) of \(\beta\).

_Remark_: A point \(P\) is a double point of the \(0\)-dimensional space \(P\) iff \(P\in Q_{\beta}\).
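The isotropy test via the Gram matrix \(\mathfrak{N}\) is directly computable. A sketch (our addition; the form of signature \((2,2,0)\) matches the example of a totally isotropic line given below):

```python
# A sketch of the isotropy test for a plane U = P(span(u_1, ..., u_{s+1})):
# build the Gram matrix n_ij = beta(u_i, u_j) and check whether det = 0.
import numpy as np

A = np.diag([1.0, 1.0, -1.0, -1.0])        # a form with empty radical

def gram(A, *vectors):
    U = np.array(vectors)
    return U @ A @ U.T

def is_anisotropic(A, *vectors, tol=1e-12):
    return abs(np.linalg.det(gram(A, *vectors))) > tol

# The line through (1,0,1,0) and (0,1,0,1) lies entirely in the quadric:
u1, u2 = np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0, 1.0])
print(is_anisotropic(A, u1, u2))           # False: a totally isotropic line
print(is_anisotropic(A, np.array([1.0, 0, 0, 0]),
                        np.array([0, 0, 1.0, 0])))  # True: a hyperbolic line
```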
Properties of polars: (1) The polar of a singular point is \(\mathsf{P}\boldsymbol{V}\), the polar of a nonsingular point is a hyperplane in \(\mathsf{P}\boldsymbol{V}\). (2) \(A\leq\mathsf{P}\boldsymbol{V}\) is an anisotropic plane iff \(\operatorname{polar}_{\beta}(A):=\bigcap_{P\in A}\operatorname{polar}_{\beta}(P)\) is a projective complement of \(A\) in \(\mathsf{P}\boldsymbol{V}\).

A subspace \(U\leq\mathsf{P}\boldsymbol{V}\) of positive dimension located entirely in the quadric \(Q_{\beta}\) is called _totally isotropic_. It is obvious that all subspaces of \(\operatorname{rad}(\beta)\) are totally isotropic if their dimension is positive. But even if \(\operatorname{rad}(\beta)\) is empty, \((\mathsf{P}\boldsymbol{V},\,\beta)\) can have totally isotropic subspaces. An example is \((\mathsf{P}\mathbb{R}^{4},\,\beta)\) with \(\beta\big((v_{1},v_{2},v_{3},v_{4}),(w_{1},w_{2},w_{3},w_{4})\big)=v_{1}w_{1}+v_{2}w_{2}-v_{3}w_{3}-v_{4}w_{4}\). The maximal dimension of a totally isotropic subspace is \(n-\max(\beta_{+},\beta_{-})\).

### Automorphisms on a semi CK space \((\mathsf{P}\boldsymbol{V},\,\beta)\)

Let \((\mathsf{P}\boldsymbol{V},\,\beta)\) and \((\mathsf{P}\tilde{\boldsymbol{V}},\,\tilde{\beta})\) be semi CK spaces. Let \(\phi\colon\mathsf{P}\boldsymbol{V}\to\mathsf{P}\tilde{\boldsymbol{V}}\) be a projective collineation and \(\varphi\colon\boldsymbol{V}\to\tilde{\boldsymbol{V}}\) a representative of \(\phi\). Then \(\phi\) is called a _projective collineation_ from the semi CK space \((\mathsf{P}\boldsymbol{V},\,\beta)\) to the semi CK space \((\mathsf{P}\tilde{\boldsymbol{V}},\,\tilde{\beta})\) if \(\tilde{\beta}(\varphi(\boldsymbol{p}),\varphi(\boldsymbol{q}))=0\Leftrightarrow\beta(\boldsymbol{p},\boldsymbol{q})=0\) for all points \(P=\mathbb{R}\boldsymbol{p}\) and \(Q=\mathbb{R}\boldsymbol{q}\). If this mapping \(\phi\) is bijective, then \(\phi\) is an isomorphism between _semi CK spaces_ and, in case of \((\mathsf{P}\boldsymbol{V},\,\beta)=(\mathsf{P}\tilde{\boldsymbol{V}},\,\tilde{\beta})\), \(\phi\) is an automorphism on \((\mathsf{P}\boldsymbol{V},\,\beta)\).

Special automorphisms on a semi CK space are reflections: Let \(Z=\mathbb{R}\boldsymbol{z}\in\mathsf{P}\boldsymbol{V}\) be an anisotropic point and let \(\phi\in\operatorname{Aut}(\mathsf{P}\boldsymbol{V})\) be a central collineation with center \(Z\) and axis \(\operatorname{polar}_{\beta}(Z)\). Then \(\operatorname{polar}_{\beta}(Z)\) is a hyperplane not incident with \(Z\), and \(\phi\) is a _dilation_. Because \(\operatorname{rad}(\beta)\leq\operatorname{polar}_{\beta}(Z)\), all points of \(\operatorname{rad}(\beta)\) are fixed points. There exists some number \(\lambda\in\mathbb{R}\smallsetminus\{0,1\}\) such that \(\phi\) maps a point \(P=\mathbb{R}\boldsymbol{p}\in\mathsf{P}\boldsymbol{V}\) to the point

\[P^{\prime}=\mathbb{R}\big(\lambda\beta(\boldsymbol{z},\boldsymbol{z})\boldsymbol{p}+(1-\lambda)\beta(\boldsymbol{z},\boldsymbol{p})\boldsymbol{z}\big).\]

If \(\lambda=-1\), \(\phi\) is an involution and \(\phi\) is called a _reflection in the point \(Z\)_. It can be easily checked that a reflection maps an anisotropic point to an anisotropic point lying in the same connected component of \(Q_{\beta}^{\times}\).
If \(P\) is the only isotropic point on the (parabolic) line \(P\sqcup Z\), then \(P\) is a fixed point of the reflection \(\phi\), and if the line \(P\sqcup Z\) contains still another isotropic point \(Q\), then \(P\) and \(Q\) are interchanged by \(\phi\). Given an anisotropic plane \(U\), a biaxial collineation \(\phi\) with axes \(U\) and \(\operatorname{polar}_{\beta}(U)\) is called a _reflection in \(U\)_. If \(Z\) is an anisotropic point of \(U\) and \(P\) is any point of \(\mathsf{P}\boldsymbol{V}\), then \(\phi(P)\) is the image of \(P\) under a reflection in \(Z\).

### Barycentric coordinates of points in a semi CK space

Let from now on \(\boldsymbol{V}\) be the real vector space \(\mathbb{R}^{n+1}\) with canonical basis \(\boldsymbol{B}=(\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{n+1})\). Given a vector \(\boldsymbol{v}=[v_{1},\ldots,v_{n+1}]_{\boldsymbol{B}}\in\boldsymbol{V}\), we write \(\boldsymbol{v}=(v_{1},\ldots,v_{n+1})\), and for the point \(\mathbb{R}\boldsymbol{v}\in\mathbb{P}\boldsymbol{V}\) we use the notation \((v_{1}:\ldots:v_{n+1})\). Let \((\beta_{+},\beta_{-},\beta_{0})\) be a triple of nonnegative integers with \(\beta_{+}+\beta_{-}+\beta_{0}=n+1\) and let \(\mathfrak{R}^{(\beta_{+},\beta_{-},\beta_{0})}=\mathfrak{R}=(\mathfrak{a}_{ij})\) denote the \((n+1)\times(n+1)\)-diagonal matrix with \(\mathfrak{a}_{ii}=1\) for \(1\leq i\leq\beta_{+}\), \(\mathfrak{a}_{ii}=-1\) for \(\beta_{+}<i\leq\beta_{+}+\beta_{-}\) and \(\mathfrak{a}_{ii}=0\) for \(i>\beta_{+}+\beta_{-}\). Associated with \(\mathfrak{R}\) is the bilinear form \(\beta\colon\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\to\mathbb{R}\), \(\beta(\boldsymbol{e}_{i},\boldsymbol{e}_{j})=\mathfrak{a}_{ij}\). \((\operatorname{P}\boldsymbol{V},\beta)\) is a semi CK space.

_Remark_: Every real semi CK space can be subsumed here (up to isomorphism).

Instead of \(\beta(\boldsymbol{v},\boldsymbol{w})\) we will usually write \(\boldsymbol{v}\,[\mathfrak{R}]\,\boldsymbol{w}\). (We regard \([\mathfrak{R}]\) as a generalized inner product.) For \(Q_{\beta}\) we write \(Q_{\mathfrak{R}}\), the polar of a set \(S\subset\operatorname{P}\boldsymbol{V}\) with respect to \(\beta\) will be denoted by \(S^{\mathfrak{R}}\) instead of \(\operatorname{polar}_{\beta}(S)\), and the semi CK space \((\operatorname{P}\boldsymbol{V},\beta)\) we denote by \((\operatorname{P}\boldsymbol{V},\mathfrak{R})\).

We assign a vector \(P^{\circ}\in\boldsymbol{V}\) to each anisotropic point \(P=\mathbb{R}\boldsymbol{p}\): First, we define a function \(\chi\colon\boldsymbol{V}\to\{-1,0,1\}\) by

\[\chi(p_{1},\ldots,p_{n+1})=\begin{cases}0,\text{ if }(p_{1},\ldots,p_{n+1})=(0,\ldots,0)\,,\\ 1,\text{ if }(p_{1},\ldots,p_{n+1})>(0,\ldots,0)\text{ with respect to the lexicographic order,}\\ -1,\text{ if }(p_{1},\ldots,p_{n+1})<(0,\ldots,0)\text{ with respect to the lexicographic order,}\end{cases}\]

then we put \(P^{\circ}:=\dfrac{\chi(\boldsymbol{p})}{\sqrt{|\,\boldsymbol{p}\,[\mathfrak{R}]\,\boldsymbol{p}\,|}}\,\boldsymbol{p}\). We introduce a function \(\operatorname{sgn}\colon\operatorname{P}\boldsymbol{V}\to\{-1,0,1\}\) by \(\operatorname{sgn}(P)=0\) if \(P\) is isotropic, and \(\operatorname{sgn}(P)=P^{\circ}\,[\mathfrak{R}]\,P^{\circ}\) if \(P\) is anisotropic. Since the matrix \(\mathfrak{R}\) is nonzero, there exist \(n+1\) independent anisotropic points \(P_{1}\), ..., \(P_{n+1}\) generating \(\operatorname{P}\boldsymbol{V}\).
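A sketch (our addition) of the normalized representative \(P^{\circ}\) and the function \(\operatorname{sgn}\); the lexicographic sign \(\chi\) and the choice of the canonical basis are as above.

```python
# A sketch of the normalized representative P_deg and the sign function.
import numpy as np

def chi(p, tol=1e-12):
    for x in p:                  # lexicographic comparison with (0, ..., 0)
        if x > tol:
            return 1
        if x < -tol:
            return -1
    return 0

def normalize(R, p):
    """P_deg = chi(p) / sqrt(|p [R] p|) * p, defined for anisotropic P."""
    q = p @ R @ p
    if abs(q) < 1e-12:
        raise ValueError("P is isotropic")
    return (chi(p) / np.sqrt(abs(q))) * p

def sgn(R, p, tol=1e-12):
    if abs(p @ R @ p) < tol:
        return 0
    p0 = normalize(R, p)
    return int(round(p0 @ R @ p0))          # +1 or -1

R = np.diag([1.0, 1.0, -1.0])
p = np.array([3.0, 0.0, 5.0])
print(normalize(R, p), sgn(R, p))           # P_deg = (0.75, 0, 1.25), sgn = -1
```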
Given a point \(Q=\mathbb{R}\boldsymbol{q}\), there exists an \((n+1)\)-tuple \((\tilde{q}_{1},\ldots,\tilde{q}_{n+1})\) of real numbers such that \(\boldsymbol{q}=\tilde{q}_{1}P^{\circ}_{1}+\cdots+\tilde{q}_{n+1}P^{\circ}_{n+1}\). The tuple \((\tilde{q}_{1},\ldots,\tilde{q}_{n+1})\) is uniquely determined by the point \(Q\) and the tuple \((P_{1},\ldots,P_{n+1})\) except for multiplication by a nonzero real number, and is called the _tuple of homogeneous coordinates_ of \(Q\) with respect to \((P_{1},\ldots,P_{n+1})\). After having fixed the tuple \((P_{1},\ldots,P_{n+1})\), we write \(Q=[\tilde{q}_{1}:\ldots:\tilde{q}_{n+1}]\). If \(Q\) is anisotropic, there exists a uniquely determined \((n+1)\)-tuple \((\tilde{q}_{1},\ldots,\tilde{q}_{n+1})\) such that \(Q^{\circ}=\tilde{q}_{1}P^{\circ}_{1}+\cdots+\tilde{q}_{n+1}P^{\circ}_{n+1}\). This tuple is the tuple of _absolute coordinates_ of \(Q\) with respect to \((P_{1},\ldots,P_{n+1})\).

### The dual of a semi CK space

Let \(\boldsymbol{V}^{*}=\operatorname{Hom}(\boldsymbol{V},\mathbb{R})\) be the vector space dual to \(\boldsymbol{V}\) and let \(\boldsymbol{e}^{1},\ldots,\boldsymbol{e}^{n+1}\) be the dual basis of \(\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{n+1}\). Put \(E^{i}:=\mathbb{R}\boldsymbol{e}^{i}\), \(i=1,\ldots,n+1\). The projective space \(\operatorname{P}\boldsymbol{V}^{*}\) is called the _dual_ of \(\operatorname{P}\boldsymbol{V}\). Let \((\operatorname{P}\boldsymbol{V},\mathfrak{R})\) be a semi CK space, and let \(\mathfrak{R}^{\sharp}=\operatorname{adj}(\mathfrak{R})\) denote the adjugate of \(\mathfrak{R}\). We call the semi CK space \((\operatorname{P}\boldsymbol{V}^{*},\mathfrak{R}^{\sharp})\) the _dual_ of \((\operatorname{P}\boldsymbol{V},\mathfrak{R})\). We now assume that \(\operatorname{rank}(\mathfrak{R})=n+1\). If \(P_{1},P_{2}\) are anisotropic points in \(\operatorname{P}\boldsymbol{V}\), then \(H_{1}:=P_{1}^{\mathfrak{R}}\), \(H_{2}:=P_{2}^{\mathfrak{R}}\) are anisotropic hyperplanes in \(\operatorname{P}\boldsymbol{V}\) and there are uniquely determined anisotropic elements \(x_{1},x_{2}\in\operatorname{P}\boldsymbol{V}^{*}\) with \(x_{1}(Q_{1})=x_{2}(Q_{2})=0\) for all \(Q_{1}\in H_{1}\) and all \(Q_{2}\in H_{2}\). For \(x_{1},x_{2}\) and \(P_{1}\), \(P_{2}\) the equation

\[\dfrac{(x_{1}[\mathfrak{R}^{-1}]x_{2})^{2}}{(x_{1}[\mathfrak{R}^{-1}]x_{1})(x_{2}[\mathfrak{R}^{-1}]x_{2})}=\dfrac{(P_{1}[\mathfrak{R}]P_{2})^{2}}{(P_{1}[\mathfrak{R}]P_{1})(P_{2}[\mathfrak{R}]P_{2})}\]

applies.

### Perspectivity and orthology

Let \(U\leq\mathbb{P}\boldsymbol{V}\) be a plane of dimension \(s>1\). \(U\) is Desarguesian: Let \((P_{1},\ldots,P_{s+1})\), \((Q_{1},\ldots,Q_{s+1})\) be two systems of points, each generating \(U\). \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\) are _perspective_ if there exists exactly one point \(Z\) such that all the sets \(P_{i}\sqcup Q_{i}\sqcup Z\), \(1\leq i\leq s+1\), are lines. If such a point \(Z\) exists, it is called the _perspector_ of \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\). Suppose this perspector \(Z\) is different from all the points \(P_{i},Q_{i}\), \(i=1,\ldots,s+1\); then there exists a uniquely determined central collineation \(\phi\colon U\to U\) mapping \((P_{1},\ldots,P_{s+1},Z)\) onto \((Q_{1},\ldots,Q_{s+1},Z)\). The axis of \(\phi\), a hyperplane of \(U\), is called the _perspectrix_ of \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\).
A perspectrix can be assigned to these two tuples even if the perspector is one of the points \(P_{i},Q_{i}\), \(1\leq i\leq s+1\). For example, if \(Z=P_{s+1}\neq Q_{s+1}\), then \(Q_{1}\sqcup\cdots\sqcup Q_{s}\) is the perspectrix. We now assume that \(U\) is anisotropic. Given an \((s+1)\)-tuple \(R_{1},\ldots,R_{s+1}\) of points spanning \(U\), put \(R_{i}^{\delta}:=(\bigsqcup_{j\neq i}R_{j})^{\mathfrak{R}}\sqcap U\), \(1\leq i\leq s+1\). If \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\) are perspective with center \(Z\) and perspectrix \(S\leq U\), then \((P_{1}^{\delta},\ldots,P_{s+1}^{\delta})\), \((Q_{1}^{\delta},\ldots,Q_{s+1}^{\delta})\) are also perspective; their perspector is \(S^{\mathfrak{R}}\sqcap U\) and their perspectrix is \(Z^{\mathfrak{R}}\sqcap U\). Two tuples \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\) are called _orthologic_ with _orthology center_ \(Z\) if \((P_{1},\ldots,P_{s+1})\) and \((Q_{1}^{\delta},\ldots,Q_{s+1}^{\delta})\) are perspective at \(Z\). If \((P_{1},\ldots,P_{s+1})\) and \((Q_{1},\ldots,Q_{s+1})\) are orthologic, then \((Q_{1},\ldots,Q_{s+1})\) and \((P_{1},\ldots,P_{s+1})\) are also orthologic. If \(S\) is the perspectrix of \((P_{1},\ldots,P_{s+1})\) and \((Q_{1}^{\delta},\ldots,Q_{s+1}^{\delta})\), then \(S^{\mathfrak{R}}\sqcap U\) is the associated orthology center.

## 2. Metric structures on \(\mathbb{P}\boldsymbol{V}\)

### Metric structures on the projective line

K. von Staudt [24], E. Laguerre [18] and A. Cayley [2] showed us how quadrics in \(\mathbb{P}\boldsymbol{V}\) can be used to introduce metric structures on \(\mathbb{P}\boldsymbol{V}\). The cross ratio of points and of lines plays an important role in the connection between quadrics and distances. Later, systematic studies by Felix Klein [16] led to a classification of metric geometries. We give a rough illustration of the ideas of these great mathematicians for the simple case \(n=1\). We start with the inner product \([\mathfrak{R}]\) on the projective line \(\mathbb{P}\mathbb{R}^{2}\), which is given by the matrix \(\mathfrak{R}=\left(\begin{array}{cc}1&0\\ 0&r\end{array}\right)\). (Every quadric on the projective line is projectively equivalent to a quadric \(Q_{\mathfrak{R}}\) with a suitable number \(r\in\{-1,0,1\}\).) The set of isotropic points is empty if \(r>0\), and consists of the two points \(J_{-}=(-\sqrt{-r}:1)\) and \(J_{+}=(\sqrt{-r}:1)\) if \(r<0\). If \(r=0\), there is exactly one isotropic point. This point is a double point of \(\mathbb{P}\mathbb{R}^{2}\); the points \(J_{-}\) and \(J_{+}\) merge for \(r=0\), so to speak, into the single point \(J=(0:1)\). In the case of \(r>0\), Laguerre [18] encouraged us to think of \(\mathbb{P}\mathbb{R}^{2}\) as embedded in the complex projective space \(\mathbb{P}\mathbb{C}^{2}\) and to think of the points \(J_{-}\) and \(J_{+}\) as complex-valued, because these two complex-valued points (in fact, his focus was on the angle between lines rather than on the distance between points) are helpful for introducing a metric. The following classification comes from F. Klein: depending on whether \(r>0\) or \(r=0\) or \(r<0\), we speak of the _elliptic_ or _parabolic_ or _hyperbolic case_. Let us first assume \(r\neq 0\). In this case \((\mathbb{P}\mathbb{R}^{2},\mathfrak{R})\) is an anisotropic CK space.
We take two independent anisotropic points \(P_{1}=\mathbb{R}\boldsymbol{p}_{1}=\mathbb{R}(p_{11},p_{12})\) and \(P_{2}=\mathbb{R}\boldsymbol{p}_{2}=\mathbb{R}(p_{21},p_{22})\), \(\chi(p_{11},p_{12})=\chi(p_{21},p_{22})\), and determine homogeneous barycentric coordinates for \(J_{-}\) and \(J_{+}\) with respect to \((P_{1},P_{2})\):

\[J_{\mp}=\mathbb{R}\big((p_{21}\pm\sqrt{-r}\,p_{22})(p_{11},p_{12})-(p_{11}\pm\sqrt{-r}\,p_{12})(p_{21},p_{22})\big)=\mathbb{R}\Big(\frac{p_{21}\pm\sqrt{-r}\,p_{22}}{\sqrt{|\,p_{21}^{\,2}+r\,p_{22}^{\,2}|}}\,P_{1}^{\circ}-\frac{p_{11}\pm\sqrt{-r}\,p_{12}}{\sqrt{|\,p_{11}^{\,2}+r\,p_{12}^{\,2}|}}\,P_{2}^{\circ}\Big).\]

The cross ratio of the four points \(P_{1},P_{2},J_{-},J_{+}\) is

\[(P_{1},P_{2};J_{-},J_{+})=\big((p_{11}+\sqrt{-r}\,p_{12})(p_{21}-\sqrt{-r}\,p_{22})\big):\big((p_{11}-\sqrt{-r}\,p_{12})(p_{21}+\sqrt{-r}\,p_{22})\big)\,.\]

In the elliptic case, this cross ratio is a complex number of modulus \(1\) and, following Laguerre, Cayley and Klein, we can define a distance \(d_{12}\in\mathbb{R}\) between the points \(P_{1}\) and \(P_{2}\) by \(d_{12}=c\,\ln\big(\sqrt{(P_{1},P_{2};J_{-},J_{+})}\big)\) with a constant \(c\) that can be fixed to \(c=\frac{1}{2\mathbf{i}}\) by requiring that the distance \(d\) of two points on the elliptic line is within the range \(0\leq d\leq\frac{1}{2}\pi\) and that the distance of these points is \(\frac{1}{2}\pi\) precisely when one point is conjugate to the other. If we take this distance function, we get the equation (see [17, Ch. 5])

\[\cos(d_{12})=P_{1}^{\circ}\,[\mathfrak{R}]\,P_{2}^{\circ}\qquad(*).\]

At the beginning of the \(20^{\text{th}}\) century, the Hungarian mathematician C. Vörös3 proposed to proceed in the hyperbolic case quite analogously to the elliptic case: All hyperbolic lines have the same length \(\pi\mathbf{i}\), and the distance function satisfies the equation

Footnote 3: I take this information from [13].

\[\cosh^{2}(d_{12})=\frac{(\boldsymbol{p}_{1}[\mathfrak{R}]\,\boldsymbol{p}_{2})^{2}}{(\boldsymbol{p}_{1}[\mathfrak{R}]\,\boldsymbol{p}_{1})(\boldsymbol{p}_{2}[\mathfrak{R}]\,\boldsymbol{p}_{2})}=\operatorname{sgn}(P_{1})\,\operatorname{sgn}(P_{2})\,(P_{1}^{\circ}\,[\mathfrak{R}]\,P_{2}^{\circ})^{2}\qquad(**).\]

If \(\operatorname{sgn}(P_{1})\operatorname{sgn}(P_{2})=1\), then \(\cosh(d_{12})>1\) and \(d_{12}\) is a real number. If \(\operatorname{sgn}(P_{1})\operatorname{sgn}(P_{2})=-1\), then \(\cosh(d_{12})\) is purely imaginary. In this case, equation \((**)\) can be satisfied by allowing \(d_{12}\) to be a complex number of the form \(t+\frac{1}{2}\pi\mathbf{i}\), \(t\in\mathbb{R}\). We adopt Vörös' proposal and, in order to achieve as much harmony as possible in formulae that are valid in the elliptic plane and corresponding ones that are valid in the hyperbolic plane, we replace the elliptic distance \(d_{12}\) by \(d_{12}\mathbf{i}\). Now formula \((**)\) is valid also on the elliptic line. The points conjugate to \(P_{1}\) and \(P_{2}\) are \(P_{1}^{\mathfrak{R}}=(r\,p_{12}:-p_{11})\) and \(P_{2}^{\mathfrak{R}}=(r\,p_{22}:-p_{21})\), respectively. The cross ratio \((P_{1}^{\mathfrak{R}},P_{2}^{\mathfrak{R}};J_{-},J_{+})\) is the same as the cross ratio \((P_{1},P_{2};J_{-},J_{+})\). We calculate the cross ratio of the points \(P_{1},P_{2}^{\mathfrak{R}},P_{2},P_{1}^{\mathfrak{R}}\) and get

\[(P_{1},P_{2}^{\mathfrak{R}};P_{2},P_{1}^{\mathfrak{R}})=\frac{r\,(p_{11}p_{22}-p_{12}p_{21})^{2}}{(p_{11}^{\,2}+r\,p_{12}^{\,2})(p_{21}^{\,2}+r\,p_{22}^{\,2})}=-\sinh^{2}(d_{12})\qquad(***)\,.\]

We adopt from N.J.
Wildberger [31] the name _quadrance_ of \(P_{1}\) and \(P_{2}\) for the real number

\[\xi_{0}(P_{1},P_{2}):=-\sinh^{2}(d_{12})=1-\operatorname{sgn}(P_{1})\,\operatorname{sgn}(P_{2})\,(P_{1}^{\circ}\,[\mathfrak{R}]\,P_{2}^{\circ})^{2}=\frac{\det\left(\begin{array}{cc}\boldsymbol{p}_{1}[\mathfrak{R}]\,\boldsymbol{p}_{1}&\boldsymbol{p}_{1}[\mathfrak{R}]\,\boldsymbol{p}_{2}\\ \boldsymbol{p}_{2}[\mathfrak{R}]\,\boldsymbol{p}_{1}&\boldsymbol{p}_{2}[\mathfrak{R}]\,\boldsymbol{p}_{2}\end{array}\right)}{(\boldsymbol{p}_{1}[\mathfrak{R}]\,\boldsymbol{p}_{1})(\boldsymbol{p}_{2}[\mathfrak{R}]\,\boldsymbol{p}_{2})}\,.\]

If \(Q=xP_{1}+y\,P_{2}\), \(x,y\in\mathbb{R}\), is anisotropic, then (see [8] for a proof)

\[x^{2}\,:\,y^{2}=\big(\operatorname{sgn}(P_{2})\sinh^{2}(d(Q,P_{1}))\big):\big(\operatorname{sgn}(P_{1})\sinh^{2}(d(Q,P_{2}))\big)\,.\]

Let us consider now the parabolic case. If \(r=0\), there is precisely one isotropic point, the double point \(J=(0:1)\). Given two distinct anisotropic points \(P_{1},P_{2}\in\mathbb{P}\mathbb{R}^{2}\), we can find real numbers \(p_{12},p_{22}\) such that \(P_{1}=(1:p_{12})\) and \(P_{2}=(1:p_{22})\). If we now apply the formula \(\cosh(d(P_{1},P_{2}))=P_{1}^{\circ}[\mathfrak{R}]\,P_{2}^{\circ}\), we find out that in the parabolic case the projective line is a null line: The distance \(d(P_{1},P_{2})\) and the quadrance \(\xi_{0}(P_{1},P_{2})\) of any two anisotropic points \(P_{1},P_{2}\) is \(0\). We can still postulate that the total length of a projective line is \(\pi\mathbf{i}\), we just have to demand that the length of a line segment is \(0\) iff the double point \(J\) lies outside this segment. We can accept this "gross structure", the distance of two anisotropic points on a parabolic line being always \(0\), but it is possible to find a "fine structure" by using possibilities that Nonstandard Analysis offers us.

_Excursus_: Let \(\mathbb{R}^{*}\) and \(\mathbb{C}^{*}\) denote the hyperreal and hypercomplex numbers. Any function \(f\colon\mathbb{C}\to\mathbb{C}\) can be extended to a function \(f^{*}\colon\mathbb{C}^{*}\to\mathbb{C}^{*}\) in a canonical way (see [26]). In the following we write \(f\) instead of \(f^{*}\); we skip the \(*\). If \(f\) is meromorphic in a neighborhood of \(0\) in \(\mathbb{C}\), we can find a sequence \((a_{n})_{n=0,\ldots,\infty}\in\mathbb{C}^{\mathbb{N}}\) and an integer \(m\), both uniquely determined, such that \(a_{0}\neq 0\) and \(f(\epsilon)=\epsilon^{-m}(a_{0}+a_{1}\epsilon+a_{2}\epsilon^{2}+\cdots)\) for all infinitesimals \(\epsilon\neq 0\). Let \(\star_{\epsilon}(f(\epsilon)):=\epsilon^{-m}a_{0}\) be the leading term of \(f(\epsilon)\). Now we define an _\(\epsilon\)-quadrance_ of anisotropic points \(P_{1}=(p_{11}:p_{12})\) and \(P_{2}=(p_{21}:p_{22})\) by

\[\xi_{\epsilon}(P_{1},P_{2}):=\star_{\epsilon}\Big(1-\frac{(P_{1}^{\circ}[\mathfrak{R}_{\epsilon}]\,P_{2}^{\circ})^{2}}{(P_{1}^{\circ}[\mathfrak{R}_{\epsilon}]\,P_{1}^{\circ})(P_{2}^{\circ}[\mathfrak{R}_{\epsilon}]\,P_{2}^{\circ})}\Big),\]

where \(\epsilon\) is a nonzero infinitesimal real number and \(\mathfrak{R}_{\epsilon}:=\operatorname{diag}(1,\epsilon)\). While the two functions \(\xi_{\epsilon}\) and \(\xi_{0}\) agree on the elliptic and on the hyperbolic line, they differ on the parabolic line.
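The operator \(\star_{\epsilon}\) can be mimicked with symbolic computation by treating the infinitesimal as a SymPy symbol and extracting the lowest-order term of the series; the following sketch (our addition) reproduces the parabolic formula stated next. Truncating the series at first order suffices here because the constant term vanishes on the parabolic line.

```python
# A sketch of the epsilon-quadrance on the parabolic line, with the
# infinitesimal modeled by a SymPy symbol.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
R = sp.diag(1, eps)

def quadrance_eps(p1, p2):
    b = lambda u, v: (u.T * R * v)[0, 0]
    expr = 1 - b(p1, p2) ** 2 / (b(p1, p1) * b(p2, p2))
    # star_eps: the lowest-order term of the series in eps
    return sp.simplify(sp.series(expr, eps, 0, 2).removeO())

P1 = sp.Matrix([1, sp.Rational(1, 2)])   # P1 = (1 : 1/2)
P2 = sp.Matrix([1, 3])                    # P2 = (1 : 3)
print(quadrance_eps(P1, P2))              # 25*epsilon/4 = eps*(3 - 1/2)**2
```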
On the parabolic line the following applies: Let \(P_{1}\) and \(P_{2}\) be two distinct anisotropic points, then

\[\xi_{\epsilon}(P_{1},P_{2})=\epsilon\,\Big(\frac{p_{12}}{p_{11}}-\frac{p_{22}}{p_{21}}\Big)^{2}\neq 0=\xi_{0}(P_{1},P_{2})\,.\]

And if \(Q=(q_{1}:q_{2})\in\mathbb{P}\mathbb{R}^{2}\) is anisotropic and \(Q=xP_{1}+y\,P_{2}\), \(x,y\in\mathbb{R}\), then

\[|x|\,:\,|y|=|q_{1}p_{22}-q_{2}p_{21}|\,:\,|q_{1}p_{12}-q_{2}p_{11}|\,.\]

Let \(Q_{1},Q_{2}\) be two anisotropic points. We can find real numbers \(s_{11},s_{12},s_{21},s_{22}\) such that \(Q_{1}^{\circ}=s_{11}P_{1}^{\circ}+s_{12}P_{2}^{\circ}\) and \(Q_{2}^{\circ}=s_{21}P_{1}^{\circ}+s_{22}P_{2}^{\circ}\). Then,

\[\xi_{\epsilon}(Q_{1},Q_{2})=\xi_{\epsilon}(P_{1},P_{2})(s_{11}-s_{21})(s_{12}-s_{22})\,.\]

_Remarks:_ (1) In the following we often use the name _squared distance_ for \(\xi_{\epsilon}(P_{1},P_{2})\) if \(P_{1},P_{2}\) are anisotropic points in a parabolic plane. (2) Usually, the (squared) distance of anisotropic points in a parabolic space (in a euclidean space or a Lorentz-Minkowski space, for example) is given by a real number \(d\) instead of an infinitesimal number \(d\,\epsilon\). (3) E. Study [29, §23] was presumably the first to use an infinitesimal number (an infinitesimal dual number \(\epsilon\)) for calculating distances between points in a euclidean space. Considerations of this kind are already presented in Klein's paper [16, p. 612].

### Point reflections on a one-dimensional semi CK space \((\mathbb{P}\mathbb{R}^{2},\mathfrak{R})\)

The reflection \(\sigma_{Z}\) in an anisotropic point \(Z\in\mathbb{P}\mathbb{R}^{2}\) fixes the points \(Z\) and \(Z^{\mathfrak{R}}\). Every other point \(P\) is mapped to a point \(P^{\prime}\) such that the points \(Z\), \(P\), \(Z^{\mathfrak{R}}\), \(P^{\prime}\) form a harmonic range. It can be easily checked that \(P^{\prime}\,[\mathfrak{R}]\,P^{\prime}=P\,[\mathfrak{R}]\,P\) and that \(P^{\prime}\,[\mathfrak{R}]\,Z=P\,[\mathfrak{R}]\,Z\). From this follows that \(P\) and \(P^{\prime}\) are points in the same connected component of \(Q_{\mathfrak{R}}^{\times}\) and that \(d(P^{\prime},Z)=d(P,Z)\). The last equation applies for the elliptic and hyperbolic lines, but also for the parabolic line if we regard it as a null line. The double point on the parabolic line is a fixed point of \(\sigma_{Z}\); the two isotropic points on the hyperbolic line are interchanged by \(\sigma_{Z}\). If we now use a parabolic quadrance on the parabolic line given by \(\xi_{\epsilon}(P,Q)=\epsilon\,(p-q)^{2}\) for anisotropic points \(P=(1:p)\) and \(Q=(1:q)\), we can write \(Z=(1:z)\), \(P^{\prime}=(1:p^{\prime})\) and get \((1:p^{\prime})=P^{\prime}=2Z-P=(1:2z-p)\). It follows that \(\xi_{\epsilon}(P^{\prime},Z)=\xi_{\epsilon}(P,Z)\); the parabolic \(\epsilon\)-quadrance of two points is preserved by reflections. In the following we will first use a gross distance measure. In Section 3 we turn to geometries where parabolic lines represent a generic and not just a singular case; this is when fine distance measurement comes into play.

### Gross measurement in a semi CK space

Let \((\mathrm{P}\boldsymbol{V},\mathfrak{R})\) be a semi CK space. A line without any isotropic point is an elliptic line. A line is hyperbolic iff it passes through anisotropic points of different sign; in this case it contains precisely two nonsingular isotropic points.
A line that is neither hyperbolic nor elliptic must either be totally isotropic or it must pass through exactly one isotropic point, which in this case must be a double point. A line with just one isotropic point is a parabolic line. We introduce line segments with anisotropic endpoints and the length of these segments. Lines are compact sets, as are all subspaces of \(\mathrm{P}\boldsymbol{V}\). We give all lines, even those entirely located in \(Q_{\mathfrak{R}}\), the same measure \(\pi\mathbf{i}\), with imaginary unit \(\mathbf{i}\). Given a line \(L\leq\mathrm{P}\boldsymbol{V}\) and an anisotropic point \(P\) on \(L\), there is exactly one point \(Q\) on \(L\), denoted by \(\mathrm{conj}(P,L)\), which is conjugate to \(P\); it is the meet of \(L\) and \(P^{\mathfrak{R}}\). If \(Q=\mathrm{conj}(P,L)\) is anisotropic, then \(P=\mathrm{conj}(Q,L)\). Given two distinct anisotropic points \(P\) and \(Q\), we introduce two line segments \([P,Q]_{+}:=\{sP+tQ\mid s,t\in\mathbb{R},st\geq 0\}\) and \([P,Q]_{-}:=\{sP+tQ\mid s,t\in\mathbb{R},st\leq 0\}\). Thus, \([P,Q]_{+}\) and \([P,Q]_{-}\) are the closures of the two connected components of the set \(P\sqcup Q\smallsetminus\{P,Q\}\). We define lengths \(\mu_{0}([P,Q]_{\pm})\) of these two segments as complex numbers with imaginary parts in the interval \([0,\pi]\), determined by the following conditions:

* \(\cosh^{2}(\mu_{0}([P,Q]_{+}))=\operatorname{sgn}(P)\,\operatorname{sgn}(Q)\,(P^{\circ}[\mathfrak{R}]\,Q^{\circ})^{2}\) and \(\mu_{0}([P,Q]_{-})=\pi\mathbf{i}-\mu_{0}([P,Q]_{+})\).
* If \(R\) is an anisotropic inner point of \([P,Q]_{+}\), then \(\mu_{0}([P,R]_{+})+\mu_{0}([R,Q]_{+})=\mu_{0}([P,Q]_{+})\).
* \(\mu_{0}([P,Q]_{+})=\mu_{0}([P,Q]_{-})=\frac{1}{2}\pi\mathbf{i}\) precisely when \(P\) and \(Q\) are mutually conjugate.
* If the line \(P\sqcup Q\) is elliptic, then \((P^{\circ}[\mathfrak{R}]\,Q^{\circ})^{2}<1\) and \(\mu_{0}([P,Q]_{+})\), \(\mu_{0}([P,Q]_{-})\) are purely imaginary.
* If \(\xi_{0}(P,Q)=0\), then there is exactly one isotropic point \(R\) on \(P\sqcup Q\). If \(R\in[P,Q]_{+}\), then \(\mu_{0}([P,Q]_{+})=\pi\mathbf{i}\); otherwise, \(\mu_{0}([P,Q]_{+})=0\).
* If the line \(P\sqcup Q\) is hyperbolic, then it intersects \(Q_{\mathfrak{R}}\) transversally in two isotropic points. Here, we consider two cases:

Case 1: The points \(P\) and \(Q\) are in the same connected component of \(Q_{\mathfrak{R}}^{\times}\). Both isotropic points on \(P\sqcup Q\) are either in \([P,Q]_{+}\) or in \([P,Q]_{-}\). Let \(I\) be the one of these two segments without any isotropic point. Then \(\mu_{0}(I)\) is one of two real numbers with the same absolute value which satisfy the equation \(\cosh^{2}(\mu_{0}(I))=(P^{\circ}[\mathfrak{R}]\,Q^{\circ})^{2}\). Having the choice between a positive or a negative number for \(\mu_{0}(I)\), we decide for the negative number if \(P\) and \(Q\) are points in \(Q_{\mathfrak{R}}^{-}\), and for the positive number if \(P\) and \(Q\) are in \(Q_{\mathfrak{R}}^{+}\). The length of the other segment is \(\pi\mathbf{i}-\mu_{0}(I)\).

Case 2: The points \(P\) and \(Q\) are in different connected components of \(Q_{\mathfrak{R}}^{\times}\). Let us assume that \(P\in Q_{\mathfrak{R}}^{+}\) and \(Q\in Q_{\mathfrak{R}}^{-}\). Then \(R:=\mathrm{conj}(P,P\sqcup Q)\in Q_{\mathfrak{R}}^{-}\), and \(S:=\mathrm{conj}(Q,P\sqcup Q)\in Q_{\mathfrak{R}}^{+}\).
In this case,

\[\mu_{0}([P,Q]_{+})=\mu_{0}([P,S]_{+})+\tfrac{1}{2}\pi\mathbf{i}\,,\quad\mu_{0}([P,Q]_{-})=-\mu_{0}([P,S]_{+})+\tfrac{1}{2}\pi\mathbf{i}\quad\text{if }S\in[P,Q]_{+},\]
\[\mu_{0}([P,Q]_{+})=-\mu_{0}([P,S]_{+})+\tfrac{1}{2}\pi\mathbf{i}\,,\quad\mu_{0}([P,Q]_{-})=\mu_{0}([R,Q]_{+})+\tfrac{1}{2}\pi\mathbf{i}\quad\text{if }S\in[P,Q]_{-}.\]

An analysis of the different cases shows that by knowing the number \(P^{\circ}\,[\mathfrak{R}]\,Q^{\circ}\) and one of the two numbers \(\operatorname{sgn}(P)\), \(\operatorname{sgn}(Q)\) we can determine \(\mu_{0}([P,Q]_{\pm})\).

We now define the lengths of the segments with one or two isotropic boundary points, guided by the principle: Isotropic boundary points are distributed equally to adjacent segments. Given an anisotropic point and a double point on a parabolic line, then the length of each of the two segments with these points as boundary points is \(\frac{1}{2}\pi\mathbf{i}\). If \(P\) is an anisotropic and \(Q\) an isotropic point on a hyperbolic line, then the line splits into two segments with these points as boundary points. One contains still another isotropic point while the other does not. The length of the first segment is \(-\operatorname{sgn}(P)\infty+\frac{3}{4}\pi\mathbf{i}\), the length of the second \(\operatorname{sgn}(P)\infty+\frac{1}{4}\pi\mathbf{i}\). A hyperbolic line \(L\) is the disjoint union of the three sets \(L^{0}:=L\cap Q_{\mathfrak{R}}\), \(L^{\pm}:=L\cap Q_{\mathfrak{R}}^{\pm}\). The length of the open segment \(L^{\pm}\) is \(\pm\infty\), the length of the closed segment \(L^{\pm}\cup L^{0}\) is \(\pm\infty+\frac{1}{2}\pi\mathbf{i}\).

In order to define a distance of anisotropic points, we introduce an order \(<\) on the complex numbers by

\[a_{1}+b_{1}\mathbf{i}<a_{2}+b_{2}\mathbf{i}\quad\text{iff}\quad\left\{\begin{array}{l}b_{1}<b_{2}\,,\\ \text{or }b_{1}=b_{2}\text{ and }a_{1}<a_{2}\,.\end{array}\right.\]

Put \(\mathbb{D}:=\{a+b\,\mathbf{i}\mid a\in\mathbb{R},b\in[0,\frac{1}{2}\pi]\}\) and define the function \(d_{0}\colon Q_{\mathfrak{R}}^{\times}\times Q_{\mathfrak{R}}^{\times}\to\mathbb{D}\) by

\[d_{0}(P,Q)=\begin{cases}0,\text{ if }P=Q\,,\\ \mu_{0}([P,Q]_{+}),\text{ if }P\neq Q\text{ and }\mu_{0}([P,Q]_{+})<\mu_{0}([P,Q]_{-})\,,\\ \mu_{0}([P,Q]_{-}),\text{ otherwise}\,.\end{cases}\]

This function \(d_{0}\) is continuous with respect to the canonical topology on \(\mathrm{P}\boldsymbol{V}\), and we call \(d_{0}(P,Q)\) the _gross distance_ between the (anisotropic) points \(P\) and \(Q\).

_Remarks_: (1) On an elliptic line, the distance of two points is uniquely determined by its value under \(\cosh\). The situation is different for a hyperbolic line \(P\sqcup Q\); in order to determine the distance \(d_{0}(P,Q)\), we also need to know one of the numbers \(\operatorname{sgn}(P),\operatorname{sgn}(Q)\). (2) This distance function can be extended to isotropic points, as long as both points are not points on a totally isotropic line. But this extension is not continuous. (3) So far, no distance has been defined yet between points on the vertex \(U\) of \(Q_{\mathfrak{R}}\). This can be achieved by declaring a quadric in \(U\) as the absolute quadric, such that \(U\) becomes a CK space, see Section 3.
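Condensing the case analysis above (and using the remark that \(P^{\circ}[\mathfrak{R}]\,Q^{\circ}\) together with one of the signs determines the result), a sketch of the gross distance for points on elliptic and hyperbolic lines might look as follows (our addition; the parabolic and isotropic cases are omitted):

```python
# A sketch of the gross distance d_0(P, Q) between anisotropic points,
# with c = |P_deg [R] Q_deg| and the signs deciding the case.
import numpy as np

def gross_distance(R, p, q, tol=1e-12):
    def norm(v):
        return v / np.sqrt(abs(v @ R @ v))
    sp_, sq = np.sign(p @ R @ p), np.sign(q @ R @ q)
    c = abs(norm(p) @ R @ norm(q))
    if sp_ * sq > 0:
        if c <= 1.0 + tol:                    # elliptic-type segment
            return 1j * np.arccos(min(c, 1.0))
        return sp_ * np.arccosh(c)            # same component, hyperbolic
    return -np.arcsinh(c) + 0.5j * np.pi      # different components

R = np.diag([1.0, 1.0, -1.0])
print(gross_distance(R, np.array([1.0, 0, 0]),
                        np.array([1.0, 1.0, 0])))   # i*pi/4 (elliptic)
print(gross_distance(R, np.array([0, 0, 1.0]),
                        np.array([1.0, 0, 2.0])))   # negative: both in Q^-
```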
### The quadrance of two equidimensional anisotropic planes

Let \(A_{1}\) and \(A_{2}\) be two anisotropic planes of the same dimension \(s\leq n\) generated by points \(P_{1}=\mathbb{R}\boldsymbol{p}_{1},\ldots,P_{s+1}=\mathbb{R}\boldsymbol{p}_{s+1}\) and \(Q_{1}=\mathbb{R}\boldsymbol{q}_{1},\ldots,Q_{s+1}=\mathbb{R}\boldsymbol{q}_{s+1}\), respectively. We define two real numbers

\[\zeta_{0}(A_{1},A_{2}):=\frac{\big(\det\big((\boldsymbol{p}_{i}[\mathfrak{R}]\,\boldsymbol{q}_{j})_{1\leq i,j\leq s+1}\big)\big)^{2}}{\big(\det\big((\boldsymbol{p}_{i}[\mathfrak{R}]\,\boldsymbol{p}_{j})_{1\leq i,j\leq s+1}\big)\big)\big(\det\big((\boldsymbol{q}_{i}[\mathfrak{R}]\,\boldsymbol{q}_{j})_{1\leq i,j\leq s+1}\big)\big)}\qquad(\natural)\]

and \(\xi_{0}(A_{1},A_{2}):=1-\zeta_{0}(A_{1},A_{2})\). If \(A_{1}\), \(A_{2}\) are points, \(\xi_{0}(A_{1},A_{2})\) is the quadrance of the two points. We adopt this name for higher dimensional planes of the same dimension.

### Reflections on a semi CK space are isometries

Let \(\phi\in\operatorname{Aut}(\mathbb{P}\boldsymbol{V})\) be a reflection in an anisotropic plane \(U\leq\mathbb{P}\boldsymbol{V}\) and let \(A_{1}\), \(A_{2}\leq\mathbb{P}\boldsymbol{V}\) be two anisotropic planes of the same dimension; then \(\xi_{0}(\phi(A_{1}),\phi(A_{2}))=\xi_{0}(A_{1},A_{2})\).

_Proof._ First, we give a proof for the special case \(\dim(A_{1})=\dim(A_{2})=0\). In this case, \(\phi(A_{1})\) and \(\phi(A_{2})\) are anisotropic points. Put \(M:=U\sqcap(A_{1}\sqcup\phi(A_{1}))\), \(\tilde{M}:=U^{\mathfrak{R}}\sqcap(A_{1}\sqcup\phi(A_{1}))\), \(N:=U\sqcap(A_{2}\sqcup\phi(A_{2}))\), \(\tilde{N}:=U^{\mathfrak{R}}\sqcap(A_{2}\sqcup\phi(A_{2}))\). Only one of the following four cases can occur: (1) \(M=A_{1}+\phi(A_{1})\), \(\tilde{M}=A_{1}-\phi(A_{1})\), \(N=A_{2}+\phi(A_{2})\), \(\tilde{N}=A_{2}-\phi(A_{2})\), (2) \(M=A_{1}-\phi(A_{1})\), \(\tilde{M}=A_{1}+\phi(A_{1})\), \(N=A_{2}+\phi(A_{2})\), \(\tilde{N}=A_{2}-\phi(A_{2})\), (3) \(M=A_{1}+\phi(A_{1})\), \(\tilde{M}=A_{1}-\phi(A_{1})\), \(N=A_{2}-\phi(A_{2})\), \(\tilde{N}=A_{2}+\phi(A_{2})\), (4) \(M=A_{1}-\phi(A_{1})\), \(\tilde{M}=A_{1}+\phi(A_{1})\), \(N=A_{2}-\phi(A_{2})\), \(\tilde{N}=A_{2}+\phi(A_{2})\). These cases can be treated essentially the same, therefore we only look at the first case: We have \(0=(A_{1}^{\circ}-\phi(A_{1})^{\circ})\,[\mathfrak{R}]\,(A_{2}^{\circ}+\phi(A_{2})^{\circ})\) and \(0=(A_{1}^{\circ}+\phi(A_{1})^{\circ})\,[\mathfrak{R}]\,(A_{2}^{\circ}-\phi(A_{2})^{\circ})\) and therefore \(\phi(A_{1})^{\circ}\,[\mathfrak{R}]\,A_{2}^{\circ}=\phi(A_{2})^{\circ}\,[\mathfrak{R}]\,A_{1}^{\circ}\). From this follows that \(\xi_{0}(A_{1},\phi(A_{2}))=\xi_{0}(A_{2},\phi(A_{1}))\). By interchanging the roles of \(A_{2}\) and \(\phi(A_{2})\) we get \(\xi_{0}(\phi(A_{1}),\phi(A_{2}))=\xi_{0}(A_{1},A_{2})\). We now assume that \(A_{1}\) is generated by independent anisotropic points \(P_{1}\), ..., \(P_{s+1}\) and \(A_{2}\) is generated by independent anisotropic points \(Q_{1}\), ..., \(Q_{s+1}\). Then \(\xi_{0}(X,Y)=\xi_{0}(\phi(X),\phi(Y))\) for all \(X\), \(Y\in\{P_{1},\ldots,P_{s+1},Q_{1},\ldots,Q_{s+1}\}\) and \(\xi_{0}(A_{1},A_{2})=\xi_{0}(\phi(A_{1}),\phi(A_{2}))\). \(\square\)

### Angles between planes

We define: Two planes \(A_{1}\), \(A_{2}\leq\mathbb{P}\boldsymbol{V}\) _form dihedral angles_ if \(\dim(A_{1}\sqcap A_{2})=\dim(A_{1})-1=\dim(A_{2})-1\). Let \(A_{1}\), \(A_{2}\) be two planes which form dihedral angles. Put \(H:=A_{1}\sqcup A_{2}\) and \(U:=A_{1}\sqcap A_{2}\). \(H\smallsetminus(A_{1}\cup A_{2})\) consists of two connected components.
We call the closures of these components (_dihedral_) _angles_. \(A_{1}\) and \(A_{2}\) are the _sides_ of these angles, and \(U\) is their _vertex_.

_Example_: Two distinct points form dihedral angles with vertex \(0\).

If the vertex \(U\) is nonempty and anisotropic, \(U^{\mathfrak{R}}\sqcap H\) is a line which meets \(A_{1}\smallsetminus U\) and \(A_{2}\smallsetminus U\) in points \(P_{1}\) and \(P_{2}\), respectively. The segment \([P_{1},P_{2}]_{+}\) belongs to one angle, the segment \([P_{1},P_{2}]_{-}\) to the other. We denote the first angle by \([A_{1},A_{2}]_{+}\), the second by \([A_{1},A_{2}]_{-}\). If \(P_{1}\) and \(P_{2}\) are anisotropic, then \(\xi_{0}(A_{1},A_{2})=\xi_{0}(P_{1},P_{2})\). Thus, we put \(d_{0}(A_{1},A_{2}):=d_{0}(P_{1},P_{2})\). If the vertex is isotropic while the two sides \(A_{1}\), \(A_{2}\) are anisotropic, then \(\xi_{0}(A_{1},A_{2})=0\) and \(d_{0}(A_{1},A_{2})=0\).

_Two examples_: (1) If \(\boldsymbol{V}=\mathbb{R}^{4}\), \(\mathfrak{R}=\operatorname{diag}(1,1,-1,-1)\), \(P_{1}=(1:1:0:0)\), \(P_{2}=(1:-1:1:0)\), \(P_{3}=(0:0:1:0)\), then \(P_{1}[\mathfrak{R}]P_{2}=0=P_{1}[\mathfrak{R}]P_{3}\) and \(\zeta_{0}(P_{1}\sqcup P_{2},P_{1}\sqcup P_{3})=-1\), \(\mu_{0}([P_{1}\sqcup P_{2},P_{1}\sqcup P_{3}]_{+})=\mu_{0}([P_{2},P_{3}]_{+})=\ln(\sqrt{2}+1)+\frac{1}{2}\pi\mathbf{i}\), \(d_{0}(P_{1}\sqcup P_{2},P_{1}\sqcup P_{3})=\mu_{0}([P_{2},P_{3}]_{-})=\ln(\sqrt{2}-1)+\frac{1}{2}\pi\mathbf{i}\). (2) If \(\boldsymbol{V}=\mathbb{R}^{3}\), \(\mathfrak{R}=\operatorname{diag}(1,1,0)\), \(P_{1}=(1:1:1)\), \(P_{2}=(-1:1:1)\), \(P_{3}=(1:-1:1)\), then \(\mu_{0}([P_{1}\sqcup P_{2},P_{1}\sqcup P_{3}]_{+})=\mu_{0}([P_{2},P_{3}]_{+})=\pi\mathbf{i}\) and \(\xi_{0}(P_{1}\sqcup P_{2},P_{1}\sqcup P_{3})=\xi_{0}(P_{2},P_{3})=0\).

The distance \(d_{0}(A_{1},A_{2})\) between almost all planes \(A_{1}\), \(A_{2}\) forming dihedral angles is nonzero as long as their dimension is less than \(\operatorname{rank}(\mathfrak{R})-1\). If their dimension exceeds \(\operatorname{rank}(\mathfrak{R})-2\), their distance is always \(0\).

### Projections and parallels

Consider some point \(P=\mathbb{R}\boldsymbol{p}\) and some anisotropic plane \(U=\mathrm{P}\boldsymbol{U}\) of dimension \(k<n\) with \(P\not\in U^{\mathfrak{R}}\). The _perpendicular_ from \(P\) to \(U\) is the \((n{-}k)\)-plane \(\operatorname{perp}(P,U):=P\sqcup U^{\mathfrak{R}}\). This plane meets \(U\) at the point \(\operatorname{ped}(P,U):=\operatorname{perp}(P,U)\sqcap U\), the _projection_ or _pedal_ of \(P\) in \(U\). If \(P\in U\), then \(\operatorname{ped}(P,U)=P\). Regardless of the choice of the point \(P\), at least one of the points \(\operatorname{ped}(P,U)\), \(\operatorname{ped}(P,U^{\mathfrak{R}})\) is well-defined. If \(P\) is a point neither on \(U\) nor on \(U^{\mathfrak{R}}\), then both are well-defined and together with \(P\) they form a collinear triad. If \(P\in U^{\mathfrak{R}}\), then \(d_{0}(P,U)=\frac{1}{2}\pi\mathbf{i}\), otherwise \(d_{0}(P,U)=d_{0}(P,\operatorname{ped}(P,U))\). If \(U\) is a hyperplane with \(U^{\mathfrak{R}}=Q\) and \(P\neq Q\), then \(\operatorname{ped}(P,U)=(Q\,[\mathfrak{R}]\,P)\,Q-(Q\,[\mathfrak{R}]\,Q)\,P\). Suppose \(A\) is a second plane with \(\dim A\leq\dim U\) and \(A\sqcap U^{\mathfrak{R}}=\emptyset\). Then \(\operatorname{ped}(A,U):=\bigcup_{P\in A}\operatorname{ped}(P,U)\leq U\) is the _pedal of \(A\) in \(U\)_; this is a plane of the same dimension as \(A\). As the distance \(d_{0}(A,U)\) we define the distance between \(A\) and \(\operatorname{ped}(A,U)\).
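The closed form for the pedal in a hyperplane given above is immediately computable; a sketch (our addition):

```python
# A sketch of the pedal of a point in a hyperplane, using the closed form
# ped(P, U) = (q [R] p) q - (q [R] q) p stated above for U = Q^R.
import numpy as np

R = np.diag([1.0, 1.0, 1.0, -1.0])

def pedal_in_hyperplane(q, p):
    """Pedal of P = Rp in the hyperplane U = Q^R, Q = Rq anisotropic."""
    return (q @ R @ p) * q - (q @ R @ q) * p

q = np.array([0.0, 0.0, 0.0, 1.0])       # U = q^R is the hyperplane x_4 = 0
p = np.array([1.0, 2.0, 3.0, 4.0])
f = pedal_in_hyperplane(q, p)
print(f)                                  # (1, 2, 3, 0): representative of ped(P, U)
print(np.isclose(q @ R @ f, 0.0))         # ped(P, U) lies in U: True
```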
_Example_: \(n=4\), \(\mathfrak{R}=\operatorname{diag}(1,1,1,1,0)\). Put \(U=E_{1}\sqcup E_{2}\sqcup E_{3}\), \(P=(1\!:\!0\!:\!0\!:\!1\!:\!0)\), \(Q=(0\!:\!1\!:\!0\!:\!1\!:\!0)\), \(A=P\sqcup Q\). Then: \(\operatorname{ped}(A,U)=E_{1}\sqcup E_{2}\) and \(\xi_{0}(A,U)=\xi_{0}(A,E_{1}\sqcup E_{2})=2/3\).

We call the \(k\)-plane \(\operatorname{par}(P,U):=(U\sqcap P^{\mathfrak{R}})\sqcup P\) the _plane through \(P\) parallel to \(U\)_. Caution: In hyperbolic geometry two planes are often defined to be parallel to each other if their intersection is an isotropic set. Both concepts of parallelism have to be kept strictly apart.

### The angle between a line and a plane

Let \(H\) be an anisotropic plane and let \(L\) be an anisotropic line. We assume that \(L\) meets \(H\) in one point \(P\). The lines \(L\) and \(P\sqcup\operatorname{ped}(L,H)\) form two angles with vertex \(P\). If \(Q\) is a point on \(L\) and \(Q\neq P\), then

\[\sinh(\mu_{0}(\angle_{1}))=\sinh(\mu_{0}(\angle_{2}))=\frac{\sinh(d_{0}(Q,\operatorname{ped}(Q,H)))}{\sinh(d_{0}(Q,P))}=\frac{\cosh(d_{0}(Q,H^{\mathfrak{R}}))}{\sinh(d_{0}(Q,P))}\,\mathbf{i}\,.\]

If \(L\) is a line totally inside the hyperplane \(H\), then the angle distance between \(L\) and \(H\) is put to \(0\).

### Symmetry points, midpoints, segment bisectors and angle bisectors

Let \(U\) be an anisotropic plane and \(S\) be a nonempty subset of \(\mathrm{P}\boldsymbol{V}\). If \(S\) is invariant under the reflection in \(U\), then \(U\) and \(U^{\mathfrak{R}}\) are called _symmetry axes_ of \(S\). If a symmetry axis consists of only one point, then this point is called a _symmetry point_ of \(S\). Let \(P\) and \(Q\) be two distinct anisotropic points in \(\mathrm{P}\boldsymbol{V}\). If \(\operatorname{sgn}(P)\neq\operatorname{sgn}(Q)\), then \(\{P,Q\}\) has no symmetry points. If both points have the same sign, then the points \(P+Q\) and \(P-Q\) are called the _midpoints_ of \(\{P,Q\}\). \(P+Q\) is called the _midpoint_ of \([P,Q]_{+}\) and \(P-Q\) the _midpoint_ of \([P,Q]_{-}\). At least one of the two midpoints is anisotropic, and a reflection in this point leaves both midpoints fixed. Thus, both midpoints are symmetry points of \(\{P,Q\}\). If \(P\sqcup Q\) is an anisotropic line and \(P\pm Q\) an anisotropic point, \((P\pm Q)^{\mathfrak{R}}\) is a _perpendicular bisector_ of the segment \([P,Q]_{\mp}\). Let \(A_{1},A_{2}\) be two distinct anisotropic planes of the same dimension, and let \(\angle(A_{1},A_{2})\) be an angle with sides \(A_{1}\), \(A_{2}\) and vertex \(U:=A_{1}\sqcap A_{2}\) situated in an anisotropic plane \(A:=A_{1}\sqcup A_{2}\). If \(U\) is anisotropic, the line \(L:=U^{\mathfrak{R}}\sqcap A\) meets \(A_{1}\cup A_{2}\) in two points. If the segment \(\angle(A_{1},A_{2})\sqcap L\) has a midpoint \(M\), then \(M\sqcup U\) is called an _angle bisector_ of \(\angle(A_{1},A_{2})\).

### Distance of two equidimensional anisotropic planes

Let \(A_{1},A_{2}\leq\mathrm{P}\boldsymbol{V}\) be two distinct anisotropic planes of the same dimension \(m>0\). We put \(E:=A_{1}\sqcup A_{2}\), \(A_{1}^{\prime}:=E\sqcap A_{1}^{\mathfrak{R}}\), \(A_{2}^{\prime}:=E\sqcap A_{2}^{\mathfrak{R}}\) and assume that \(E\) is anisotropic and that \(A_{1}\sqcap A_{2}^{\prime}=A_{2}\sqcap A_{1}^{\prime}=0\). Then there are points \(Q_{1},\ldots,Q_{m+1}\in A_{1}\) and \(R_{1},\ldots,R_{m+1}\in A_{2}\) such that

1. \(\cosh(d_{0}(Q_{i},Q_{j}))=\cosh(d_{0}(R_{i},R_{j}))=0\), \(1\leq i<j\leq m+1\).
2. \(\cosh(d_{0}(Q_{i},R_{j}))=0\), \(i\neq j\).
If \(Q_{i}\neq R_{i}\), then \(Q_{i}\sqcup R_{i}\) is a line that meets both planes, \(A_{1}\) and \(A_{2}\), perpendicularly. 4. \(\zeta_{0}(A_{1},A_{2})=\prod_{i=1}^{m+1}(\cosh(d_{0}(Q_{i},R_{i})))^{2}\). Proof.: The proof will be given in several steps. We first assume that \(A_{1}\) and \(A_{2}\) are disjoint planes. The theorem is true if \(A_{1}\) and \(A_{2}\) are lines that do not intersect. The proof of this statement is rather technical and we use a CAS-system for calculations. Let \(P_{1},P_{2},P_{3},P_{4}\) be four anisotropic points and \(A_{1}=P_{1}\sqcup P_{2},A_{2}=P_{3}\sqcup P_{4}\). Put \(c_{ij}:=P_{i}^{\circ}(\mathbb{R})P_{j}^{\circ},1\leq i<j\leq 4\). For our purposes we may assume that \(c_{12}=c_{34}=0\). We introduce the number \[w:=\sum_{i=1}^{2}\sum_{k=3}^{4}\big{(}c_{il}^{\,2}c_{kk}^{\,2}c_{ jl}^{4}+2c_{ii}^{\,2}c_{kk}c_{ll}c_{jl}^{\,2}c_{jk}^{\,2}c_{jl}^{\,2}+2c_{kk}^{\,2}c_{ ii}c_{jl}^{\,2}c_{jl}^{\,2}\big{)}\\ +\ 2c_{11}c_{22}c_{33}c_{44}\big{(}c_{13}^{\,2}c_{24}^{\,2}-4c_{13 }c_{14}c_{23}c_{24}+c_{14}^{\,2}c_{23}^{\,2}\big{)},\] where the indices are chosen such that \(\{i,j\}=\{1,2\}\) and \(\{k,l\}=\{3,4\}\). We show that this number \(w\) is a non-negative real number by examining four different cases. (1) Both lines \(A_{1}=P_{1}\sqcup P_{2}\) and \(A_{2}=P_{3}\sqcup P_{4}\) are elliptic lines, and both lie in \(Q^{+}\) or both in \(Q^{-}\). Thus, \(c_{11}=c_{22}=c_{33}=c_{44}\in\{-1,1\}\) and \(c_{13},c_{14},c_{23},c_{24}\in\mathbb{R}\). In this case, \[w=\big{(}(c_{13}+c_{23})^{2}+(c_{14}+c_{24})^{2}\big{)}\big{(}(c_{13}-c_{23}) ^{2}+(c_{14}-c_{24})^{2}\big{)}\in\mathbb{R}^{\geq 0}.\] (2) Both lines \(A_{1}\) and \(A_{2}\) are elliptic lines, \(P_{1}\sqcup P_{2}\subseteq Q^{+}\), \(P_{3}\sqcup P_{4}\subseteq Q^{-}\). Here, \(c_{11}=c_{22}=1,c_{33}=c_{44}=-1\) and \(c_{13},c_{14},c_{23},c_{24}\in\mathbb{R}\). With \(c_{13}=c_{13}^{*}\mathbf{i},c_{14}=c_{14}^{*}\mathbf{i},c_{23}=c_{23}^{*} \mathbf{i},c_{24}=c_{24}^{*}\mathbf{i}\), we get \[w=\big{(}(c_{13}^{*}+c_{23}^{*})^{2}+(c_{14}^{*}+c_{24}^{*})^{2}\big{)}\big{(} (c_{13}^{*}-c_{23}^{*})^{2}+(c_{14}^{*}-c_{24}^{*})^{2}\big{)}\in\mathbb{R}^{ \geq 0}.\] (3) \(A_{1}\) is elliptic, \(A_{2}\) hyperbolic; \(c_{11}=c_{22}=c_{33}=1,c_{44}=-\,1;c_{13},c_{23}\in\mathbb{R}\), \(c_{14},c_{24}\in\mathbb{R}\mathbf{i}\). Then, \[w=\big{(}(c_{13}+c_{24}\mathbf{i})^{2}+(c_{23}-c_{14}\mathbf{i})^{2}\big{)} \big{(}(c_{13}-c_{24}\mathbf{i})^{2}+(c_{23}+c_{14}\mathbf{i})^{2}\big{)}\in \mathbb{R}^{\geq 0}.\] (4) Both lines \(A_{1}\) and \(A_{2}\) are hyperbolic lines; \(c_{11}=c_{33}=1,c_{22}=c_{44}=-\,1;c_{13},c_{24}\in\mathbb{R}\), \(c_{14},c_{23}\in\mathbb{R}\). 
With \(c_{14}=c_{14}^{*}\mathbf{i},c_{23}=c_{24}^{*}\mathbf{i}\), we get \[w=\big{(}(c_{13}+c_{24})^{2}+(c_{14}^{*}+c_{23}^{*})^{2}\big{)}\big{(}(c_{13}-c _{24})^{2}+(c_{14}^{*}-c_{23}^{*})^{2}\big{)}\in\mathbb{R}^{\geq 0}.\] Define points \(Q_{1},Q_{2}\) on \(P_{1}\sqcup P_{2}\) and \(R_{1},R_{2}\) on \(P_{3}\sqcup P_{4}\) by \[Q_{1}:=(-\sqrt{w}+q+2(c_{13}c_{23}c_{22}c_{44}+c_{14}c_{24}c_{22} c_{33})P_{1}\\ +(+\sqrt{w}+q-2(c_{13}c_{23}c_{11}c_{44}+c_{14}c_{24}c_{11}c_{33})P_{2}, \\ Q_{2}:=(+\sqrt{w}+q+2(c_{13}c_{23}c_{22}c_{44}+c_{14}c_{24}c_{22} c_{33})P_{1}\\ +(-\sqrt{w}+q-2(c_{13}c_{23}c_{11}c_{44}+c_{14}c_{24}c_{11}c_{33})P_{2}, \\ \text{with}\ \ q:=c_{11}(c_{33}c_{24}^{\,2}+c_{44}c_{23}^{\,2})-c_{22}(c_{33}c_{14}^{\,2}+c_{44}c_{13 }^{\,2}),\\ R_{1}:=(-\sqrt{w}+r+2(c_{13}c_{23}c_{22}c_{44}+c_{14}c_{24}c_{22} c_{33})P_{3}\\ +(+\sqrt{w}+r+2(c_{13}c_{23}c_{22}c_{44}+c_{14}c_{24}c_{22}c_{33})P_{3} \\ R_{2}:=(+\sqrt{w}+r+2(c_{13}c_{23}c_{22}c_{44}+c_{14}c_{24}c_{22} c_{33})P_{3}\\ +(-\sqrt{w}+r-2(c_{13}c_{23}c_{11}c_{44}+c_{14}c_{24}c_{11}c_{33})P_{4}, \\ +(-\sqrt{w}+r+2(c_{13}c_{23}c_{11}c_{44}+c_{14}c_{24}c_{11}c_{33})P_{4}, \end{array}\] \[\text{with}\ \ r:=c_{33}(c_{11}c_{24}^{\ 2}+c_{22}c_{14}^{\ 2})-c_{44}(c_{11}c_{23}^{\ 2}+c_{22}c_{13}^{\ 2}).\] The mapping \((A_{1}\!\!\smallsetminus\!Q)\times(A_{2}\!\!\smallsetminus\!Q)\to\mathbb{C}\), \((S_{1},S_{2})\ \mapsto\ S_{1}[\mathfrak{R}_{1}]S_{2}\), is stationary at \((Q_{1},R_{1})\) and at \((Q_{2},R_{2})\), and \(Q_{1}\sqcup R_{1}\) and \(Q_{2}\sqcup R_{2}\) are the two lines that meet each of the sets \(A_{1}\), \(A_{2}\), \(A_{1}^{\prime}\), \(A_{2}^{\prime}\). Thus, \(Q_{1}\!\!=\!\text{ped}(R_{1},P_{1}\sqcup P_{2})\), \(R_{1}\!\!=\!\text{ped}(Q_{1};P_{3}\sqcup P_{4})\), \(Q_{2}\!\!=\!\text{ped}(R_{2};P_{1}\sqcup P_{2})\), \(R_{2}\!\!=\!\text{ped}(Q_{2};P_{3}\sqcup P_{4})\). \(d_{0}(Q_{1},Q_{2})=d_{0}(R_{1},R_{2})=d_{0}(Q_{1},R_{2})=d_{0}(R_{1},Q_{2})= \frac{1}{2}\pi\mathbf{i}\). Since \(A_{1}\cap A_{2}=\emptyset\), we therefore get \(Q_{2}\sqcup R_{2}\leq(Q_{1}\sqcup R_{1})^{\mathfrak{R}}\). Moreover, \(\zeta_{0}(Q_{1}\sqcup Q_{2},R_{1}\sqcup R_{2})\) \[=\frac{\big{(}\det\begin{pmatrix}Q_{1}^{\circ(\mathfrak{R}_{1})}R_{1}^{ \circ}&0\\ 0&Q_{2}^{\circ(\mathfrak{R}_{2})}R_{2}^{\circ}\end{pmatrix}\big{)}^{2}}{\det \begin{pmatrix}Q_{1}^{\circ(\mathfrak{R}_{1})}Q_{1}^{\circ}&0\\ 0&Q_{2}^{\circ(\mathfrak{R}_{2})}Q_{2}^{\circ}\end{pmatrix}\det\begin{pmatrix} R_{1}^{\circ}(\mathfrak{R}_{1})R_{1}^{\circ}&0\\ 0&R_{2}^{\circ(\mathfrak{R}_{1})}R_{2}^{\circ}\end{pmatrix}}\] \[=\big{(}\cosh(d_{0}(Q_{1},R_{1}))\big{)}^{2}\big{(}\cosh(d_{0}(Q _{2},R_{2}))\big{)}^{2}.\] Let us now assume that \(m>1\) and that the statement holds for \(m-1\). First of all, there exists (at least) one line \(L\) which meets the planes \(A_{1}\), \(A_{2}\), \(A_{1}^{\prime}\), \(A_{2}^{\prime}\). This is a consequence of a lemma in enumerative geometry (Schubert calculus) which states that, if four \(m\)-planes in a projective space of dimension \(2m+1\) are in a "general position", then there exist \(m+1\) lines which meet all these planes, see [23] for a proof. If \(L\) meets \(A_{1}\) at \(Q_{1}\), then there exist anisotropic points \(Q_{2},\ldots,Q_{m+1}\in A_{1}\) such that \(d_{0}(Q_{i},Q_{j})=\frac{1}{2}\pi\mathbf{i}\), \(1\!\leq\!i\!<\!j\!\leq\!m\!+\!1\), and if \(L\) meets \(A_{2}\) at \(R_{1}\), then there are anisotropic points \(R_{2}\),..., \(R_{m+1}\in A_{2}\) with \(d_{0}(R_{i},R_{j})=\frac{1}{2}\pi\mathbf{i}\), \(1\!\leq\!i\!<\!j\!\leq\!m\!+\!1\). 
The proof for \(m=1\) shows that \(d_{0}(Q_{i},R_{j})=\frac{1}{2}\pi\mathbf{i}\) for \(i\neq j\). \(U_{1}\!:\!=\!\text{span}(Q_{2},\ldots,Q_{m+1})\) and \(U_{2}\!:\!=\!\text{span}(R_{2},\ldots,R_{m+1})\) are disjoint \((m\!-\!1)\)-planes in \((P_{1}\sqcup Q_{1})^{\mathfrak{R}}\). We apply formula (\(\pm\)) getting \(\zeta_{0}(A_{1},A_{2})=\big{(}\cosh(d_{0}(P_{1},Q_{1}))\big{)}^{2}\,\zeta_{0}( U_{1},U_{2})\). We complete the proof for disjoint planes \(A_{1}\) and \(A_{2}\) by including the induction assumption. We now assume that \(A_{1}\) and \(A_{2}\) intersect in a non-empty subspace \(U\) of dimension \(s\). We can find points \(Q_{1},\ldots,Q_{m+1}\) which span \(A_{1}\) such that \(\text{span}(Q_{1},\ldots,Q_{s+1})=U\) and \(\cosh(d_{0}(Q_{i},Q_{j}))=0\) for \(1\!\leq\!i\!<\!j\!\leq\!m\!+\!1\). Put \(R_{i}=Q_{i}\), \(1\leq\!i\leq\!s\!+\!1\). We can find points \(R_{s+2}\),..., \(R_{m+1}\) such that \(\cosh(d_{0}(R_{i},R_{j}))=0\) for \(1\!\leq\!i\!<\!j\!\leq\!m\!+\!1\). The intersections \(A_{1}\sqcap U^{\mathfrak{R}}\) and \(A_{2}\sqcap U^{\mathfrak{R}}\) are two disjoint subspaces of equal dimension in \(U^{\mathfrak{R}}\). With formula (\(\pm\)) we get \(\zeta_{0}(A_{1},A_{2})=\zeta_{0}(A_{1}\sqcap U^{\mathfrak{R}},A_{2}\sqcap U^{ \mathfrak{R}})\). We can apply the theorem to the subspaces \(A_{1}\sqcap U^{\mathfrak{R}}\) and \(A_{2}\sqcap U^{\mathfrak{R}}\) and finish the proof. \(\square\) ### The function \(\psi\) and staudtian of a finite set of points We introduce a function \(\psi\) which is defined on the powerset of \(P\boldsymbol{V}\) and has values in \(\mathbb{R}\) by: \(\psi(S)=0\), unless \(S\) consists of finitely many anisotropic points. If \(S\) consists of \(s\in\mathbb{N}^{+}\) anisotropic points \(P_{1}\),..., \(P_{s}\), then \(\psi(S)=\det((P_{i}^{\circ}[\mathfrak{R}]\,P_{j}^{\circ})_{1\leq i,j\leq s})\). If \(\psi(S)\neq 0\), we call the number \(\frac{1}{(s-1)!}\sqrt{|\psi(S)|}\) the _staudtian_ of \(S\) (cf. [13, 7] for the case \(n\!=\!2\)). Given two anisotropic points \(P_{1}\), \(P_{2}\), then \(\psi(\{P_{1},P_{2}\})=\xi_{0}(P_{1},P_{2})=-\sinh^{2}(d_{0}(P_{1},P_{2}))\). Let \(P_{1},\ldots,P_{s}\) be independent anisotropic points and \(S\!\!=\!\{P_{1},\ldots,P_{s}\}\). If the plane span\((S)\) is isotropic, then \(\psi(S)=0\). If the planes \(\operatorname{span}(S)\) and \(P_{1}\sqcup\cdots\sqcup P_{s-1}\) are anisotropic, then \[-\psi(S\!\smallsetminus\!\{P_{s}\})\sinh^{2}(d_{0}(P_{s},\operatorname{span}(S\! \smallsetminus\!\{P_{s}\})))=\psi(S).\] If \(P_{1},\,\ldots,\,P_{s}\) are anisotropic and \(Q=q_{1}\,P_{1}+\cdots+q_{s}\,P_{s}\) with \(q_{1}q_{2}\cdots q_{s}\neq 0\) is another anisotropic point, dependent on \(P_{1}\,,\,\ldots,\,P_{s}\), then for \(0<i,j\leq s\) \[\psi(S\!\smallsetminus\!\{P_{i}\}\cup\{Q\})\,q_{j}^{2}=\psi(S\!\smallsetminus\!\{P _{j}\}\cup\{Q\})\,q_{i}^{2}\,.\] ### Trigonometry Three independent points \(P_{1},P_{2},P_{3}\) in \(\operatorname{P}\!\boldsymbol{V}\) form a triangle. We assume that the plane \(P_{1}\sqcup P_{2}\sqcup P_{3}\) and the three sidelines \(P_{2}\sqcup P_{3},\,P_{3}\sqcup P_{1},\,P_{1}\sqcup P_{2}\) of this triangle are anisotropic. The sidelengths are \(a_{1}\,:=\mu([P_{2},P_{3}]_{+}),a_{2}\,:=\mu([P_{3},P_{1}]_{+}),a_{3}\,:=\mu([ P_{1},P_{2}]_{+})\); the measures of the interior angles are \(\alpha_{1}\,:=\,\mu([P_{1}\sqcup P_{2},P_{1}\sqcup P_{3}]_{+}),a_{2}\,:=\,\mu( [P_{2}\sqcup P_{3},P_{2}\sqcup P_{1}]_{+}),a_{3}\,:=\mu([P_{3}\sqcup P_{1},P_ {1}\sqcup P_{2}]_{+})\). 
We mention that \[\sinh(a_{1})=\sinh(d_{0}(P_{2},P_{3})),\,\ldots,\qquad\sinh(a_{1})\,\,=\,\sinh (d_{0}(P_{1}\sqcup P_{2},P_{1}\sqcup P_{3})),\ldots,\] \[\cosh^{2}(a_{1})=\cosh^{2}(d_{0}(P_{2},P_{3})),\,\ldots,\quad\cosh^{2}(\alpha_ {1})=\cosh^{2}(d_{0}(P_{1}\sqcup P_{2},P_{1}\sqcup P_{3})),\,\ldots.\] The law of sines: \[\sinh^{2}(a_{1})\,:\,\sinh^{2}(\alpha_{1})\,=\,\sinh^{2}(a_{2})\,:\,\sinh^{2} (\alpha_{2})\,=\,\,\sinh^{2}(a_{3})\,:\,\sinh^{2}(\alpha_{3})\,.\] Two laws of cosines: \[\cosh^{2}(\alpha_{1})=\big{(}\frac{\cosh(a_{1})-\cosh(a_{2})\cosh(a_{3})}{ \sinh(a_{2})\sinh(a_{3})}\big{)}^{2}\] \[\cosh^{2}(a_{1})=\big{(}\frac{\cosh(a_{1})+\cosh(a_{2})\cosh(a_{3})}{\sinh(a_{ 2})\sinh(a_{3})}\big{)}^{2}\,.\] Two generalized laws of sines: Let \(U\leq\operatorname{P}\!\boldsymbol{V}\) be a anisotropic plane of dimension \(s\geq 2\) and let \(S\,:=\,\{P_{1},\ldots,\,P_{s+1}\}\) be a set of independent anisotropic points in \(U\). Hence \(S\) is the set of vertices of an \(s\)-simplex in \(U\) and the sets \(S_{i}\,:=\,S\!\smallsetminus\!\{P_{i}\},\,i=1,\,\ldots,\,s+1\), are the sets of vertices of its \((s-1)\)-faces. Define for \(i\neq j\) points \(P_{ij}\,:=\,(P_{i}\sqcup P_{j})\sqcap P_{i}^{\operatorname{P}\!\boldsymbol{ \Psi}}\) and put \(\alpha_{i}\,:=\,\big{\{}P_{ij}|1\leq j\leq s+1\) and \(j\neq i\big{\}}\). \(\alpha_{i}\) can Figure 1. All figures were created with the software program GeoGebra [33]. be interpreted as the _interior angle_ of the simplex _at the vertex_\(P_{i}\). Then, cf. [5]: (1) If \(\operatorname{span}(S_{1})\),..., \(\operatorname{span}(S_{s+1})\) are anisotropic and \(i\in\{1,\ldots,s+1\}\), \[\frac{\psi(\alpha_{i})}{\psi(S_{i})}=\frac{\psi^{s-1}(S)}{\prod_{j=1}^{s+1} \psi(S_{j})}\.\] (2) If for distinct numbers \(i,j\in\{1,\ldots,s+1\}\) the three planes generated by \(S_{i},S_{j}\) and \(S\smallsetminus\{P_{i},P_{j}\}\) are anisotropic, then \[\xi_{0}(\operatorname{span}(S_{i}),\operatorname{span}(S_{j}))=-\sinh^{2} \bigl{(}d\,\bigl{(}\operatorname{span}(S_{i}),\operatorname{span}(S_{j}) \bigr{)}\bigr{)}=\frac{\psi(S)\,\psi(S\smallsetminus\{P_{i},P_{j}\})}{\psi(S_{i })\,\psi(S_{j})}\] ### Quadrics and spheres Let \((P_{1},\ldots,P_{s+1})\) be a tuple of anisotropic points generating an anisotropic \(s\)-plane \(A\). Let \(\mathfrak{N}=(\mathfrak{n}_{ij})\in\mathbb{R}^{(s+1)\times(s+1)}\) be an indefinite, regular symmetric matrix. Then \(Q_{\mathfrak{N}}=Q(\mathfrak{N},(P_{1},\ldots,P_{s+1})):=\{r_{1}P_{1}+\cdots+ r_{s+1}P_{s+1}|\sum_{1\leq i,j\leq s+1}\mathfrak{n}_{ij}r_{i}r_{j}=0\}\) is a non-empty quadric in \(A\). The polar of a point \(R=\sum_{j}r_{j}P_{j}\) with respect to \(Q_{\mathfrak{N}}\), \(\operatorname{polar}(R,Q_{\mathfrak{N}})\)\(:=\{\sum_{j}t_{j}P_{j}\mid\sum_{1\leq i,j\leq s+1}\mathfrak{n}_{ij}t_{i}r_{j}=0\}\), is a hyperplane of \(A\). An anisotropic point \(R\) is a symmetry point of \(Q_{\mathfrak{N}}\) iff \(\operatorname{polar}(R,Q_{\mathfrak{N}})=R^{\mathfrak{N}}\sqcap A\). _Proof_: Take any line through \(R\) that meets \(Q_{\mathfrak{N}}\) in two points \(T_{1}\) and \(T_{2}\). This line meets the \(\operatorname{polar}(R,Q_{\mathfrak{N}})\) at the harmonic conjugate \(R^{\prime}\) of \(R\) with respect to \(T_{1}\) and \(T_{2}\). Precisely when \(R\) is an anisotropic midpoint of \(\{T_{1},T_{2}\}\), the point \(R^{\prime}\) is a (second) anisotropic midpoint of \(\{T_{1},T_{1}\}\) and lies on \(R^{\mathfrak{N}}\). 
\(Q_{\mathfrak{N}}\) is a _hypersphere_ of \(A\) if there exists an anisotropic hyperplane \(H\) of \(A\) whose anisotropic points are symmetry points of \(Q_{\mathfrak{N}}\). In this case, the point \(H^{\mathfrak{N}}\sqcap A\) is called the _center_ of the hypersphere. Figure 2. The blue conic is the absolute in a hyperbolic plane. The green conics are 1-spheres (circles) with center \(M\). The numbers indicate their quadrances. Let us assume that \(Q_{\mathfrak{R}}\) is a _hypersphere_ of \(A\) and \(M\) is its center. Then \(M^{\mathfrak{R}}\cap Q_{\mathfrak{R}}\) is the set of isotropic points of this sphere. A sphere can have real isotropic points; but these form a thin set within this sphere. Suppose the center \(M\) of \(Q_{\mathfrak{R}}\) is anisotropic. Then \(M\) is also a symmetry point of \(Q_{\mathfrak{R}}\). If \(T_{1}\) and \(T_{2}\) are two distinct anisotropic points on \(Q_{\mathfrak{R}}\), then \(d_{0}(T_{1},M)=d_{0}(T_{2},M)\), because: Let \(R\) be the intersection of the line \(T_{1}\sqcup T_{2}\) with the hyperplane \(M^{\mathfrak{R}}\) of \(\mathbf{P}\boldsymbol{V}\), then the triple \((T_{1},M,T_{2})\) is mapped onto the triple \((T_{2},M,T_{1})\) by the (distance preserving) reflection in the point \(R\). If \(M\) is anisotropic, we can define the _radius_\(r\) of a hypersphere as the distance between its center and any of its anisotropic points, and we call \(\xi_{0}(Q_{\mathfrak{R}}):=1-\cosh^{2}(r)\) the quadrance of \(Q_{\mathfrak{R}}\). If the center \(M\) of \(Q_{\mathfrak{R}}\) is isotropic, \(M\) is a touchpoint of the two quadrics \(Q_{\mathfrak{R}}\) and \(Q_{\mathfrak{R}}\), and \(Q_{\mathfrak{R}}\) is called a _horosphere_. If \(Q_{\mathfrak{R}}\) has several (more than one) isotropic points, its center is a point outside that sphere and the lines connecting an isotropic point of \(Q_{\mathfrak{R}}\) with the center are _tangents_ of the sphere. The union of these tangents through \(M\) is called the _tangent-cone_ of \(Q_{\mathfrak{R}}\) with vertex \(M\). Suppose \(Q_{\mathfrak{R}}\) has an anisotropic center \(M=m_{1}P_{1}+\cdots+m_{s+1}P_{s+1}\) and radius \(r\in\mathbb{D}\), then \[Q_{\mathfrak{R}}=\big{\{}r_{1}P_{1}+\cdots+r_{s+1}P_{s+1}|(\sum_{1\leq i,j\leq s +1}\!\!\!n_{ij}r_{i}m_{j})^{2}=\cosh^{2}(r)\,(\,\sum_{1\leq i,j\leq s+1}\!\!\!n _{ij}m_{i}m_{j})(\sum_{1\leq i,j\leq s+1}\!\!\!n_{ij}r_{i}r_{j})\big{\}}.\] If \(Q_{\mathfrak{R}}\) passes through a point \(T=t_{1}P_{1}+\ldots+t_{s+1}P_{s+1}\), \(Q_{\mathfrak{R}}\) consists of all points \(Q=r_{1}P_{1}+\cdots+r_{s+1}P_{s+1}\) satisfying the equation \[(\,\sum_{1\leq i,j\leq s+1}\!\!\!n_{ij}r_{i}m_{j})^{2}(\sum_{1\leq i,j\leq s+ 1}\!\!\!n_{ij}t_{i}t_{j})=(\,\sum_{1\leq i,j\leq s+1}\!\!\!n_{ij}t_{i}m_{j})^{ 2}(\,\sum_{1\leq i,j\leq s+1}\!\!\!n_{ij}r_{i}r_{j}).\] Let \(H_{1}\) and \(H_{2}\) be two hyperplanes of \(A\) touching \(Q_{\mathfrak{R}}\) in anisotropic points \(Q_{1},Q_{2}\), respectively. If the center \(M\) of \(Q_{\mathfrak{R}}\) is anisotropic, then \(\zeta_{0}(H_{1},M^{\mathfrak{R}})=\zeta_{0}(H_{2},M^{\mathfrak{R}})\). Here are some _examples_ of hyperspheres: (1) Suppose \(A\) is an anisotropic line. Then two distinct anisotropic points \(Q_{1},Q_{2}\in A\) form a hypersphere iff they have a midpoint. A single anisotropic point \(Q\in A\) (when counted with multiplicity 2) is a hypersphere in \(A\) with radius 0. (2) A (non-empty) tangent-cone of a hypersphere in an anisotropic plane \(A\) with \(\dim(A)>1\) is a hypersphere in \(A\) with radius \(r=0\). 
(3) If \(M\) is an anisotropic point in an anisotropic plane \(A\), then \(A\cap M^{\mathfrak{R}}\) is (doubly counted) a hypersphere in \(A\); \(M\) is its center and its radius is \(\frac{1}{2}\pi\mathbf{i}\). ### Simplices and their classical centers4 Footnote 4: For a definition of the term _triangle center_ see [14]. See also [6, 8, 31, 32]. Let \((P_{1},\ldots,P_{s+1})\) be a tuple of anisotropic points generating an anisotropic \(s\)-plane \(A\). We denote a point \(R=r_{1}P_{1}+\cdots+r_{s+1}P_{s+1}\in A\) by \([r_{1}:\cdots:r_{s+1}]\). The set \(\{[r_{1}:\cdots:r_{s+1}]\,|\,r_{1}r_{2}\cdots r_{s+1}\neq 0\,\}\) consists of \(2^{s}\) connected components. The closure of each component is a _simplex_ with vertices \(P_{1},\ldots,P_{s+1}\). We will especially focus on the simplex \(\Sigma=\Sigma_{0}:=\{[r_{1}:\cdots:r_{s+1}]\,|\,p_{1},\ldots,p_{s+1}\geq 0\,\}\); it will be our reference simplex. The other simplices with these vertices will be called the _mates_ of \(\Sigma\). A plane of dimension \(k\leq s\) through \(k+1\) vertices of \(\Sigma\) is called a \(k\)_-sideplane_, the intersection of \(\Sigma\) with one of its \(k\)-sideplanes a \(k\)_-face_ of \(\Sigma\). Instead of \(1\)-sideplanes we usually speak of _sidelines_, and the \(1\)-faces are also called _edges_, the \((n-1)\)-faces _facets_ of \(\Sigma\). Even the vertices \(P_{i}\) are accepted as sidelplanes of \(\Sigma\). The classical centers of an \(n\)-simplex are the _centroid_\(G\), the _circumcenter_\(O\) and the _incenter_\(I\). We want all the edges of \(\Sigma\) to have midpoints and demand that all sidelines of \(\Sigma\) are anisotropic and all vertices \(P_{1}\),..., \(P_{s+1}\) have the same sign. In this case, the _centroid_ of \(\Sigma\) is the point \(G=[1\,:\,\cdots\,:\,1]\) and, more generally, the centroid of a \(k\)-face \(\Sigma\cap(P_{i_{1}}\sqcup\cdots\sqcup P_{i_{k+1}})\) is the point \(P_{i_{1}}+\cdots+P_{i_{k+1}}\). The line joining the centroid of a \(k\)-face of \(\Sigma\) with the centroid of the opposite \((s{-}k{-}1)\)-face passes through \(G\). The centroids of the mates of \(\Sigma\) are the points \(R=[r_{1}\,:\,\cdots\,:\,r_{s+1}]\) with \(P\neq G\) and \(r_{j}\in\{-1,1\}\) for \(1\leq j\leq s{+}1\). Without loss of generality we can put \(r_{1}\) to \(1\). We label the centroids of \(\Sigma\) and its mates by integers from \(0\) to \(2^{s}-1\) as follows. Given any centroid \([1\,:\,r_{2}\,:\,\cdots\,:\,r_{s+1}]\), define numbers \(r_{j}^{\star}\in\{0,1\}\) by \(r_{j}^{\star}=1\) iff \(r_{s+1-j}=-1\). Then \(G_{k}\,:=[1\,:\,r_{2}\,:\,\cdots\,:\,r_{s+1}]\) iff \(k=\sum_{0\leq j\leq s}r_{j}^{\star}2^{j}\). Here are two examples: \(G_{0}=G\) and \(G_{5}=[1\,:\,\cdots\,:\,1\,:\,-1\,:\,1\,:\,-1]\). The simplices with vertices \(P_{1},\ldots,P_{s+1}\) will now be labeled such that \(\Sigma_{k}\) is the simplex with \(G_{k}\in\Sigma_{k}\) for \(k=0,\ldots,2^{s}{-}1\). We accept each point \(G_{k},0\leq k\leq 2^{s}{-}1\), as a centroid of each of the simplices \(\Sigma_{j},0\leq j\leq 2^{s}{-}1\), but define \(G_{k}\) to be the _proper_ centroid of \(\Sigma_{j}\) iff \(k=j\). We specify the edge lengths of simplex \(\Sigma_{k}\). 
The edge of \(\Sigma_{k}\), \(k=\sum_{0\leq j\leq s}r_{j}^{\star}2^{j}\), with vertices \(P_{j}\) and \(P_{l}\) has the length \[\mu_{k;jl}\,=\,\left\{\begin{array}{ll}\mu_{0;jl}=\,\mu([P_{j},P_{l}]_{+}) \,,&\mbox{if }r_{j}^{\star}r_{l}^{\star}=1,\\ \mu([P_{j},P_{l}]_{-})\,,&\mbox{if }r_{j}^{\star}r_{l}^{\star}=-1.\end{array}\right.\] A _circumsphere_ of \(\Sigma\) is a hypersphere in \(A\) passing through all vertices of \(\Sigma\). There are \(2^{s}\) circumspheres \(S_{0}\,,\ldots,S_{2^{s}-1}\) of \(\Sigma\), and all of them are also hyperspheres of the mates of \(\Sigma\). But as for the centroids, there is a bijective correspondence between these hyperspheres and simplices in such a way that we say that \(S_{k}\) is the _proper circumsphere_ of \(\Sigma_{k}\). All inner points of \(\Sigma_{k}\) are points inside the proper circumsphere of \(\Sigma_{k}\), but the center of this circumsphere need not lie inside \(\Sigma_{k}\). The proper circumsphere of \(\Sigma\) is \(S_{0}=\big{\{}[r_{1}\,:\,\cdots\,:\,r_{s+1}]\,|\sum_{1\leq i,j\leq s+1}\sinh^ {2}(\frac{1}{2}d_{0}(P_{i},P_{j}))\,r_{i}r_{j}=0\big{\}}\). _Proof_: It is obvious that if we intersect a circumsphere \(S_{i}\) with \(k\)-sideplanes of \(\Sigma\), we get spheres of dimension \(k-1\), each passing through \(k+1\) of the vertices of \(\Sigma\). For \(k=2\) we get circumcircles of triangles. The validity of the equation is proven for these one-dimensional subsets of \(S\) (see [7, 8]). From this follows the correctness of the equation. \(\square\) Define \(\mathfrak{G}=(\mathfrak{c}_{ij})\in\mathbb{R}^{(s+1)\times(s+1)}\) by \(\mathfrak{c}_{ij}\,:=\,P_{i}^{\,\circ}[\mathfrak{A}]P_{j}^{\,\circ}\). The proper circumcenter of \(\Sigma_{0}\) is \(O=O_{0}=[r_{1}\,:\,\cdots\,:\,r_{s+1}]\) with \((r_{1},\ldots,r_{s+1})\,\mathfrak{G}=(1,1,\ldots,1)\), the proper circumcenter of \(\Sigma_{1}\) is \(O_{1}=[r_{1}\,:\,\cdots\,:\,r_{s+1}]\) with \((r_{1},\ldots,r_{s+1})\,\mathfrak{G}=(1,\ldots,1,-1)\) etc., cf. [13, 8]. _Proof_: We give a proof for \(O=O_{0}\): \(O\) is a point on all bisectors of segments \([P_{i},P_{j}],0\leq i<j\leq s{+}1\). Therefore, \(\mathfrak{c}_{0}(O,P_{i}{-}P_{j})=1\)\((0{\leq}i{<}j{\leq}s{+}1)\) and \(O^{\mathfrak{A}}\sqcap A=\{[x_{1}\,\ldots\,x_{n+1}]\,|\,x_{1}+\cdots+x_{n+1}=0\}\). \(\square\) _Remark_: If \(Q=[q_{1}\,:\,\ldots\,,q_{s+1}]\) is a point in \(A\) with \(q_{1}q_{2}\,\ldots\,q_{s+1}\neq 0\), we call the \((s{-}1)\)-plane \(Q^{\Sigma}\,:=\big{\{}[x_{1}\,:\,\ldots\,:\,x_{s+1}]\,|\,\frac{x_{1}}{q_{1}} +\cdots+\frac{x_{s+1}}{q_{s+1}}=0\big{\}}\) the \(\Sigma\)-polar of \(Q\). For \(i=0,\,\ldots,2^{s}{-}1\) the equation \(O_{i}\,:=A\sqcap(G_{i}^{\Sigma})^{\mathfrak{M}}\) applies. An equation for the radius \(r\) of \(S_{0}\) is \[\tanh^{2}(r)=\tanh^{2}(d_{0}(O,P_{1}))=1-\operatorname{sgn}(P_{1})\big{(}(1, \ldots,1)(\mathfrak{G}^{-1})(1,\ldots,1)\big{)}.\] _Example_: Let \(P_{1}\), \(P_{2}\), \(P_{3}\) be three distinct points in the elliptic plane \((\mathbb{P}\mathbb{R}^{3},\mathfrak{M}=\operatorname{diag}(1,1,1))\). Put \(c_{ij}\,:=\,\cosh(d_{0}(P_{i},P_{j}))\leq 1\). Then \(\mathfrak{G}=\left(\begin{array}{ccc}1&c_{12}&c_{13}\\ c_{12}&1&c_{23}\\ c_{13}&c_{23}&1\end{array}\right)\). 
We calculate \(O\) and \(\tanh^{2}(r)\): \[O =(1-c_{23})(1+c_{23}-c_{12}-c_{13})\,P_{1}+(1-c_{13})(1+c_{13}-c_ {12}-c_{23})\,P_{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(1-c_{12})(1+c_{12}-c _{13}-c_{23})\,P_{3},\] \[\tanh^{2}(r) =\frac{2(c_{12}-1)(c_{13}-1)(c_{23}-1)}{\det(\mathfrak{G})}= \frac{2(c_{12}-1)(c_{13}-1)(c_{23}-1)}{2c_{12}c_{13}c_{23}-c_{12}^{2}-c_{13}^{ 2}-c_{23}^{2}+1}.\] We denote the \((s-1)\)-sideplane of \(\Sigma\) opposite the vertex \(P_{k}\) by \(H_{k}\) and put \(Q_{k}:=H_{k}^{\mathfrak{M}}\sqcap\ A\). There are \(2^{s}\) simplices \(\Sigma^{0},\ldots,\Sigma^{2^{s}-1}\) with vertices \(Q_{1},\ldots,Q_{s+1}\) and there is a \(1\,:\,1\)-correspondence \(\Sigma_{k}\mapsto\Sigma^{k}\) given by \(\Sigma^{k}\,:=A\setminus\bigcup\limits_{R\text{ inner point of }\Sigma_{k}}R^{\mathfrak{M}}\). We call \(\Sigma^{k}\) the _dual_ of \(\Sigma_{k}\). An _insphere_ of \(\Sigma\) is a hypersphere in \(A\) which touches each \((s-1)\)-dimensional sideplane \(H_{k}\) of \(\Sigma\). Thus, its center has the same distance from all these planes and hence the same distance from the vertices \(Q_{k}\) of \(\Sigma^{0}\). In other words, the incenters of \(\Sigma\) are the circumcenters of \(\Sigma^{0}\). Since circumcenters of a simplex exist only if all its vertices have the same sign, a necessary condition for the existence of incenters of \(\Sigma\) is that all vertices of \(\Sigma^{0}\) have the same sign. We assume now, that \(\Sigma_{0}\) has got an insphere and that the facet of \(\Sigma_{0}\) opposite vertex \(P_{i}\) has the staudtian \(s_{i}\). Then \(s_{i}/s_{j}\in\mathbb{R}^{+}\) for \(1\leq i,j\leq n+1\). The proper incenter of \(\Sigma_{0}\) is \(I_{0}=[s_{1}\,:\,\ldots\,:\,s_{n+1}]=[1/\sinh(d_{0}(P_{1},H_{1})):\,\cdots\,: \,1/\sinh(d_{0}(P_{s+1},H_{s+1}))]\). \(I_{0}\) can also be written as a function of the interior angles \(\alpha_{i}\) at the vertices of \(\Sigma_{0}\): \(I_{0}=[\sqrt{\nu(\alpha_{1})}\,:\,\cdots\,:\sqrt{\nu(\alpha_{s+1})}\,]\). The proper circumcenter of \(\Sigma^{k}\) is the proper incenter of \(\Sigma_{k}\), and the proper incenter of \(\Sigma^{k}\) is the proper circumcenter of \(\Sigma_{k}\). We denote the proper incenter of \(\Sigma_{k}\) by \(I_{k}\). This incenter \(I_{k}\) is always a point inside the simplex \(\Sigma_{k}\), but all the points inside this insphere may lie outside the simplex, see [8]. Since \(I_{k}\in\Sigma_{k}\), \(I_{k}\) is the barycentric product of \(I_{0}\) with the centroid \(G_{k}\). For example, if \(s=3\) and \(k=2\), then \(I_{k}=[s_{1}\,:\,s_{2}\,:\,-s_{3}\,:\,s_{4}]\). Let \(H_{i}\), as before, be the \((s-1)\)-sideplane of \(\Sigma\) opposite vertex \(P_{i}\). The points \(R_{1},\ldots,R_{s+1}\), \(R_{i}\,:=(P_{i}\sqcup H_{i}^{\mathfrak{M}})\sqcap H_{i}\), are called the pedals of \(\Sigma\). The tuples \(P_{1}\),..., \(P_{s+1}\) and \(R_{1},\ldots,R_{s+1}\) are, in general, not perspective. If they are, the perspector is called _orthocenter_ of \(\Sigma\). ### Radical centers of spheres Let \((P_{1},\ldots,P_{s+1})\) be a tuple of anisotropic points generating an anisotropic \(s\)-plane \(A\). We assume that all these points have the same sign and that all lines \(P_{i}\sqcup P_{j}\) are anisotropc lines of the same type, either all elliptic or all hyperbolic. As in the last section, we regard \((P_{1},\ldots,P_{s+1})\) as vertices of \(2^{s}\) simplices \(\Sigma_{i}\) and we define the matrix \(\mathfrak{G}\) by \(\mathfrak{c}_{ij}\,:=\,P_{i}^{\circ}[\mathfrak{M}]P_{j}^{\circ}\). 
Around each point \(P_{i}\) we put an \((s-1)\)-sphere \(S_{i}\) (a hypersphere in \(A\)) in such a way that all anisotropic points on these spheres have the same sign. We denote the radius of \(S_{i}\) by \(\rho_{i}(\neq\frac{1}{2}\pi i)\) and put \(r_{i}:=\cosh(\rho_{i}),1\leq i\leq s+1\). If \(Z\) is outside one of these spheres, then it is outside of all these spheres and a _radical_\((s-1)\)_-sphere_ can be drawn around \(Z\), a hypersphere in \(A\) that meets all the spheres \(S_{1},\ldots,S_{n+1}\) orthogonally. The radius \(\rho\) of this radical sphere satisfies the equations \(\cosh(\rho)=\dfrac{\cosh(d_{0}(Z,P_{k}))}{\cosh(\rho_{k})))}\), \(k=1,\ldots,s{+}1\). \(Z\) is not the only radical center of these spheres; the other radical centers are the points \(Z^{\prime}\!=\!(z^{\prime}_{1}\!:\cdots:z^{\prime}_{s+1})\neq Z\), with \((z^{\prime}_{1},\ldots,z^{\prime}_{s+1})\,\mathfrak{G}\in\{(t_{1}r_{1},\ldots, t_{s+1}r_{s+1})\,|\,t_{1},\ldots,t_{s+1}=\pm 1\}\). If \(Z\) is outside one of these spheres, then it is outside of all these spheres and a _radical_\((s{-}1)\)-_sphere_ can be drawn around \(Z\), a hypersphere in \(A\) that meets all the spheres \(S_{1},\ldots,S_{n+1}\) orthogonally. The radius \(\rho\) of this radical sphere satisfies the equations \(\cosh(\rho)=\dfrac{\cosh(d_{0}(Z,P_{k}))}{\cosh(\rho_{k})))}\), \(k=1,\ldots,s{+}1\). \(Z\) is not the only radical center of these spheres; the other radical centers are the points \(Z^{\prime}\!=\!(z^{\prime}_{1}\!:\cdots:z^{\prime}_{s+1})\neq Z\), with \((z^{\prime}_{1},\ldots,z^{\prime}_{s+1})\,\mathfrak{G}\in\{(t_{1}r_{1},\ldots, t_{s+1}r_{s+1})\,|\,t_{1},\ldots,t_{s+1}=\pm 1\}\). Figure 3. A 3-simplex (triangle) \((P_{1},P_{2},P_{3})\) is situated in an elliptic plane. The blue dotted circle \(\{(q_{1}\!:\!q_{2}\!:\!q_{3})\,|\,q_{1}^{\,2}+q_{2}^{\,2}=q_{3}^{\,2}\}\) is used as a construction aid and serves for orientation. The points \(Z,Z_{1},Z_{2},Z_{3}\) are the radical centers of the black circles around the vertices of the triangle. The red and orange lines are the radical axes. The red circle is the radical circle with center \(Z\). A special case: If all the radii \(\rho_{k}\) agree, then the radical centers are the circumcenters of \(\Sigma\) and \(Z\) is the proper circumcenter \(\mathcal{O}\) of \(\Sigma_{0}\). Let us consider the plane \(H_{12}:=Z\sqcup(P_{1}\sqcup P_{2})^{\mathfrak{M}}\). \(Z\) is not a point on \((P_{1}\sqcup P_{2})^{\mathfrak{M}}\sqcap A\), therefore \(H_{12}\) is a hyperplane in \(A\) and all points in \(H_{12}\) have the same power with respect to \(S_{1}\) and \(S_{2}\). But there is another hyperplane \(\tilde{H}_{12}\) in \(A\) consisting also of points with the same power with respect to both hyperspheres. Both planes are _radical planes_ of \(S_{1}\) and \(S_{2}\) in \(A\). \(P_{1}^{\mathfrak{M}}\sqcap A\), \(H_{12}\), \(P_{2}^{\mathfrak{M}}\sqcap A\), \(\tilde{H}_{12}\) form a harmonic pencil, see Figure 3. To each \(S_{k}\) is assigned a dual \((s-1)\)-sphere \(S_{k}^{\delta}=\{T^{\mathfrak{M}}\sqcap A\}\)\(T\) is a tangent hyperplane of \(S_{k}\)in \(A\)). Let us look at the two spheres \(S_{1}\) and \(S_{2}\). Their duals have together two radical hyperplanes, \(H_{12}^{\prime}\) and \(\tilde{H}_{12}^{\prime}\). The duals in \(A\) of these hyperplanes are the points \(R_{12}^{\pm}=\sinh(\rho_{2})P_{1}\pm\sinh(\rho_{1})P_{2}\) on the line \(P_{1}\sqcup P_{2}\). These two points are called _centers of similitude_ of \(S_{1}\) and \(S_{2}\). 
Together with \(P_{1}\) and \(P_{2}\) they form a harmonic range \(P_{1}\), \(R_{12}^{+}\), \(P_{2}\), \(R_{12}^{-}\). If the spheres \(S_{1}\), \(S_{2}\) have a common tangent plane, it passes either through \(R_{12}^{+}\) or through \(R_{12}^{-}\). We call the point \([1/\sinh(\rho_{1}):\cdots:1/\sinh(\rho_{n+1})]\) the _inner center of similitude_ of the collection \(\{S_{k}|1\leq k\leq s+1\}\), and it is obvious that subsets of this collection have their Figure 4. \((P_{1}P_{2}P_{3})\) is a triangle in an elliptic plane. The point \(T\) is the inner center of similitude of the black circles around the vertices of this triangle. own inner center of similitude. The barycentric product of an inner center of similitude with a centroid of \(\Sigma\) gives a _center of similitude_. Two special cases: (1) If all radii \(\rho_{k}\) are the same, then the inner center of similitude is \(G_{0}\). (2) If \(\rho_{k}\) equals the distance between the vertex \(P_{k}\) and its opposite \((n{-}1)\)-sideplane \(H_{k}\), \(1\leq k\leq s{+}1\), then \(I_{0}\) is the inner center of similitude. ## 3. Geometry on Cayley-Klein spaces ### Cayley-Klein spaces A _Cayley-Klein space_ with underlying projective space \(\mathbb{P}\boldsymbol{V}\) is a nested sequence of finitely many, say \(\rho+1\), semi CK spaces \((A_{i},\beta_{i})\), such that \(A_{0}=\mathbb{P}\boldsymbol{V}\), \(A_{i+1}\) is a nonempty radical \(\operatorname{rad}(\beta_{i})\), \(0\leq i\leq\rho-1\), and \(A_{\rho+1}\mathbin{:=}\operatorname{rad}(\beta_{\rho})=\emptyset\). Since the subspaces \(A_{i}\) can be read off from the functions \(\beta_{i-1}\), we denote this CK-space by \(\big{(}\mathbb{P}\boldsymbol{V};\beta_{0},\beta_{1},\dots,\beta_{\rho}\big{)}\). If \(\beta_{0}\) is nondegenerate (i.e. \(\rho=0\)), then \((\mathbb{P}\boldsymbol{V},\beta_{0})\) is a CK-space. We call a point \(P\in\mathbb{P}\boldsymbol{V}\)_anisotropic of degree \(k\in\{0,\dots,\rho\}\)_ if \(P\in A_{k}\) and \(P\not\in Q_{\beta_{k}}\), and we call a point \(P\)_isotropic of degree \(k\)_ (or shorter \(k\)_-isotropic_) if \(P\in Q_{\beta_{k}}A_{k+1}\). A point \(P\) is called _isotropic_ if it is \(k\)-isotropic for some \(k\in\{0,\dots,\rho\}\), otherwise it is called _anisotropic_. A mapping \(\phi\in\operatorname{Aut}(\mathbb{P}\boldsymbol{V})\) is an _automorphism_ on \(\big{(}\mathbb{P}\boldsymbol{V};\beta_{0},\beta_{1},\dots,\beta_{\rho}\big{)}\) if the restriction of \(\phi\) to \(A_{i}\) is an automorphism on \((A_{i},\beta_{i}),0\leq i<\rho\). See [10] for a more detailed study of these mappings. ### The \(\varepsilon\)-quadrance as a distance-function on Cayley-Klein spaces Let \((n^{\downarrow},n^{+},n^{-},n^{\uparrow})\) be a quadruple of nonnegative integers with \(n^{\downarrow}+n^{+}+n^{-}+n^{\uparrow}=n{+}1\). A matrix \(\mathfrak{A}=(\mathfrak{a})_{ij}\in\mathbb{R}^{(n+1)\times(n+1)}\) we denote by \(\mathfrak{A}^{(n^{\downarrow},n^{+},n^{-},n^{\uparrow})}\) if \(\mathfrak{a}_{ii}=1\) for \(n_{\downarrow}\leq i\leq n_{\downarrow}+n_{+}\), \(\mathfrak{a}_{ii}=-1\) for \(n^{\downarrow}+n^{+}<i\leq n^{\downarrow}+n^{+}+n^{-}\), and \(\mathfrak{a}_{ij}=0\) otherwise. We now refer to the CK space introduced in the last section. 
We put \(\beta_{i}^{\downarrow}\mathbin{:=}\sum_{k<i}(\beta_{k}^{\,+}+\beta_{k}^{\,-})\), \(\beta_{i}^{\,\uparrow}\mathbin{:=}\sum_{k>i}(\beta_{k}^{\,+}+\beta_{k}^{\,-})\), \(\beta_{0}^{\,\downarrow}\mathbin{:=}0\), \(\beta_{\rho}^{\,\uparrow}\mathbin{:=}0\) and \(\mathfrak{A}_{i}\mathbin{:=}\mathfrak{A}^{(\beta_{i}^{\,+},\beta_{i}^{\,-}, \beta_{i}^{\,-},\beta_{i}^{\,\uparrow})}\). Then \(\big{(}\mathbb{P}\boldsymbol{V};\beta_{0},\beta_{1},\dots,\beta_{\rho}\big{)}\) and \(\big{(}\mathbb{P}\mathbb{R}^{n+1};\mathfrak{A}_{1},\dots,\mathfrak{A}_{\rho} \big{)}\) are isomorphic CK spaces. Let \(\varepsilon\mathbin{\in}\mathbb{R}^{*}\) be a positive infinitesimal. We put \(\mathfrak{A}_{\varepsilon}\mathbin{:=}\mathfrak{A}_{0}+\sum_{1\leq i\leq \rho}\mathfrak{A}_{i}\varepsilon^{i}\). \(\mathfrak{A}_{\varepsilon}\) holds the complete information about the CK space \(\big{(}\mathbb{P}\mathbb{R}^{n+1};\mathfrak{A}_{0},\dots,\mathfrak{A}_{\rho} \big{)}\). In the following we study the CK space \(\big{(}\mathbb{P}\mathbb{R}^{n+1};\mathfrak{q}_{\varepsilon}\big{)}\) more closely. It can be easily checked that a point \(P=\mathbb{R}\boldsymbol{p}\in\mathbb{P}\boldsymbol{V}\) is isotropic precisely when \(\boldsymbol{p}\,\,\{\mathfrak{A}_{\varepsilon}\}\,\boldsymbol{p}=0\). Let \(U\) be a plane of dimension \(s\) generated by points \(P_{1}=\mathbb{R}\boldsymbol{p}_{1}\), \(\dots\), \(P_{s+1}=\mathbb{R}\boldsymbol{p}_{s+1}\). Then \(U\) is called _isotropic_ if the matrix \((\boldsymbol{p}_{i}\{\mathfrak{A}_{\varepsilon}\}\boldsymbol{p}_{j})_{{}_{ij}}\) is degenerate. We now assume that \(U\) is anisotropic. If \(\tilde{U}\) is a second anisotropic plane of the same dimension \(s\), generated by points \(\tilde{P}_{1}=\mathbb{R}\tilde{\boldsymbol{p}}_{1},\dots,\tilde{P}_{s+1}= \mathbb{R}\tilde{\boldsymbol{p}}_{s+1}\), we define the \(\varepsilon\)-quadrance of \(U\) and \(\tilde{U}\) by \[\xi_{\varepsilon}(U,\tilde{U})\mathbin{:=}\star_{\varepsilon}\big{(}1-\frac{ \big{(}\det\big{(}\boldsymbol{p}_{i}\{\mathfrak{A}_{\varepsilon}\}\boldsymbol{p} _{j}\big{)}_{{}_{ij}}\big{)}^{2}}{\big{(}\det\big{(}\boldsymbol{p}_{i}\{ \mathfrak{A}_{\varepsilon}\}\boldsymbol{p}_{j}\big{)}_{{}_{ij}}\big{)}\big{(} \det\big{(}\tilde{\boldsymbol{p}}_{i}\{\mathfrak{A}_{\varepsilon}\}\boldsymbol{p} _{j}\big{)}_{{}_{ij}}\big{)}}\big{)}\,.\] We call a plane \(U\mathbin{\leq}\mathbb{P}\boldsymbol{V}\) regular if \(U\mathbin{\sim}A_{0}\neq\emptyset\), and we call a regular plane \(U\leq\mathbb{P}\boldsymbol{V}\) a _light-plane_ if \(\xi_{\varepsilon}(Q,R)\in\mathbb{R}\,\varepsilon^{2}\) for all regular points \(P,Q\in U\). (According to this definition regular points are light-planes of dimension \(0\).) ### Total polars and polar varieties There are various definitions for a _(total) polar of a plane in a CK-space_ in the literature. We adopt the definition given in [11]. If \(P\) is an anisotropic point of degree \(s<\rho\) in \((\mathrm{P}V,\mathfrak{A}_{1},\ldots,\mathfrak{A}_{\rho})\), then any hyperplane in \(\mathrm{P}V\) that contains \(P^{\mathfrak{A}_{s+1}}\) is a total polar of \(P\). Let \(U\leq\mathrm{P}V\) be a \(k\)-plane. We can find an independent set \(B=\{P_{1},\ldots,P_{k+1}\}\) of points that generate \(U\) such that \(\#(B\cap A_{j})=\dim(U\cap A_{j})+1\), \(1\leq j\leq\rho+1\). If \(H_{i}\) is a total polar of \(P_{i}\) for \(1\leq i\leq s+1\) and \(H_{1}\),..., \(H_{s+1}\) are independent, the \((n-s)\)-plane \(\bigcap\limits_{1\leq i\leq s+1}H_{i}\) is a total polar of U. polar of U. 
If \(\bar{U}\) is a total polar of \(U\), then \(U\) is a total polar of \(\bar{U}\). If \(P\) is a point in \(U\) and \(Q\) a point in a total polar of \(U\), then \(\xi_{\epsilon}(P,Q)=1\). The set of all total polars of a \(k\)-plane \(U\) is called _polar variety_ of \(U\), denoted by \(U^{\perp}\). ### Examples (1) \(\big{(}\mathrm{P}\mathbb{R}^{1};\mathfrak{A}_{\epsilon}=\mathrm{diag}(1,0) \big{)}\) is a CK space of dimension \(0\); it consists of one point, \(E_{1}=\mathbb{R}\boldsymbol{e}_{1}\). \(\xi_{\epsilon}(E_{1},E_{1})=0\). The only total polar of \(E_{1}\) is the empty set. (In all CK spaces with underlying projective space \(\mathrm{P}V\) the total polar of \(0\) is \(\mathrm{P}V\) and vice versa.) (2) The isotropic points on the hyperbolic line \(\big{(}\mathrm{P}\mathbb{R}^{2};\mathfrak{A}_{\epsilon}=\mathrm{diag}(1,-1) \big{)}\) are the points \(P_{1}=(1\,:\,1)\) and \(P_{2}=(1\,:\,-1)\). (They are isotropic of degree \(0\).) The quadrance of two distinct points on this line is not defined if one of these is \(P_{1}=(1\,:\,1)\) or \(P_{2}=(1\,:\,-1)\). The \(\epsilon\)-quadrance of two anisotropic points \(Q=(q_{1}\!:\!q_{2})\) and \(R=(r_{1}\!:\!r_{2})\) is \(\xi_{\epsilon}(Q;R)=\frac{(q_{1}r_{2}-r_{1}q_{2})^{2}}{(q_{1}^{\,2}-q_{2}^{\,2 })(r_{1}^{\,2}-r_{2}^{\,2})}\). As \(\mathfrak{A}_{\epsilon}\), \(\xi_{\epsilon}(Q;R)\) is independent of \(\epsilon\). \(Q^{\perp}=\{(q_{2}\!:\!q_{1})\}\), \(P_{1}^{\perp}=\{P_{1}\}\) and \(P_{2}^{\perp}=\{P_{2}\}\). (3) On the parabolic line \(\big{(}\mathrm{P}\mathbb{R}^{2};\mathfrak{A}_{\epsilon}\big{)}\), \(\mathfrak{A}_{\epsilon}=\mathrm{diag}(1,\epsilon)\), the point \(P=(0\!:\!1)\) is anisotropic of degree \(1\), all other points are anisotropic of degree \(0\). The \(\epsilon\)-quadrance between two points \(Q=(1\!:\!q_{2})\) and \(R=(1\!:\!r_{2})\) is \(\epsilon(q_{2}-r_{2})^{2}\), the quadrance between \(P\) and \(Q\) is \(1\). The only total polar of \(Q\) is the point \(P\), while \(P^{\perp}=\mathrm{P}\mathbb{R}^{2}\). (4) Consider the CK space \((\mathrm{P}\mathbb{R}^{3};\mathfrak{A}_{\epsilon})\), \(\mathfrak{A}_{\epsilon}=\mathrm{diag}(1,1,\epsilon)\). It is called the _co-euclidean plane_ or _polar-euclidean plane_. The point \(E_{3}\) is anisotropic of degree \(1\); it is often called the _absolute midpoint_. All other points are anisotropic of degree \(0\). \(E_{3}^{\perp}\) is the set of all lines in \(\mathrm{P}\mathbb{R}^{3}\). \(E_{1}^{\perp}\) consists of only one element, the line \(E_{2}\sqcup E_{3}\). The polar variety of the line \(E_{1}\sqcup E_{2}\) consists of one point, the point \(E_{3}\), while the polar variety of the line \(E_{1}\sqcup E_{3}\) is the set (the range) of points on the line \(E_{1}\sqcup E_{2}\), thus \((E_{1}\sqcup E_{3})^{\perp}=E_{1}\sqcup E_{2}\). Let \(Q=(q_{1}\!:\!q_{2}\!:\!q_{3})\) and \(R=(r_{1}\!:\!r_{2}\!:\!r_{3})\) be anisotropic points of degree \(0\). If \(Q\) and \(R\) are not on one line with \(E_{3}\), then \(\xi_{\epsilon}(Q,R)=\frac{(q_{1}r_{2}-r_{1}q_{2})^{2}}{(q_{1}^{\,2}-q_{2}^{\,2 })(r_{1}^{\,2}-r_{2}^{\,2})}\). If \(Q,R,E_{3}\) are collinear, then \(\xi_{\epsilon}(Q,R)=\varepsilon\,\frac{(q_{1}r_{3}-r_{1}q_{3})^{2}+(q_{2}r_{3}- r_{2}q_{3})^{2}}{(q_{1}^{\,2}-q_{2}^{\,2})(r_{1}^{\,2}-r_{2}^{\,2})}\). The quadrance between \(E_{3}\) and an anisotropic point \(P\ \neq E_{3}\) is \(\xi_{0}(E_{3},P)=\xi_{\epsilon}(E_{3},P)=1\). (The quadrance between two points which are anisotropic of different degree takes always the value \(1\).) 
Let us calculate the quadrance between two lines in this plane. All lines are anisotropic, and for all lines \(L\) we find \(\xi_{\epsilon}(L,L)=0\). Let \(L_{1},L_{2}\) be two distinct lines. If \(L_{1},L_{2}\) both pass through \(E_{3}\), then \(\xi_{\epsilon}(L_{1},L_{2})=\xi_{\epsilon}(S,T)\), where \(S\in L_{1}\) and \(T\in L_{2}\) are any two points different from \(E_{3}\). If just one of these lines passes through \(E_{3}\), then \(\xi_{\epsilon}(L_{1},L_{2})=1\). We now assume that neither \(L_{1}\) nor \(L_{2}\) passes through \(E_{3}\) and that they meet at a point \(P=(p_{1}\!:\!p_{2}\!:\!p_{3})\). If \(Q=(q_{1}\!:\!q_{2}\!:\!q_{3})\neq P\) is another point on \(L_{1}\) and \(R=(r_{1}\!:\!r_{2}\!:\!r_{3})\neq P\) a point on \(L_{2}\) we calculate \(\xi_{\epsilon}(L_{1},L_{2})\), getting \[\xi_{\epsilon}(L_{1},L_{2})=\epsilon\,\frac{(p_{1}^{2}+p_{2}^{2})( \det\left(\begin{array}{ccc}p_{1}&p_{2}&p_{3}\\ q_{1}&q_{2}&q_{3}\\ r_{1}&r_{2}&r_{3}\end{array}\right)^{2}}{(p_{1}q_{2}-p_{2}q_{1})^{2}(p_{1}r_{2}- p_{2}r_{1})^{2}}\.\] When we choose \(Q=L_{1}\sqcap p^{\mathbb{M}_{0}}\) and \(R=L_{2}\sqcap P^{\mathbb{M}_{0}}\), then \(\xi_{\epsilon}(L_{1},L_{2})=\xi_{\epsilon}(Q,R)\). (5) We calculate the \(\epsilon\)-quadrance of two lines in the euclidean space \((\mathbb{P}\mathbb{R}^{4},\ \mathfrak{R}_{\epsilon}),\mathfrak{R}_{\epsilon}= \operatorname{diag}(1,\epsilon,\epsilon,\epsilon)\). Let \(P:=(1\!:\!p_{2}\!:\!p_{3}\!:\!p_{4})\) and \(Q:=(1\!:\!q_{2}\!:\!q_{3}\!:\!q_{4})\) be two regular points in \(\mathbb{P}\mathbb{R}^{4}\), and let \(R:=(0\!:\!r_{2}\!:\!r_{3}\!:\!r_{4})\), \(S:=(0\!:\!s_{2}\!:\!s_{3}\!:\!s_{4})\in A_{1}\) be points "at infinity". Put \(\boldsymbol{p}:=(p_{2},p_{3},p_{4})\), \(\boldsymbol{q}:=(q_{2},q_{3},q_{4})\), \(\boldsymbol{r}:=(r_{2},r_{3},r_{4})\) and \(\boldsymbol{s}:=(s_{2},s_{3},s_{4})\). Without loss of generality we may assume that \(|\boldsymbol{r}|^{2}=r_{2}^{2}+r_{3}^{2}+r_{4}^{2}=1=|\boldsymbol{s}|^{2}\). If \(R\neq S\), then \(\xi_{\epsilon}(P\sqcup R,Q\sqcup S)=|\boldsymbol{r}\times\boldsymbol{s}|^{2}= \xi_{\epsilon}(R,S)\). If \(R=S\), then \(\xi_{\epsilon}(P\sqcup R,Q\sqcup S)=\epsilon|\boldsymbol{r}\times(\boldsymbol {p}-\boldsymbol{q})|^{2}\). This number can be interpreted as the euclidean squared distance of the parallel lines \(P\sqcup R\), \(Q\sqcup S\). There is one total polar of \(P\sqcup R\) passing through \(P\). This plane meets the line \(Q\sqcup R\) at a point \(Q^{\prime}\) with \(\xi_{\epsilon}(P,Q^{\prime})=\epsilon|\boldsymbol{r}\times(\boldsymbol{p}- \boldsymbol{q})|^{2}\). _Remark_: E. Study [29, p. 205] introduced an angular distance of two skew lines which is a dual number made up of the angle and the euclidean distance of these lines. ### Reflections on \(\big{(}\mathbb{P}\mathbb{R}^{n+1};\mathfrak{R}_{\epsilon}\big{)}\) Let \(Q\), \(R\), \(S\in\mathbb{P}\mathbb{R}^{n+1}\) be anisotropic of degree \(0\). (1) Assume that \(U\) is a plane in \(\mathbb{P}\mathbb{R}^{n+1}\) and \(R\in U\). The image of \(Q\) under the reflection in \(R\) has the same distance from \(U\) as \(Q\). (2) Let \(Q^{\prime}\) be the image of \(Q\) under a reflection in \(R\) and \(Q^{\prime\prime}\) the image of \(Q^{\prime}\) under a reflection in \(S\). Put \(E:=Q\sqcup R\sqcup S\). If \(\dim(E)<2\), then the distance of \(Q\) and \(Q^{\prime\prime}\) is twice distance of \(R\) and \(S\). 
Otherwise there is uniquely determined point \(P\in E\), the pole of \(R\sqcup S\) in \(E\), and angle between \(P\sqcup Q\) and \(P\sqcup Q^{\prime\prime}\) is twice the angle between \(P\sqcup R\) and \(P\sqcup S\). _Proof_ of (1). Put \(m:=\dim(U)\). We may assume that \(Q\not\in U\). We can find points \(P_{1}=\mathbb{R}(\boldsymbol{p}_{1}),\ldots,P_{m+2}=\mathbb{R}(\boldsymbol{p}_ {m+2})\in\mathbb{P}\mathbb{R}^{n+1}\) such that \(U=P_{1}\sqcup\cdots\sqcup P_{m+1}\), \(Q\in U\sqcup P_{m+2}\) and \(\boldsymbol{p}_{i}[\mathfrak{w}_{0}]\boldsymbol{p}_{j}=0\) for \(1\leq i<j\leq m+2\). Put \(\tau_{i}:=\boldsymbol{p}_{i}\,[\mathfrak{w}_{0}]\,\boldsymbol{p}_{i}\), \(i=1,\ldots,n+2\). We can find real numbers \(q_{1},\ldots,q_{m+2},r_{1},\ldots,r_{m+1}\) with \(Q=\mathbb{R}(\sum_{1\leq i\leq m+2}q_{i}\boldsymbol{p}_{i})\), \(R=\mathbb{R}(\sum_{1\leq i\leq m+1}r_{i}\boldsymbol{p}_{i})\). The image of \(Q\) under the reflection in \(R\) is the point \[Q^{\prime}=\mathbb{R}\big{(}q_{m+2}P_{m+2}+\sum_{1\leq i\leq m+1}(q_{i}+\lambda r _{i})\boldsymbol{p}_{i}\big{)}\ \text{with}\ \lambda=-2\frac{\sum_{1\leq i\leq m+1}q_{i}r_{i}\tau_{i}}{\sum_{1\leq i\leq m +1}{r_{i}^{2}\tau_{i}}}.\] The distance between \(Q\) and \(U\) is the same as the distance between the points \(Q\) and \(\tilde{Q}:=\mathbb{R}(\sum_{1\leq i\leq m+1}q_{i}P_{i})=(P_{m+2}\sqcup Q)\sqcap U\) and the distance between \(Q^{\prime}\) and \(U\) agrees with the distance between \(Q^{\prime}\) and \(\tilde{Q}^{\prime}:=\mathbb{R}\big{(}\sum_{1\leq i\leq m+1}(q_{i}+\lambda r_{i })\boldsymbol{p}_{i}\big{)}=(P_{m+2}\sqcup Q^{\prime})\sqcap U\). So we calculate \(\xi_{\epsilon}(Q,\tilde{Q})\) and \(\xi_{\epsilon}(Q^{\prime},\tilde{Q}^{\prime})\) and get \[\xi_{\epsilon}(Q,\tilde{Q})=\xi_{\epsilon}(Q^{\prime},\tilde{Q}^{\prime})=\frac{ q_{m+2}(\boldsymbol{p}_{m+2}\,[\mathfrak{w}_{\epsilon}]\,\boldsymbol{p}_{m+2})}{ \sum_{1\leq i\leq m+2}q_{i}^{2}\tau_{i}}\.\] Proof of (2). If \(Q=R=S\), the statement is obviously true. Let us assume now that \(\dim(E)\geq 1\). Let \(P_{1}=\mathbb{R}\boldsymbol{p}_{1},P_{2}=\mathbb{R}\boldsymbol{p}_{2},P_{3}= \mathbb{R}\boldsymbol{p}_{3}\leq\mathbb{P}\mathbb{R}^{n+1}\) be independent points with \(R=P_{1}\), \(R\sqcup S\leq P_{1}\sqcup P_{2}\), \(R\sqcup S\sqcup Q\leq P_{1}\sqcup P_{2}\sqcup P_{3}\) and \(\boldsymbol{p}_{1}[\mathfrak{w}_{0}]\boldsymbol{p}_{2}=\boldsymbol{p}_{1}[ \mathfrak{w}_{0}]\,\boldsymbol{p}_{3}=\boldsymbol{p}_{2}[\mathfrak{w}_{0}] \boldsymbol{p}_{3}=0\). (If n=1, we imagine \(\mathbb{P}\mathbb{R}^{2}\) embedded isometrically in a plane spanned by \(P_{1},P_{2},P_{3}\).) Put \(\tau_{i}\colon=\boldsymbol{p}_{i}\,\{\mathfrak{R}_{0}\}\,\boldsymbol{p}_{i}\), \(i=1,2,3\). Then, \(Q^{\prime}=\mathbb{R}(-q_{1}\boldsymbol{p}_{1}+q_{2}\boldsymbol{p}_{2}+q_{3} \boldsymbol{p}_{3})\) and \(Q^{\prime\prime}=\mathbb{R}((-q_{1}+\lambda s_{1})\boldsymbol{p}_{1}+(q_{2}+ \lambda s_{2})\boldsymbol{p}_{2}+q_{3}\boldsymbol{p}_{3})\) with \(\lambda=2\dfrac{q_{1}s_{1}\tau_{1}-q_{2}s_{2}\tau_{2}}{s_{1}^{\,2}\tau_{1}+s_ {2}^{\,2}\tau_{2}}\). We first consider the case \(Q\in P_{1}\sqcup P_{2}\). 
In this case, \[\xi_{\epsilon}(Q,Q^{\prime\prime})=4\,\dfrac{s_{2}^{\,2}(\boldsymbol{p}_{2}\, \{\mathfrak{R}_{\epsilon}\}\,\boldsymbol{p}_{2})}{s_{1}^{\,2}\tau_{1}}=4\,\xi_ {\epsilon}(R,S)\,\,\,\,\text{if $\tau_{2}=0$}\] and \[\xi_{\epsilon}(Q,Q^{\prime\prime})=4\,\dfrac{s_{1}^{\,2}s_{2}^{\,2}\tau_{1} \tau_{2}}{(s_{1}^{\,2}\tau_{1}+s_{2}^{\,2}\tau_{2})^{2}}=4\,\xi_{\epsilon}(R,S )\big{(}1-\xi_{\epsilon}(R,S)\big{)}\,\,\,\,\text{if $\tau_{2}\neq 0$}\.\] This proves the statement if \(Q\in P_{1}\sqcup P_{2}\). (The result is independent of the choice of point \(P_{3}\).) We now omit the restriction \(Q\in P_{1}\sqcup P_{2}\) and calculate \(\xi_{\epsilon}(P_{3}\sqcup R,\,P_{3}\sqcup S)\) and \(\xi_{\epsilon}(P_{3}\sqcup Q,\,P_{3}\sqcup Q^{\prime\prime})\). We have \(P=P_{3}\) and get \[\xi_{\epsilon}(P\sqcup Q,\,P\sqcup Q^{\prime\prime})=4\,\dfrac{s_{2}^{\,2}( \boldsymbol{p}_{2}\,\{\mathfrak{R}_{\epsilon}\}\,\boldsymbol{p}_{2})}{s_{1}^ {\,2}\tau_{1}}=4\,\xi_{\epsilon}(P\sqcup R,\,P\sqcup S)\,\,\,\text{if $\tau_{2}=0$}\] and \[\xi_{\epsilon}(P\sqcup Q,\,P\sqcup Q^{\prime\prime})=4\,\dfrac{s_{ 1}^{\,2}s_{2}^{\,2}\tau_{1}\tau_{2}}{(s_{1}^{\,2}\tau_{1}+s_{2}^{\,2}\tau_{2} )^{2}}\] \[=4\,\xi_{\epsilon}(P\sqcup R,\,P\sqcup S)\big{(}1-\xi_{\epsilon} (P\sqcup R,\,P\sqcup S)\big{)}\,\,\,\text{if $\tau_{2}\neq 0$}\.\qed\] ### Metric affine spaces We next consider a CK space \((\mathbb{PR}^{n+1},\mathfrak{R}_{0},\,\ldots,\,\mathfrak{R}_{\rho})\), with \(\mathfrak{R}_{0}=\text{diag}(1,0,\ldots,0)\) and \(\rho>0\). The geometry on \(\mathbb{PR}^{n+1}\setminus A_{1}\) is called metric affine. Therefore we call \((\mathbb{PR}^{n+1};\mathfrak{R}_{0},\,\ldots,\,\mathfrak{R}_{\rho})\) a metric affine CK space. If \(Q=(q_{1}:\,\cdots:q_{n+1})\in\mathbb{PR}^{n+1}\setminus A_{1}\), we put \(Q^{\circ}\,:=\,(1,q_{2}/q_{1}\,,\,\ldots,q_{n+1}/q_{1})\). Given two regular points \(Q\) and \(R\), then \(\,\xi_{\epsilon}(Q,R)=\,\star_{\epsilon}\,\big{(}(Q^{\circ}-R^{\circ})\{ \mathfrak{R}_{\epsilon}\}(Q^{\circ}-R^{\circ})\big{)}\) This number can be interpreted as the _squared distance_ of the two points. (This squared distance can be negative.) For anisotropic points \(P\) and \(Q\) of different degree we have \(\xi_{\epsilon}(P,Q)=1\). Special cases: (1) If \(\rho=1\) and \((A_{1};\mathfrak{R}_{1})\) is an elliptic space, the geometry on \(\mathbb{P}\boldsymbol{V}\setminus A_{1}\) is called _euclidean_. (2) If \(n=4\) and \((A_{1};\mathfrak{R}_{1})\) is a hyperbolic space, the geometry on \(\mathbb{P}\boldsymbol{V}\setminus A_{1}\) is called _Minkowski-geometry (of space-time)_. (3) If \(\rho=2\) and \((A_{1};\mathfrak{R}_{2},\mathfrak{R}_{3})\) is a polar-euclidean space, the geometry on \(\mathbb{P}\boldsymbol{V}\setminus A_{1}\) is called _galilean_. (4) A _flag-space_ can be characterized by \(\rho=n\). Let \(M\) be a regular point and \(r\in\mathbb{R}\,\epsilon\) be a real infinitesimal. The _hypersphere with center \(M\) and squared radius \(r\)_ is a quadric \(S(M,r)\) consisting of all points \(Q\) which fulfill the condition \(\xi_{\epsilon}(Q,M)=r\) if \(Q\) is regular, and the condition \(Q\,\{\mathfrak{R}_{1}\}Q=0\), otherwise. Together with \(M\) all points of \(A_{1}\) are symmetry points of \(S(M,r)\), but \(S(M,r)\) has more symmetry points if \(\rho>1\). Let us denote the set of symmetry points of \(S(M,r)\) by \(\mathfrak{C}(S(M,r))\). 
_Examples_: (1) We determine the sphere \(S(M,r)\) with regular center \(M=E_{1}\) and squared radius \(r=1\epsilon>0\) for the spaces listed above, together with its symmetry points. (1a) In the euclidean space of dimension \(n\) this sphere is the set \(\{(1\colon p_{2}\colon\ldots\colon p_{n+1})\mid\,\,p_{2}^{\,2}+\cdots+p_{n+1}^{ \,2}=1\}\), and \(\mathfrak{C}(S(M,1))=\{M\}\cup A_{1}\). (1b) In the 4-dimensional Minkowski space-time we have \(S(M,1)=\{(p_{1}\colon\ldots\colon p_{5})\mid p_{1}^{\,2}=p_{2}^{\,2}+p_{3}^{\,2}+p_{4 }^{\,2}-p_{5}^{\,2}\}\) and \(\mathfrak{C}(S(M,r))=\{M\}\cup A_{1}\). (1c) In the galilean space of dimension \(n\) this sphere is the set \(\{(1\colon p_{2}\colon\ldots\colon p_{n+1})\mid\,\,p_{2}^{\,2}+\cdots+p_{n+1}^{ \,2}=1\}\), and \(\mathfrak{C}(S(M,1))=\{M\}\cup A_{1}\). (1b) In the 4-dimensional Minkowski space-time we have \(S(M,1)=\{(p_{1}\colon\ldots\colon p_{5})\mid p_{1}^{\,2}=p_{2}^{\,2}+p_{3}^{\,2}+p_{4 }^{\,2}-p_{5}^{\,2}\}\) and \(\mathfrak{C}(S(M,r))=\{M\}\cup A_{1}\). (1c) In the galilean space of dimension \(n\) this sphere is the set \(\{(1\colon p_{2}\colon\ldots\colon p_{n+1})\mid\,\,p_{2}^{\,2}+\cdots+p_{n+1}^{ \,2}=1\}\), and \(\mathfrak{C}(S(M,1))=\{M\}\cup A_{1}\). (1b) In the 4-dimensional Minkowski space-time we have \(S(M,1)=\{(p_{1}\colon\ldots\colon p_{5})\mid p_{1}^{\,2}=p_{2}^{\,2}+p_{3}^{\,2}+p_{4 }^{\,2}-p_{5}^{\,2}\}\) and \(\mathfrak{C}(S(M,r))=\{M\}\cup A_{1}\). (1c) In the galilean space of dimension \(n\) this sphere is the set \(\{(1\colon p_{2}\colon\ldots\colon p_{n+1})\mid\,\,p_{2}^{\,2}+\cdots+p_{n+1}^{ \,2}=1\}\), and \(\mathfrak{C}(S(M,1))=\{M\}\cup A_{1}\). \(n\!>\!1\) this sphere is \(\{(p_{1}\!:\ldots:\!p_{n+1})\mid p_{1}^{2}=p_{2}^{2}+\cdots+p_{n}^{2}\}\) and the set of its symmetry-points is \((M\sqcup E_{n+1})\cup A_{1}\). (4) In the flag-space of dimension \(n\!>\!1\) we have \(S(M,1)=\{(p_{1}\!:\ldots:\!p_{n+1})\mid p_{1}^{2}=p_{2}^{2}\}\) and \(\mathfrak{S}(S(M,1))=(M\sqcup E_{3}\sqcup\cdots\sqcup E_{n+1})\cup A_{1}\). (2) Now we consider spheres with center \(M=E_{1}\) and radius \(r=0\). (2a) In the euclidean space of dimension \(n\) this sphere is the set \(\{(1\!:\!p_{2}\!:\ldots:\!p_{n+1})\mid p_{2}^{2}+\cdots+p_{n+1}^{2}=0\}\). So it is the point \(E_{1}\) with multiplicity \(2\) (the double point \(E_{1}\)) and \(\mathfrak{S}(S(M,1))=\{M\}\cup A_{1}\). (2b) In the 4-dimensional Minkowski space-time \(S(M,1)\) is the light cone consisting of all light lines through \(E_{1}\). All of these light lines are null lines. \(\mathfrak{S}(S(M,0))=\{M\}\cup A_{1}\). (2c) In the galilean space of dimension \(n\!>\!1\) the sphere \(S(M,0)\) is the point \(E_{1}\) with multiplicity \(2\). But if we accepted the line \(E_{1}\sqcup E_{n+1}\) as a null line (for example by demanding \(\epsilon^{2}=0\)), then this line (with multiplicity \(2\)) would be \(S(M,0)\). Each point on this line is a symmetry point. (2d) In the flag-space of dimension \(n\!>\!1\) the sphere \(S(M,0)\) is the point \(E_{1}\) with multiplicity \(2\) unless the hyperplane \(E_{1}\sqcup E_{3}\sqcup\cdots\sqcup E_{n+1}\) is regarded a null plane. All points of this hyperplane are symmetry points. An \((n\!-\!1)\)-dimensional quadric \(Q\) in a metric affine CK space \(\mathbb{P}\boldsymbol{V}\) is called a _horosphere_ if there exists a regular plane \(K\) of positive dimension such that \(K\) is an axis of symmetry of \(Q\) and \(Q\) touches the hyperplane \(A_{1}\) in a way that the points of \(K\cap A_{1}\) are the points of tangency. 
We regard \(K\cap A_{1}\) as the center of the horosphere. _Examples_: In the galilean CK space of dimension \(n\) the horospheres passing through the point \(E_{1}\) are the quadrics \(\{(p_{1}\!:\ldots:\!p_{n+1})|\,p_{1}p_{n+1}=a\,(p_{2}^{2}+\cdots+p_{n}^{2})\}\), \(a\in\mathbb{R}^{\times}\), while in the \(n\)-dimensional flag-space these are the quadrics \(\{(p_{1}\!:\ldots:\!p_{n+1})|\,p_{1}p_{n+1}=a\,p_{2}^{2}\}\), \(a\in\mathbb{R}^{\times}\). In the first case \(E_{n+1}\) is the center of the horosphere, in the second \(E_{3}\sqcup\cdots\sqcup E_{n+1}\). ### \((n\!+\!1)\)-simplices in a metric affine CK space of dimension \(n\) Let \(P_{1},\ldots,P_{n+1}\in\mathbb{P}\boldsymbol{V}\!\setminus\!A_{1}\) be independent regular points in a metric affine CK space \((\mathbb{P}\boldsymbol{V},\mathfrak{R}_{0},\ldots,\mathfrak{R}_{\rho})\) of dimension \(n>1\). Then \(\Sigma:=\{[q_{1}\!:\ldots:\!q_{n+1}]\!\mid\!q_{1},\ldots,q_{n+1}\in\mathbb{R}^{ \geq 0}\}\) is an \((n\!+\!1)\)-simplex with vertices \(P_{1},\ldots,P_{n+1}\). The hyperplane at infinity \(A_{1}\) consists of all points \(Q=[q_{1}\!:\cdots\!:\!q_{n+1}]\) with \(q_{1}+\cdots+q_{n+1}=0\). While all points of \(\Sigma\) are regular, this is not the case for the mates of \(\Sigma\). We assume that for each sideplane \(U\) of \(\Sigma\) the intersection of \(U\) with \(A_{1}\) is anisotropic with respect to \(\{\mathfrak{R}_{1}\}\). (It is possible to find such a collection of points \(P_{1},\ldots,P_{n+1}\).) It follows that none of the lines \(P_{i}\sqcup P_{j}\), \(1\leq i<j\leq n\!+\!1\), is a light-line, thus \(d_{ij}:=\!\varepsilon_{\epsilon}(P_{i},P_{j})\not\in\mathbb{R}\,\epsilon^{2}\) for \(1\!\leq\!i<j\!\leq\!n\!+\!1\). If \(Q=(q_{1}\!:\ldots:\!q_{n+1})\in\mathbb{P}\boldsymbol{V}\) is a regular point, then the vector \(Q^{\circ}=(1,\dfrac{q_{2}}{q_{1}},\ldots,\dfrac{q_{n+1}}{q_{1}})\in \boldsymbol{V}\) can be assigned to \(Q\). If \(Q\) is regular and \(Q^{\circ}=q_{1}P_{1}^{\,\circ}+\cdots+q_{n+1}P_{n+1}^{\,\circ}\), we write \(Q=[q_{1},\ldots,q_{n+1}]\). Given two regular points \(Q=[q_{1},\ldots,q_{n+1}]\) and \(R=[r_{1},\ldots r_{n+1}]\), their squared distance can be calculated by \(\xi_{\epsilon}(Q,R)=-\sum_{1\leq i<j\leq n+1}d_{ij}(q_{i}-r_{i})(q_{j}-r_{j})\). (For a proof see [4].) Altidudes of \(\Sigma\): Put \(S:=\{P_{1},\ldots,P_{n+1}\}\), and for \(i\in\{1,\ldots,n\!+\!1\}\) put \(S_{i}:=\text{span}(S\!\smallsetminus\!\{P_{i}\})\). The absolute polar of \(S_{i}\) is an anisotropic point \(R_{i}\) in \(A_{\rho}\). The line \(L_{i}:=P_{i}\sqcup R_{i}\) is called the _altitude for \(S_{i}\)_. The squared distance of two regular points on an altitude is an infinitesimal number in \(\mathbb{R}\,\epsilon^{\rho}\). Thus, the altitudes are light lines iff \(\rho>1\). Let \(Q\) be the intersection of the hyperplane \(S_{n+1}=P_{1}\sqcup\cdots\sqcup P_{n}\) with its altitude \(L_{n+1}\). Then \(\psi(S)=\psi(\{P_{1},\ldots,P_{n}\})\,\psi(P_{n+1},Q)\in\mathbb{R}\,\epsilon^{n +\rho-1}\). If \(\rho=1\), \(\psi(S)=\star_{\varepsilon}(\det(1-\frac{1}{2}d_{ij}))\). _Remark_: If \(\rho=1\), \(\psi(S)\) can also be calculated using the _Cayley-Menger determinant_: \[\psi(S)=(-1)^{n+1}2^{-n}\det(\left(\begin{array}{cc}0&\mathbf{1}\\ \mathbf{1}&(d_{ij})_{1\leq i,j\leq n+1}\end{array}\right))\,.\] The measure of dihedral angles: \[\xi_{\varepsilon}(S_{i},S_{j})\,\psi(S\verb|\{P_{i}\})\,\psi(S\verb|\{P_{j} \})=\psi(S\verb|\{P_{i},P_{j}\})\,\psi(S),\ 1\leq i<j\leq n+1.\] Important centers of \(\Sigma\). 
The proper centroid \(G\) of \(\Sigma\) is the regular point \([1\,:\,\cdots\,:\,1]\). The line through a proper centroid \(Q\) of a \(k\)-face of \(\Sigma\) and the centroid \(R\) of its opposite \((n-k-1)\)-face passes through \(G\), as was mentioned before; but in addition we can state that \[(n-k)^{2}\xi_{\varepsilon}(G,Q)=(k+1)^{2}\xi_{\varepsilon}(G,R).\] The circumcenter. A regular point \(O=(1\,:\,o_{2}\,:\,\cdots\,:\,o_{n+1})\) is called a _circumcenter_ of \(\Sigma\) if \[(O^{\circ}-P_{1}^{\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}(O^{\circ}-P_{1}^{\circ})=(O^{\circ}-P_{i}^{\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}(O^{\circ}-P_{i}^{\circ}),\ 2\leq i\leq n+1\.\] \(\Sigma\) has exactly one regular circumcenter iff \(\rho=1\). _Proof._ We rearrange the last system of equations: \[(P_{i}^{\circ}-P_{1}^{\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}O^{\circ}=(P_{i}^{\circ}-P_{1}^{\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}(\frac{1}{2}(P_{i}^{\circ}+P_{1}^{\circ})),\ 1\leq i\leq n+1\.\] This system consists of \(n+1\) equations. The first of these equations (\(i=1\)) is trivially fulfilled. The remaining \(n\) equations form a system of maximal rank, and there is a single solution for the \(n\)-tuple \((o_{2},\ldots,o_{n+1})\) precisely when \(\rho=1\). We now assume that \(\rho=1\). Then the circumcenter \(O\) of \(\Sigma\) is a regular point. We want to determine an \((n+1)\)-tuple \((o_{1}:\ldots:o_{n+1})\) of real numbers with \(O=[o_{1}\,:\,\ldots\,:o_{n+1}]\) and define an \((n+1)\times(n+1)\)-matrix \(\widehat{\mathfrak{D}}\) by \(\widehat{\mathfrak{D}}_{ij}=1-\frac{1}{2}d_{ij}\). We can find a tuple \((o_{1},\ldots,o_{n+1})\) such that \[(o_{1},\ldots,o_{n+1})\,\widehat{\mathfrak{D}}=(1,1,\ldots,1)\ \ \ (\star).\] Using Cramer's rule, we get \(O=[\det\widehat{\mathfrak{D}}^{[1]}:\,\det\widehat{\mathfrak{D}}^{[2]}:\,\ldots\,:\det\widehat{\mathfrak{D}}^{[n+1]}]\), where \(\widehat{\mathfrak{D}}^{[i]}\) is the matrix formed by replacing the \(i\)-th row of \(\widehat{\mathfrak{D}}\) by the row \((1,\ldots,1)\). The circumsphere consists of all points \(X=[x_{1},\ldots,x_{n+1}]\) satisfying the equation \[\sum_{1\leq i<j\leq n+1}d_{ij}x_{i}x_{j}=0\ \ \ (\star\star)\,.\] The squared radius of the circumsphere is \(r=-\frac{\det(d_{ij})}{2\det(\left(\begin{array}{cc}0&\mathbf{1}\\ \mathbf{1}&(d_{ij})_{1\leq i,j\leq n+1}\end{array}\right))}.\) _Examples_: (1) In a 2-dimensional CK space with \(\rho=1\) we get \[O=[d_{23}(d_{12}+d_{13}-d_{23}),d_{13}(d_{12}-d_{13}+d_{23}),d_{12}(-d_{12}+d_{13}+d_{23})].\] (2) The two above formulae \((\star),(\star\star)\) lead to the correct result also for the 2-dimensional CK space with \(\rho=2\) (the galilean plane): A point \([q_{1}:q_{2}:q_{3}]\) in a metric affine CK space of dimension 2 is a point on the line \(A_{1}\) iff \(q_{1}+q_{2}+q_{3}=0\). It follows that the circumcenter \(O=[d_{23}(d_{12}+d_{13}-d_{23}):\,d_{13}(d_{12}-d_{13}+d_{23}):\,d_{12}(-d_{12}+d_{13}+d_{23})]\) is a point on \(A_{1}\) iff \(-\,d_{12}^{2}-d_{13}^{2}-d_{23}^{2}+2(d_{12}d_{13}+d_{12}d_{23}+d_{13}d_{23})=0\). But this equation is a necessary and sufficient condition for the plane to be a galilean plane, cf. [6]. We will accept \(\{[x_{1}\,:\,x_{2}\,:\,x_{3}]\mid\sum_{1\leq i<j\leq 3}d_{ij}x_{i}x_{j}=0\,\}\) as a circumcircle of the triangle \((P_{1},P_{2},P_{3})\) even though we know that it is a horocircle. We give up the restriction \(\rho=1\). 
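The linear system \((\star)\) and the closed-form 2-dimensional circumcenter can be cross-checked numerically. A minimal sketch of our own, assuming a euclidean 3-4-5 triangle (all names are illustrative):

```python
import numpy as np

d12, d13, d23 = 9.0, 16.0, 25.0          # squared sides of a 3-4-5 triangle

# The matrix D^ with entries 1 - d_ij / 2 (and d_ii = 0 on the diagonal).
D = np.array([[1.0,          1 - d12 / 2, 1 - d13 / 2],
              [1 - d12 / 2,  1.0,         1 - d23 / 2],
              [1 - d13 / 2,  1 - d23 / 2, 1.0]])

# (o1, o2, o3) D^ = (1, 1, 1); D^ is symmetric, so we may solve D^ o = 1.
o = np.linalg.solve(D, np.ones(3))

# Closed-form barycentric circumcenter from Example (1).
O = np.array([d23 * (d12 + d13 - d23),
              d13 * (d12 - d13 + d23),
              d12 * (-d12 + d13 + d23)])

# Both represent the projective point [0 : 1 : 1], the midpoint of the
# hypotenuse, as expected for a right triangle.
assert np.allclose(np.cross(o, O), 0)
```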
A plane \(M\leq A_{1}\) is called the _circumcenter_ of \(\Sigma\) if there exists a horosphere with center \(M\) passing through all vertices of \(\Sigma\). For \(\Sigma\) there is a uniquely determined plane in \(\mathbb{P}\boldsymbol{V}\) which is its circumcenter. As before, we use the letter \(O\) to denote this plane. We call \(G\sqcup O\) the _Euler-plane_ of \(\Sigma\). The _incenter_. A point of \(\Sigma\) is an _incenter_ of \(\Sigma\) if it is the center of a hypersphere (called an insphere) which touches each of the \((n-1)\)-sideplanes of \(\Sigma\). The squared radius \(r\) of this insphere is a nonzero real infinitesimal \(\in\mathbb{R}\epsilon^{\rho}\). Put \(\mathcal{S}:=\{P_{1},\ldots,P_{n+1}\}\) and \(\mathcal{S}_{i}:=\mathcal{S}\smallsetminus\{P_{i}\}\). Necessary and sufficient for the existence of an incenter is that \(\psi(\mathcal{S}_{i})\,\psi(\mathcal{S}_{j})\geq 0\) for all \(1\leq i<j\leq n\!+\!1\). If all these inequalities are true, the incenter is the point \[I=[\sqrt{|\psi(\mathcal{S}_{1})|}\,:\,\ldots\,:\,\sqrt{|\psi(\mathcal{S}_{n+1})|}\,].\] The squared radius of the insphere can be calculated by \(r=\dfrac{\psi(\mathcal{S})}{\sum_{1\leq i\leq n+1}\psi(\mathcal{S}_{i})}\). The _Monge point_. We recall that \(G\) is the proper centroid of \(\Sigma\). For \(1\leq i<j\leq n\!+\!1\) let \(\hat{H}_{ij}\leq A_{1}\) be the total polar of the edge \(P_{i}\sqcup P_{j}\) of \(\Sigma\) and let \(\tilde{H}_{ij}\) be the hyperplane \(\hat{H}_{ij}\sqcup Z_{ij}\), where \(Z_{ij}:=\mathbb{R}\big((n+1)\,G^{\circ}-P_{i}^{\,\circ}-P_{j}^{\,\circ}\big)\) is the centroid of the \((n-2)\)-face opposite \(P_{i}\sqcup P_{j}\). If \(\rho=1\), the \(\frac{1}{2}n(n+1)\) hyperplanes \(\tilde{H}_{ij}\) meet at one point \(H\), which is called the _Monge point_ of \(\Sigma\). For \(G,O\) and \(H\) the equation \(H^{\circ}=\dfrac{n+1}{n-1}\,G^{\circ}-\dfrac{2}{n-1}\,O^{\circ}\) applies. _Proof_: We have to show that there is exactly one point \(H\) that satisfies the following system of linear equations: \((P_{i}^{\,\circ}-P_{j}^{\,\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}H^{\circ}=(P_{i}^{\,\circ}-P_{j}^{\,\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}Z_{ij}^{\,\circ},\,1\leq i<j\leq n+1\), and also satisfies the equation \(H^{\circ}=\frac{n+1}{n-1}\,G^{\circ}-\frac{2}{n-1}\,O^{\circ}\). First, the linear system has at most one solution for \(H^{\circ}\). The proof of this statement is quite analogous to the proof for the circumcenter and is therefore omitted. On the other hand, \(H^{\circ}=\frac{n+1}{n-1}\,G^{\circ}-\frac{2}{n-1}\,O^{\circ}\) is a solution of the system: \[\begin{array}{l}(P_{i}^{\,\circ}-P_{j}^{\,\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}((n\!+\!1)G^{\circ}-2O^{\circ})=(P_{i}^{\,\circ}-P_{j}^{\,\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}((n\!+\!1)G^{\circ}-(P_{i}^{\,\circ}+P_{j}^{\,\circ}))\\ =(P_{i}^{\,\circ}-P_{j}^{\,\circ})\{\mathfrak{M}_{0}+\mathfrak{M}_{1}\}(n\!-\!1)Z_{ij}^{\,\circ},\,1\leq i<j\leq n+1.\ \square\end{array}\] _Remarks_: (1) If \(n=2\), the hyperplanes \(\tilde{H}_{12},\,\tilde{H}_{13},\,\tilde{H}_{23}\) are lines, which are called _altitudes_ of the triangle. These three altitudes meet at the point \(H\), which, in this case, is called the _orthocenter_ of \(\Sigma\). (2) For a tetrahedron in a 3-dimensional euclidean space the point \(H\) was discovered by G. Monge [20]. 
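For \(n=2\) the stated relation specializes to the classical Euler-line identity \(H=3G-2O\). A minimal numeric check of our own in Cartesian coordinates (it relies on the \({}^{\circ}\)-vectors being affine coordinates with leading entry 1, so the relation can be read off componentwise):

```python
import numpy as np

P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # a right triangle, n = 2
G = P.mean(axis=0)                                    # proper centroid
O = (P[1] + P[2]) / 2                                 # circumcenter: hypotenuse midpoint

# Orthocenter H as the intersection of two altitudes, computed independently:
# the altitude through P0 is orthogonal to P1P2, the one through P1 to P0P2.
A = np.array([P[2] - P[1], P[2] - P[0]])              # normal vectors of the altitudes
b = np.array([A[0] @ P[0], A[1] @ P[1]])
H = np.linalg.solve(A, b)

# For n = 2 the relation reads H = ((n+1) G - 2 O) / (n-1) = 3G - 2O.
assert np.allclose(H, 3 * G - 2 * O)                  # both are the vertex (0, 0)
```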
(3) If \(Q\) is a point on the circumsphere of \(\Sigma\), then the point \(R\) with \(R^{\circ}=\dfrac{n\!-\!1}{n}H^{\circ}+\dfrac{1}{n}Q^{\circ}\) is a point on the Feuerbach sphere of \(\Sigma\), which is the \((n\!-\!1)\)-sphere through the \(n\!+\!1\) centers of the facets of \(\Sigma\); see [9] for a proof. There is a series of centers of \(\Sigma\) whose construction is similar to that of \(O\) and \(H\). We describe four of them. Again \(\rho=1\) is assumed. (1) Let \(Q_{ij}\) be the mirror image of \(P_{i}+P_{j}\) in the point \(Z_{ij}\). The hyperplanes \(\hat{H}_{ij}\sqcup Q_{ij}\) meet at a point. (2) Let \(T_{1},\ldots,T_{n+1}\) be the reflections of the vertices \(P_{1},\ldots,P_{n+1}\) in their opposite sideplanes. Put \(T_{ij}=T_{i}+T_{j}\). The hyperplanes \(\hat{H}_{ij}\sqcup T_{ij}\), \(1\leq i<j\leq n\!+\!1\), meet at a point. (3) Let \(T_{1},\ldots,T_{n+1}\) be the points introduced in (2). Let \(\hat{H}_{ij}\) be the total polar of the line \(T_{i}\sqcup T_{j}\). The hyperplanes \(\hat{H}_{ij}\sqcup(P_{i}+P_{j})\), \(1\leq i<j\leq n\!+\!1\), meet at a point. (4) Let \(T_{1},\ldots,T_{n+1}\) and \(Z_{ij}\), \(1\leq i<j\leq n\!+\!1\), be the points introduced above and let \(\hat{H}_{ij}\) be the plane from (3). The hyperplanes \(\hat{H}_{ij}\sqcup Z_{ij}\), \(1\leq i<j\leq n\!+\!1\), meet at a point. We calculate the barycentric coordinates of the centers \(O\), \(I\), \(H\) and the point \((S_{4}\sqcap A_{1})^{\mathfrak{H}_{1}}\) for a 4-simplex in a euclidean space of dimension 3: Let \(d_{ij}\epsilon\) be the squared distance of the vertices \(P_{i}\) and \(P_{j}\), then \(O=[d_{12}(d_{23}+d_{24}-d_{34})d_{34}+d_{13}(d_{23}-d_{24}+d_{34})d_{24}+d_{14}(-d_{23}+d_{24}+d_{34})d_{23}+2d_{23}d_{24}d_{34}:\,\ldots\,\text{cyclic}\,\ldots\,]\), \(I=[\sqrt{2(d_{23}d_{24}+d_{23}d_{34}+d_{24}d_{34})-(d_{23}^{\,2}+d_{24}^{\,2}+d_{34}^{\,2})}:\,\ldots\,\text{cyclic}\,\ldots\,]\), \(H=[d_{23}(d_{14}-d_{12})(d_{14}-d_{13})+d_{24}(d_{13}-d_{12})(d_{13}-d_{14})+d_{34}(d_{12}-d_{13})(d_{12}-d_{14})-d_{23}d_{24}d_{34}:\,\ldots\,\text{cyclic}\,\ldots\,]\), \((S_{4}\sqcap A_{1})^{\mathfrak{H}_{1}}=[-d_{14}d_{23}{+}d_{24}(-d_{12}{+}d_{13}{+}d_{23})+d_{34}(d_{12}{-}d_{13}{+}d_{23})+d_{23}(d_{12}{+}d_{13}{-}d_{23}):\) \(-\,d_{24}d_{13}+d_{14}(-d_{12}+d_{13}+d_{23})+d_{13}(d_{12}-d_{13}+d_{23})+d_{34}(d_{12}+d_{13}-d_{23}):\) \(-\,d_{34}d_{12}+d_{12}(-d_{12}+d_{13}+d_{23})+d_{14}(d_{12}-d_{13}+d_{23})+d_{24}(d_{12}+d_{13}-d_{23}):\) \(-\,d_{12}^{2}-d_{13}^{2}-d_{23}^{2}+2(d_{12}d_{13}+d_{12}d_{23}+d_{13}d_{23})]\). The medial simplex and the anticomplementary simplex of \(\Sigma\). The _medial simplex_ \(\Sigma^{[1]}\) of \(\Sigma\) is the simplex whose vertices are the centroids of the \((n-1)\)-faces of \(\Sigma\). We get \(\Sigma^{[1]}\) by applying a homothety with center \(G\) (the centroid of \(\Sigma\)) and factor \(-\dfrac{1}{n}\) to the simplex \(\Sigma\). We can iterate this process, getting a sequence of simplices where \(\Sigma^{[k+1]}\) is the medial simplex of \(\Sigma^{[k]}\). The inverse of a homothety with center \(G\) and factor \(-\dfrac{1}{n}\) is a homothety with center \(G\) and factor \(-\,n\). The simplex \(\Sigma^{[-1]}\) is called the _anticomplementary simplex_ of \(\Sigma\). If \(T\leq A_{0}\) is some _center (center plane)_ of \(\Sigma\) such as \(G\) or \(O\) or \(G\sqcup O\) or \(G\sqcup O\sqcup I\), then we denote the corresponding centers of \(\Sigma^{[k]}\) by \(T^{[k]}\), \(k\in\mathbb{Z}\). 
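The homothety description of the medial simplex is easy to verify numerically; a small sketch of our own for a tetrahedron (\(n=3\)), with illustrative coordinates:

```python
import numpy as np

n = 3                                                  # a tetrahedron in E^3
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
G = P.mean(axis=0)

# Vertices of the medial simplex: centroids of the (n-1)-faces.
medial = np.array([np.delete(P, i, axis=0).mean(axis=0) for i in range(n + 1)])

# They are the images of the vertices under the homothety (G, -1/n) ...
assert np.allclose(medial, G + (-1.0 / n) * (P - G))

# ... and the inverse homothety (G, -n) recovers the original simplex,
# which is the anticomplementary simplex of its own medial simplex.
assert np.allclose(G + (-n) * (medial - G), P)
```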
There are centers (center planes) \(T\) of \(\Sigma\) such that \(T=T^{[k]}\) for all \(k\in\mathbb{Z}\); these include \(G\), \(G\sqcup O\), \(G\sqcup I\) and \(G\sqcup O\sqcup I\). ## 4. Addendum: Clifford algebra on a semi CK space The foundations of Geometric Algebra (GA) were laid by H. Grassmann and W. Clifford. A description and appreciation of their works is given by D. Hestenes in [12]. Due to research by Hestenes, GA experienced a revival and is now an important part of mathematics and a number of its applications. Short introductions to GA are offered by [3, 15, 19]. ### Grassmann algebras on \(\mathbb{R}^{n+1}\) and on \(\mathbb{P}\mathbb{R}^{n+1}\) We make use of the Grassmann algebra \(\Lambda\boldsymbol{V}\) on \(\boldsymbol{V}=\mathbb{R}^{n+1}\). For \(0\leq k\leq n{+}1\), the elements of \(\Lambda^{k}\boldsymbol{V}\) are called _multivectors_ of grade \(k\). Conventionally the wedge product \(\wedge\) is used as the exterior product on \(\Lambda\boldsymbol{V}\); but here we use the operator \(\vee\), which fits better with the geometric interpretation. Starting from the canonical basis \((\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{n+1})\) of \(\boldsymbol{V}\), we get bases for \(\Lambda^{k}\boldsymbol{V}\), \(1\leq k\leq n{+}1\), as follows: The set of canonical basis elements of \(\Lambda^{k}\boldsymbol{V}\) is \(\{\boldsymbol{e}_{i_{1}}\vee\cdots\vee\boldsymbol{e}_{i_{k}}\mid i_{1}<\cdots<i_{k}\}\), and the elements within this set are placed according to the lexicographic order of their multi-indices. Instead of \(\boldsymbol{e}_{i_{1}}\vee\cdots\vee\boldsymbol{e}_{i_{k}}\), we also write \(\boldsymbol{e}_{i_{1},\ldots,i_{k}}\). Let \(S_{k}\), \(k\in\mathbb{N}\), denote the symmetric group on the set \(\{1,\ldots,k\}\). A multivector \(\boldsymbol{v}\) of grade \(k\leq n{+}1\) can always be written \[\boldsymbol{v}=\sum_{\begin{subarray}{c}\sigma\in S_{n+1}\\ \sigma(1)<\cdots<\sigma(k)\\ \sigma(k+1)<\cdots<\sigma(n+1)\end{subarray}}v_{\sigma(1),\ldots,\sigma(k)}\,\boldsymbol{e}_{\sigma(1),\ldots,\sigma(k)}\,.\] We call \(v_{\sigma(1),\ldots,\sigma(k)}\) the _component_ of \(\boldsymbol{v}\) _with multi-index_ \(\sigma(1),\ldots,\sigma(k)\). Let \(\boldsymbol{u}\) be a multivector of grade \(r\) and \(\boldsymbol{v}\) a multivector of grade \(s\), and we assume that \(r+s\leq n+1\). Then we get a multivector \(\boldsymbol{w}=\boldsymbol{u}\vee\boldsymbol{v}\) of grade \(r+s\) with components (for \(i_{1}<\cdots<i_{r+s}\)) \[w_{i_{1},\ldots,i_{r+s}}\;=\sum_{\begin{subarray}{c}\sigma\in S_{r+s}\\ \sigma(1)<\cdots<\sigma(r)\\ \sigma(r+1)<\cdots<\sigma(r+s)\end{subarray}}\operatorname{sgn}(\sigma)\,u_{i_{\sigma(1)},\ldots,i_{\sigma(r)}}\,v_{i_{\sigma(r+1)},\ldots,i_{\sigma(r+s)}}\,.\] The vector space \(\Lambda\boldsymbol{V}\) can be projectivized, and the exterior product \(\vee\) can be transferred to an exterior product \(\vee\) on \(\mathbb{P}\Lambda\boldsymbol{V}\) by \(\mathbb{R}\boldsymbol{v}\vee\mathbb{R}\boldsymbol{w}:=\mathbb{R}(\boldsymbol{v}\vee\boldsymbol{w})\). The elements of \(\mathbb{P}\Lambda^{k}\boldsymbol{V}\) are called _\(k\)-flats_. 
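A small Python sketch of our own makes the component formula concrete (the dictionary encoding of multivectors and the sample vectors are illustrative; the sum over sorted index tuples below is a compact equivalent of the displayed \(\operatorname{sgn}(\sigma)\)-sum). It also checks the antisymmetry \(\boldsymbol{u}\vee\boldsymbol{u}=0\) and, foreshadowing the next subsection, the Plücker relation satisfied by a decomposable 2-vector:

```python
def sign(seq):
    """Sign of the permutation that sorts seq (a tuple of distinct integers)."""
    s = 1
    for a in range(len(seq)):
        for b in range(a + 1, len(seq)):
            if seq[a] > seq[b]:
                s = -s
    return s

def wedge(u, v):
    """Exterior product of multivectors stored as {sorted index tuple: coefficient}."""
    w = {}
    for I, a in u.items():
        for J, b in v.items():
            if set(I) & set(J):
                continue                      # repeated index: the term vanishes
            K = tuple(sorted(I + J))
            w[K] = w.get(K, 0) + sign(I + J) * a * b
    return w

u = {(1,): 2, (2,): -1, (3,): 3, (4,): 5}     # a vector in R^4 (grade 1)
v = {(1,): 1, (2,): 4, (3,): 0, (4,): -2}
l = wedge(u, v)                               # a 2-vector with components l_ij

assert l[(1, 2)] == 2 * 4 - (-1) * 1          # l_ij = u_i v_j - u_j v_i
assert all(c == 0 for c in wedge(u, u).values())   # u ∨ u = 0
# l is decomposable by construction, so it satisfies the Plücker relation:
assert l[(1, 2)] * l[(3, 4)] - l[(1, 3)] * l[(2, 4)] + l[(1, 4)] * l[(2, 3)] == 0
```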
A \(k\)-flat \(F\) is called _decomposable_ if there are \(k\) independent points \(P_{1}=\mathbb{R}\boldsymbol{v}_{1},\ldots,P_{k}=\mathbb{R}\boldsymbol{v}_{k}\) such that \(F=\mathbb{R}(\boldsymbol{v}_{1}\vee\cdots\vee\boldsymbol{v}_{k})\). In this case we write \(F=P_{1}\vee\cdots\vee P_{k}\). If \(\sigma\in S_{k}\) is a permutation, \(P_{\sigma(1)}\vee\cdots\vee P_{\sigma(k)}=P_{1}\vee\cdots\vee P_{k}\). The points \(P_{1},\ldots,P_{k}\) are independent precisely when \(P_{1}\vee\cdots\vee P_{k}\neq 0\). If \(P_{1}\vee\cdots\vee P_{k}\neq 0\), then \(P_{1}\sqcup\cdots\sqcup P_{k}\) is a \((k-1)\)-plane in \(\mathbb{P}\boldsymbol{V}\) which consists of all points \(Q\in\mathbb{P}\boldsymbol{V}\) satisfying \(P_{1}\vee\cdots\vee P_{k}\vee Q=0\). More generally, if \(P_{1}\vee\cdots\vee P_{j}\neq 0\) and \(Q_{1}\vee\cdots\vee Q_{k}\neq 0\), then \(P_{1}\vee\cdots\vee P_{j}\vee Q_{1}\vee\cdots\vee Q_{k}\neq 0\) precisely when the planes \(P_{1}\sqcup\cdots\sqcup P_{j}\) and \(Q_{1}\sqcup\cdots\sqcup Q_{k}\) are disjoint. ### The Plücker embedding of a plane There is an embedding \(\iota\colon\operatorname{sub}(\mathbb{P}\boldsymbol{V})\to\mathbb{P}\Lambda\boldsymbol{V}\), the _Plücker embedding_, defined by \(\iota(U):=P_{1}\vee\cdots\vee P_{k}\), where \(P_{1},\ldots,P_{k}\) is a minimal generating set of \(U\). (It can be easily verified that this mapping is well-defined.) The image of \(\iota\) is cut out in \(\mathbb{P}\Lambda^{k}\boldsymbol{V}\) by a set of quadratic equations, the so-called _Plücker relations_. _Example_: \(\sum_{1\leq i<j\leq 4}\,l_{ij}\,E_{i}\vee E_{j}\in\mathbb{P}\Lambda^{2}\mathbb{R}^{4}\) is the exterior product of two elements in \(\mathbb{P}\mathbb{R}^{4}\) iff \(l_{12}l_{34}-l_{13}l_{24}+l_{14}l_{23}=0\). ### Clifford's geometric product A (generalized) inner product \(\{\mathfrak{R}\}\) on \(\boldsymbol{V}=\mathbb{R}^{n+1}\) can be expanded to a (generalized) inner product on \(\Lambda\boldsymbol{V}\): Given vectors \(\boldsymbol{u},\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{r},\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{r}\in\boldsymbol{V}\) and a multivector \(\boldsymbol{w}\in\Lambda^{k}\boldsymbol{V}\), then \[\begin{array}{l}\boldsymbol{u}[\mathfrak{R}](\boldsymbol{v}_{1}\vee\cdots\vee\boldsymbol{v}_{r})\ =\ \sum_{1\leq j\leq r}(-1)^{j+1}\,(\boldsymbol{u}[\mathfrak{R}]\boldsymbol{v}_{j})(\boldsymbol{v}_{1}\vee\cdots\vee\widehat{\boldsymbol{v}}_{j}\vee\cdots\vee\boldsymbol{v}_{r}),\\ (\boldsymbol{v}_{1}\vee\cdots\vee\boldsymbol{v}_{r})[\mathfrak{R}]\boldsymbol{w}\ =\ \boldsymbol{v}_{1}[\mathfrak{R}](\boldsymbol{v}_{2}[\mathfrak{R}](\cdots[\mathfrak{R}](\boldsymbol{v}_{r}[\mathfrak{R}]\boldsymbol{w})\cdots)),\ \text{if}\ r\leq k,\\ \text{and}\ \ \ (\boldsymbol{u}_{1}\vee\cdots\vee\boldsymbol{u}_{r})[\mathfrak{R}](\boldsymbol{v}_{r}\vee\cdots\vee\boldsymbol{v}_{1})\ =\ \det(\boldsymbol{u}_{i}[\mathfrak{R}]\boldsymbol{v}_{j})_{1\leq i,j\leq r}\.\end{array}\] Given \(\boldsymbol{v},\boldsymbol{w}\in\boldsymbol{V}\), the _geometric product_ of these two vectors is defined by \[\boldsymbol{v}\,\boldsymbol{w}\ :=\boldsymbol{v}[\mathfrak{R}]\boldsymbol{w}+\boldsymbol{v}\vee\boldsymbol{w}.\] As a direct consequence of this equation we get (1) \(\boldsymbol{v}[\mathfrak{R}]\boldsymbol{w}=\frac{1}{2}(\boldsymbol{v}\,\boldsymbol{w}+\boldsymbol{w}\,\boldsymbol{v})\) and (2) \(\boldsymbol{v}\vee\boldsymbol{w}=\frac{1}{2}(\boldsymbol{v}\,\boldsymbol{w}-\boldsymbol{w}\,\boldsymbol{v})\). 
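The three displayed contraction rules are mutually consistent, as a quick numeric test shows. The sketch below is our own (the diagonal form \(B\) and the random vectors are arbitrary choices): it evaluates \((\boldsymbol{u}_{1}\vee\boldsymbol{u}_{2})[\mathfrak{R}](\boldsymbol{v}_{2}\vee\boldsymbol{v}_{1})\) by the recursive rule and compares the result with \(\det(\boldsymbol{u}_{i}[\mathfrak{R}]\boldsymbol{v}_{j})\):

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.diag([1.0, 1.0, -1.0])            # an arbitrary sample inner product on R^3

def ip(u, v):
    """u[R]v for vectors."""
    return u @ B @ v

def contract(u, blade):
    """u[R](v_1 ∨ ... ∨ v_r) via the alternating-sum rule; the blade is a list
    of vectors, the result a list of (coefficient, sub-blade) terms.
    (-1)**j with 0-based j below equals the 1-based sign (-1)**(j+1)."""
    return [(((-1) ** j) * ip(u, vj), blade[:j] + blade[j + 1:])
            for j, vj in enumerate(blade)]

u1, u2, v1, v2 = (rng.standard_normal(3) for _ in range(4))

# (u1 ∨ u2)[R](v2 ∨ v1) by the recursive rule: contract with u2 first, then u1.
lhs = sum(c * ip(u1, rest[0]) for c, rest in contract(u2, [v2, v1]))
rhs = np.linalg.det(np.array([[ip(u1, v1), ip(u1, v2)],
                              [ip(u2, v1), ip(u2, v2)]]))
assert np.isclose(lhs, rhs)              # matches det(u_i [R] v_j)
```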
If \(\mathbf{v}\) is a vector with \(\mathbf{v}[\mathfrak{R}]\mathbf{v}\neq 0\), then \(\mathbf{v}\) has a multiplicative inverse \(\mathbf{v}^{-1}=\frac{1}{\mathbf{v}}=\frac{1}{\mathbf{v}[\mathfrak{R}]\mathbf{v}}\mathbf{v}\.\) ### Description of point reflections in a CK space using the geometric product We start with a semi CK space \((\operatorname{P\mathbf{V}},\mathfrak{R})\) as introduced in Section 1. Let \(Q=\mathbb{R}\mathbf{q}\) be any point and \(R=\mathbb{R}\mathbf{r}\) and \(S=\mathbb{R}\mathbf{s}\) be two anisotropic points in this space. The image of \(Q\) under a reflection in \(R\) is the point \[Q^{\prime}=\mathbb{R}\big{(}\mathbf{q}-2\frac{\mathbf{q}[\mathfrak{R}]\mathbf{r}}{\mathbf{r}[ \mathfrak{R}]\mathbf{r}}\mathbf{r}\big{)}=\mathbb{R}\big{(}\mathbf{q}-(\mathbf{q}\mathbf{r}+\mathbf{r} \mathbf{q})\frac{1}{\mathbf{r}[\mathfrak{R}]\mathbf{r}}\mathbf{r}\big{)}=\mathbb{R}\big{(}\mathbf{q} -\frac{\mathbf{q}\mathbf{r}^{2}}{\mathbf{r}[\mathfrak{R}]\mathbf{r}}-\mathbf{r}\mathbf{q}\mathbf{r}^{-1} \big{)}=\mathbb{R}\big{(}\mathbf{r}\mathbf{q}\mathbf{r}^{-1}\big{)}.\] The reflection of \(Q^{\prime}\) in \(S\) delivers the point \(Q^{\prime\prime}=\mathbb{R}\big{(}\mathbf{s}\,\mathbf{r}\,\mathbf{q}\,(\mathbf{s}\mathbf{r})^{-1} \big{)}\). We study the double reflection more closely and assume that \(R\neq S\). There is a uniquely determined point \(T=\mathbb{R}\mathbf{t}\) on \(R\sqcup S\) with \(\mathbf{t}[\mathfrak{R}]\mathbf{r}=0\). If \(R\,\sqcup\,S\) is isotropic, then \(\boldsymbol{t}^{2}=0\) and we can find a nonzero real number \(d\) such that \(\boldsymbol{s}=\boldsymbol{r}+d\boldsymbol{t}\). We get \(\boldsymbol{sr}=\boldsymbol{r}^{2}(1+d\boldsymbol{t}\boldsymbol{r}^{-1})= \boldsymbol{r}^{2}\exp(d\boldsymbol{t}\boldsymbol{r}^{-1})\) with \(\exp(\boldsymbol{v})\,:=\,\sum_{k=0}^{\infty}\frac{\boldsymbol{v}^{k}}{k!}\) for \(\boldsymbol{v}\in\Lambda\boldsymbol{V}\). If \(R\,\sqcup\,S\) is anisotropic, we may assume that \(\boldsymbol{r}^{2},\boldsymbol{s}^{2},\boldsymbol{t}^{2}\in\{-1,1\}\). If \(d\,\in\,\mathbb{D}\) denotes the distance of the points \(R\) and \(S\), then \(\boldsymbol{s}=\cosh(d)\,\boldsymbol{r}\pm\sinh(d)\,\boldsymbol{t}\) and we get \[\boldsymbol{sr}=\big{(}\cosh(d)\,\boldsymbol{r}\pm\sinh(d)\,\boldsymbol{t} \big{)}\,\boldsymbol{r}=\boldsymbol{r}^{2}\big{(}\cosh(d)\pm\sinh(d)\, \boldsymbol{t}\boldsymbol{r}^{-1}\big{)}=\boldsymbol{r}^{2}\exp(\pm\,d \boldsymbol{t}\boldsymbol{r}^{-1}).\]
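The computation \(Q^{\prime}=\mathbb{R}(\boldsymbol{r}\boldsymbol{q}\boldsymbol{r}^{-1})\) can be tested in a concrete model. Below is a minimal sketch of our own using the standard \(2\times 2\) real matrix representation of the Clifford algebra of the euclidean plane, in which the geometric product is ordinary matrix multiplication:

```python
import numpy as np

# 2x2 real matrix model of the euclidean plane's Clifford algebra:
# e1^2 = e2^2 = 1 and e1 e2 = -e2 e1.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def vec(x, y):
    return x * e1 + y * e2

q, r = vec(3.0, 1.0), vec(1.0, 2.0)

# Classical form q - 2 (q[R]r / r[R]r) r with the euclidean inner product ...
t = (3 * 1 + 1 * 2) / (1 * 1 + 2 * 2)               # q[R]r / r[R]r = 1
classical = vec(3.0 - 2 * t * 1.0, 1.0 - 2 * t * 2.0)

# ... equals -(r q r^{-1}); projectively R(q') = R(r q r^{-1}), as derived above.
sandwich = r @ q @ np.linalg.inv(r)
assert np.allclose(classical, -sandwich)
```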
2310.15668
Hypergraph Motifs and Their Extensions Beyond Binary
Hypergraphs naturally represent group interactions, which are omnipresent in many domains: collaborations of researchers, co-purchases of items, and joint interactions of proteins, to name a few. In this work, we propose tools for answering the following questions: (Q1) what are the structural design principles of real-world hypergraphs? (Q2) how can we compare local structures of hypergraphs of different sizes? (Q3) how can we identify domains from which hypergraphs are? We first define hypergraph motifs (h-motifs), which describe the overlapping patterns of three connected hyperedges. Then, we define the significance of each h-motif in a hypergraph as its occurrences relative to those in properly randomized hypergraphs. Lastly, we define the characteristic profile (CP) as the vector of the normalized significance of every h-motif. Regarding Q1, we find that h-motifs' occurrences in 11 real-world hypergraphs from 5 domains are clearly distinguished from those of randomized hypergraphs. Then, we demonstrate that CPs capture local structural patterns unique to each domain, and thus comparing CPs of hypergraphs addresses Q2 and Q3. The concept of CP is extended to represent the connectivity pattern of each node or hyperedge as a vector, which proves useful in node classification and hyperedge prediction. Our algorithmic contribution is to propose MoCHy, a family of parallel algorithms for counting h-motifs' occurrences in a hypergraph. We theoretically analyze their speed and accuracy and show empirically that the advanced approximate version MoCHy-A+ is more accurate and faster than the basic approximate and exact versions, respectively. Furthermore, we explore ternary hypergraph motifs that extend h-motifs by taking into account not only the presence but also the cardinality of intersections among hyperedges. This extension proves beneficial for all previously mentioned applications.
Geon Lee, Seokbum Yoon, Jihoon Ko, Hyunju Kim, Kijung Shin
2023-10-24T09:27:59Z
http://arxiv.org/abs/2310.15668v1
# Hypergraph motifs and their extensions beyond binary ###### Abstract Hypergraphs naturally represent group interactions, which are omnipresent in many domains: collaborations of researchers, co-purchases of items, and joint interactions of proteins, to name a few. In this work, we propose tools for answering the following questions in a systematic manner: **(Q1)** what are the structural design principles of real-world hypergraphs? **(Q2)** how can we compare local structures of hypergraphs of different sizes? **(Q3)** how can we identify domains from which hypergraphs are? We first define _hypergraph motifs_ (h-motifs), which describe the overlapping patterns of three connected hyperedges. Then, we define the significance of each h-motif in a hypergraph as its occurrences relative to those in properly randomized hypergraphs. Lastly, we define the _characteristic profile_ (CP) as the vector of the normalized significance of every h-motif. Regarding Q1, we find that h-motifs' occurrences in 11 real-world hypergraphs from 5 domains are clearly distinguished from those of randomized hypergraphs. In addition, we demonstrate that CPs capture local structural patterns unique to each domain, and thus comparing CPs of hypergraphs addresses Q2 and Q3. The concept of CP is naturally extended to represent the connectivity pattern of each node or hyperedge as a vector, which proves useful in node classification and hyperedge prediction. Our algorithmic contribution is to design MoCHy, a family of parallel algorithms for counting h-motifs' occurrences in a hypergraph. We theoretically analyze their speed and accuracy and show empirically that the advanced approximate version MoCHy-A\({}^{+}\) is up to \(25\times\) more accurate and \(32\times\) faster than the basic approximate and exact versions, respectively. Furthermore, we explore _ternary hypergraph motifs_ that extend h-motifs by taking into account not only the presence but also the cardinality of intersections among hyperedges. This extension proves beneficial for all previously mentioned applications. Keywords: Hypergraph · Hypergraph motif · Ternary hypergraph motif · Counting algorithm ## 1 Introduction Complex systems consisting of pairwise interactions between individuals or objects are naturally expressed in the form of graphs. Nodes and edges, which compose a graph, represent individuals (or objects) and their pairwise interactions, respectively. Thanks to their powerful expressiveness, graphs have been used in a wide variety of fields, including social network analysis, web, bioinformatics, and epidemiology. Global structural patterns of real-world graphs, such as power-law degree distribution [12; 34] and six degrees of separation [49; 107], have been extensively investigated. In addition to global patterns, real-world graphs exhibit patterns in their local structures, which differentiate graphs in the same domain from random graphs or those in other domains. Local structures are revealed by counting the occurrences of different network motifs [78; 79], which describe the patterns of pairwise interactions between a fixed number of connected nodes (typically 3, 4, or 5 nodes). As a fundamental building block, network motifs have played a key role in many analytical and predictive tasks, including community detection [15; 71; 103; 111], classification [25; 65; 78], and anomaly detection [13; 96]. 
Despite the prevalence of graphs, interactions in several complex systems are groupwise rather than pairwise: collaborations of researchers, co-purchases of items, joint interactions of proteins, tags attached to the same web post, to name a few. These group interactions cannot be represented by edges in a graph. Suppose three or more researchers coauthor a publication. This co-authorship cannot be represented as a single edge, and creating edges between all pairs of the researchers cannot be distinguished from multiple papers coauthored by subsets of the researchers. This inherent limitation of graphs is addressed by hypergraphs, which consist of nodes and hyperedges. Each hyperedge is a subset of any number of nodes, and it represents a group interaction among the nodes. For example, the coauthorship relations in Figure 1(a) are naturally represented as the hypergraph in Figure 1(b). In the hypergraph, seminal work [66] coauthored by Jure Leskovec (L), Jon Kleinberg (K), and Christos Faloutsos (F) is expressed as the hyperedge \(e_{1}=\{L,K,F\}\), and it is distinguished from three papers coauthored by each pair, which, if they exist, can be represented as three hyperedges \(\{K,L\}\), \(\{F,L\}\), and \(\{F,K\}\). The successful investigation and discovery of local structural patterns in real-world graphs motivate us to explore local structural patterns in real-world hypergraphs. However, network motifs, which proved to be useful for graphs, are not trivially extended to hypergraphs. Due to the flexibility in the size of hyperedges, it is possible to form \(2^{n}\) distinct hyperedges with a given set of \(n\) nodes. As a result, the potential number of hypergraphs is \(2^{2^{n}}\), which is extraordinarily large even for a small number of nodes. This implies that there can be numerous possible interactions among hyperedges, highlighting the complexity of hypergraph structures. In this work, taking these challenges into consideration, we define 26 _hypergraph motifs_ (h-motifs) so that they describe overlapping patterns of three connected hyperedges (rather than nodes). As seen in Figure 1(d), h-motifs describe the overlapping pattern of hyperedges \(e_{1}\), \(e_{2}\), and \(e_{3}\) by the emptiness of seven subsets: \(e_{1}\setminus e_{2}\setminus e_{3}\), \(e_{2}\setminus e_{3}\setminus e_{1}\), \(e_{3}\setminus e_{1}\setminus e_{2}\), \(e_{1}\cap e_{2}\setminus e_{3}\), \(e_{2}\cap e_{3}\setminus e_{1}\), \(e_{3}\cap e_{1}\setminus e_{2}\), and \(e_{1}\cap e_{2}\cap e_{3}\). As a result, every overlapping pattern is described by a unique h-motif, independently of the sizes of hyperedges. While this work focuses on overlapping patterns of three hyperedges, h-motifs are easily extended to four or more hyperedges. We count the number of each h-motif's instances in 11 real-world hypergraphs from 5 different domains. Then, we measure the significance of each h-motif in each hypergraph by comparing the count of its instances in the hypergraph against the counts in properly randomized hypergraphs. Lastly, we compute the _characteristic profile_ (CP) of each hypergraph, defined as the vector of the normalized significance of every h-motif. Comparing the counts and CPs of different hypergraphs leads to the following observations: * Structural design principles of real-world hypergraphs that are captured by frequencies of different h-motifs are clearly distinguished from those of randomized hypergraphs. 
* Hypergraphs from the same domains have similar CPs, while hypergraphs from different domains have distinct CPs (see Figure 2). In other words, CPs successfully capture local structural patterns unique to each domain.

Fig. 1: (a) Example: co-authorship relations. (b) Hypergraph: the hypergraph representation of (a). (c) Line Graph: the line-graph representation of (b). (d) Hypergraph Motifs: example h-motifs and their instances in (b).

Similarly, h-motifs can also be employed to summarize the connectivity pattern of each node or hyperedge. Specifically, for each node, we can calculate its _node profile_ (NP), a 26-element vector with each element indicating the frequency of each motif's instances within the node's ego-network. Likewise, the _hyperedge profile_ (HP) of a hyperedge is a 26-element vector with each element representing the count of each motif's instances that involve the hyperedge. We demonstrate empirically that NPs and HPs effectively capture local connectivity patterns, serving as valuable features for node classification and hyperedge prediction tasks. Our algorithmic contribution is to design MoCHy (**Mo**tif **C**ounting in **Hy**pergraphs), a family of parallel algorithms for counting h-motifs' instances, which is the computational bottleneck of the aforementioned process. Note that since multi-way overlaps are taken into consideration, counting the instances of h-motifs is more challenging than counting the instances of network motifs, which are defined solely based on pairwise interactions. We provide one exact version, named MoCHy-E, and two approximate versions, named MoCHy-A and MoCHy-A\({}^{+}\). Empirically, MoCHy-A\({}^{+}\) is up to \(25\times\) more accurate than MoCHy-A, and it is up to \(32\times\) faster than MoCHy-E, with little sacrifice of accuracy. These empirical results are consistent with our theoretical analyses. Additionally, we investigate _ternary hypergraph motifs_ (3h-motifs) as a promising extension of h-motifs. While h-motifs focus only on the emptiness of seven subsets derived from intersections among hyperedges, 3h-motifs further differentiate patterns based on the cardinality of these subsets. In particular, 3h-motifs consider whether the cardinality of each non-empty subset surpasses a specific threshold or not, resulting in 431 distinct patterns. We demonstrate that employing 3h-motifs instead of h-motifs leads to performance improvements in all the previously mentioned applications, i.e., hypergraph (domain) classification, node classification, and hyperedge prediction. In summary, our contributions are as follows: * **Novel Concepts:** We introduce h-motifs, which capture the local structures of hypergraphs, independently of the sizes of hyperedges or hypergraphs. We extend this concept to 3h-motifs, allowing for a more detailed distinction of local structures. * **Fast and Provable Algorithms:** We develop MoCHy, a family of parallel algorithms for counting h-motifs' instances. We show theoretically and empirically that the advanced version significantly outperforms the basic ones, providing a better trade-off between speed and accuracy. * **Discoveries in \(11\) Real-world Hypergraphs:** We show that h-motifs and 3h-motifs reveal local structural patterns that are shared by hypergraphs from the same domains but distinguished from those of random hypergraphs and hypergraphs from other domains (see Figure 2). 
* **Machine Learning Applications:** We empirically demonstrate that h-motifs allow for the extraction of effective features in three machine-learning tasks, and employing 3h-motifs enables the extraction of even stronger features. **Reproducibility:** The code and datasets used in this work are available at [https://github.com/jing9044/MoCHy-with-3h-motif](https://github.com/jing9044/MoCHy-with-3h-motif). This paper is an extension of our previous work [62], which first introduced the concept of h-motifs and related counting algorithms. In this extended version, we investigate various extensions of h-motifs, including 3h-motifs (Section 5 and Appendices G and H). Furthermore, we develop an advanced on-the-fly algorithm for improved space efficiency (Section 4.4) and establish accuracy guarantees for the approximate counting algorithms in the form of sample concentration bounds (Theorems 4 and 7). We also evaluate the effectiveness of h-motifs for machine learning applications on three tasks using 7 to 11 datasets (Section 6.5 and Appendices J and K). We especially demonstrate the superior performance of 3h-motifs over their variants and h-motifs in these tasks (Sections 6.4 and 6.5, and Appendix L). Finally, we measure and compare the importance of different h-motifs in characterizing hypergraph structures and their correlation with global structural properties (Section 6.3 and Appendix E).

Figure 2: Distributions of h-motifs' instances precisely characterize local structural patterns of real-world hypergraphs. Note that the hypergraphs from the same domains have similar distributions, while the hypergraphs from different domains do not. See Section 6.3 for details.

In Section 2, we introduce h-motifs and related concepts. In Section 3, we describe how we use these concepts to characterize hypergraphs, hyperedges, and nodes. In Section 4, we present exact and approximate algorithms for counting instances of h-motifs, and we analyze their theoretical properties. In Section 5, we extend h-motifs to 3h-motifs. In Section 6, we provide experimental results. After discussing related work in Section 7, we offer conclusions and future research directions in Section 8. ## 2 Proposed Concepts In this section, we introduce preliminary concepts, and based on them, we define the proposed concept, i.e., hypergraph motifs. Refer to Table 1 for the notations frequently used in the paper. ### Preliminaries and Notations We introduce some preliminary concepts and notations. **Hypergraph:** Consider a _hypergraph_ \(G=(V,E)\), where \(V\) and \(E:=\{e_{1},e_{2},...,e_{|E|}\}\) are sets of nodes and hyperedges, respectively.1 Each hyperedge \(e_{i}\in E\) is a non-empty subset of \(V\), and we use \(|e_{i}|\) to denote the number of nodes in it. For each node \(v\in V\), we use \(E_{v}:=\{e_{i}\in E:v\in e_{i}\}\) to denote the set of hyperedges that include \(v\). We say two hyperedges \(e_{i}\) and \(e_{j}\) are _adjacent_ if they share any member, i.e., if \(e_{i}\cap e_{j}\neq\varnothing\). Then, for each hyperedge \(e_{i}\), we denote the set of hyperedges adjacent to \(e_{i}\) by \(N_{e_{i}}:=\{e_{j}\in E\setminus\{e_{i}\}:e_{i}\cap e_{j}\neq\varnothing\}\) and the number of such hyperedges by \(|N_{e_{i}}|\). Similarly, we say three hyperedges \(e_{i}\), \(e_{j}\), and \(e_{k}\) are _connected_ if there exists at least one hyperedge among them that is adjacent to the other two. Footnote 1: Note that, in this work, \(E\) is not a multi-set. That is, we assume that every hyperedge is unique. 
**Hyperwedges:** We define a _hyperwedge_ as an unordered pair of adjacent hyperedges. We denote the set of hyperwedges in \(G\) by \(\wedge:=\{\{e_{i},e_{j}\}\in\binom{E}{2}:e_{i}\cap e_{j}\neq\varnothing\}\). We use \(\wedge_{ij}\in\wedge\) to denote the hyperwedge consisting of \(e_{i}\) and \(e_{j}\). In the example hypergraph in Figure 1(b), there are four hyperwedges: \(\wedge_{12}\), \(\wedge_{13}\), \(\wedge_{23}\), and \(\wedge_{14}\). **Line Graph:** We define the _line graph_ (a.k.a., projected graph) of a hypergraph \(G=(V,E)\) as \(\bar{G}=(E,\wedge,\omega)\), where \(\wedge\) is the set of hyperwedges and \(\omega(\wedge_{ij}):=|e_{i}\cap e_{j}|\). That is, in the line graph \(\bar{G}\), hyperedges in \(G\) act as nodes, and two of them are adjacent if and only if they share any member. To be more precise, \(\bar{G}\) is a weighted variant of a line graph, where each edge is assigned a weight equal to the size of overlap of the corresponding hyperwedge in \(G\). Note that for each hyperedge \(e_{i}\in E\), \(N_{e_{i}}\) is the set of neighbors of \(e_{i}\) in \(\bar{G}\), and \(|N_{e_{i}}|\) is its degree in \(\bar{G}\). Figure 1(c) shows the line graph of the example hypergraph in Figure 1(b). **Incidence Graph:** We define the _incidence graph_ (a.k.a., star expansion) of a hypergraph \(G=(V,E)\) as \(G^{\prime}=(V^{\prime},E^{\prime})\) where \(V^{\prime}:=V\cup E\) and \(E^{\prime}:=\{(v,e)\in V\times E:v\in e\}\). That is, in the bipartite graph \(G^{\prime}\), \(V\) and \(E\) are the two subsets of nodes, and there exists an edge between \(v\in V\) and \(e\in E\) if and only if \(v\in e\). ### Hypergraph Motifs (H-Motifs) We introduce hypergraph motifs, which are basic building blocks of hypergraphs. Then, we discuss their properties and generalization. **Definition and Representation:** Hypergraph motifs (or h-motifs in short) are designed for describing the overlapping patterns of three connected hyperedges. Specifically, given a set \(\{e_{i},e_{j},e_{k}\}\) of three connected hyperedges, h-motifs describe its overlapping pattern by the emptiness of the following seven sets: (1) \(e_{i}\setminus e_{j}\setminus e_{k}\), (2) \(e_{j}\setminus e_{k}\setminus e_{i}\), (3) \(e_{k}\setminus e_{i}\setminus e_{j}\), (4) \(e_{i}\cap e_{j}\setminus e_{k}\), (5) \(e_{j}\cap e_{k}\setminus e_{i}\), (6) \(e_{k}\cap e_{i}\setminus e_{j}\), and (7) \(e_{i}\cap e_{j}\cap e_{k}\). Formally, a h-motif is defined as a binary vector of size 7 whose elements represent the emptiness of the above sets, respectively, and as seen in Figure 1(d), h-motifs are naturally represented in the Venn diagram. Equivalently, when we leave at most one node in each of the above subsets, h-motifs can be defined based on the isomorphism between sub-hypergraphs consisting of three connected hyperedges. While there can be \(2^{7}\) h-motifs, 26 h-motifs remain once we exclude symmetric ones, those that cannot be obtained from distinct hyperedges (see Figure 4), and those that cannot be obtained from connected hyperedges. 
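The emptiness pattern is mechanical to compute. The following minimal Python sketch is our own illustration of the definition (the example sets are illustrative, not the exact hyperedges of Figure 1):

```python
def h_pattern(e1, e2, e3):
    """7-bit emptiness pattern of the Venn-diagram regions of three hyperedges
    (Python sets), in the order of the seven sets listed above; 1 = non-empty."""
    regions = [e1 - e2 - e3, e2 - e3 - e1, e3 - e1 - e2,
               (e1 & e2) - e3, (e2 & e3) - e1, (e3 & e1) - e2,
               e1 & e2 & e3]
    return tuple(int(bool(r)) for r in regions)

ea, eb, ec = {1, 2, 3}, {3, 4, 5}, {5, 6, 1}
print(h_pattern(ea, eb, ec))   # (1, 1, 1, 1, 1, 1, 0): all pairwise overlaps, closed
ed = {4, 6}
print(h_pattern(ea, eb, ed))   # (1, 1, 1, 1, 1, 0, 0): ea and ed are disjoint, open
```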
The 26 cases, which we call _h-motif 1_ through _h-motif 26_, are visualized in the Venn diagram in Figure 3.

\begin{table} \begin{tabular}{c|l} \hline \hline **Notation** & **Definition** \\ \hline \(G=(V,E)\) & hypergraph with nodes \(V\) and hyperedges \(E\) \\ \(E=\{e_{1},..,e_{|E|}\}\) & set of hyperedges \\ \(E_{v}\) & set of hyperedges that contains a node \(v\) \\ \hline \(\wedge\) & set of hyperwedges in \(G\) \\ \(\wedge_{ij}\) & hyperwedge consisting of \(e_{i}\) and \(e_{j}\) \\ \hline \(\bar{G}=(E,\wedge,\omega)\) & line graph representation of \(G\) \\ \(\omega(\wedge_{ij})\) & the number of nodes shared between \(e_{i}\) and \(e_{j}\) \\ \(N_{e_{i}}\) & set of neighbors of \(e_{i}\) in \(\bar{G}\) \\ \hline \(h(\{e_{i},e_{j},e_{k}\})\) & h-motif corresponding to an instance \(\{e_{i},e_{j},e_{k}\}\) \\ \(M[t]\) & count of h-motif \(t\)’s instances \\ \hline \hline \end{tabular} \end{table} Table 1: Frequently-used symbols.

**Instances of H-motifs:** Consider a hypergraph \(G=(V,E)\). A set of three connected hyperedges is an _instance_ of h-motif \(t\) if their overlapping pattern corresponds to h-motif \(t\). The count of each h-motif's instances is used to characterize the local structure of \(G\), as discussed in the following sections. **Open and Closed H-motifs:** A h-motif is _closed_ if all three hyperedges in its instances are adjacent to (i.e., overlapped with) each other. If its instances contain two non-adjacent (i.e., disjoint) hyperedges, a h-motif is _open_. In Figure 3, h-motifs 17 - 22 are open; the others are closed. **Properties of H-motifs:** From the definition of h-motifs, the following desirable properties are immediate: * **Exhaustivity:** h-motifs capture overlapping patterns of _all possible_ three connected hyperedges. * **Unicity:** the overlapping pattern of any three connected hyperedges is captured by _at most one_ h-motif. * **Size Independence:** h-motifs capture overlapping patterns _independently of the sizes of hyperedges_. Note that there can be infinitely many combinations of sizes of three connected hyperedges. Note that the exhaustivity and the unicity imply that the overlapping pattern of any three connected hyperedges is captured by _exactly one_ h-motif. **Why Multi-way Overlaps?**: Multi-way overlaps (e.g., the emptiness of \(e_{1}\cap e_{2}\cap e_{3}\) and \(e_{1}\setminus e_{2}\setminus e_{3}\)) play a key role in capturing the local structural patterns of real-world hypergraphs. Taking only the pairwise overlaps (e.g., the emptiness of \(e_{1}\cap e_{2}\), \(e_{1}\setminus e_{2}\), and \(e_{2}\setminus e_{1}\)) into account limits the number of possible overlapping patterns of three distinct hyperedges to just eight,2 significantly limiting their expressiveness and thus usefulness. Specifically, 12 (out of 26) h-motifs have the same pairwise overlaps, while their occurrences and significances vary substantially in real-world hypergraphs. For example, in Figure 1, \(\{e_{1},e_{2},e_{4}\}\) and \(\{e_{1},e_{3},e_{4}\}\) have the same pairwise overlaps, while their overlapping patterns are distinguished by h-motifs. Footnote 2: Note that using the conventional network motifs instead limits this number to two. ## 3 Characterization using H-motifs In this section, we outline the process of using h-motifs to summarize local structural patterns within a hypergraph, as well as those around individual nodes and hyperedges, for the purpose of characterizing them. 
### Hypergraph Characterization What are the structural design principles of real-world hypergraphs distinguished from those of random hypergraphs? Below, we introduce the characteristic profile (CP), which is a tool for answering the above question using h-motifs. **Randomized Hypergraphs:** While one might try to characterize the local structure of a hypergraph by absolute counts of each h-motif's instances in it, some h-motifs may naturally have many instances. Thus, for more accurate characterization, we need random hypergraphs to be compared against real-world hypergraphs. In the network motif literature, configuration models have been widely employed for this purpose [78; 79]. These models generate random graphs while preserving the degree distribution of the original graph. Using the configuration model does not introduce an excessive level of randomness, maintaining a meaningful and controlled comparison with the original graph.

Figure 4: The patterns that cannot be obtained from three distinct hyperedges. For example, any three hyperedges corresponding to the leftmost pattern are necessarily identical. However, according to our definition of hypergraphs in Section 2.1, every hyperedge is unique. Thus, there should be no instance of the pattern.

Figure 3: The 26 h-motifs studied in this work. In each Venn diagram, uncolored regions are empty without containing any nodes, while colored regions include at least one node. H-motifs 17 - 22 are open, while the others are closed.

In line with prior research, we used a configuration model extended to hypergraphs to obtain random hypergraphs. Specifically, we employ the Chung-Lu model [7], which is a configuration model designed to generate random bipartite graphs while preserving in expectation the degree distributions of the original graph [7] (for a precise theoretical description, please refer to Eq.(20) in Appendix F). We first apply this model to the incidence graph \(G^{\prime}\) of the input hypergraph \(G\) to obtain randomized bipartite graphs, and then we transform them into random hypergraphs. The empirical distributions of node degrees and hyperedge sizes in the random hypergraphs closely resemble those in \(G\), as shown in Figure 17 in Appendix F, where we also provide pseudocode of the process (Algorithm 6) and its theoretical properties. **Significance of H-motifs:** We measure the significance of each h-motif in a hypergraph by comparing the count of its instances against the count of them in randomized hypergraphs. Specifically, the _significance_ of a h-motif \(t\) in a hypergraph \(G\) is defined as \[\Delta_{t}:=\frac{M[t]-M_{rand}[t]}{M[t]+M_{rand}[t]+\epsilon}, \tag{1}\] where \(M[t]\) is the number of instances of h-motif \(t\) in \(G\), and \(M_{rand}[t]\) is the average number of instances of h-motif \(t\) in randomized hypergraphs. We fixed \(\epsilon\) to 1 throughout this paper. This way of measuring significance was proposed for network motifs [78] as an alternative to normalized Z-scores, which can be dominated by a few network motifs with small variances. Specifically, when the variance of the occurrences of a specific network motif in randomized graphs is very small, the Z-score becomes significantly large, and thus the Z-score of the particular network motif may dominate all others, regardless of its absolute occurrences. 
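Eq. (1), together with the CP normalization of Eq. (2) defined in the next paragraph, is only a few lines of code. A minimal sketch of our own (the toy 3-entry count vectors are for illustration only; real profiles have 26 entries):

```python
import numpy as np

def significance(M, M_rand, eps=1.0):
    """Eq. (1): per-h-motif significance from real counts M and the mean
    counts M_rand observed in randomized hypergraphs."""
    M, M_rand = np.asarray(M, float), np.asarray(M_rand, float)
    return (M - M_rand) / (M + M_rand + eps)

def characteristic_profile(M, M_rand):
    """The CP of Eq. (2) below: the L2-normalized significance vector."""
    delta = significance(M, M_rand)
    return delta / np.linalg.norm(delta)

M, M_rand = [120, 5, 40], [60, 20, 40]       # toy vectors for illustration
print(np.round(characteristic_profile(M, M_rand), 3))
```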
**Characteristic Profile (CP):** By normalizing and concatenating the significances of all h-motifs in a hypergraph, we obtain the characteristic profile (CP), which summarizes the local structural pattern of the hypergraph. Specifically, the _characteristic profile_ of a hypergraph \(G\) is a vector of size 26, where each \(t\)-th element is \[CP_{t}:=\frac{\Delta_{t}}{\sqrt{\sum_{t=1}^{26}\Delta_{t}^{2}}}. \tag{2}\] Note that, for each \(t\), \(CP_{t}\) is between \(-1\) and \(1\). The CP is used in Section 6.3 to compare the local structural patterns of real-world hypergraphs from diverse domains. ### Hyperedge Characterization Each individual hyperedge can also be characterized by the h-motif instances that contain it. **Hyperedge Profile (HP):** Specifically, given a hypergraph \(G=(V,E)\), the _hyperedge profile_ (HP) of a hyperedge \(e\in E\) is a 26-element vector, where each \(t\)-th element is the number of h-motif \(t\)'s instances that include \(e\). It should be noticed that, for HPs, we use absolute counts of h-motif instances rather than their normalized significances. Normalized significances are introduced for CPs to enable direct comparison of hypergraphs at different scales, specifically with varying numbers of nodes and hyperedges. Since comparisons between individual hyperedges, such as for the purpose of hyperedge prediction within a hypergraph, may be free from such issues, we simply use the absolute counts of h-motif instances when defining HPs.3 In Section 6.5, we demonstrate the effectiveness of HPs as input features in hyperedge prediction tasks. Footnote 3: Recall that the CPs are specifically designed to capture structural similarity between hypergraphs of potentially varying scales, typically using a simple metric such as cosine similarity. Regarding HPs and NPs (defined in Section 3.3), our primary objectives of using them are to distinguish missing hyperedges from other candidates (for HPs) and to distinguish nodes from different domains (for NPs). For these purposes, the scale information can be useful, and thus, we employ absolute counts for both HPs and NPs to retain and leverage this scale information. It is also important to note that, in our experiments, NPs and HPs are used with classifiers (e.g., hypergraph neural networks) powerful enough to capture (dis)similarity even across differing scales. ### Node Characterization Similarly, we characterize each node by the h-motif instances in its ego network. Below, we introduce three types of ego-networks in hypergraphs, and based on these, we elaborate on the node characterization method. **Hypergraph Ego-networks:** Comrie and Kleinberg [30] defined three distinct types of ego-networks. For each node \(v\in V\) in a hypergraph \(G=(V,E)\), we denote the neighborhood of \(v\) (including \(v\) itself) by \(V_{v}:=\bigcup_{e_{i}\in E_{v}}e_{i}\), where \(E_{v}:=\{e_{i}\in E:v\in e_{i}\}\). The _star ego-network_ of \(v\) is a subhypergraph of \(G\) with \(V_{v}\) as its node set and \(E_{v}\) (i.e., the hyperedges that contain \(v\)) as its hyperedge set. The _radial ego-network_ of \(v\) is a subhypergraph of \(G\) with \(V_{v}\) as its node set and \(R_{v}:=\{e_{i}\in E:e_{i}\subseteq V_{v}\}\) (i.e., the hyperedges that are subsets of the neighborhood of \(v\)) as its hyperedge set. 
Lastly, the _contracted ego-network_ of \(v\) has \(V_{v}\) as its node set and \(C_{v}:=\{e_{i}\cap V_{v}:e_{i}\in E\}\setminus\{\varnothing\}\) as its hyperedge set, and mathematically, the contracted ego-network of \(v\) is the subhypergraph of \(G\) induced by \(V_{v}\). Note that \(E_{v}\subseteq R_{v}\subseteq C_{v}\). Compared to \(E_{v}\), \(R_{v}\) additionally includes hyperedges that consist only of the neighbors of \(v\) but do not include \(v\). Compared to \(R_{v}\), \(C_{v}\) additionally includes the non-empty intersection of each hyperedge and the neighborhood of \(v\). **Node Profile (NP):** Given a hypergraph \(G=(V,E)\), the _node profile_ (NP) of a node \(v\in V\) is a 26-element vector where each \(t\)-th element is the number of h-motif \(t\)'s instances within an ego-network of \(v\). Note that, as for HPs, we use the absolute counts of h-motifs, instead of their normalized significances, for NPs. Depending on the types of ego-networks, we define _star node profiles_ (SNPs), _radial node profiles_ (RNPs), and _contracted node profiles_ (CNPs). In Appendix K, we provide an empirical comparison of these three types of NPs in the context of a node classification task. The results show that using RNPs consistently yields better performance than SNPs or CNPs, indicating that additional complete hyperedges (i.e., \(R_{v}\setminus E_{v}\)) are helpful, while partial ones extracted from hyperedges (i.e., \(C_{v}\setminus R_{v}\)) are not. ## 4 Proposed Algorithms Given a hypergraph, how can we count the instances of each h-motif? Once we count them in the original and randomized hypergraphs, the significance of each h-motif and the CP are obtained immediately by Eq. (1) and Eq. (2). In this section, we present MoCHy (**Mo**tif **C**ounting in **Hy**pergraphs), which is a family of parallel algorithms for counting the instances of each h-motif in the input hypergraph. We first describe line-graph construction, which is a preprocessing step of every version of MoCHy. Then, we present MoCHy-E, which is for exact counting. After that, we present two different versions of MoCHy-A, which are sampling-based algorithms for approximate counting. Lastly, we discuss parallel and on-the-fly implementations. Throughout this section, we use \(h(\{e_{i},e_{j},e_{k}\})\) to denote the h-motif that describes the connectivity pattern of an h-motif instance \(\{e_{i},e_{j},e_{k}\}\). We also use \(M[t]\) to denote the count of instances of h-motif \(t\). **Remarks:** The problem of counting h-motifs' occurrences bears some similarity to the classic problem of counting network motifs' occurrences. However, differently from network motifs, which are defined solely based on pairwise interactions, h-motifs are defined based on triple-wise interactions (e.g., \(e_{i}\cap e_{j}\cap e_{k}\)). One might hypothesize that our problem can easily be reduced to the problem of counting the occurrences of network motifs, and thus existing solutions (e.g., [20; 87]) are applicable to our problem. In order to examine this possibility, we consider the following two attempts: (a) Represent pairwise relations between hyperedges using the line graph, where each edge \(\{e_{i},e_{j}\}\) indicates \(e_{i}\cap e_{j}\neq\emptyset\). (b) Represent pairwise relations between hyperedges using the directed line graph where each directed edge \(e_{i}\to e_{j}\) indicates \(e_{i}\cap e_{j}\neq\emptyset\) and at the same time \(e_{i}\not\subset e_{j}\). 
The number of possible connectivity patterns (i.e., network motifs) among three distinct connected hyperedges is just two (i.e., closed and open triangles) and eight in (a) and (b), respectively. In both cases, instances of multiple h-motifs are not distinguished by network motifs, and the occurrences of h-motifs cannot be inferred from those of network motifs. In addition, another computational challenge stems from the fact that triple-wise and even pair-wise relations between hyperedges need to be computed from the input hypergraph, while pairwise relations between edges are given in graphs. This challenge necessitates the precomputation of partial relations, described in the next subsection. ### Line Graph Construction (Algorithm 1) As a preprocessing step, every version of MoCHy builds the line graph \(\bar{G}=(E,\wedge,\omega)\) (see Section 2.1) of the input hypergraph \(G=(V,E)\), as described in Algorithm 1. To find the neighbors of each hyperedge \(e_{i}\) (line 3), the algorithm visits each hyperedge \(e_{j}\) that contains \(v\) and satisfies \(j>i\) (line 5) for each node \(v\in e_{i}\) (line 4). Then for each such \(e_{j}\), it adds \(\wedge_{ij}=\{e_{i},e_{j}\}\) to \(\wedge\) and increments \(\omega(\wedge_{ij})\) (lines 6 and 7). The time complexity of this preprocessing step is given in Lemma 1.

```
Input : (1) input hypergraph: G = (V, E)
        (2) line graph: Ḡ = (E, ∧, ω)
Output : exact count of each h-motif t's instances M[t]
1  M ← map whose default value is 0
2  foreach hyperedge e_i ∈ E (in parallel) do
3      foreach unordered hyperedge pair {e_j, e_k} ∈ (N_{e_i} choose 2) do
4          if e_j ∩ e_k = ∅ or i < min(j, k) then
5              M[h({e_i, e_j, e_k})] += 1
6  return M
```
**Algorithm 2** MoCHy-E: Exact H-motif Counting

Lemma 1 (Complexity of Line Graph Construction): _The expected time complexity of Algorithm 1 is \(O(\sum_{\wedge_{ij}\in\wedge}|e_{i}\cap e_{j}|)\)._ Proof: If all sets and maps are implemented using hash tables, lines 6 and 7 take \(O(1)\) time in expectation with uniform hash functions, and they are executed \(|e_{i}\cap e_{j}|\) times for each \(\wedge_{ij}\in\wedge\). Since \(|\wedge|<\sum_{e_{i}\in E}|N_{e_{i}}|\) and \(|e_{i}\cap e_{j}|\leq|e_{i}|\), Eq. (3) holds. \[\sum_{\wedge_{ij}\in\wedge}|e_{i}\cap e_{j}|<\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|). \tag{3}\] ### Exact H-motif Counting (Algorithm 2) We present MoCHy-E (MoCHy **E**xact), which counts the instances of each h-motif exactly. The procedures of MoCHy-E are described in Algorithm 2. For each hyperedge \(e_{i}\in E\) (line 2), each unordered pair \(\{e_{j},e_{k}\}\) of its neighbors, where \(\{e_{i},e_{j},e_{k}\}\) is an h-motif instance, is considered (line 3). If \(e_{j}\cap e_{k}=\varnothing\) (i.e., if the corresponding h-motif is open), \(\{e_{i},e_{j},e_{k}\}\) is considered only once. However, if \(e_{j}\cap e_{k}\neq\varnothing\) (i.e., if the corresponding h-motif is closed), \(\{e_{i},e_{j},e_{k}\}\) is considered two more times (i.e., when \(e_{j}\) is chosen in line 2 and when \(e_{k}\) is chosen in line 2). Based on these observations, given an h-motif instance \(\{e_{i},e_{j},e_{k}\}\), the corresponding count \(M[h(\{e_{i},e_{j},e_{k}\})]\) is incremented (line 5) only if \(e_{j}\cap e_{k}=\varnothing\) or \(i<\min(j,k)\) (line 4). This guarantees that each instance is counted exactly once. The time complexity of MoCHy-E is given in Theorem 1, which uses Lemma 2. 
Lemma 2 (Complexity of Computing \(h(\{e_{i},e_{j},e_{k}\})\)): _Given the input hypergraph \(G=(V,E)\) and its line graph \(\bar{G}=(E,\wedge,\omega)\), for each h-motif instance \(\{e_{i},e_{j},e_{k}\}\), the expected time for computing \(h(\{e_{i},e_{j},e_{k}\})\) is \(O(\min(|e_{i}|,|e_{j}|,|e_{k}|))\)._ Proof: Assume \(|e_{i}|=\min(|e_{i}|,|e_{j}|,|e_{k}|)\), without loss of generality, and all sets and maps are implemented using hash tables. As defined in Section 2.2, \(h(\{e_{i},e_{j},e_{k}\})\) is computed in \(O(1)\) time from the emptiness of the following sets: (1) \(e_{i}\setminus e_{j}\setminus e_{k}\), (2) \(e_{j}\setminus e_{k}\setminus e_{i}\), (3) \(e_{k}\setminus e_{i}\setminus e_{j}\), (4) \(e_{i}\cap e_{j}\setminus e_{k}\), (5) \(e_{j}\cap e_{k}\setminus e_{i}\), (6) \(e_{k}\cap e_{i}\setminus e_{j}\), and (7) \(e_{i}\cap e_{j}\cap e_{k}\). We check their emptiness from their cardinalities. We obtain \(e_{i}\), \(e_{j}\), and \(e_{k}\), which are stored in \(G\), and their cardinalities in \(O(1)\) time. Similarly, we obtain \(|e_{i}\cap e_{j}|\), \(|e_{j}\cap e_{k}|\), and \(|e_{k}\cap e_{i}|\), which are stored in \(\bar{G}\), in \(O(1)\) time in expectation with uniform hash functions. Then, we compute \(|e_{i}\cap e_{j}\cap e_{k}|\) in \(O(|e_{i}|)\) time in expectation by checking for each node in \(e_{i}\) whether it is also in both \(e_{j}\) and \(e_{k}\). From these cardinalities, we obtain the cardinalities of the six other sets in \(O(1)\) time as follows: (1) \(|e_{i}\setminus e_{j}\setminus e_{k}|=|e_{i}|-|e_{i}\cap e_{j}|-|e_{k}\cap e _{i}|+|e_{i}\cap e_{j}\cap e_{k}|\), (2) \(|e_{j}\setminus e_{k}\setminus e_{i}|=|e_{j}|-|e_{i}\cap e_{j}|-|e_{j}\cap e _{k}|+|e_{i}\cap e_{j}\cap e_{k}|\), (3) \(|e_{k}\setminus e_{i}\setminus e_{j}|=|e_{k}|-|e_{k}\cap e_{i}|-|e_{j}\cap e _{k}|+|e_{i}\cap e_{j}\cap e_{k}|\), (4) \(|e_{i}\cap e_{j}\setminus e_{k}|=|e_{i}\cap e_{j}|-|e_{i}\cap e_{j}\cap e_{k}|\), (5) \(|e_{j}\cap e_{k}\setminus e_{i}|=|e_{j}\cap e_{k}|-|e_{i}\cap e_{j}\cap e_{k}|\), (6) \(|e_{k}\cap e_{i}\setminus e_{j}|=|e_{k}\cap e_{i}|-|e_{i}\cap e_{j}\cap e_{k}|\). Hence, the expected time complexity of computing \(h(\{e_{i},e_{j},e_{k}\})\) is \(O(|e_{i}|)=O(\min(|e_{i}|,|e_{j}|,|e_{k}|))\). Theorem 1 (Complexity of MoCHy-E): _The expected time complexity of Algorithm 2 is \(O(\sum_{e_{i}\in E}(|N_{e_{i}}|^{2}\cdot|e_{i}|))\)._ Proof: Assume all sets and maps are implemented using hash tables. The total number of triples \(\{e_{i},e_{j},e_{k}\}\) considered in line 3 is \(O(\sum_{e_{i}\in E}|N_{e_{i}}|^{2})\). By Lemma 2, for such a triple \(\{e_{i},e_{j},e_{k}\}\), the expected time for computing \(h(\{e_{i},e_{j},e_{k}\})\) is \(O(|e_{i}|)\). Thus, the total expected time complexity of Algorithm 2 is \(O(\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\), which dominates that of the preprocessing step (see Lemma 1 and Eq. (3)). **Extension of MoCHy-E to H-motif Enumeration:** Since MoCHy-E visits all h-motif instances to count them, it is extended to the problem of enumerating every h-motif instance (with its corresponding h-motif), as described in Algorithm 3. The time complexity remains the same. 
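To make the preprocessing and counting steps concrete, here is a compact Python sketch in the spirit of Algorithms 1 and 2 (our own simplification: sequential rather than parallel; it reuses the `h_pattern` helper from the earlier sketch, and the hyperedge sets for \(e_{2}\), \(e_{3}\), and \(e_{4}\) are stand-ins consistent with the hyperwedges listed for Figure 1(b)):

```python
from collections import defaultdict
from itertools import combinations

def build_line_graph(E):
    """Line-graph construction (cf. Algorithm 1): returns omega, mapping each
    hyperwedge (i, j) with i < j to |e_i ∩ e_j|, and the neighbor sets N."""
    members = defaultdict(list)              # node -> hyperedges containing it
    for i, e in enumerate(E):
        for v in e:
            members[v].append(i)
    omega = defaultdict(int)
    for i, e in enumerate(E):                # for each e_i, visit e_j with j > i
        for v in e:
            for j in members[v]:
                if j > i:
                    omega[(i, j)] += 1       # accumulates |e_i ∩ e_j|
    N = defaultdict(set)
    for i, j in omega:
        N[i].add(j); N[j].add(i)
    return omega, N

def mochy_e(E, N):
    """Exact counting (cf. Algorithm 2): counts per 7-bit emptiness pattern."""
    M = defaultdict(int)
    for i in range(len(E)):
        for j, k in combinations(sorted(N[i]), 2):
            if not (E[j] & E[k]) or i < min(j, k):   # count each instance once
                M[h_pattern(E[i], E[j], E[k])] += 1
    return dict(M)

E = [{"L", "K", "F"}, {"K", "F", "P"}, {"F", "P"}, {"G", "L"}]
omega, N = build_line_graph(E)
print(mochy_e(E, N))                         # 3 instances: 1 closed, 2 open
```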
```
Input : (1) input hypergraph: \(G=(V,E)\) (2) line graph: \(\bar{G}=(E,\wedge,\omega)\)
Output : every h-motif instance \(\{e_{i},e_{j},e_{k}\}\) with its h-motif \(h(\{e_{i},e_{j},e_{k}\})\)
1 foreach hyperedge \(e_{i}\in E\) (in parallel) do
2   foreach unordered hyperedge pair \(\{e_{j},e_{k}\}\in\binom{N_{e_{i}}}{2}\) do
3     if \(e_{j}\cap e_{k}=\varnothing\) or \(i<\min(j,k)\) then
4       emit \((\{e_{i},e_{j},e_{k}\},\,h(\{e_{i},e_{j},e_{k}\}))\)
```
**Algorithm 3** MoCHy-E-ENUM for H-motif Enumeration

```
Input : (1) input hypergraph: \(G=(V,E)\) (2) line graph: \(\bar{G}=(E,\wedge,\omega)\) (3) number of samples: \(s\)
Output : estimated count of each h-motif \(t\)'s instances: \(\bar{M}[t]\)
1 \(\bar{M}\leftarrow\) map whose default value is 0
2 for \(n\gets 1...s\) (in parallel) do
3   \(e_{i}\leftarrow\) sample a uniformly random hyperedge
4   foreach hyperedge \(e_{j}\in N_{e_{i}}\) do
5     foreach hyperedge \(e_{k}\in(N_{e_{i}}\cup N_{e_{j}}\setminus\{e_{i},e_{j}\})\) do
6       if \(e_{k}\not\in N_{e_{i}}\) or \(j<k\) then
7         \(\bar{M}[h(\{e_{i},e_{j},e_{k}\})]\) += 1
8 foreach h-motif \(t\) do
9   \(\bar{M}[t]\leftarrow\bar{M}[t]\cdot\frac{|E|}{3s}\)
10 return \(\bar{M}\)
```
**Algorithm 4** MoCHy-A: Approximate H-motif Counting by Hyperedge Sampling

**MoCHy-A: Hyperedge Sampling (Algorithm 4):** MoCHy-A (Algorithm 4) is based on hyperedge sampling. It samples \(s\) hyperedges from the hyperedge set \(E\) uniformly at random with replacement (line 3). For each sampled hyperedge \(e_{i}\), the algorithm searches for all h-motif instances that contain \(e_{i}\) (lines 4-7), and to this end, the 1-hop and 2-hop neighbors of \(e_{i}\) in the line graph \(\bar{G}\) are explored. After that, for each such instance \(\{e_{i},e_{j},e_{k}\}\) of h-motif \(t\), the corresponding count \(\bar{M}[t]\) is incremented (line 7). Lastly, each estimate \(\bar{M}[t]\) is rescaled by multiplying it by \(\frac{|E|}{3s}\) (lines 8-9), which is the reciprocal of the expected number of times that each of the h-motif \(t\)'s instances is counted.4 This rescaling makes each estimate \(\bar{M}[t]\) unbiased, as formalized in Theorem 2.

Footnote 4: Each hyperedge is expected to be sampled \(\frac{s}{|E|}\) times, and each h-motif instance is counted whenever any of its 3 hyperedges is sampled.

Theorem 2 (Bias and Variance of MoCHy-A): _For every h-motif \(t\), Algorithm 4 provides an unbiased estimate \(\bar{M}[t]\) of the count \(M[t]\) of its instances, i.e.,_

\[\mathbb{E}[\bar{M}[t]]=M[t]. \tag{4}\]

_The variance of the estimate is_

\[\mathbb{V}\mathrm{ar}[\bar{M}[t]]=\frac{1}{3s}\cdot M[t]\cdot(|E|-3)+\frac{1}{9s}\sum_{l=0}^{2}p_{l}[t]\cdot(l|E|-9), \tag{5}\]

_where \(p_{l}[t]\) is the number of pairs of h-motif \(t\)'s instances that share \(l\) hyperedges._

Proof: See Appendix A.

The time complexity of MoCHy-A is given in Theorem 3.

Theorem 3 (Complexity of MoCHy-A): _The expected time complexity of Algorithm 4 is \(O(\frac{s}{|E|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\)._

Proof: Assume all sets and maps are implemented using hash tables.
For a sampled hyperedge \(e_{i}\), computing \(N_{e_{i}}\cup N_{e_{j}}\) for every \(e_{j}\in N_{e_{i}}\) takes \(O(\sum_{e_{j}\in N_{e_{i}}}(|N_{e_{i}}|+|N_{e_{j}}|))\) time in expectation with uniform hash functions if we compute \(N_{e_{i}}\cup N_{e_{j}}\) by checking whether each hyperedge \(e\in N_{e_{j}}\) is also in \(N_{e_{i}}\). By Lemma 2, computing \(h(\{e_{i},e_{j},e_{k}\})\) for all considered h-motif instances takes \(O(\min(|e_{i}|,|e_{j}|)\cdot\sum_{e_{j}\in N_{e_{i}}}(|N_{e_{i}}|+|N_{e_{j}}|))\) time in expectation. Thus, the expected time complexity for processing a sampled hyperedge \(e_{i}\) is

\[O(\min(|e_{i}|,|e_{j}|)\cdot\sum_{e_{j}\in N_{e_{i}}}(|N_{e_{i}}|+|N_{e_{j}}|))=O(|e_{i}|\cdot|N_{e_{i}}|^{2}+\sum_{e_{j}\in N_{e_{i}}}(|e_{j}|\cdot|N_{e_{j}}|)),\]

which can be written as

\[O(\sum_{e_{i}\in E}(\mathbb{1}(e_{i}\text{ is sampled})\cdot|e_{i}|\cdot|N_{e_{i}}|^{2})+\sum_{e_{j}\in E}(\mathbb{1}(e_{j}\text{ is adjacent to the sample})\cdot|e_{j}|\cdot|N_{e_{j}}|)).\]

From this, linearity of expectation, \(\mathbb{E}[\mathbb{1}(e_{i}\text{ is sampled})]=\frac{1}{|E|}\), and \(\mathbb{E}[\mathbb{1}(e_{j}\text{ is adjacent to the sample})]=\frac{|N_{e_{j}}|}{|E|}\), the expected time complexity per sampled hyperedge becomes \(O(\frac{1}{|E|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\). Hence, the expected total time complexity for processing \(s\) samples is \(O(\frac{s}{|E|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\).

We also obtain concentration inequalities for MoCHy-A (Theorem 4) using Hoeffding's inequality (Lemma 3); the inequalities depend in particular on the number of samples and the number of instances of each h-motif.

Lemma 3 (Hoeffding's Inequality [38]): _Let \(X_{1}\), \(X_{2}\), \(\ldots\), \(X_{n}\) be independent random variables with \(a_{j}\leq X_{j}\leq b_{j}\) for every \(j\in\{1,2,\ldots,n\}\). Consider the sum of random variables \(X=X_{1}+\cdots+X_{n}\), and let \(\mu=\mathbb{E}[X]\). Then for any \(\tau>0\), we have_

\[\Pr[|X-\mu|\geq\tau]\leq 2\exp\left(-\frac{2\tau^{2}}{\sum_{j=1}^{n}(b_{j}-a_{j})^{2}}\right).\]

Theorem 4 (Concentration Bound of MoCHy-A): _Let \(d_{\max}[t]=\max_{e\in E[t]}|N_{e}|\) where \(E[t]:=\bigcup_{h(\{e_{i},e_{j},e_{k}\})=t}\{e_{i},e_{j},e_{k}\}\). For any \(\epsilon,\delta>0\), if \(M[t]>0\) and the number of samples \(s>\frac{1}{18\epsilon^{2}}\left(\frac{|E|d_{\max}[t]^{2}}{M[t]}\right)^{2}\log(\frac{2}{\delta})\), then \(\Pr(|\bar{M}[t]-M[t]|\geq M[t]\cdot\epsilon)\leq\delta\) holds for each \(t\in\{1,2,\ldots,26\}\)._

Proof: See Appendix B.

**MoCHy-A\({}^{+}\): Hyperwedge Sampling (Algorithm 5):** MoCHy-A\({}^{+}\) (Algorithm 5) provides a better trade-off between speed and accuracy than MoCHy-A. Differently from MoCHy-A, which samples hyperedges, MoCHy-A\({}^{+}\) is based on hyperwedge sampling. It selects \(r\) hyperwedges uniformly at random with replacement (line 3), and for each sampled hyperwedge \(\wedge_{ij}\in\wedge\), it searches for all h-motif instances that contain \(\wedge_{ij}\) (lines 4-5). To this end, the hyperedges that are adjacent to \(e_{i}\) or \(e_{j}\) in the line graph \(\bar{G}\) are considered (line 4). For each such instance \(\{e_{i},e_{j},e_{k}\}\) of h-motif \(t\), the corresponding estimate \(\hat{M}[t]\) is incremented (line 5). Lastly, each estimate \(\hat{M}[t]\) is rescaled so that it unbiasedly estimates \(M[t]\), as formalized in Theorem 5.
To this end, it is multiplied by the reciprocal of the expected number of times that each instance of h-motif \(t\) is counted.5

Footnote 5: Note that each instance of a closed h-motif contains 3 hyperwedges, while each instance of an open h-motif contains 2. An instance of a closed h-motif is counted if one of the 3 hyperwedges in it is sampled, while an instance of an open h-motif is counted if one of the 2 hyperwedges in it is sampled. Thus, in expectation, each instance of closed and open h-motifs is counted \(3r/|\wedge|\) and \(2r/|\wedge|\) times, respectively.

Theorem 5 (Bias and Variance of MoCHy-A\({}^{+}\)): _For every h-motif \(t\), Algorithm 5 provides an unbiased estimate \(\hat{M}[t]\) of the count \(M[t]\) of its instances, i.e.,_

\[\mathbb{E}[\hat{M}[t]]=M[t]. \tag{6}\]

_For every closed h-motif \(t\), the variance of the estimate is_

\[\mathbb{V}\mathrm{ar}[\hat{M}[t]]=\frac{1}{3r}\cdot M[t]\cdot(|\wedge|-3)+\frac{1}{9r}\sum_{n=0}^{1}q_{n}[t]\cdot(n|\wedge|-9), \tag{7}\]

_where \(q_{n}[t]\) is the number of pairs of h-motif \(t\)'s instances that share \(n\) hyperwedges. For every open h-motif \(t\), the variance is_

\[\mathbb{V}\mathrm{ar}[\hat{M}[t]]=\frac{1}{2r}\cdot M[t]\cdot(|\wedge|-2)+\frac{1}{4r}\sum_{n=0}^{1}q_{n}[t]\cdot(n|\wedge|-4). \tag{8}\]

Proof: See Appendix C.

The time complexity of MoCHy-A\({}^{+}\) is given in Theorem 6.

Theorem 6 (Complexity of MoCHy-A\({}^{+}\)): _The expected time complexity of Algorithm 5 is \(O(\frac{r}{|\wedge|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\)._

Proof: Assume all sets and maps are implemented using hash tables. For a sampled hyperwedge \(\wedge_{ij}\), computing \(N_{e_{i}}\cup N_{e_{j}}\) takes \(O(|N_{e_{i}}|+|N_{e_{j}}|)\) time in expectation with uniform hash functions if we compute \(N_{e_{i}}\cup N_{e_{j}}\) by checking whether each hyperedge \(e\in N_{e_{j}}\) is also in \(N_{e_{i}}\). By Lemma 2, computing \(h(\{e_{i},e_{j},e_{k}\})\) for all considered h-motif instances takes \(O(\min(|e_{i}|,|e_{j}|)\cdot(|N_{e_{i}}|+|N_{e_{j}}|))\) time in expectation. Thus, the expected time complexity for processing a sampled hyperwedge \(\wedge_{ij}\) is \(O(\min(|e_{i}|,|e_{j}|)\cdot(|N_{e_{i}}|+|N_{e_{j}}|))=O(|e_{i}|\cdot|N_{e_{i}}|+|e_{j}|\cdot|N_{e_{j}}|)\), which can be written as

\[O(\sum\nolimits_{e_{i}\in E}(\mathbb{1}(e_{i}\text{ is included in the sample})\cdot|e_{i}|\cdot|N_{e_{i}}|)+\sum\nolimits_{e_{j}\in E}(\mathbb{1}(e_{j}\text{ is included in the sample})\cdot|e_{j}|\cdot|N_{e_{j}}|)).\]

From this, linearity of expectation, \(\mathbb{E}[\mathbb{1}(e_{i}\text{ is included in the sample})]=\frac{|N_{e_{i}}|}{|\wedge|}\), and \(\mathbb{E}[\mathbb{1}(e_{j}\text{ is included in the sample})]=\frac{|N_{e_{j}}|}{|\wedge|}\), the expected time complexity per sampled hyperwedge is \(O(\frac{1}{|\wedge|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\). Hence, the total time complexity for processing \(r\) samples is \(O(\frac{r}{|\wedge|}\sum_{e_{i}\in E}(|e_{i}|\cdot|N_{e_{i}}|^{2}))\).

Additionally, we derive concentration inequalities for MoCHy-A\({}^{+}\) (Theorem 7), following a similar approach to that of Theorem 4, but with different minimum sample sizes guaranteeing the same bound.

Theorem 7 (Concentration Bound of MoCHy-A\({}^{+}\)): _Let \(d_{\max}[t]=\max_{e\in E[t]}|N_{e}|\) where \(E[t]:=\bigcup_{h(\{e_{i},e_{j},e_{k}\})=t}\{e_{i},e_{j},e_{k}\}\)._
_For each \(t\in\{1,\ldots,26\}\) such that \(M[t]>0\) and for any \(\epsilon,\delta>0\), a sufficient condition for \(\Pr(|\hat{M}[t]-M[t]|\geq M[t]\cdot\epsilon)\leq\delta\) is \(r>\frac{1}{18\epsilon^{2}}\left(\frac{|\wedge|d_{\max}[t]}{M[t]}\right)^{2}\log(\frac{2}{\delta})\) if h-motif \(t\) is closed, and \(r>\frac{1}{8\epsilon^{2}}\left(\frac{|\wedge|d_{\max}[t]}{M[t]}\right)^{2}\log(\frac{2}{\delta})\) if h-motif \(t\) is open._

Proof: See Appendix D.

**Comparison of MoCHy-A and MoCHy-A\({}^{+}\):** Empirically, MoCHy-A\({}^{+}\) provides a better trade-off between speed and accuracy than MoCHy-A, as presented in Section 6.7. We provide an analysis that supports this observation. Assume that the numbers of samples in both algorithms are set so that \(\alpha=\frac{s}{|E|}=\frac{r}{|\wedge|}\). For each h-motif \(t\), since both estimates \(\bar{M}[t]\) of MoCHy-A and \(\hat{M}[t]\) of MoCHy-A\({}^{+}\) are unbiased (see Eqs. (4) and (6)), we only need to compare their variances. By Eq. (5), \(\operatorname{Var}[\bar{M}[t]]=O(\frac{M[t]+p_{1}[t]+p_{2}[t]}{\alpha})\), and by Eqs. (7) and (8), \(\operatorname{Var}[\hat{M}[t]]=O(\frac{M[t]+q_{1}[t]}{\alpha})\). By definition, \(q_{1}[t]\leq p_{2}[t]\), and thus \(\frac{M[t]+q_{1}[t]}{\alpha}\leq\frac{M[t]+p_{1}[t]+p_{2}[t]}{\alpha}\). Moreover, in real-world hypergraphs, \(p_{1}[t]\) tends to be several orders of magnitude larger than the other terms (i.e., \(p_{2}[t]\), \(q_{1}[t]\), and \(M[t]\)), and thus \(\bar{M}[t]\) of MoCHy-A tends to have larger variance (and thus larger estimation error) than \(\hat{M}[t]\) of MoCHy-A\({}^{+}\). In addition, as shown in Theorems 4 and 7, the ratio of the minimum number of samples guaranteeing the same concentration bound in MoCHy-A to that in MoCHy-A\({}^{+}\) is \(\left(\frac{|E|\cdot d_{\max}[t]}{|\wedge|}\right)^{2}\) for each closed h-motif \(t\), and \(\frac{4}{9}\left(\frac{|E|\cdot d_{\max}[t]}{|\wedge|}\right)^{2}\) for each open h-motif \(t\). In real-world datasets (refer to Table 2 in Section 6.1), the maximum value (across all h-motifs) of \(\frac{|E|\cdot d_{\max}[t]}{|\wedge|}\) varies from 5 (in the _contact-primary_ dataset) to 500 (in the _coauth-history_ dataset). That is, MoCHy-A\({}^{+}\) requires fewer samples than MoCHy-A for the same bound, thereby supporting the empirical superiority of MoCHy-A\({}^{+}\) over MoCHy-A. However, it is important to note a limitation in this comparison of bounds: our concentration bounds may not be optimal, since they are based on worst-case scenarios and rely on the term \(d_{\max}[t]\).

### Parallel and On-the-fly Implementations

We discuss parallelization of MoCHy and then on-the-fly computation of line graphs.

**Parallelization:** All versions of MoCHy and line-graph construction are easily parallelized, as highlighted in Algorithms 1-5. Specifically, we can parallelize line-graph construction and MoCHy-E by letting multiple threads process different hyperedges (in line 3 of Algorithm 1 and line 2 of Algorithm 2) independently in parallel. Similarly, we can parallelize MoCHy-A and MoCHy-A\({}^{+}\) by letting multiple threads sample and process different hyperedges (in line 3 of Algorithm 4) and hyperwedges (in line 3 of Algorithm 5) independently in parallel. The estimated counts of the same h-motif obtained by different threads are summed up only once before they are returned as outputs. We present empirical results in Section 6.7.

**H-motif Counting without Line Graphs:** If the input hypergraph \(G\) is large, computing its line graph \(\bar{G}\) (Algorithm 1) is time- and space-consuming.
Specifically, building \(\bar{G}\) takes \(O(\sum_{\wedge_{ij}\in\wedge}|e_{i}\cap e_{j}|)\) time (see Lemma 1) and requires \(O(|E|+|\wedge|)\) space, which often exceeds the \(O(\sum_{e_{i}\in E}|e_{i}|)\) space required for storing \(G\). Thus, instead of precomputing \(\bar{G}\) entirely, we can build it incrementally while memoizing partial results within a given memory budget. We apply this idea to MoCHy-A\({}^{+}\), resulting in the following two versions of the algorithm:

* **On-the-fly MoCHy-A\({}^{+}\) (Basic)**: This is a straightforward application of the memoization idea to MoCHy-A\({}^{+}\) (Algorithm 5). We compute the neighborhood of a hyperedge \(e_{i}\in E\) in \(\bar{G}\) (i.e., \(\{(k,\omega(\wedge_{ik})):k\in N_{e_{i}}\}\)) only if (1) a hyperwedge containing \(e_{i}\) (e.g., \(\wedge_{ij}\)) is sampled (in line 3) and (2) its neighborhood is not memoized. The computed neighborhood is memoized with priority based on the degree \(|N_{e_{i}}|\) of \(e_{i}\) in \(\bar{G}\). That is, if the memoization budget is exceeded, we evict the memoized neighborhoods of hyperedges in increasing order of their degrees in \(\bar{G}\) until the budget is met. This is because the neighborhood of a high-degree hyperedge is frequently retrieved and costly to recompute. According to our preliminary studies, this degree-based memoization scheme is faster than memoizing the neighborhoods of random hyperedges or of least recently used (LRU) hyperedges.

* **On-the-fly MoCHy-A\({}^{+}\) (Adv.)**: This is an improved version that considers the order in which hyperedges are processed. It first collects a list \(W\) of sampled hyperwedges and groups the hyperwedges that share a hyperedge. Between the two hyperedges forming a hyperwedge, the one with the larger neighborhood is used to group the hyperwedge. The hyperwedges are then processed group by group, so hyperwedges sharing the same hyperedge are more likely to be processed consecutively, thereby increasing the chance of utilizing memoized neighborhoods before they are evicted. As a result, On-the-fly MoCHy-A\({}^{+}\) (Adv.) is empirically faster than On-the-fly MoCHy-A\({}^{+}\) (Basic), as shown in Section 6.7.

For details of On-the-fly MoCHy-A\({}^{+}\) (Basic) and On-the-fly MoCHy-A\({}^{+}\) (Adv.), refer to Algorithms 8 and 9, respectively, in Appendix I.

## 5 Extensions of H-motifs

In this section, we explore two distinct approaches to extending the concept of h-motifs. In particular, we define ternary hypergraph motifs, which demonstrate consistent advantages in a variety of real-world applications.

### Extensions Beyond Binary

As defined in Section 2.2, h-motifs describe overlapping patterns of three hyperedges solely based on the emptiness of the seven subsets derived from their intersections. That is, h-motifs classify each subset into one of two states, non-empty or empty, which corresponds to being colored or uncolored in Figure 3. This coarse classification inevitably results in the loss of detailed information within the intersections. Below, we introduce ternary hypergraph motifs, which mitigate this information loss by assigning ternary states to each subset based on its cardinality.

**Definition of 3H-motifs:** Ternary hypergraph motifs (or 3h-motifs in short) extend h-motifs: like h-motifs, they are designed for describing the overlapping pattern of three connected hyperedges.
Given an instance (i.e., three connected hyperedges) \(\{e_{i},e_{j},e_{k}\}\), 3h-motifs describe its overlapping pattern by the cardinality of the following seven sets: (1) \(e_{i}\setminus e_{j}\setminus e_{k}\), (2) \(e_{j}\setminus e_{k}\setminus e_{i}\), (3) \(e_{k}\setminus e_{i}\setminus e_{j}\), (4) \(e_{i}\cap e_{j}\setminus e_{k}\), (5) \(e_{j}\cap e_{k}\setminus e_{i}\), (6) \(e_{k}\cap e_{i}\setminus e_{j}\), and (7) \(e_{i}\cap e_{j}\cap e_{k}\). Differently from h-motifs, which consider two states for each subset (empty or non-empty), 3h-motifs take into account three states for each subset (hence the '3' in 3h-motifs). Specifically, we classify each of these seven sets into three states based on its cardinality \(c\) as follows: (1) \(c=0\), (2) \(0<c\leq\theta\), (3) \(c>\theta\), where \(\theta\geq 1\). Throughout this paper, we set the value of \(\theta\) to 1, and thus each of these seven sets is classified into one of three categories: empty, singleton, or multiple. Equivalently, if we leave \(\min(\lceil c/\theta\rceil,2)\) nodes in each of the above subsets with cardinality \(c\), 3h-motifs can be defined based on the isomorphism between sub-hypergraphs formed by three connected hyperedges. Refer to Appendix L for a discussion on 3h-motifs with different values of \(\theta\) and additional variants of 3h-motifs.

Out of the \(3^{7}\) possible patterns, 431 3h-motifs remain if we exclude symmetric ones, those that cannot be obtained from distinct hyperedges, and those that cannot be derived from connected hyperedges. Visual representations of 3h-motifs 1-6, which are the six 3h-motifs subdivided from h-motif 1, are provided in Figure 5. For a complete list of all 431 3h-motifs, refer to Appendix L.

Figure 5: The six 3h-motifs subdivided from h-motif 1. In each Venn diagram, uncolored regions are empty, containing no nodes. Colored regions with a triangle contain more than 0 and at most \(\theta\) nodes, while colored regions with a circle contain more than \(\theta\) nodes. Throughout this paper, we set \(\theta\) to 1.

**Characterization using 3H-motifs:** 3h-motifs can naturally substitute for h-motifs in characterizing hypergraphs, hyperedges, and nodes. By using 3h-motifs, characteristic profiles (CPs), hyperedge profiles (HPs), and node profiles (NPs) become 431-element vectors.

**Counting 3H-motifs' Instances:** To count instances of 3h-motifs using MoCHy, the only necessary change is to replace \(h(\{e_{i},e_{j},e_{k}\})\) with \(h_{3}(\{e_{i},e_{j},e_{k}\})\), which provides the corresponding 3h-motif for a given instance \(\{e_{i},e_{j},e_{k}\}\). As formalized in Lemma 4, \(h_{3}(\{e_{i},e_{j},e_{k}\})\) can be computed with the same time complexity as \(h(\{e_{i},e_{j},e_{k}\})\), and thus this replacement does not change the time complexity of any version of MoCHy.

Lemma 4 (Complexity of Computing \(h_{3}(\{e_{i},e_{j},e_{k}\})\)): _Given the input hypergraph \(G=(V,E)\) and its line_
_graph \(\bar{G}=(E,\wedge,\omega)\), for each 3h-motif instance \(\{e_{i},e_{j},e_{k}\}\), the expected time complexity for computing \(h_{3}(\{e_{i},e_{j},e_{k}\})\) is \(O(\min(|e_{i}|,|e_{j}|,|e_{k}|))\)._

Proof: Following the proof of Lemma 2, we can show that it takes \(O(\min(|e_{i}|,|e_{j}|,|e_{k}|))\) time in expectation to obtain the cardinalities of all the following sets: (1) \(e_{i}\setminus e_{j}\setminus e_{k}\), (2) \(e_{j}\setminus e_{k}\setminus e_{i}\), (3) \(e_{k}\setminus e_{i}\setminus e_{j}\), (4) \(e_{i}\cap e_{j}\setminus e_{k}\), (5) \(e_{j}\cap e_{k}\setminus e_{i}\), (6) \(e_{k}\cap e_{i}\setminus e_{j}\), and (7) \(e_{i}\cap e_{j}\cap e_{k}\). Based on the cardinality \(c\) of each of the seven sets, it takes \(O(1)\) time to classify it into (1) \(c=0\), (2) \(0<c\leq\theta\), or (3) \(c>\theta\). Classifying all seven sets, which takes \(O(1)\) time, determines a specific 3h-motif. Thus, the expected time complexity of computing \(h_{3}(\{e_{i},e_{j},e_{k}\})\) is \(O(\min(|e_{i}|,|e_{j}|,|e_{k}|))\), which is the same as that of computing \(h(\{e_{i},e_{j},e_{k}\})\).

**Extensions Beyond Ternary:** The concept of 3h-motifs can be generalized to \(k\)h-motifs for any \(k>3\) by classifying each of the seven considered sets into \(k\) states. For instance, for \(k=4\), each set can be classified into four states based on its cardinality \(c\) as follows: (1) \(c=0\), (2) \(0<c\leq\theta_{1}\), (3) \(\theta_{1}<c\leq\theta_{2}\), (4) \(c>\theta_{2}\), where \(\theta_{2}>\theta_{1}\geq 1\). The number of \(k\)h-motifs increases rapidly with respect to \(k\). Specifically, the number becomes \(3,076\) for \(k=4\), \(14,190\) for \(k=5\), and \(49,750\) for \(k=6\), as derived in Appendix H. In this study, we concentrate on h-motifs and 3h-motifs (i.e., \(k=2\) and \(k=3\)), which are already capable of characterizing local structures in real-world hypergraphs, as evidenced by the empirical results in Section 6.

### Extensions Beyond Three Hyperedges

The concept of h-motifs is easily generalized to four or more hyperedges. For example, an h-motif for four hyperedges can be defined as a binary vector of size 15 indicating the emptiness of each region in the Venn diagram for four sets. After excluding disconnected ones, symmetric ones, and those that cannot be obtained from distinct hyperedges, there remain \(1,853\) and \(18,656,322\) h-motifs for four and five hyperedges, respectively, as discussed in detail in Appendix G. This work focuses on the h-motifs for three hyperedges, which are already capable of characterizing local structures of real-world hypergraphs, as shown empirically in Section 6.

## 6 Experiments

In this section, we review the experiments that we design for answering the following questions:

* **Q1. Comparison with Random:** Does counting instances of different h-motifs reveal structural design principles of real-world hypergraphs distinguished from those of random hypergraphs?
* **Q2. Comparison across Domains:** Do characteristic profiles capture local structural patterns of hypergraphs unique to each domain?
* **Q3. Comparison of Characterization Powers:** How well do h-motifs, 3h-motifs, and network motifs capture the structural properties of real-world hypergraphs?
* **Q4. Machine Learning Applications:** Can h-motifs and 3h-motifs offer useful input features for machine learning applications?
* **Q5. Further Discoveries:** What interesting discoveries can be uncovered by employing h-motifs in real-world hypergraphs?
* **Q6.
Performance of Counting Algorithms:** How fast and accurate are the different versions of MoCHy?

### Experimental Settings

**Machines:** We conducted all the experiments on a machine with an AMD Ryzen 9 3900X CPU and 128GB RAM.

**Implementations:** We implemented every version of MoCHy using C++ and OpenMP. For hash tables, we used the implementation named 'unordered_map' provided by the C++ Standard Template Library.

**Datasets:** We used the following eleven real-world hypergraphs from five different domains:

* co-authorship (coauth-DBLP, coauth-geology [97], and coauth-history [97]): A node represents an author. A hyperedge represents all authors of a publication.
* contact (contact-primary [99] and contact-high [75]): A node represents a person. A hyperedge represents a group interaction among individuals.
* email (email-Enron [55] and email-EU [66; 111]): A node represents an e-mail account. A hyperedge consists of the sender and all receivers of an email.
* tags (tags-ubuntu and tags-math): A node represents a tag. A hyperedge represents all tags attached to a post.
* threads (threads-ubuntu and threads-math): A node represents a user. A hyperedge groups all users participating in a thread.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c}
\hline\hline
**Dataset** & \(|V|\) & \(|E|\) & \(|e|^{\star}\) & \(|\wedge|\) & \(|N_{e}|^{\star\star}\) & **\# H-motif Instances** \\
\hline\hline
coauth-DBLP & 1,924,991 & 2,406,792 & 25 & 125M & 3,016 & 26.3B \(\pm\) 18M \\
coauth-geology & 1,256,385 & 1,203,895 & 25 & 37.6M & 1,935 & 6B \(\pm\) 4.8M \\
coauth-history & 1,014,734 & 895,439 & 25 & 1.7M & 855 & 83.2M \\
\hline
contact-primary & 242 & 12,704 & 5 & 2.2M & 916 & 617M \\
contact-high & 327 & 7,818 & 5 & 503K & 439 & 69.6M \\
\hline
email-Enron & 143 & 1,512 & 18 & 87.8K & 590 & 9.6M \\
email-EU & 998 & 25,027 & 25 & 8.3M & 6,152 & 7B \\
\hline
tags-ubuntu & 3,029 & 147,222 & 5 & 56M & 408,365 & 4.4T \(\pm\) 1.5B \\
tags-math & 1,829 & 170,746 & 5 & 913M & 49,559 & 9.2T \(\pm\) 3.2B \\
\hline
threads-ubuntu & 125,602 & 106,999 & 14 & 21.6M & 5,968 & 1.14B \\
threads-math & 176,445 & 595,749 & 21 & 647M & 39,019 & 2.2T \(\pm\) 883M \\
\hline\hline
\multicolumn{7}{l}{\({}^{\star}\) The maximum size of a hyperedge. \({}^{\star\star}\) The maximum degree in the line graph.}
\end{tabular}
\end{table}
Table 2: Statistics of 11 real hypergraphs from 5 domains.

These hypergraphs are made public by the authors of [14], and in Table 2 we provide some statistics of the hypergraphs after removing duplicated hyperedges. We used MoCHy-E for the _coauth-history_ dataset, the _threads-ubuntu_ dataset, and all datasets from the contact and email domains. For the other datasets, we used MoCHy-A\({}^{+}\) with \(r=2,000,000\), unless otherwise stated. We used a single thread unless otherwise stated. We computed CPs based on five hypergraphs randomized as described in Section 2.2, and we computed CPs based on h-motifs (instead of 3h-motifs) unless otherwise stated.

### Q1. Comparison with Random

We compare the count of each h-motif's instances in the real-world hypergraphs with that in the corresponding randomized hypergraphs. Note that instances of h-motifs 17 and 18 consist of a hyperedge and its two disjoint subsets (see Figure 3).

**H-motifs in Co-authorship Hypergraphs:** We observe that instances of h-motifs 10, 11, and 12 appear more frequently in all three hypergraphs from the co-authorship domain than in the corresponding random hypergraphs. Although there are only about 190 instances of h-motif 12 in the corresponding random hypergraphs, there are about 64 million such instances (about \(337,000\times\) more) in the _coauth-DBLP_ dataset.
As seen in Figure 3, in instances of h-motifs 10, 11, and 12, a hyperedge overlaps with the two other overlapping hyperedges in three different ways.

**H-motifs in Contact Hypergraphs:** Instances of h-motifs 9, 13, and 14 are noticeably more common in both contact datasets than in the corresponding random hypergraphs. As seen in Figure 3, in instances of h-motifs 9, 13, and 14, hyperedges are tightly connected, and nodes are mainly located in the intersections of all or some of the hyperedges.

**H-motifs in Email Hypergraphs:** Both email datasets contain particularly many instances of h-motifs 8 and 10, compared to the corresponding random hypergraphs. As seen in Figure 3, instances of h-motifs 8 and 10 consist of three hyperedges, one of which contains the most nodes.

**H-motifs in Tags Hypergraphs:** In addition to instances of h-motif 11, which are common in most real hypergraphs, instances of h-motif 16, where all seven regions are non-empty (see Figure 3), are particularly more frequent in both tags datasets than in the corresponding random hypergraphs.

**H-motifs in Threads Hypergraphs:** Lastly, in both datasets from the threads domain, instances of h-motifs 12 and 24 are noticeably more frequent than expected from the corresponding random hypergraphs.

In Appendix E, we analyze how the significance of each h-motif is correlated with the global structural properties of hypergraphs.

### Q2. Comparison across Domains

We compare the characteristic profiles (CPs) of the real-world hypergraphs. In Figure 6, we present the CPs (i.e., the significances of the 26 h-motifs) of each hypergraph. As seen in the figure, hypergraphs from the same domain have similar CPs. Specifically, all three hypergraphs from the co-authorship domain share extremely similar CPs, even though the absolute counts of h-motifs in them differ by several orders of magnitude. Similarly, the CPs of both hypergraphs from the tags domain are extremely similar. However, the CPs of the three hypergraphs from the co-authorship domain are clearly distinguished from those of the hypergraphs from the tags domain. While the CPs of the hypergraphs from the contact domain and those from the email domain are similar for the most part, they are distinguished by the significance of h-motif 3. These observations confirm that CPs accurately capture local structural patterns of real-world hypergraphs.

Figure 6: Characteristic profiles (CPs) capture local structural patterns of real-world hypergraphs accurately. The CPs are similar within domains but different across domains. Note that the significance of h-motif 3 distinguishes the contact hypergraphs from the email hypergraphs.

Figure 7: Importance of h-motifs in differentiating hypergraph domains: All 26 h-motifs contribute to distinguishing hypergraph domains, with each h-motif having varying levels of importance.

**Importance of H-motifs:** Since some h-motifs can be more useful than others, we measure the _importance_ of each h-motif, i.e., its contribution to distinguishing the domains of hypergraphs. The importance of each h-motif \(t\) is defined as:

\[importance[t]=1-\frac{dist_{within}[t]}{dist_{across}[t]},\]

where \(dist_{within}[t]\) is the average CP distance between hypergraphs from the same domain, and \(dist_{across}[t]\) is the average CP distance between hypergraphs from different domains.
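As a concrete reading of this definition, the sketch below computes the importance scores from the CPs of the eleven hypergraphs. Since the text does not pin down the per-motif CP distance, the absolute difference of the two hypergraphs' significance values for h-motif \(t\) is assumed here; the function name and inputs are ours.

```python
import numpy as np
from itertools import combinations

def h_motif_importance(cps, domains):
    """importance[t] = 1 - dist_within[t] / dist_across[t].
    cps: (num_hypergraphs, 26) array of significances; domains: labels."""
    cps = np.asarray(cps, dtype=float)
    within, across = [], []
    for a, b in combinations(range(len(cps)), 2):
        d = np.abs(cps[a] - cps[b])  # assumed per-motif CP distance
        (within if domains[a] == domains[b] else across).append(d)
    return 1.0 - np.mean(within, axis=0) / np.mean(across, axis=0)
```

With the eleven hypergraphs above, this averages over the 7 within-domain pairs and 48 cross-domain pairs mentioned in the following paragraph.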
As seen in Figure 7, all 26 h-motifs have positive importance, indicating that all of them contribute to distinguishing the domains of hypergraphs. Note that the h-motifs differ in importance: some are extremely important (e.g., h-motifs 3, 4, and 23), while others are less so (e.g., h-motifs 2, 12, and 15). It is important to note that these importance scores should be interpreted with caution, as they may be overfitted given the limited number of datasets (specifically, the similarities observed in 7 within-domain pairs and 48 cross-domain pairs).

### Q3. Comparison of Characterization Powers

We compare the characterization power of h-motifs, 3h-motifs, and basic network motifs. Through this comparison, we demonstrate the effectiveness of h-motifs and 3h-motifs in capturing the structural properties of real-world hypergraphs.

**CPs Based on Network Motifs:** In addition to characteristic profiles (CPs) based on h-motifs and 3h-motifs, we compute CPs based on network motifs. Specifically, we construct the incidence graph \(G^{\prime}\) (defined in Section 2.1) of each hypergraph \(G=(V,E)\). Then, we compute the CPs based on the network motifs consisting of 3 to 5 nodes, using [20].6 Using each of the three types of CPs, we compute the similarity matrices (specifically, correlation coefficient matrices) of the real-world hypergraphs and provide them in Figure 8.

Footnote 6: Nine patterns can be obtained from incidence graphs, which are bipartite graphs, and thus CPs based on network motifs are 9-element vectors.

**Comparison of Pearson Correlations:** As seen in Figures 8(a), 8(b), and 8(d), the domains of the real-world hypergraphs are distinguished more clearly by the CPs based on h-motifs than by the CPs based on network motifs. Numerically, when the CPs based on h-motifs are used, the average correlation coefficient is 0.978 within domains and 0.654 across domains, and the gap is 0.324. However, when the CPs based on network motifs are used, the average correlation coefficient is 0.988 within domains and 0.919 across domains, and the gap is just 0.069. As seen in Figures 8(c) and 8(d), the hypergraph domains are differentiated even more distinctly by the CPs based on 3h-motifs. Using 3h-motifs as a basis for the CPs results in significantly lower correlation coefficients between the contact and email domains, as well as between the tags and threads domains, allowing for a better distinction between these domains. Numerically, when 3h-motifs are used, the average correlation coefficient is 0.932 within domains and 0.370 across domains, and the gap is 0.562. These results support that h-motifs and 3h-motifs play a key role in capturing local structural patterns of real-world hypergraphs.

Figure 8: Real-world hypergraphs from the same domain exhibit similar characteristic profiles (CPs), while those from different domains have distinct CPs. Notably, the CPs based on h-motifs and 3h-motifs capture local structural patterns more accurately than those based on network motifs, as supported numerically in the table.

**Comparison of Clustering Performances:** We further compare the characterization powers by evaluating clustering performance using each similarity matrix as the input for spectral clustering [86]. We set the target number of clusters to the number of hypergraph domains.
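A minimal version of this evaluation is sketched below; the scikit-learn calls are illustrative, and shifting the correlation coefficients into [0, 1] to obtain a non-negative affinity matrix is our assumption rather than a detail stated in the text.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score

def clustering_nmi(similarity, true_domains, n_domains=5, seed=0):
    """Spectral clustering on a CP-based similarity matrix, scored by NMI."""
    affinity = (np.asarray(similarity) + 1.0) / 2.0  # map [-1, 1] to [0, 1]
    labels = SpectralClustering(n_clusters=n_domains, affinity="precomputed",
                                random_state=seed).fit_predict(affinity)
    return normalized_mutual_info_score(true_domains, labels)
```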
As summarized in Figure 8(d), the NMI scores, where higher scores indicate better clustering performance, are 0.678, 0.905, and 1 when network motifs, h-motifs, and 3h-motifs, respectively, are used as a basis for the CPs. Notably, when 3h-motifs are used, the hypergraph domains are perfectly classified into distinct clusters. These results confirm again the effectiveness of h-motifs and 3h-motifs in characterizing real-world hypergraphs.

### Q4. Machine Learning Applications

We demonstrate that h-motifs and 3h-motifs provide useful input features for two machine learning tasks.

**Hyperedge Prediction:** We first consider the task of predicting future hyperedges in the seven real-world hypergraphs where MoCHy-E completes within a reasonable duration. As in [112], we formulate this problem as a binary classification problem, aiming to distinguish real hyperedges from fake ones. To this end, we create fake hyperedges in both the training and test sets by replacing some fraction of nodes in each real hyperedge with random nodes. Refer to Appendix J for detailed settings. Then, we train classifiers using each of the following sets of input hyperedge features:

* **HP26** (\(\in\mathbb{R}^{26}\)): HP based on h-motifs.
* **HP7** (\(\in\mathbb{R}^{7}\)): The seven features with the largest variance among those in HP based on h-motifs.
* **THP** (\(\in\mathbb{R}^{431}\)): HP based on 3h-motifs.
* **BASELINE** (\(\in\mathbb{R}^{7}\)): The mean, maximum, and minimum degree7 and the mean, maximum, and minimum number of neighbors8 of the nodes in each hyperedge, and its size.

Footnote 7: The degree of a node \(v\) is the number of hyperedges that \(v\) is in.

Footnote 8: The neighbors of a node \(v\) are the nodes that appear in at least one hyperedge together with \(v\).

We employ XGBoost [26] as the classifier since it outperforms other classifiers, specifically logistic regression, random forest, decision tree, and multi-layer perceptron, on average, regardless of the feature sets used. Results with other classifiers can be found in Appendix J. We report the accuracy (ACC) and the area under the ROC curve (AUC) for each setting in Table 4. Using HP26 and HP7, which are based on h-motifs, yields consistently better predictions than using BASELINE, the baseline feature set. In addition, using THP, which is based on 3h-motifs, leads to the best performance in almost all settings. These results suggest that h-motifs provide informative hyperedge features, and 3h-motifs provide even stronger hyperedge features for hyperedge prediction.
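The sketch below shows the shape of this evaluation for one feature set; the feature matrices (e.g., rows of HP26 or THP vectors for real and fake hyperedges) are assumed to be precomputed, and the XGBoost hyperparameters are library defaults rather than those used in Appendix J.

```python
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_feature_set(X_train, y_train, X_test, y_test, seed=0):
    """Real-vs-fake hyperedge classification from hyperedge-profile features."""
    clf = XGBClassifier(random_state=seed).fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    return acc, auc
```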
\begin{table}
\begin{tabular}{l l|c c c c}
\hline\hline
 & & **HP26** & **HP7** & **THP** & **BASELINE** \\
\hline
\multirow{2}{*}{**coauth-DBLP**} & **ACC** & 0.801 & 0.744 & **0.836** & 0.646 \\
 & **AUC** & 0.880 & 0.820 & **0.909** & 0.707 \\
\hline
\multirow{2}{*}{**coauth-MAG-Geology**} & **ACC** & 0.782 & 0.722 & **0.819** & 0.661 \\
 & **AUC** & 0.865 & 0.798 & **0.892** & 0.741 \\
\hline
\multirow{2}{*}{**coauth-MAG-History**} & **ACC** & 0.696 & 0.683 & **0.716** & 0.608 \\
 & **AUC** & 0.811 & 0.761 & **0.820** & 0.732 \\
\hline
\multirow{2}{*}{**contact-primary-school**} & **ACC** & 0.722 & 0.769 & **0.779** & 0.603 \\
 & **AUC** & 0.872 & 0.868 & **0.886** & 0.647 \\
\hline
\multirow{2}{*}{**contact-high-school**} & **ACC** & **0.907** & 0.860 & 0.904 & 0.585 \\
 & **AUC** & **0.968** & 0.949 & 0.967 & 0.641 \\
\hline
\multirow{2}{*}{**email-Enron**} & **ACC** & 0.815 & 0.725 & **0.827** & 0.633 \\
 & **AUC** & **0.922** & 0.816 & 0.921 & 0.701 \\
\hline
\multirow{2}{*}{**email-EU**} & **ACC** & 0.911 & 0.878 & **0.920** & 0.702 \\
 & **AUC** & 0.922 & 0.954 & **0.977** & 0.781 \\
\hline\hline
\end{tabular}
\end{table}
Table 4: H-motifs and 3h-motifs give informative hyperedge features. The use of h-motifs and 3h-motifs for input features in HP26 and THP, respectively, consistently outperforms using the baseline features in BASELINE for predicting hyperedges in all datasets. Even when reducing the dimension of HP26 to that of BASELINE (i.e., using HP7), the accuracy of predictions using h-motif-based features remains superior. For each setting, the best result is in **bold** and the second-best one is underlined. The standard deviations of all the results are smaller than 0.0001.

\begin{table}
\begin{tabular}{l|c c c c}
\hline\hline
 & **NP26** & **NP7** & **TNP** & **BASELINE** \\
\hline
**ACC** & 0.682 & 0.545 & **0.723** & 0.659 \\
**AVG AUC** & 0.952 & 0.901 & **0.967** & 0.950 \\
\hline\hline
\end{tabular}
\end{table}
Table 5: H-motifs and 3h-motifs provide valuable input features for node classification, with 3h-motifs showing particularly strong performance. The use of them for input features in NP26 and TNP yields better classification results than using the baseline features in BASELINE. For each metric, the best result is in **bold** and the second-best one is underlined. The standard deviations of all the results are smaller than 0.0001.

**Node Classification:** As another machine learning application, we consider the task of node classification, where the label of each node is the hypergraph it belongs to. Since we utilize all eleven real-world hypergraphs, each node can have one of eleven possible labels. We draw 100 nodes uniformly at random from each hypergraph, and we use 80% of them for training and the remaining 20% for testing. Refer to Appendix K for detailed experimental settings. We train classifiers using each of the following four sets of input node features:

* **NP26** (\(\in\mathbb{R}^{26}\)): NP based on h-motifs.
* **NP7** (\(\in\mathbb{R}^{7}\)): The seven features with the largest variance among those in NP based on h-motifs.
* **TNP** (\(\in\mathbb{R}^{431}\)): NP based on 3h-motifs.
* **BASELINE** (\(\in\mathbb{R}^{7}\)): The node count, hyperedge count, average hyperedge size, average overlapping size, density [40]9, overlapness [60]10, and the number of hyperedges that contain the ego-node in each ego-network.

Footnote 9: The ratio between the hyperedge count and the node count.
Footnote 10: The ratio between the sum of hyperedge sizes and the node count.

For all feature sets, we use radial ego-networks as the ego-networks and XGBoost [26] as the classifier, since using radial ego-networks and XGBoost gives better classification results than using other types of ego-networks and other classifiers in most cases. Refer to Appendix K for full experimental results with other types of ego-networks and other classifiers.

Figure 9: Trends in the formation of collaborations are captured by h-motifs. (a) The fractions of the instances of h-motifs 2 and 22 have increased rapidly. (b) The fraction of the instances of open h-motifs has increased steadily since 2001.

Figure 10: MoCHy-A\({}^{+}\) gives the best trade-off between speed and accuracy. It yields up to 25\(\times\) more accurate estimation than MoCHy-A, and it is up to 32.5\(\times\) faster than MoCHy-E. The error bars indicate \(\pm\) 1 standard error over 20 trials.

Figure 11: Using MoCHy-A\({}^{+}\), characteristic profiles (CPs) can be estimated accurately from a small number of samples.

We report the accuracy (ACC) and the average area under the ROC curve (AVG AUC) for each setting in Table 5. Using TNP, which is based on 3h-motifs, yields the best classification results. Using NP26, which is based on h-motifs, outperforms using BASELINE, the baseline feature set. However, reducing the dimension of NP26 to that of BASELINE (i.e., using NP7) results in the worst performance. These results demonstrate that h-motifs, and particularly 3h-motifs, provide effective input features for node classification, highlighting the importance of local structural patterns in hypergraphs for this task.

### Q5. Further Observations

We analyze the evolution of the co-authorship hypergraphs by employing h-motifs. The _coauth-DBLP_ dataset contains bibliographic information on computer science publications. Using the publications in each year from 1984 to 2016, we create 33 hypergraphs, where each node corresponds to an author and each hyperedge indicates the set of the authors of a publication. Then, we compute the fraction of the instances of each h-motif in each hypergraph to analyze patterns and trends in the formation of collaborations. As shown in Figure 9, over the 33 years, the fractions have changed with distinct trends. First, as seen in Figure 9(b), the fraction of the instances of open h-motifs has increased steadily since 2001, indicating that collaborations have become less clustered; that is, the probability that two collaborations intersecting with a collaboration also intersect with each other has decreased. Notably, the fractions of the instances of h-motif 2 (closed) and h-motif 22 (open) have increased rapidly, accounting for most of the instances.

### Q6. Performance of Counting Algorithms

We test the speed and accuracy of all versions of MoCHy under various settings. To this end, we measure the elapsed time and the relative error, defined as

\[\frac{\sum_{t=1}^{26}|M[t]-\bar{M}[t]|}{\sum_{t=1}^{26}M[t]}\text{ and }\frac{\sum_{t=1}^{26}|M[t]-\hat{M}[t]|}{\sum_{t=1}^{26}M[t]},\]

for MoCHy-A and MoCHy-A\({}^{+}\), respectively. Unless otherwise stated, we use a single thread without the on-the-fly computation scheme.
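For reference, this metric reduces to a one-liner over the length-26 count vectors; the sketch below assumes the exact and estimated counts are already available as arrays indexed by h-motif.

```python
import numpy as np

def relative_error(M_exact, M_est):
    """sum_t |M[t] - M_est[t]| / sum_t M[t], over the 26 h-motifs."""
    M_exact, M_est = np.asarray(M_exact, float), np.asarray(M_est, float)
    return np.abs(M_exact - M_est).sum() / M_exact.sum()
```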
**Speed and Accuracy:** In Figure 10, we report the elapsed time and relative error of all versions of MoCHy on the 6 datasets where MoCHy-E terminates within a reasonable time. The numbers of samples in MoCHy-A and MoCHy-A\({}^{+}\) are set to \(\{2.5\times k:1\leq k\leq 10\}\) percent of the counts of hyperedges and hyperwedges, respectively. MoCHy-A\({}^{+}\) provides the best trade-off between speed and accuracy. For example, in the _threads-ubuntu_ dataset, MoCHy-A\({}^{+}\) provides \(24.6\times\) lower relative error than MoCHy-A, consistent with our theoretical analysis (see the last paragraph of Section 4.3). Moreover, in the same dataset, MoCHy-A\({}^{+}\) is \(32.5\times\) faster than MoCHy-E with little sacrifice of accuracy.

**Effects of the Sample Size on CPs:** In Figure 11, we report the CPs obtained by MoCHy-A\({}^{+}\) with different numbers of hyperwedge samples on 3 datasets. Even with a small number of samples, the CPs are estimated nearly perfectly.

**Parallelization:** We measure the running times of the proposed method with different numbers of threads on the _threads-ubuntu_ and _coauth-DBLP_ datasets. As seen in Figure 12, in both datasets, MoCHy achieves significant speedups with multiple threads. Specifically, with 8 threads, MoCHy-E and MoCHy-A\({}^{+}\) (\(r=1M\)) achieve speedups of 5.4 and 6.7, respectively, in the _threads-ubuntu_ dataset. In the _coauth-DBLP_ dataset, similar trends can be observed, with speedups of 5.0 and 6.0 when using MoCHy-A\({}^{+}\) with \(r=1M\) and \(r=8M\), respectively. MoCHy-E cannot be tested on the _coauth-DBLP_ dataset since it does not complete within a reasonable duration.

**Effects of On-the-fly Computation on Speed:** We analyze the effects of the on-the-fly computation of line graphs (discussed in Section 4.4) on the speed of MoCHy-A\({}^{+}\) under different memory budgets for memoization. To this end, we use the _coauth-DBLP_ dataset, and we set the memory budgets so that up to {0%, 0.1%, 1%, 10%, 100%} of the edges in the line graph can be memoized. When the budget is 0%, we compute the neighbors of each hyperedge in the sampled hyperwedge every time, without precomputing or memoizing (a part of) the line graph. As shown in Figure 13, both On-the-fly MoCHy-A\({}^{+}\) (Basic) and On-the-fly MoCHy-A\({}^{+}\) (Adv.) are faster than MoCHy-A\({}^{+}\) without memoization, and their speed tends to improve as the memory budget increases. In addition, On-the-fly MoCHy-A\({}^{+}\) (Adv.) is consistently faster than On-the-fly MoCHy-A\({}^{+}\) (Basic) across different memory budgets. Specifically, it achieves up to \(1.72\times\) reduced runtime, demonstrating the effectiveness of its carefully ordered processing scheme for sampled hyperwedges.

**Comparison with Network-motif Counting:** We assess the computational time needed for counting the instances of h-motifs, 3h-motifs, and network motifs on the _coauth-DBLP_ dataset, which is our largest dataset. We employ MoCHy-A\({}^{+}\) for both h-motifs and 3h-motifs, and for network motifs, we utilize Motivo [20], a recently introduced algorithm, to count the instances of network motifs up to size 5. In all cases, we fix the sample size to 2 million. As shown in Figure 14(a), when counting instances of h-motifs, MoCHy-A\({}^{+}\) is consistently faster than Motivo across different numbers of threads, and the gap increases as the number of threads grows. When it comes to counting 3h-motifs, MoCHy-A\({}^{+}\) is slower than Motivo with a single thread, but it becomes faster with five or more threads. This is attributed to MoCHy-A\({}^{+}\) achieving greater speedups with more threads than Motivo, as shown in Figure 14(b).
## 7 Related Work

We review prior work on network motifs, algorithms for counting them, and hypergraph mining. While the definition of a network motif varies among studies, here we define it as a connected graph composed of a predefined number of nodes.

**Network Motifs.** Network motifs were proposed as a tool for understanding the underlying design principles and capturing the local structural patterns of graphs [39; 94; 79]. The occurrences of motifs in real-world graphs are significantly different from those in random graphs [79], and they also vary depending on the domains of graphs [78]. The concept of network motifs has been extended to various types of graphs, including dynamic graphs [85], bipartite graphs [18], heterogeneous graphs [89], and simplicial complexes [14; 52; 88]. The occurrences of network motifs have been used in a wide range of graph applications: community detection [15; 103; 111; 71], ranking [119], graph embedding [90; 114], and graph neural networks [65], to name a few.

Figure 12: Both MoCHy-E and MoCHy-A\({}^{+}\) achieve significant speedups with multiple threads.

Figure 13: On-the-fly MoCHy-A\({}^{+}\) (Basic) and On-the-fly MoCHy-A\({}^{+}\) (Adv.) achieve substantial speed improvements, compared to MoCHy-A\({}^{+}\) without memoization, even when memoizing a small fraction of the line graph. Between the two methods, On-the-fly MoCHy-A\({}^{+}\) (Adv.) is up to 1.72\(\times\) faster than On-the-fly MoCHy-A\({}^{+}\) (Basic), due to its carefully ordered processing scheme for sampled hyperwedges.

Figure 14: MoCHy-A\({}^{+}\) for counting the instances of h-motifs is consistently faster than Motivo [20], which counts the instances of network motifs of size up to 5, across different numbers of threads. When counting 3h-motifs, MoCHy-A\({}^{+}\) is faster than Motivo with five or more threads. This is attributed to the fact that MoCHy-A\({}^{+}\) exhibits better speedup as the number of threads increases compared to Motivo.

**Algorithms for Network Motif Counting.** We focus on algorithms for counting the occurrences of every network motif whose size is fixed or within a certain range [5; 6; 11; 20; 27; 37; 87], while many others target a specific motif (e.g., the clique of size 3) [4; 42; 47; 53; 57; 83; 92; 95; 102; 103; 104; 106]. Given a graph, they aim to count rapidly and accurately the instances of motifs with 4 or more nodes, despite the combinatorial explosion of the number of instances, using the following techniques:

1. **Combinatorics:** For exact counting, combinatorial relations between counts have been employed [5; 85; 87]. That is, prior studies deduce the counts of the instances of motifs from those of other smaller or equal-size motifs.
2. **MCMC:** Most approximate algorithms sample motif instances from which they estimate the counts. Based on MCMC sampling, the idea of performing a random walk over instances (i.e., connected subgraphs) until it reaches stationarity, so as to sample an instance from a fixed probability distribution (e.g., uniform), has been employed [17; 27; 37; 76; 91; 105].
3. **Color Coding:** Instead of MCMC, color coding [8] can be employed for sampling [19; 20; 21]. Specifically, prior studies color each node uniformly at random among \(k\) colors, count the number of \(k\)-trees with \(k\) colors rooted at each node, and use these counts to sample instances from a fixed probability distribution.
In our problem, which focuses on h-motifs with only 3 hyperedges, sampling instances with fixed probabilities is straightforward without (2) or (3), and the combinatorial relations on graphs in (1) are not applicable. In algorithmic aspects, we address the computational challenges discussed at the beginning of Section 4 by answering (a) what to precompute (Section 4.1), (b) how to leverage it (Sections 4.2 and 4.3), and (c) how to prioritize it (Sections 4.4 and 6.7), with formal analyses (Lemma 2; Theorems 1, 3, and 6).

**Hypergraph Mining.** Hypergraphs naturally represent group interactions occurring in a wide range of fields, including computer vision [43; 113], bioinformatics [45], circuit design [50; 82], social network analysis [68; 109], cryptocurrency [54], and recommender systems [69; 23]. There has also been considerable attention on machine learning on hypergraphs, including embedding [3; 9; 59; 121; 51], classification [48; 59; 100; 113], hyperedge prediction [114; 112; 117; 44], and anomaly detection [61]. Recent studies on real-world hypergraphs revealed interesting patterns commonly observed across domains, including (a) global structural properties (e.g., giant connected components and small diameter) [33; 56; 22] and their temporal evolution (e.g., shrinking diameter) [56]; (b) structural properties of ego-networks (e.g., density and overlapness) [60] and their temporal evolution (e.g., decreasing rates of novel nodes) [30]; and (c) temporal patterns regarding arrivals of the same or overlapping hyperedges [16; 29; 24]. Notably, Benson et al. [14] studied how several local features, including edge density, average degree, and probabilities of simplicial closure events for 4 or fewer nodes11, differ across domains. Our analysis using h-motifs is complementary to these approaches in that it (1) captures local patterns systematically without hand-crafted features, (2) captures static patterns without relying on temporal information, and (3) naturally uses hyperedges with any number of nodes without decomposing them into smaller ones.

Footnote 11: The emergence of the first hyperedge that includes a set of nodes each of whose pairs co-appear in previous hyperedges. The configuration of the pairwise co-appearances affects the probability.

Recently, hypergraph motifs have been extended to temporal hypergraphs, which evolve over time [63; 64]. This extension introduces 96 temporal hypergraph motifs (TH-motifs) that capture not only the overlapping patterns but also the relative order among three connected hyperedges, and it has been shown to improve the characterization power of h-motifs in hypergraph classification and hyperedge prediction tasks. Along with the concept of TH-motifs, a family of algorithms has been proposed for the exact and approximate counting of TH-motifs; these algorithms focus on dynamically updating the line graph over time and on prioritized sampling of time intervals for estimation. It is important to note that this conceptual and algorithmic extension requires temporal information as input and is orthogonal to our extension to 3h-motifs, which requires only topological information.

## 8 Conclusions and Future Directions

In this section, we present conclusions and future research directions.

### Conclusions

In this work, we introduce hypergraph motifs (h-motifs) and their extension, ternary hypergraph motifs (3h-motifs). Using them, we investigate the local structures of 11 real-world hypergraphs from 5 different domains.
We summarize our contributions as follows:

* **Novel Concepts:** We define 26 h-motifs, which describe connectivity patterns of three connected hyperedges in a unique and exhaustive way, independently of the sizes of hyperedges (Figure 3). We extend this concept to 431 3h-motifs, enabling a more fine-grained differentiation of local structures (Figure 5).
* **Fast and Provable Algorithms:** We propose 3 parallel algorithms for (approximately) counting every h-motif's instances, and we theoretically and empirically analyze their speed and accuracy. Both approximate algorithms yield unbiased estimates (Theorems 2 and 5), and the advanced one in particular is up to \(32\times\) faster than the exact algorithm, with little sacrifice of accuracy (Figure 10).
* **Discoveries in 11 Real-world Hypergraphs:** We confirm the efficacy of h-motifs and 3h-motifs by showing that local structural patterns captured by them are similar within domains but different across domains (Figures 6 and 8).
* **Machine Learning Applications:** Our experiments show that h-motifs are effective in extracting features of hypergraphs, hyperedges, and nodes for tasks such as hypergraph clustering, hyperedge prediction, and node classification. Furthermore, using 3h-motifs improves the feature extraction capabilities, resulting in even better performance on these applications.

### Future Research Directions

Future directions include exploring the practical applications of h-motifs and 3h-motifs, motivated by the numerous successful use cases of network motifs in practice. For example, network motifs have been used in biology for identifying crucial interactions between proteins, DNA, and metabolites within biological networks [110; 74]. Another compelling example lies within mobile communication networks, where network motifs have been observed to significantly impact the efficiency of information delivery across users [118]. In addition, network motifs have proven to be powerful tools for enhancing the performance of other practical applications, including anomaly detection [81; 116] and recommendation [36; 120; 101; 31]. Furthermore, they are recognized as a useful ingredient in designing graph-related algorithms, such as graph neural networks [93; 115] and graph clustering algorithms [111; 15]. These examples demonstrate the substantial potential of h-motifs in diverse applications, and notably, most of these applications are less explored in hypergraphs than in graphs.

In Sections 6.4 and 6.5, we demonstrated the critical role of h-motifs and 3h-motifs in enhancing performance across hypergraph learning tasks, including node classification and hyperedge prediction. We believe that the considered hypergraph learning tasks can be readily applied to practical applications [28; 10]. For example, to achieve effective educational management and evaluation, it is important to classify the academic performance (e.g., poor, medium, or excellent) of students (nodes) based on the associations (hyperedges) among them [70]. It is also crucial to classify fake news (nodes) based on the patterns of news consumption by users (hyperedges) [46]. Accurately identifying labels for objects (nodes) in images (hyperedges) containing multiple entities is a crucial task in computer vision [108]. Refer to a survey [10] for a broader range of applications formulated as node classification on hypergraphs.
### Future Research Directions

Future directions include exploring the practical applications of h-motifs and 3h-motifs, motivated by the numerous successful use cases of network motifs in practical applications. For example, network motifs have been used in the domain of biology for identifying crucial interactions between proteins, DNA, and metabolites within biological networks [110; 74]. Another compelling example lies within mobile communication networks, where network motifs have been observed to significantly impact the efficiency of information delivery across users [118]. In addition, network motifs have proven to be powerful tools for enhancing the performance of other practical applications, including anomaly detection [81; 116] and recommendation [36; 120; 101; 31]. Furthermore, they are recognized as a useful ingredient when designing graph-related algorithms, such as graph neural networks [93; 115] and graph clustering algorithms [111; 15]. These examples demonstrate the substantial potential of h-motifs in diverse applications, and notably, most of them are less explored in hypergraphs than in graphs.

In Sections 6.4 and 6.5, we demonstrated the critical role of h-motifs and 3h-motifs in enhancing performance across hypergraph learning tasks, including node classification and hyperedge prediction. We believe that the considered hypergraph learning tasks can be readily applied to practical applications [28; 10]. For example, to achieve effective educational management and evaluation, it is important to classify the academic performance (e.g., poor, medium, and excellent) of students (nodes) based on the associations (hyperedges) among them [70]. It is also crucial to classify fake news (nodes) based on the patterns of news consumption by users (hyperedges) [46]. Accurately identifying labels for objects (nodes) in images (hyperedges) containing multiple entities is a crucial task in computer vision [108]. Refer to a survey [10] for a broader range of applications formulated as node classification on hypergraphs.

In addition, hyperedge prediction can be employed for (a) identifying novel sets (e.g., outfits) of items (e.g., fashion items) to be purchased together [72], (b) suggesting novel combinations of ingredients for recipes [117], (c) recommending new collaborations among researchers [73], and (d) discovering groups of genes collaborating for specific biological functions [80]. As we demonstrated in Sections 6.4 and 6.5, h-motifs and 3h-motifs serve as valuable tools for addressing such tasks, indicating their potential applicability in practical scenarios, which we leave for future work.

Other promising research directions include (a) extending h-motifs and 3h-motifs to complex and rich hypergraphs, such as labeled or heterogeneous hypergraphs, and (b) investigating alternative random hypergraph models for assessing the significance of h-motifs and 3h-motifs.

**Reproducibility:** The code and datasets used in this work are available at [https://github.com/jing9044/MoCHy-with-3h-motif](https://github.com/jing9044/MoCHy-with-3h-motif).
2303.04885
Brown Dwarf Retrievals on FIRE!: Atmospheric Constraints and Lessons Learned from High Signal-to-Noise Medium Resolution Spectroscopy of a T9 Dwarf
Brown dwarf spectra offer vital testbeds for our understanding of the chemical and physical processes that sculpt substellar atmospheres. Recently, atmospheric retrieval approaches have been applied to a number of low-resolution (R~100) spectra of brown dwarfs, yielding constraints on the abundances of chemical species and temperature structures of these atmospheres. Medium-resolution (R~1e3) spectra of brown dwarfs offer significant additional insight, as molecular features are more easily disentangled from one another and the thermal structure of the upper atmosphere is more readily probed. We present results from a GPU-based retrieval analysis of a high signal-to-noise, medium-resolution (R~6000) FIRE spectrum from 0.85-2.5 microns of a T9 dwarf. At 60x higher spectral resolution than previous brown dwarf retrievals, a number of novel challenges arise. We examine the strong effect of different opacity sources on our retrieved constraints, in particular for CH4. Furthermore, we find that flaws in the data such as errors from order stitching can greatly bias our results. We compare these results to those obtained for a R~100 spectrum of the same object, revealing how constraints on atmospheric abundances and temperatures improve by an order of magnitude or more (depending on the species) with increased spectral resolution. In particular, we precisely constrain the abundance of H2S, which is undetectable at lower spectral resolution. While these medium-resolution retrievals offer the potential of precise, stellar-like constraints on atmospheric abundances (~0.02 dex), our retrieved radius is unphysically small (R~0.50 R$_{Jup}$), indicating lingering shortcomings with our modeling framework. This work is an initial investigation into brown dwarf retrievals at medium spectral resolution, offering guidance for future ground-based studies and JWST observations of substellar objects.
Callie E. Hood, Jonathan J. Fortney, Michael R. Line, Jacqueline K. Faherty
2023-03-08T20:48:00Z
http://arxiv.org/abs/2303.04885v2
Brown Dwarf Retrievals on FIRE!: Atmospheric Constraints and Lessons Learned from High Signal-to-Noise Medium Resolution Spectroscopy of a T9 Dwarf

###### Abstract

Brown dwarf spectra offer vital testbeds for our understanding of the chemical and physical processes that sculpt substellar atmospheres. Recently, atmospheric retrieval approaches have been successfully applied to a number of low-resolution (R\(\sim\)100) spectra of L, T, and Y dwarfs, yielding constraints on the abundances of chemical species and temperature structures of these atmospheres. Medium-resolution (R\(\sim 10^{3}\)) spectra of brown dwarfs offer significant additional insight, as molecular features are more easily disentangled from one another and the thermal structure of the upper atmosphere is more readily probed. We present results from a GPU-based retrieval analysis of a high signal-to-noise, medium-resolution (R\(\sim\)6000) FIRE spectrum from 0.85-2.5 \(\mu\)m of the T9 dwarf UGPS J072227.51-054031.2. At 60\(\times\) higher spectral resolution than previous brown dwarf retrievals, a number of novel challenges arise, which we explore. We examine the strong effect of different opacity sources on our retrieved constraints, in particular for CH\({}_{4}\). Furthermore, we find that flaws in the data such as errors from order stitching can greatly influence our results. We compare these retrieval results to those obtained for a R\(\sim\)100 spectrum of the same object, revealing how constraints on atmospheric abundances and temperatures improve by an order of magnitude or more (depending on the species) with increased spectral resolution. In particular, we are able to precisely constrain the abundance of H\({}_{2}\)S, which is undetectable at lower spectral resolution. While these medium-resolution retrievals offer the promising potential of precise, stellar-like constraints on atmospheric abundances (\(\sim\)0.02 dex), our retrieved radius is unphysically small (\(R=0.50^{+0.01}_{-0.01}\) R\({}_{Jup}\)), indicating lingering shortcomings with our modeling framework. This work is an initial investigation into the promises and challenges of brown dwarf retrievals at medium spectral resolution, offering guidance for future ground-based studies and JWST observations of substellar objects.

Callie E. Hood, Jonathan J. Fortney, Michael R. Line, Jacqueline K. Faherty

## 1 Introduction

Brown dwarfs, objects more massive than gas giant planets yet not massive enough to sustain hydrogen fusion like a star (13 M\({}_{Jup}\lesssim\) M \(\lesssim\) 73 M\({}_{Jup}\); Burrows et al., 2001), provide essential testbeds of our understanding of the physics and chemical processes that sculpt substellar atmospheres. Without a sustained central energy source from fusion, brown dwarfs instead cool over time, leading to the formation of molecules and condensates in their atmospheres, which dramatically affect their emitted spectra across the M, L, T, and Y spectral types (e.g. Kirkpatrick, 2005; Cushing et al., 2011). The chemical and physical processes shaping these spectra are expected to be similar to those of gas giant exoplanets due to their similar effective temperatures (Faherty et al., 2016). Thus, the often more easily observable spectra of brown dwarfs can inform our predictions for and interpretations of spectra of directly imaged planets.
Traditionally, brown dwarf spectra have been compared to theoretical "grid models" which use our current understanding of substellar atmospheres and evolution to produce model spectra for a small number of fundamental parameters, such as composition, effective temperature, and surface gravity (see Marley & Robinson (2015) for a review). The cost of a small number of parameters is the number of chemical and physical assumptions, for example radiative-convective and thermochemical equilibrium, that are required. These grid models are an important resource for connecting observed properties with physical parameters of brown dwarfs, as well as for predicting signatures to be tested with future observations. However, while advancements in molecular opacities and increasing model complexity have led to improved fits to observed spectra (Phillips et al., 2020; Marley et al., 2021), notable discrepancies remain (Leggett et al., 2021), indicating there is still much to be learned about modeling these cool atmospheres.

An alternative way to glean information from brown dwarf spectra is atmospheric retrieval, a data-driven Bayesian inverse method in which minimal assumptions are made at the cost of far more free parameters. First developed for Earth and Solar System sciences (e.g. Rodgers, 2000; Fletcher et al., 2007) and then adapted for exoplanets (e.g. Madhusudhan and Seager, 2009; Benneke and Seager, 2012; Line et al., 2013), atmospheric retrievals have been applied successfully to brown dwarf spectra of various spectral types (Line et al., 2014, 2015, 2017; Burningham et al., 2017, 2021; Zalesky et al., 2019, 2022; Gonzales et al., 2020, 2021, 2022; Kitzmann et al., 2020; Piette and Madhusudhan, 2020; Howe et al., 2022; Lueber et al., 2022; Wang et al., 2022; Xuan et al., 2022; Calamari et al., 2022). Retrievals provide a way to test the assumptions included in grid models; for example, Line et al. (2017) and Zalesky et al. (2019) used retrievals of T and Y dwarfs to show a decrease of Na and K abundances with effective temperature, validating the rainout chemistry paradigm over pure equilibrium. However, while atmospheric retrieval can explore a wider range of possible atmospheres, unphysical combinations of parameters can still provide good fits to the data and therefore spuriously be preferred in retrieval frameworks. In particular, a number of brown dwarf retrieval studies have yielded unphysically small radius constraints (e.g. Burningham et al., 2021; Lueber et al., 2022) or very high surface gravities (Zalesky et al., 2019). Therefore, comparisons to theoretical expectations from grid models are still needed to ensure retrieval results are fully contextualized.

A vast majority of brown dwarf retrieval studies have been conducted on low-resolution (R \(\sim\) 100) spectra. At medium resolution, R \(\gtrsim\) 1000, molecular bandheads are resolved into unique groups of densely-packed lines, allowing for more robust detections of molecules. Furthermore, the cores of strong lines are formed at lower pressures than can be sensed at low spectral resolutions, providing better probes of the upper end of the atmosphere's temperature-pressure profile. Comparisons of medium-resolution spectra of brown dwarfs to grid models have provided validations of certain line lists (Canty et al., 2015) and constraints on brown dwarfs' fundamental properties (e.g. Bochanski et al., 2011; Petrus et al., 2022; Hoch et al., 2022).
Medium-to-high resolution spectroscopy of brown dwarfs has been analyzed in retrieval frameworks, but often over a narrow wavelength range and with relatively low signal-to-noise, though such data can also be combined with low-resolution observations for better constraints (Wang et al., 2022; Xuan et al., 2022). The aim of this work is to test how the atmospheric retrieval framework performs at medium spectral resolution (R \(\sim\) 6000), in terms of both new insights and novel challenges. We use the same framework successfully applied at low resolution (e.g. Line et al., 2017; Zalesky et al., 2022) for a spectrum with roughly 60\(\times\) higher spectral resolution.

This work is structured as follows. In Section 2, we describe our dataset, retrieval framework, and the modifications necessary at this spectral resolution. In Section 3, we give an overview of the tests performed and changes made throughout this project, and the accompanying results and lessons learned. In Section 4, we put our results in context with constraints from low-resolution spectra of the same object, previous analysis of this dataset, and grid models. Finally, our conclusions are summarized in Section 5.

## 2 Methods

### Spectra of UGPS 0722

We perform our analysis on the medium-resolution spectrum of UGPS 0722 presented by Bochanski et al. (2011, hereafter B11), obtained with the Folded-port InfraRed Echellette (FIRE; Simcoe et al., 2013) at the Magellan Telescopes. This spectrum covers 0.85 to 2.5 \(\mu\)m over 21 orders with R \(\sim\) 6000. We scaled the reduced and order-stitched spectrum of B11 to the observed H-band photometry on the Mauna Kea Observatories (MKO) photometric system following Line et al. (2017). We will explore potential issues with which regions of this spectrum to include in our analysis (for example, due to telluric absorption or order-stitching problems) in later sections. We also use the low-resolution spectrum of UGPS 0722 from the SpeX Prism Library (Burgasser, 2014), which covers a similar wavelength range of 0.8 to 2.5 \(\mu\)m with a wavelength-dependent resolution of \(\sim\) 87-300. This spectrum was also calibrated to flux units using the same H-band photometry as for the FIRE spectrum. The flux-calibrated FIRE and SpeX spectra are shown in Figure 1.

### GPU Retrieval Framework

We use the CHIMERA retrieval framework successfully applied previously to low-resolution brown dwarf spectra (Line et al., 2015, 2017; Zalesky et al., 2019). However, generating a forward model emission spectrum at a resolution of \(\sim\) 60,000 (which is then binned to R \(\sim\) 6,000) is quite computationally expensive. Thus, to make this study feasible, we must use a modified version for use with graphical processing units (GPUs) that builds upon the code described in Zalesky et al. (2022). Specifically, we modify the radiative transfer to solve the two-stream multiple scattering problem using the methods described in Toon et al. (1989). However, as we are not in a particularly cloudy regime, the effects of multiple scattering are negligible. As in the previous studies done with this framework, we use the affine-invariant MCMC ensemble sampler package emcee (Foreman-Mackey et al., 2013).
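To sketch how this last step fits together, the toy example below wraps a stand-in "forward model" (a single Gaussian absorption line, not CHIMERA's radiative transfer) in a log-probability function and samples it with emcee; all parameter names and prior bounds are illustrative only.

```python
import numpy as np
import emcee

def forward_model(theta, wvl):
    """Stand-in forward model: one absorption line with depth d,
    center mu, and width sigma (all hypothetical parameters)."""
    d, mu, sigma = theta
    return 1.0 - d * np.exp(-0.5 * ((wvl - mu) / sigma) ** 2)

def log_probability(theta, wvl, flux, err):
    d, mu, sigma = theta
    # Uniform priors with illustrative bounds
    if not (0.0 < d < 1.0 and 1.2 < mu < 1.4 and 1e-4 < sigma < 0.1):
        return -np.inf
    model = forward_model(theta, wvl)
    # Gaussian log-likelihood with per-pixel uncertainties
    return -0.5 * np.sum(((flux - model) / err) ** 2)

# Synthetic "data"
rng = np.random.default_rng(0)
wvl = np.linspace(1.2, 1.4, 200)
err = np.full_like(wvl, 0.01)
flux = forward_model((0.3, 1.3, 0.01), wvl) + rng.normal(0.0, 0.01, wvl.size)

# Initialize 32 walkers in a small ball around a guess and sample
ndim, nwalkers = 3, 32
p0 = np.array([0.3, 1.3, 0.01]) + 1e-4 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
                                args=(wvl, flux, err))
sampler.run_mcmc(p0, 2000)
posterior = sampler.get_chain(discard=500, flat=True)
```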
We include uniform-with-altitude volume mixing ratios of H\({}_{2}\)O, CH\({}_{4}\), CO, NH\({}_{3}\), H\({}_{2}\)S, Na, and K, the surface gravity, a radius-to-distance scaling factor, the temperature-pressure (TP) profile, and three cloud parameters: cloud volume mixing ratio, the cloud pressure base, and the sedimentation efficiency (Ackerman and Marley, 2001). As in Line et al. (2017), the TP profile is parameterized by 15 independent temperature-pressure points subject to two smoothing hyperparameters. These 15 temperature-pressure points are interpolated onto a finer 70-layer pressure grid for the radiative transfer using a cubic Hermite spline.

We began with the set of absorption cross-sections presented in Freedman et al. (2008) and subsequently updated as detailed in Freedman et al. (2014), Lupu et al. (2014), and Marley et al. (2021). However, we use a set of H\({}_{2}\)O opacities calculated based on the POKAZATEL line list (Polyansky et al., 2018). In Section 3, we explore the effect of switching our opacity source for a number of species, including NH\({}_{3}\) (Coles et al., 2019), CH\({}_{4}\) (Hargreaves et al., 2020), K (Allard et al., 2016), and Na (Allard et al., 2019). For the cloud opacity, we used Mie scattering theory assuming a Mg\({}_{2}\)SiO\({}_{4}\) cloud with optical properties from Wakeford and Sing (2015). However, the exact cloud species assumed should not particularly matter, as cloud optical properties tend to be gray over these near-infrared wavelengths and the cloud's placement and extent in the atmosphere are parameterized independently of composition.

\begin{table} \begin{tabular}{l l} \hline \hline Parameter & Description \\ \hline log(f\({}_{i}\)) & log of the uniform-with-altitude volume \\ & mixing ratios of H\({}_{2}\)O, CH\({}_{4}\), CO, NH\({}_{3}\), \\ & H\({}_{2}\)S, Na, and K \\ log(g) & log surface gravity [cm s\({}^{-2}\)] \\ (R/D)\({}^{2}\) & radius-to-distance scale [R\({}_{\rm Jup}\)/pc] \\ T(P) & temperature at 15 pressure levels [K] \\ \(b\) & errorbar inflation exponent (Line et al., 2015) \\ \(\gamma,\beta\) & TP profile smoothing hyperparameters \\ & (Line et al., 2015) \\ log(Cloud VMR) & log of the cloud volume mixing ratio \\ log(P\({}_{c}\)) & log of the cloud base pressure \\ f\({}_{sed}\) & sedimentation efficiency \\ RV & radial velocity [km s\({}^{-1}\)] \\ \(v\) sin \(i\) & rotational velocity [km s\({}^{-1}\)] \\ \hline \end{tabular} \end{table} Table 1: Free Parameters in Our Retrieval Model

Figure 1: The FIRE (B11, R \(\sim\) 6000) and SpeX (Burgasser, 2014, R \(\sim\) 87 - 300) spectra of UGPS 0722. The FIRE fluxes are in blue, FIRE errors in gray, and SpeX datapoints in orange.

At these moderate resolutions, the radial and rotational velocities are apparent in the spectrum, so we add these as two additional parameters in the retrieval forward model. Both properties were measured in B11 by comparison to grid models. We use the _dopplerShift_ function from PyAstronomy (Czesla et al., 2019) to shift the forward-modeled emission spectrum by a given radial velocity and interpolate it back onto the input wavelength grid.
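A minimal sketch of this shift-and-reinterpolate step is below; `pyasl.dopplerShift` is the PyAstronomy routine named above, but the wavelength grid and toy absorption line are placeholders of our own.

```python
import numpy as np
from PyAstronomy import pyasl

# Placeholder model spectrum: a flat continuum with one absorption line
# (wavelengths in Angstroms on an evenly spaced grid).
wvl = np.linspace(8500.0, 25000.0, 100000)
flux = 1.0 - 0.3 * np.exp(-0.5 * ((wvl - 12000.0) / 2.0) ** 2)

rv = 35.8  # trial radial velocity in km/s
# Shift the spectrum by rv and re-interpolate it back onto the input
# wavelength grid; 'firstlast' pads the edges with the nearest fluxes.
shifted_flux, shifted_wvl = pyasl.dopplerShift(wvl, flux, rv,
                                               edgeHandling="firstlast")
```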
For the rotational velocity, we first tested the _rotBroad_ function from PyAstronomy, which implements a wavelength-dependent broadening kernel as described by Gray (2008) for a given \(v\) sin \(i\) and linear limb-darkening coefficient (we chose 0). However, this method proved infeasibly slow, requiring 89 seconds to generate one broadened forward model spectrum. We then tested the _fastRotBroad_ function from PyAstronomy, which uses a single broadening kernel, leading to a much faster forward model generation time of 0.19 seconds but differences from the slower version that were larger than the error bars of the FIRE spectrum, as shown in Figure 2. As a compromise, we split our spectrum into two and used _fastRotBroad_ for each half, taking 0.2 seconds but leading to differences from the more accurate function that were smaller than the error bars.

Figure 2: Comparison of different rotational broadening methods. The top panel shows modeled emission spectra using different rotational broadening methods with a snippet of the FIRE spectrum of UGPS 0722. The bottom panel shows the difference between the model broadened with a wavelength-dependent kernel (“Slow”) and those using either 1 or 2 kernels for the whole spectrum (“Fast”), and the data error bars in this region. At least two broadening kernels are required to reduce the difference with the slower, more accurate method to smaller than the error bars on the FIRE spectrum.
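The compromise described above can be sketched as follows; the toy spectrum is ours, and a real implementation would also need to guard against edge effects where the two halves meet (e.g., by broadening overlapping chunks), which we gloss over here.

```python
import numpy as np
from PyAstronomy import pyasl

def broaden_in_halves(wvl, flux, vsini, epsilon=0.0):
    """Apply pyasl.fastRotBroad (one kernel per call) to each half of the
    spectrum separately, approximating the wavelength dependence of the
    rotational broadening kernel at a fraction of rotBroad's cost."""
    mid = wvl.size // 2
    left = pyasl.fastRotBroad(wvl[:mid], flux[:mid], epsilon, vsini)
    right = pyasl.fastRotBroad(wvl[mid:], flux[mid:], epsilon, vsini)
    return np.concatenate([left, right])

# Toy spectrum: narrow lines on a flat continuum, evenly sampled in wavelength
wvl = np.linspace(9000.0, 24000.0, 60000)
flux = np.ones_like(wvl)
for center in (10000.0, 16000.0, 22000.0):
    flux -= 0.4 * np.exp(-0.5 * ((wvl - center) / 1.5) ** 2)

broadened = broaden_in_halves(wvl, flux, vsini=40.0)  # v sin i in km/s
```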
To test how our retrieval framework might perform on the FIRE data of UGPS 0722, we generated a fake test data set. We used CHIMERA to create one forward model based on the TP profile from the Sonora Bobcat grid (Marley et al., 2021) for an object with T\({}_{eff}\) = 600 K and log(g) = 4.0, giving it a radial velocity of 47 km s\({}^{-1}\) and \(v\) sin \(i\) of 40 km s\({}^{-1}\). We assume constant-with-altitude chemical abundances with log volume mixing ratios of H\({}_{2}\)O = -3.09, CH\({}_{4}\) = -3.33, CO = -10.37, H\({}_{2}\)S = -4.59, NH\({}_{3}\) = -5.00, K = -6.80, and Na = -4.90. This forward model was convolved to the FIRE instrument resolution, interpolated onto the wavelength grid of the FIRE spectrum, and given the same error bars as the FIRE spectrum of UGPS 0722. As such, we assume a \((R/D)^{2}\) value of 0.0158 (corresponding to a radius of \(\sim\)0.528 R\({}_{Jup}\) for an object at the distance of UGPS 0722), which is needed to give the same peak flux-to-error ratio as the FIRE spectrum of UGPS 0722. The results from a CHIMERA retrieval on this test dataset compared to the input values are shown in Figure 3. We see that we are able to recover the input values with high accuracy and precision (with the exception of CO, as it is unconstrained due to the low input mixing ratio), particularly with an unprecedented uncertainty of \(\sim\) 0.01 dex on the chemical abundances and surface gravity, \(\sim\)10\(\times\) more precise than constraints from spectra at R\(\sim\)100 (Line et al., 2017; Zalesky et al., 2022).

Figure 3: Retrieved TP profiles and posteriors of certain parameters for a “fake” FIRE spectrum based on the Sonora Bobcat model with an effective temperature of 600 K and surface gravity log(g)=4.0.

## 3 Results

### Initial FIRE Retrieval vs. SpeX

Initially, we used the entire order-stitched and flux-calibrated FIRE spectrum of UGPS 0722, with a few spurious data points effectively ignored by greatly inflating their error bars. However, in regions of high telluric absorption, the error bars on the spectrum were artificially underestimated (almost 14 orders of magnitude more precise than elsewhere). To prevent these data points from incorrectly driving our results, the error bars in these regions are inflated to a high enough value that these data points functionally do not contribute to our retrieval analysis. We mask out those regions of strong telluric absorption in the SpeX spectrum as well, to allow for a direct comparison.

Figure 4 shows the retrieved TP profiles and posterior distributions for selected parameters from our initial analysis on the UGPS FIRE spectrum compared to results from the SpeX spectrum. Though the FIRE spectrum potentially offers more precision than the SpeX results, we find our MCMC chains have trouble converging, with bimodal posteriors, unphysical values for certain parameters like the surface gravity, and very jagged TP profiles.

Figure 4: Retrieved TP profiles and posterior distributions for selected parameters for initial retrievals on the SpeX and FIRE spectra of U0722.

We suspected our forward model was too flexible, which led to overfitting and unphysical results. We reduced the number of parameters in our model by fixing the second TP profile smoothing hyperparameter \(\beta\) and the cloud parameters to specific values. Line et al. (2017) found letting \(\beta\) vary had a negligible effect compared to the nominal fixed value of \(\beta=5\times 10^{-5}\) used by Line et al. (2015). As previous work (Line et al., 2015, 2017) showed little evidence for optically thick clouds in T dwarfs, we set the cloud opacity parameters log(Cloud VMR), log(P\({}_{c}\)), f\({}_{sed}\) = [-15, 2, 10], consistent with an optically thin cloud deck. The effect of fixing \(\beta\) and the cloud parameters is shown in Figure 5 in orange. The TP profiles are smoother and more precisely constrained, and we no longer see bimodal posteriors for the plotted parameters. We note a potential detection of H\({}_{2}\)S in this object, although the posterior on the H\({}_{2}\)S abundance has a long lower tail. We will explore letting the cloud parameters vary again in Section 3.8.

### Effect of Resolution Element and Wavelength Limits

Next, we decided to more closely mimic the analysis of B11 and limit the FIRE spectrum to 0.9 - 2.35 \(\mu\)m when comparing to models, effectively discarding the noisiest regions at the beginning and end of the spectrum. Additionally, like SpeX, the FIRE spectrum is oversampled compared to a spectral resolution element, so we take every 4th pixel of the spectrum to ensure independent data points (e.g. Line et al., 2017; Kitzmann et al., 2020). Figure 5 shows the effect of these two changes in grey. While the retrieved TP profiles are very similar to the previous results at pressures less than \(\sim 15\) bars, the deep atmosphere is much warmer. The molecular abundances often shift slightly, but with the exception of H\({}_{2}\)O the new posteriors are within 1\(\sigma\). Our precision on almost all parameters also decreases, due to the substantial decrease in the number of included data points.

Figure 5: Effect on the retrieved TP profiles and selected posteriors of fixing the smoothing hyperparameter \(\beta\) and cloud parameters to set values (orange). In Section 3.2, we then limited the input spectrum to 0.9-2.35 \(\mu\)m and took every fourth pixel to limit the analysis to one data point per resolution element (grey).
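The two data-preparation steps just described amount to a few lines of array manipulation; a sketch, with a function name of our own, is:

```python
import numpy as np

def trim_and_downsample(wvl, flux, err, wmin=0.9, wmax=2.35, step=4):
    """Keep only the 0.9-2.35 micron range and every `step`-th pixel, so
    that roughly one data point per FIRE resolution element enters the
    likelihood (the spectrum is ~4x oversampled)."""
    keep = (wvl >= wmin) & (wvl <= wmax)
    return wvl[keep][::step], flux[keep][::step], err[keep][::step]
```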
### Effect of Updated Line Lists

In an effort to improve our best model fit to the observed spectrum, we investigated the effect of changing the sources of the line lists used for CH\({}_{4}\) and NH\({}_{3}\). For NH\({}_{3}\), we upgraded to the more recent CoYuTe (Coles et al., 2019) ExoMol line list instead of the older BYTe (Yurchenko et al., 2011) list we were using previously. For CH\({}_{4}\), we replaced the ExoMol 10to10 list (Yurchenko & Tennyson, 2014) with the recent HITEMP line list published by Hargreaves et al. (2020), which combined the ab initio line lists of Rey et al. (2017) with HITRAN2016 data (Gordon et al., 2017). Comparisons of these "old" and "new" line lists for CH\({}_{4}\) and NH\({}_{3}\) at a specific pressure and temperature are shown in Figure 6.

The two CH\({}_{4}\) opacities on the left show clear deviations beyond just shifts to line positions, especially blueward of \(\sim\) 1.62 \(\mu\)m. In contrast, while minor differences are evident between the two NH\({}_{3}\) line lists in the right panel, they have similar overall values across the wavelength region of the spectrum. The effect of these changed line lists on our retrieved parameters is discussed below.

Figure 6: Comparison of old and new molecular cross sections at 725 K and 1 bar, smoothed to R \(\sim\) 6000. _Left_: Comparison of CH\({}_{4}\) cross sections from the ExoMol 10to10 (Yurchenko & Tennyson, 2014) and HITEMP (Hargreaves et al., 2020) line lists. _Right_: Comparison of NH\({}_{3}\) cross sections from the older BYTe (Yurchenko et al., 2011) and newer CoYuTe (Coles et al., 2019) ExoMol line lists.

#### 3.3.1 Low Resolution Retrievals

Figure 7 shows the effect of these updated line lists on the retrieved TP profile and selected posteriors for the SpeX spectrum. The retrieved TP profiles are consistently warmer than those obtained when using the Yurchenko & Tennyson (2014) CH\({}_{4}\) and Yurchenko et al. (2011) NH\({}_{3}\) line lists. Constraints on the abundances of H\({}_{2}\)O, CH\({}_{4}\), and NH\({}_{3}\) all shift to lower values, while the retrieved abundance of K increases by \(\sim\) 0.5 dex. The retrieved surface gravity also decreases, from log(g)=5.19\({}^{+0.27}_{-0.40}\) to log(g)=4.43\({}^{+0.27}_{-0.17}\) (cgs). Furthermore, the retrieved radius, assuming the parallax distance of 4.12 \(\pm\) 0.04 pc (Leggett et al., 2012), decreases from 0.76\({}^{+0.06}_{-0.04}\) to 0.5\({}^{+0.03}_{-0.02}\) R\({}_{Jup}\), an unphysically small value (see Section 4.1 for further discussion). Notably, the dramatic decrease in both surface gravity and radius is driven by updating the CH\({}_{4}\) line list, as this effect occurs even when the NH\({}_{3}\) line list is kept the same.

Figure 7: Effect on the retrieved TP profiles and selected posteriors of updating the CH\({}_{4}\) and NH\({}_{3}\) line lists to those of Hargreaves et al. (2020) and Coles et al. (2019), respectively, for the SpeX spectrum (orange). Results for the FIRE spectrum smoothed to the resolution of SpeX (grey) agree well for all parameters with those from the SpeX spectrum.

To assess if the FIRE dataset is consistent with the SpeX spectrum, we smoothed and sampled the FIRE data down to the SpeX resolution. Our retrieval results on this smoothed FIRE spectrum are compared to the original SpeX results (both with the updated cross sections) in Figure 7. The retrieved constraints arising from the smoothed FIRE spectrum and the SpeX spectrum show remarkable agreement. This suggests that at low resolutions, observations at different times with completely different instruments produce extremely consistent results, given our retrieval model assumptions.
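One simple way to perform such a degradation in resolution, assuming an approximately uniform wavelength sampling, is Gaussian convolution with a kernel whose width makes up the difference between the two resolving powers; the sketch below is illustrative rather than a description of our exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_to_resolution(wvl, flux, r_out, r_in=6000.0):
    """Degrade a spectrum from resolving power r_in to a lower r_out by
    convolving with a Gaussian; assumes a roughly uniform wavelength grid
    (requires r_out < r_in)."""
    lam0 = np.median(wvl)
    # Kernel FWHM needed so the output line width corresponds to r_out
    fwhm = lam0 * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / np.median(np.diff(wvl))
    return gaussian_filter1d(flux, sigma_pix)

# e.g., smooth the R~6000 FIRE fluxes toward a SpeX-like resolution, then
# resample onto the SpeX wavelength grid with np.interp.
```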
#### 3.3.2 Medium Resolution Retrievals

With the increased spectral resolution of FIRE, we can better assess the accuracy of the line positions in various line lists. Figure 8 shows two narrow regions of the FIRE spectrum of U0722 where CH\({}_{4}\) and NH\({}_{3}\) are dominant, in the left and right panels, respectively. For each molecule, we show model spectra generated with the updated line lists. In both cases, our newer line lists are better able to replicate the line positions of their respective molecules. Both the Hargreaves et al. (2020) and Coles et al. (2019) line lists incorporated empirical energy levels where available, as opposed to solely computed ones, leading to improved line position accuracy.

Figure 8: Comparisons of median model spectra for retrieval results using old and new molecular line lists, compared to the FIRE spectrum of U0722 in narrow regions of the spectrum where CH\({}_{4}\) and NH\({}_{3}\) are expected to dominate on the left and right, respectively. _Left:_ Models with Yurchenko and Tennyson (2014) and Hargreaves et al. (2020) CH\({}_{4}\) line lists; the Hargreaves et al. (2020) line list does a significantly better job at matching the CH\({}_{4}\) lines in this region. _Right:_ Models with Yurchenko et al. (2011) and Coles et al. (2019) NH\({}_{3}\) line lists; the Coles et al. (2019) line list improves the fit to NH\({}_{3}\) lines in this region.

Unsurprisingly, improving the fit to line positions in our model does affect our retrieved atmospheric parameters. Figure 9 shows how updating the line lists of CH\({}_{4}\) and NH\({}_{3}\) affects our retrieved TP profiles and posteriors in orange. While the TP profiles are relatively similar deeper than \(\sim\) 1 bar, the scenario with updated line lists prefers the atmosphere to be as cold as possible around 0.2 bars. Our retrieved posteriors for all molecular abundances as well as potassium do shift to higher values. The surface gravity also increases, from a median log(g) = 3.66 \({}^{+0.02}_{-0.02}\) to log(g) = 4.08 \({}^{+0.03}_{-0.03}\) (cgs), a more plausible value. However, the radius decreases just like for the SpeX retrieval in Section 3.3.1, from 0.76\({}^{+0.01}_{-0.01}\) to 0.53\({}^{+0.01}_{-0.01}\) R\({}_{Jup}\), an unphysically small size, which we discuss more in Section 4.1. The retrieved radial velocity and \(v\) sin \(i\) are relatively unaffected by our choice of CH\({}_{4}\) and NH\({}_{3}\) line lists, indicating these measurements may be primarily driven by H\({}_{2}\)O lines in the spectrum.

Figure 9: Effect on the retrieved TP profiles and selected posteriors from the FIRE spectrum when updating the CH\({}_{4}\) and NH\({}_{3}\) line lists to those of Hargreaves et al. (2020) and Coles et al. (2019), respectively (orange). The retrieved TP profiles and selected posteriors resulting from removing regions of the data where order stitching was not successful (Section 3.4) are shown in grey.

Notably, we retrieve bounded constraints on the H\({}_{2}\)S and CO abundances; CO constraints were unbounded when using the older line lists. Figure 10 shows model spectra for the median retrieved parameters, with and without H\({}_{2}\)S, where we expect H\({}_{2}\)S opacity to have an effect. Both H\({}_{2}\)S and CO have the greatest effect on our model spectra between \(\sim 1.56-1.60\mu\)m, where there is a window in the combined opacity of H\({}_{2}\)O and CH\({}_{4}\). Including H\({}_{2}\)S does improve the model fit in this region of the spectrum, but mostly as an overall shift in the strength of features. However, around 1.59 \(\mu\)m there is what appears to be an H\({}_{2}\)S line that is blended with another feature but clear in the residuals shown on the bottom. Tannock et al. (2022) reported an H\({}_{2}\)S detection in a high signal-to-noise, R \(\sim\) 45,000 spectrum of a T6 dwarf, where they show a strong H\({}_{2}\)S feature at this exact location around 1.59 \(\mu\)m. Thus, our constraint on the H\({}_{2}\)S abundance is consistent with this feature, though it is dependent on a relatively small number of data points (\(\sim\)3 points for the 1.59 \(\mu\)m feature and \(\sim\) 20 others).

Figure 10: Model spectrum from the median retrieved parameters generated with (orange) and without (blue) H\({}_{2}\)S, compared to a snippet of the FIRE spectrum where H\({}_{2}\)S opacity is expected. Some features are better fit with the inclusion of H\({}_{2}\)S.

Figure 11 shows models with and without CO, for the region of the spectrum where the model spectra are most different from each other. While the inclusion of CO does improve our model fit in this region, there is not a clear CO feature to point to as the source of our CO constraint. Thus, while we do have a bounded posterior for the CO abundance, it is perhaps a less trustworthy detection.

Figure 11: Model spectrum from the median retrieved parameters generated with and without CO, compared to a snippet of the FIRE spectrum where some CO opacity is expected. While including CO does slightly improve the fit to the data, the effect is slight.
### Issues with Order Stitching

FIRE observations in cross-dispersed mode are spread over 21 different orders. These orders have some overlap in wavelength coverage on either end. The final stitched spectrum published by Bochanski et al. (2011) combined these orders into a single spectrum by averaging the regions where the orders overlapped. However, in some cases when one order was noisier than the other, this averaging can lead to data artifacts. We first noticed this issue when inspecting the spectrum around 2.1 \(\mu\)m, where a jump or step in the data appears that is not possible to reproduce with our forward models, as shown in Figure 12. To prevent these potential order-stitching issues from biasing our retrieved results, we artificially inflated the error bars in order-overlap regions where the final stitched product differed from either input order spectrum by more than 10%, effectively removing these data points from our analysis.

Figure 12: A region of the stitched FIRE spectrum of U0722 where issues with order stitching have caused a “jump” in the data.

Figure 9 shows the result of removing these order-stitching artifacts on our retrieval results in grey. The TP profile no longer has a strong preference for cold temperatures around 0.4 bars, instead fanning out as expected for pressures not probed by our observations. Our molecular and alkali abundances change slightly, with a less precise constraint on the amount of CO. While our median retrieved surface gravity increases to a slightly more plausible value of log(g)= 4.19 \({}^{+0.04}_{-0.03}\) from log(g) = 4.08 \({}^{+0.03}_{-0.03}\) (cgs), our median radius shrinks even more, to \(0.50^{+0.01}_{-0.01}\) R\({}_{Jup}\). Finally, our retrieved radial velocity and \(v\) sin \(i\) posteriors also shift slightly.
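The 10% criterion described above can be sketched as a simple mask, assuming the two contributing orders have already been interpolated onto the stitched wavelength grid (function and variable names are ours):

```python
import numpy as np

def stitching_artifact_mask(stitched, order_a, order_b, tol=0.10):
    """Flag pixels where the stitched flux differs fractionally from either
    contributing order by more than `tol`; NaNs outside the overlap regions
    compare False and so remain unflagged."""
    bad = np.zeros(stitched.size, dtype=bool)
    for order in (order_a, order_b):
        with np.errstate(divide="ignore", invalid="ignore"):
            frac = np.abs(stitched - order) / np.abs(order)
            bad |= frac > tol
    return bad

# Inflate the error bars at flagged pixels so they no longer contribute:
# err[stitching_artifact_mask(flux, order_a, order_b)] = 1e10
```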
### Retrieving on Subsections of the Spectrum

While our stitched FIRE spectrum of U0722 covers \(y\) - \(K\) bands, we wanted to investigate the constraining power of different sets of spectral bands. In particular, Hargreaves et al. (2020) retains the completeness limits of the input line lists from Rey et al. (2017), giving a temperature-dependent maximum wavenumber limit up to which the CH\({}_{4}\) line list can be considered complete, i.e., including all lines strong enough to affect the resulting opacity. This limit is 10700 cm\({}^{-1}\) or 0.93 \(\mu\)m for 1300 K, and 9500 cm\({}^{-1}\) or 1.05 \(\mu\)m for 1400 and 1500 K. Thus, the Hargreaves et al. (2020) CH\({}_{4}\) line list is not complete for the deeper, hotter temperatures probed by the \(y\) band. Motivated by this potential completeness problem, we retrieved on the \(J\) - \(K\) band data only, followed by a retrieval on solely the \(H\) - \(K\) bands.

Comparisons of the retrieved TP profiles and select posteriors are shown in Figure 13. The TP profiles get cooler as the shorter-wavelength data is progressively removed from the analysis. As the flux in \(y\) and \(J\) bands comes from deeper in the atmosphere, we do not constrain the temperatures to as deep pressures without these data, as indicated by the "fanning out" of the TP profiles in these deeper layers. The H\({}_{2}\)O and CH\({}_{4}\) abundances shift to higher values when just looking at \(H\) and \(K\) bands. While our constraint on the CO abundance becomes more precise for the _J-K_ retrieval, there is still a long tail towards low values. The H\({}_{2}\)S and NH\({}_{3}\) posteriors shift slightly with the exclusion of the \(y\) band data. While we lose all constraints on the K abundance for just \(H\) & \(K\), we have a very precise and high constraint on the Na abundance when looking at _J-K_ bands. Our retrieved surface gravity is also higher for the _J-K_ retrieval than the other two, with a median of log(g)=4.31\({}^{+0.06}_{-0.04}\). We retrieve a larger radius as we remove each bluer band of data, perhaps reflecting the cooler TP profiles retrieved for these cases as well. Finally, removing the \(J\) band data causes quite a shift in the retrieved radial velocity and \(v\) sin \(i\) values, pointing to the importance of the strong water lines in \(J\) band for our constraints on these values for the full spectrum. Given that the Hargreaves et al. (2020) line list is not complete for \(y\) band at the temperatures found in this object, the _J-K_ retrieval is perhaps a compromise between CH\({}_{4}\) line completeness and retaining some flux from deeper in the atmosphere. However, the anomalously high Na abundance does call into question the physical plausibility of these results.

Figure 13: Effect on the retrieved TP profiles and selected posteriors when performing the retrieval on the full wavelength range, \(J\)-\(K\), or just \(H\) and \(K\) bands.

### Effect of Alkali Opacities

Another change we made to our framework was to update the opacities used for the alkali metals Na and K. In particular, their strong lines at \(\sim\) 0.59 and 0.77 \(\mu\)m can be strongly broadened and significantly impact the near-infrared spectrum of a brown dwarf. Prescriptions for these line profiles can vary quite a bit. We first used older alkali opacities based on the unified line-shape theory (Allard et al., 2007a,b), as used in the Sonora Bobcat grid (Marley et al., 2021); example cross-sections of Na and K for 725 K and 1 bar are shown in Figure 14, which basically become flat continuum opacity sources after a certain wing cutoff point. The high retrieved Na abundance for the _J-K_ retrieval in Section 3.5 above can then be understood as an additional source of continuum opacity used to reduce \(J\) band flux, which is not penalized when the \(y\) band is excluded. These alkali opacities were shown by Gonzales et al. (2020) to produce more physically reasonable alkali abundances in their retrieval study of a d/sdL7+T7.5p binary than those from Burrows and Volobuyev (2003).
In contrast, newer cross sections based on the recent theoretical advancements of Allard et al. (2016, 2019) have more complicated shapes and significantly lower cross sections redward of \(\sim\) 1.2 \(\mu\)m. These differences between the older and newer line profiles, demonstrated in Figure 14, can greatly affect our retrieved parameters. To isolate the differences, we retrieved on the SpeX data with just updating the Na cross sections, just updating K, and then updating both simultaneously.

Figure 14: Comparison of old (dash-dot) and new (solid) alkali cross sections at 725 K and 1 bar, smoothed to R \(\sim\) 6000. The older alkali cross sections are from Freedman et al. (2014), while the newer ones are based on Allard et al. (2016) and Allard et al. (2019) for K and Na, respectively.

The retrieved TP profiles and selected posteriors for the different alkali treatments for the SpeX data are shown in Figure 15. Updating the Na opacity on its own does not particularly affect our retrieved results, perhaps unsurprisingly since the older K cross sections have so much more opacity. Updating to the new K causes a large change to our retrieved TP profiles, causing a very cold upper atmosphere with an inversion and a hotter, more precisely constrained profile from \(\sim\)1 - 10 bars. The retrieved surface gravity and radius also decrease. Finally, updating Na and K at the same time is mostly similar to when just using the new K, except for a very high amount of retrieved Na.

Figure 15: Effect of updating alkali cross sections on retrieved TP profiles and selected posteriors using the SpeX spectrum.

Figure 16 shows the effect of updating the alkali cross sections on our retrieved TP profiles and posteriors of selected parameters for the FIRE data of U0722. Similarly to the SpeX retrieval, the retrieved TP profile is hotter from \(\sim\)1 - 10 bars and goes to 0 K (unphysical) at \(\sim\)0.1 bars before inverting back to higher temperatures. The abundances of many species shift to higher values, particularly Na, which again has an extremely high abundance. Additionally, the CO posterior loses its long low tail, leading to a higher median abundance. We also retrieve a lower surface gravity, concordant with the SpeX results, while the radius increases slightly.

Figure 16: Effect on the retrieved TP profiles and selected posteriors for the FIRE spectrum when updating our alkali cross sections (orange). As our Na constraint appears to come only from a small snippet on the blue end of the FIRE spectrum, removing any data \(<1\mu\)m leads to an unconstrained Na abundance (grey).

Looking at a spectrum generated with the median retrieved parameters compared to one with significantly less Na, we see that the only discernible differences occur from \(\sim\)0.9 - 0.98 \(\mu\)m, as shown in Figure 17. The gaps in the FIRE data shown are regions where order stitching was found to be an issue, as discussed in Section 3.4. Although the high-Na model does get closer to fitting the data in this region, it is still not a particularly good fit by eye, completely missing many of the data points. Furthermore, this is one of the noisiest regions of the data. Thus, the high retrieved Na abundance is likely a spurious result driven by low signal-to-noise data in these regions.

Figure 17: Model spectra from the median retrieved parameters using the new alkali cross sections, generated with and without the retrieved high Na abundance. The region of the data shown is the only one with a discernible difference between the two model spectra.

We ran another retrieval on the FIRE data with the new alkali cross sections, but without any data blueward of 1 \(\mu\)m. The results are shown in Figure 16. The previous high Na constraint disappears as expected. While other posteriors shift as well, the constraints on the other species change only slightly. However, the inverted TP profile and even lower surface gravity compared to the retrieval with the old alkali cross sections are a cause for concern. Therefore, the old alkali cross sections are preferred, as they give us more physical results. Further studies of the alkali line profiles, and their effect on retrievals in particular, would help provide context for these results and guidance for future medium-resolution retrievals.
### Setting the Radius to 1 Jupiter Radius

Given the unphysically small retrieved radius for U0722, we explored the impact of fixing the radius to 1 Jupiter radius on the TP profile and abundance constraints. Figure 18 shows the retrieved TP profiles and selected posteriors with this fixed radius in orange, compared to when the radius is allowed to vary. The retrieved TP profiles overlap with the previous results for \(\sim\)2-4 bars but with a slightly different slope, as they are hotter above and cooler below this region. All abundance posteriors shift, including the disappearance of the long low tail for CO, except for H\({}_{2}\)S, which remains relatively unchanged. Importantly, the surface gravity decreases to an unphysically small median value of log(g)\(=3.45\). The radial velocity posterior is strangely trimodal; however, one peak does correspond to the previous value. Finally, the \(v\) sin \(i\) posterior shifts as well. Thus, fixing the radius to a more physical value would definitely affect our results, but would also yield a questionably low surface gravity. As such, we do not use this fixed-radius retrieval to compare to SpeX and grid models in Section 4.

### Cloudy Retrieval

In Section 3.1 we fixed our cloud parameters to be consistent with an optically thin cloud that would not affect the emission spectrum, when it appeared our model was perhaps too flexible. After incorporating the preceding changes we found improved our results (fixing \(\beta\), changing the wavelength limits to match B11, sampling one data point per resolution element, updating the CH\({}_{4}\) and NH\({}_{3}\) opacities, and ignoring regions affected by poor order stitching), we allowed the cloud parameters to vary once again. Figure 18 shows the resulting TP profiles and posteriors in grey. The cloud parameters are unconstrained, except for a potential upper limit on the cloud volume mixing ratio. Other parameters are relatively unaffected, suggesting the inclusion of cloud opacity has little impact. This lack of evidence for optically thick clouds in the atmosphere of U0722 is consistent with expectations for late T dwarfs (e.g. Kirkpatrick, 2005) as well as with other T dwarf retrieval studies at lower spectral resolution (e.g. Line et al., 2017; Zalesky et al., 2022).

Figure 18: Effect on the retrieved TP profiles and selected posteriors for the FIRE spectrum of fixing the radius to 1 R\({}_{Jup}\) (orange). Almost all parameters are significantly affected. The results when allowing the cloud parameters to vary again (Section 3.8) are shown in grey. We find little evidence for optically thick clouds in the atmosphere of U0722.

## 4 Discussion and Conclusions

### SpeX vs FIRE

We choose the FIRE retrieval from Section 3.4, with the pieces of the spectrum affected by order stitching removed and all the preceding changes, as our "preferred" FIRE retrieval. While Sections 3.5 - 3.8 explore various other tests we performed, the retrieval from Section 3.4 uses the entire wavelength range while providing more physical constraints than those obtained when using the new alkali opacities or fixing the radius, for example. Table 2 lists each change to our retrieval framework we tested, the section of the paper where it is discussed, and whether or not it was applied for our preferred retrieval.

\begin{table} \begin{tabular}{l c c} \hline \hline Change & Section of Paper & In preferred retrieval? \\ \hline Add radial velocity and \(v\) sin \(i\) & 2.2 & yes \\ Mask regions of total telluric absorption & 3.1 & yes \\ Remove second T-P smoothing hyperparameter & 3.1 & yes \\ Fix cloud parameters to optically thin values & 3.1 & yes \\ Limit data to 0.9-2.35 \(\mu\)m & 3.2 & yes \\ Take every 4th data point to not oversample the FIRE resolution element & 3.2 & yes \\ Update NH\({}_{3}\) and CH\({}_{4}\) opacities & 3.3 & yes \\ Mask data points affected by issues with order stitching & 3.4 & yes \\ Limit analysis to subsections of the spectrum, _J-K_ and _H-K_ & 3.5 & no \\ Update alkali opacities & 3.6 & no \\ Fix radius to 1 R\({}_{Jup}\) & 3.7 & no \\ Allow cloud parameters to vary & 3.8 & no \\ \hline \end{tabular} \end{table} Table 2: Changes to our retrieval framework tested for application to the FIRE spectrum of U0722.
Figure 19 shows the FIRE spectrum compared to the median model spectrum from the initial retrieval described in Section 3.1 and the median model spectrum from this final preferred FIRE retrieval. Red lines at the top of each panel indicate data points removed from our analysis due to spurious flux values and high telluric absorption (Section 3.1) or suspected problems with order stitching (Section 3.4). In Figure 19, the model from our final retrieval fits the FIRE spectrum much better than that from our initial retrieval, particularly at the peak of \(y\) band and the CH\({}_{4}\) features in \(H\) band, as discussed in Section 3.3. However, though the posteriors of the retrieved parameters are quite different, there are large sections of the spectrum where both models reproduce the observed spectrum relatively well, as should be expected for a data-driven model fitting method. Most of the tests of our framework outlined in Section 3, other than updating the line lists, were motivated by intuition or unphysical results rather than by poor fits to the data. As such, we do not plot the median models from all our different retrieval runs, in the interest of brevity and clarity.

Figure 19: Comparison of the FIRE spectrum (light blue) with the median model of our initial retrieval (navy) and the median model of our “final” preferred FIRE retrieval from Section 3.4 (orange). The FIRE errors are shown in gray. Red lines at the top of each panel indicate sections of the FIRE spectrum that were removed from our final analysis as described in Sections 3.1, 3.2, and 3.4. Our final preferred median model spectrum does a better job fitting the FIRE data, particularly the peak of \(y\) band and the CH\({}_{4}\) features in \(H\) band.

Figure 20 shows how much the constraints on our retrieved posteriors improve when using the R \(\sim\) 6000 FIRE spectrum compared to the R \(\sim\) 100 SpeX spectrum. The "preferred" SpeX retrieval here uses the same updated line lists as described in Section 3.3. The H\({}_{2}\)O, CH\({}_{4}\), and NH\({}_{3}\) abundance posteriors with FIRE are \(\sim\)3-6\(\times\) more precise than those from SpeX. Similarly, we can constrain the surface gravity to within about 0.04 dex, about 5\(\times\) more precise than SpeX. Furthermore, between \(\sim\)0.5 to 20 bars, the TP profile is constrained to within 100 K, compared to the much wider 500 K spread for SpeX. Finally, our retrieval on the medium-resolution FIRE spectrum allows constraints on parameters such as the H\({}_{2}\)S abundance, CO abundance, radial velocity, and \(v\) sin \(i\), which are not constrained with the SpeX spectrum.

Figure 20: Retrieved TP profiles and selected posterior distributions for selected parameters for the SpeX and FIRE spectra of U0722 after making the changes discussed in Section 3.

Table 3 lists physical parameters of U0722 for both retrievals, as well as from previous studies, which will be discussed below. To calculate a number of these quantities and their uncertainties from the parameters in our retrieval framework, we take 5000 random samples of our posterior. With these precise constraints on the molecular abundances, we can consider the metallicity and C/O ratio, as most of the metal content in these cool brown dwarf atmospheres is contained in H\({}_{2}\)O and CH\({}_{4}\), with some contribution from CO and NH\({}_{3}\).
We calculate these quantities from our retrieved abundances following Equations 1 and 2 from Zalesky et al. (2022) and assuming solar abundances from Lodders (2010). To account for condensation processes that can deplete atmospheric oxygen, we multiply our H\({}_{2}\)O abundance by 1.3 to better approximate the intrinsic metallicity and C/O, as in Zalesky et al. (2022). Our retrieved abundances yield [M/H] = -0.10\({}^{+0.02}_{-0.02}\) dex and C/O = 0.54\({}^{+0.01}_{-0.02}\) from the FIRE retrieval, with \(\sim\)3-4\(\times\) the precision of the SpeX retrieval. For comparison, Zalesky et al. (2022) on average constrain [M/H] and C/O to within 0.2 dex with 50 T dwarf spectra at R\(\sim\)100, although the precision varies among the objects. Though we have achieved precise constraints on these bulk properties, Calamari et al. (2022) carried out a retrieval study of Gl 229B and showed that the 1.3 oxygen scaling factor led to a calculated C/O that was unexpectedly inconsistent with measurements of the primary star. They suggest that a better understanding of the potential oxygen sinks in brown dwarf interiors could refine the best way to connect the measured atmospheric C/O to the bulk value.
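A schematic version of this bookkeeping is sketched below; the numbers, the assumed H\({}_{2}\) fraction, and the solar normalization are placeholders of our own, and the actual calculation follows Equations 1 and 2 of Zalesky et al. (2022) with Lodders (2010) abundances.

```python
import numpy as np

# Illustrative retrieved log10 volume mixing ratios (one posterior sample)
logvmr = {"H2O": -3.0, "CH4": -3.3, "CO": -7.0, "NH3": -5.0}
vmr = {mol: 10.0 ** lv for mol, lv in logvmr.items()}

# C/O: count C and O atoms per molecule; the 1.3 factor restores oxygen
# sequestered in condensates, as described in the text.
n_o = 1.3 * vmr["H2O"] + vmr["CO"]
n_c = vmr["CH4"] + vmr["CO"]
c_to_o = n_c / n_o

# Schematic metallicity: metal atoms relative to hydrogen, versus a solar
# reference (placeholder value; not the Lodders 2010 number).
n_metal = 1.3 * vmr["H2O"] + vmr["CH4"] + 2.0 * vmr["CO"] + vmr["NH3"]
n_h = 2.0 * 0.85 * (1.0 - sum(vmr.values()))  # assume ~85% of the rest is H2
solar_metal_to_h = 1.5e-3                     # placeholder solar reference
m_h = np.log10((n_metal / n_h) / solar_metal_to_h)

print(f"C/O = {c_to_o:.2f}, [M/H] = {m_h:.2f}")
```

Repeating such a calculation over thousands of posterior samples (5000 in our case) propagates the retrieval uncertainties into the quoted C/O and [M/H] intervals.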
For each sample of our posterior, we also generate a low-resolution spectrum over 0.3 to 250 \(\mu\)m, which we integrate and use to compute the bolometric luminosity L\({}_{Bol}\) and effective temperature T\({}_{eff}\). Leggett et al. (2012) report a luminosity range of log(L\({}_{Bol}\)/L\({}_{Sun}\)) = -6.05 to -6.17 from observed spectra of U0722 over \(\sim\) 0.7 - 4 \(\mu\)m. The calculated log(L\({}_{Bol}\)/L\({}_{Sun}\)) from the FIRE results of -6.23\({}^{+0.20}_{-0.09}\) is consistent with this literature value, but the luminosity from our SpeX results, log(L\({}_{Bol}\)/L\({}_{Sun}\)) = -6.42\({}^{+0.08}_{-0.05}\), is significantly lower. We also note that the luminosity constraints from the SpeX retrieval are more precise than those from the FIRE retrieval. While between 0.5 - 20 bars the TP profile is much better constrained by the FIRE retrieval, outside of this range the temperatures vary considerably and are hotter than the corresponding SpeX values, leading to a calculated luminosity that is both higher and less well constrained. The effective temperatures calculated from the luminosities and radius values unsurprisingly show the same behavior.

However, both our SpeX and FIRE retrievals result in an unphysically small radius of about 0.5 R\({}_{Jup}\), potentially calling the accuracy of our other constraints into question. The parameter actually constrained by CHIMERA is the radius-to-distance scaling factor (R/D)\({}^{2}\); however, U0722 has a well-constrained distance from parallax measurements (4.12 \(\pm\) 0.04 pc; Leggett et al., 2012), which is most likely not the source of our impossibly small radius. From the Sonora Bobcat evolutionary tracks (Marley et al., 2021), the minimum possible radius for even a 10 Gyr object at subsolar metallicity is 0.75 R\({}_{Jup}\). Figures 7 and 9 show that this problem gets worse in particular when updating the CH\({}_{4}\) and NH\({}_{3}\) cross sections, which are necessary to accurately reproduce the line positions seen in the FIRE spectrum. One potential source of error could be the completeness of the Hargreaves et al. (2020) CH\({}_{4}\) line list in \(y\) band, as discussed in Section 3.5. However, even at 725 K, where this CH\({}_{4}\) line list should be complete, there are substantial broadband differences from the Yurchenko and Tennyson (2014) line list for \(\lambda<1.62\mu\)m, as shown in Figure 6. Furthermore, even with the Yurchenko and Tennyson (2014) line list we were retrieving a smaller radius than expected (0.76 R\({}_{Jup}\)), indicating additional issues that are perhaps only exacerbated by the change to the newer CH\({}_{4}\) line list. Further work is needed to assess whether our small radius could be attributed to completeness issues with the CH\({}_{4}\) line list from Hargreaves et al. (2020), uncertainties in how to treat alkali opacities, poor photometric calibration, or some other unseen flaw in our modeling framework.

Recently, multiple retrieval studies have found radii smaller than expected from evolutionary models for both L dwarfs (Gonzales et al., 2020; Burningham et al., 2021; Xuan et al., 2022) and T dwarfs (Kitzmann et al., 2020; Lueber et al., 2022), across different retrieval frameworks and instruments with varying spectral resolutions. Furthermore, Zhang et al. (2021) performed a uniform comparison of 55 late T dwarf spectra with the Sonora Bobcat grid (Marley et al., 2021) of forward models using the Bayesian framework Starfish, finding small radii for a number of the studied objects, indicating this issue is not solely found in retrieval analyses. Notably, Zhang et al. (2021) fit a SpeX spectrum of U0722, getting T\({}_{eff}\)= 680 \(\pm\) 26 K, log(g)=3.6 \(\pm\) 0.3 dex, [M/H]= -0.06 \(\pm\) 0.2 dex, and R = 0.43 \(\pm\) 0.04 R\({}_{Jup}\), yielding even more unphysical values of the surface gravity and radius than in this work.
More retrieval studies for large samples of brown dwarf spectra, combined with insights from high-quality data from JWST, will hopefully illuminate the source of this small radius problem. In addition, the growing sample of transiting brown dwarfs from the TESS mission (e.g. Subjak et al., 2020; Carmichael et al., 2022) can provide independent tests of the radii predicted by evolutionary models, though their irradiated nature may cause difficulty in making comparisons.

### Comparison to Bochanski et al. Results

We can compare the results from our preferred retrieval on the FIRE spectrum of U0722 to the original analysis of the dataset presented by B11. By comparing the spectrum to line lists, they were able to identify features of H\({}_{2}\)O, CH\({}_{4}\), and NH\({}_{3}\), as well as broad absorption from K. Similarly, we are able to place constraints on the abundances of all of these species to within \(\pm\) 0.02 dex. In addition, we are also able to constrain the H\({}_{2}\)S abundance due to a few distinct H\({}_{2}\)S features in the spectrum (see Section 3.3.2) unidentified in the previous study.

B11 fit the FIRE spectrum of U0722 with the BT-Settl models of Allard et al. (2011), based on the fitting procedure outlined by Cushing et al. (2008). The large differences we see in retrieved posteriors when looking at subsections of the spectrum in Section 3.5 reflect a similar variance across B11's model fits to data in different bandpasses. However, the authors find significant differences between the best-fitting models and the data in many places across the spectrum. Furthermore, there are notable discrepancies between our retrieved physical values and those of B11 when considering their full-spectrum fit (most analogous to our results), as shown in Table 3. While their log(g) of 4.0\({}^{+0.3}_{-0.3}\) is consistent, they report a larger mass of 5.24 M\({}_{Jup}\) from evolutionary models, implying a much larger radius value of 1.14\({}^{+0.46}_{-0.33}\) R\({}_{Jup}\). Due to the large uncertainty on U0722's parallax at the time, B11 allowed the distance instead of the radius to vary when fitting the (R/d)\({}^{2}\) scaling factor. When allowing this distance to freely vary, their best fit to the entire spectrum also prefers a hotter object, with effective temperature 700 \(\pm\) 50 K, consistent with the calculated T\({}_{eff}\) from our FIRE retrieval but with a far larger distance than the literature value (49 pc vs. 4.12 pc).
(2021) \\ \hline Wavelength Range (\(\mu\)m) & 0.9 - 2.35 & 0.9 - 2.35 & 0.9 - 2.35 & 0.6 - 5.1 \\ Spectral Resolution & 6000 & 100 & 6000 & 180-500f \\ log(g) (cgs) & 4.19\({}^{+0.04}_{-0.03}\) & 4.43\({}^{+0.28}_{-0.17}\) & 4.0\({}^{+0.3}_{-0.3}\) & 4.5 \\ Radius (R\({}_{Jup}\)) & 0.5\({}^{+0.01}_{-0.01}\) & 0.5\({}^{+0.03}_{-0.02}\) & 1.14\({}^{+0.46}_{-0.33}\) \(a\) & 1.12g \\ Distance (pc) & Fixed at 4.12 & Fixed at 4.12 & 4.6 (restricted)b & Fixed at 4.12 \\ & & & 49 (unrestricted) & \\ Mass (M\({}_{Jup}\)) & 1.5\({}^{+0.1}_{-0.1}\) & 1.9\({}^{1.2}_{-0.6}\) & 5.24c & 15g \\ C/O & 0.54\({}^{+0.01}_{-0.02}\) & 0.55\({}^{+0.08}_{-0.06}\) & 0.55d & 0.55h \\ [M/H] (dex) & -0.10\({}^{+0.02}_{-0.02}\) & -0.02\({}^{+0.07}_{-0.06}\) & 0.0d & 0.0h \\ log(L\({}_{Bol}\)/L\({}_{Sun}\)) & -6.23\({}^{+0.20}_{-0.09}\) & -6.42\({}^{+0.08}_{-0.05}\) & -6.11 (restricted)e & -5.99e \\ & & & -5.52 (unrestricted) & \\ T\({}_{eff}\) (K) & 711\({}^{+84}_{-35}\) & 638\({}^{+32}_{-15}\) & 500\({}^{+50}_{-50}\) (restricted) & 540 \\ & & & 700\({}^{+50}_{-50}\) (unrestricted) & \\ \hline \end{tabular} \end{table} Table 3: Parameters of U0722 calculated from this work and previous studies. from our FIRE retrieval but with a far larger distance than the literature value (49 pc vs. 4.12 pc). Our retrieval functionally achieves the same effect with a small (R/d)\({}^{2}\) scaling factor instead attributed to a small radius given the reliable parallax measurement. However, when the distance is required to be within 5\(\sigma\) of the parallax measurement from Lucas et al. (2010), the authors get a much colder T\({}_{eff}\) of 500 \(\pm\) 50 K. Finally, while their measured \(v\) sin\(i\) of 40 \(\pm\) 10 km s\({}^{-1}\) agrees with our results, their radial velocity measurement of 46.9 \(\pm\) 2.5 km s\({}^{-1}\) is significantly higher than our result of 35.80 \(\pm\) 0.15 km s\({}^{-1}\). Given that the radial velocity was measured in part with models using significantly older line lists for basically all molecules present in this object's spectrum, such an inconsistency is perhaps expected. Future work to calculate if an updated RV measurement would change the original determination by B11 of a thin disk Galactic orbit could provide interesting context to U0722's potential age and evolutionary trajectory, though the change in radial velocity may not be enough to matter. Furthermore, for both velocity parameters our reported precision from emcee is significantly higher than those reported for the measurements by B11. Both the radial velocity and \(v\) sin \(i\) were calculated by cross-correlating the 1.27 - 1.31 \(\mu\)m section of FIRE spectrum with a template, either a grid model spectrum or another observation of a T dwarf, which may or may not be a good fit to the data in question. In contrast, the log-likelihood based MCMC approach reported here both considers significantly more data points across the whole spectrum and has been shown to produce smaller error bars than cross-correlation methods (Brogi and Line, 2019). ### Comparison to Grid Models Given the many advances in line lists over the past decade, it is instructive to look at more recent comparisons of U0722 spectra with grid models. Miles et al. (2020) find that the low-resolution (R \(\sim\) 300) \(M\) band spectrum of U0722 indicates disequilibrium amounts of CO from vertical mixing, consistent with our tentative detection of CO in the FIRE spectrum. Leggett et al. 
Leggett et al. (2021) compare the flux-calibrated 0.6-5.1 \(\mu\)m spectrum of U0722 created from multiple published observations (Lucas et al., 2010; Leggett et al., 2012; Miles et al., 2020) with the ATMO 2020 grid of models, finding that the models with and without disequilibrium chemistry still have trouble fitting the data well. The authors improve this fit by "tuning" the gradient of the TP profile away from the standard adiabat. As such, we compare our retrieved results to this adiabat-adjusted model in Table 3, which has an effective temperature of 540 K, log surface gravity (cgs) of 4.5, solar metallicity, log Kzz (cm\({}^{2}\) s\({}^{-1}\))=7, and an adiabatic index of 1.27. However, we note that the NIR portion of the U0722 spectrum used in Leggett et al. (2021) is slightly inconsistent with that of the FIRE and SpeX spectra examined in this work. Figure 21 shows all three datasets, as well as the tuned ATMO model; the peak \(y\) band flux of the Leggett et al. (2021) spectrum in particular is offset in both strength and wavelength. While the source of this discrepancy is unclear, it undoubtedly contributes to differences in our fitted or retrieved values for U0722. Figure 22 shows our retrieved TP profiles from the FIRE spectrum of U0722, compared to the tuned ATMO model from Leggett et al. (2021), as well as two forward models with disequilibrium chemistry from Mukherjee et al. (2022). The tuned ATMO TP profile has a steeper gradient than either of the other models, reflecting the adjusted adiabat. Our retrieved TP profiles from FIRE are significantly hotter than that of the tuned ATMO model, more closely resembling that of a hotter 700 K object from the Mukherjee et al. (2022) grid, consistent with our calculated T\({}_{eff}\). The difference in temperature is consistent with the differences in radii: the tuned ATMO model assumed a radius of 1.12 R\({}_{Jup}\) from the evolutionary models of Phillips et al. (2020), more than double that of our retrieved value. Thus, a cooler object would be necessary to achieve a similar amount of flux. Figure 23 compares our retrieved molecular abundances, shown by the solid lines and shaded 1\(\sigma\) regions, to the mixing ratios in the tuned ATMO model as a function of pressure in the atmosphere. Our retrieved values for CO, CH\({}_{4}\), and H\({}_{2}\) are all in agreement with the ATMO abundances above 100 bars, and our H\({}_{2}\)O posterior is only 0.003 dex away from the ATMO model value. However, our NH\({}_{3}\) posterior is \(\sim\)0.5 dex lower, perhaps reflecting the differences in effective temperature and surface gravity between our retrieval results and the tuned ATMO model. Thus, while our retrieved surface gravity and radius are unphysical, our molecular abundances are plausible and mostly consistent with grid model predictions. The similarity in molecular abundances is also reflected in the consistency of the C/O ratios and metallicities reported in Table 3. ## 5 Conclusions In this work, we have applied the CHIMERA atmospheric retrieval framework to a high signal-to-noise, medium-resolution (R \(\sim\) 6000) FIRE spectrum of a T9 dwarf. Key takeaways from this work are as follows: 1. _Limitations of the dataset must be taken into account_: In Section 3.2, we show that ensuring that only one data point per resolution element is sampled, as well as cutting out very noisy regions of data at either end of the spectrum, can affect our retrieved posteriors. 
More dramatically, areas of the spectrum where orders were stitched poorly can negatively bias the retrieval analysis as discussed in Section 3.4. Future improvements to order stitching methods, or new ways to account for potential stitching issues within a retrieval framework, could help alleviate this problem in future analysis of data from echelle spectrographs. 2. _Using different opacity sources may lead to very different results_: Updating the line lists for CH\({}_{4}\) and NH\({}_{3}\) to those from Hargreaves et al. (2020) and Coles et al. (2019), respectively, greatly improved the ability of the forward models to match the line positions in the data, as shown in Figure 8. Almost all retrieved posteriors were affected by this change, even for the retrieval on the R \(\sim 100\) SpeX spectrum as shown in Figures 7 and 9. However, the updated line lists yielded an even smaller radius of 0.5 R\({}_{Jup}\). We also tested different treatments of Na and K opacities in Section 3.6, again finding largely disparate results depending on which cross sections were used. While we do recommend the Hargreaves et al. (2020) CH\({}_{4}\) line list due to its ability to match the line positions in our FIRE spectrum, more comparisons with other line lists regarding completeness, as well as the treatment of the alkali wings, are needed to fully utilize this kind of high-quality data. 3. _Medium-resolution retrievals offer very precise constraints, but they may not be accurate_: As shown in Figure 20, the constraints on the temperature-pressure profile and abundances from our FIRE retrieval of U0722 are significantly more precise than those from the SpeX spectrum. In particular, we are able to retrieve the abundances of H\({}_{2}\)S and, tentatively, CO, which is not possible with the lower resolution spectrum. However, while we do get these precise, stellar-like constraints on atmospheric abundances (\(\sim\)0.02 dex), the radius is far too small to be physically plausible. This small size is in accordance with our retrieved TP profile being hotter than previous analyses of this object, yielding a similar overall observed flux. Thus, this study joins a growing number of modeling analyses of brown dwarf spectra that have yielded smaller radii than allowed by our understanding of brown dwarf evolution. 
Figure 21: Comparison of the FIRE (blue) and SpeX (orange) spectra of U0722 analyzed in this work to the spectrum (yellow) and tuned ATMO model (pink) used in Leggett et al. (2021). There are notable discrepancies in the peak of the \(y\) band. 
Figure 22: TP profiles from our FIRE retrieval (blue) compared to the tuned ATMO model (pink) and two disequilibrium forward models from Mukherjee et al. (2022). The tuned ATMO model is significantly colder than our retrieved TP profiles, and has a different gradient in accordance with the adjusted adiabat discussed by Leggett et al. (2021). 
Figure 23: Molecular volume mixing ratios from the tuned ATMO model (dotted lines) as a function of atmospheric pressure compared to the median and 1\(\sigma\) posteriors from our FIRE retrieval (solid lines and shaded regions). With the exception of NH\({}_{3}\), the abundances are quite similar. 
This work is a first foray into the challenges and benefits of applying atmospheric retrieval tools to medium-resolution spectra of brown dwarfs. With the launch of JWST and future ground-based studies, more work is needed to assess how we can improve our current modeling frameworks to address these challenges and unlock the potential for trustworthy, precise constraints on substellar atmospheres from retrievals of medium-resolution spectra. 
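As a concrete illustration of the sampling consideration in point 1 above, one simple way to enforce at most one data point per resolution element is a greedy pass over the sorted wavelength grid. This is only a sketch of the idea under our own assumptions, not necessarily the exact procedure used in this work:

```python
import numpy as np

def one_point_per_resolution_element(wavelengths, fluxes, R=6000):
    """Greedily keep at most one sample per resolution element of width
    dlambda = lambda / R; assumes the wavelength grid is sorted."""
    keep = []
    next_allowed = 0.0
    for i, lam in enumerate(wavelengths):
        if lam >= next_allowed:
            keep.append(i)
            next_allowed = lam * (1.0 + 1.0 / R)
    keep = np.asarray(keep)
    return wavelengths[keep], fluxes[keep]

# Example: a heavily oversampled grid over the 0.9 - 2.35 micron bandpass
lam = np.linspace(0.9, 2.35, 30000)   # microns
flux = np.ones_like(lam)              # placeholder spectrum
lam_ds, flux_ds = one_point_per_resolution_element(lam, flux)
print(len(lam), "->", len(lam_ds), "points")  # ~5760 resolution elements
```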
We acknowledge use of the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315. This work benefited from the 2022 Exoplanet Summer Program in the Other Worlds Laboratory (OWL) at the University of California, Santa Cruz, a program funded by the Heising-Simons Foundation. MRL acknowledges support from NASA XRP grant 80NSSC19K0293. JF acknowledges support from NSF Award Number 1909776 and NASA Award Number 80NSSC22K0142, as well as the Heising-Simons Foundation.
2301.02455
Null energy condition violation: Tunnelling versus the Casimir effect
We show that tunnelling between two degenerate minima, as allowed in a finite volume, leads to a non-extensive symmetric ground state. This results in Null Energy Condition violation for sufficiently low temperatures, when a continuous set of momenta in the box containing the field is assumed. Taking into account discrete momenta can modify this picture and is achieved via the addition of the Casimir energy to the tunnelling-induced ground state energy. Focusing on zero temperature, these non-trivial effects are found to compete, depending on the typical length scales involved.
Jean Alexandre, Drew Backhouse
2023-01-06T10:39:56Z
http://arxiv.org/abs/2301.02455v2
# NEC violation: Tunnelling versus the Casimir effect ###### Abstract We show that tunnelling between two degenerate minima, as allowed in a finite volume, leads to a non-extensive symmetric ground state. This results in Null Energy Condition violation for sufficiently low temperatures, when a continuous set of momenta in the box containing the field is assumed. Taking into account discrete momenta can modify this picture and is achieved via the addition of the Casimir energy to the tunnelling-induced ground state energy. Focusing on zero temperature, these non-trivial effects are found to compete, depending on the typical length scales involved. + Footnote †: preprint: KCL-PH-TH/2023-01 ## I Introduction Spontaneous Symmetry Breaking (SSB) is strictly speaking valid for infinite volumes only, where tunnelling between degenerate vacua is completely suppressed. On the other hand, for a field confined in a box of finite volume, tunnelling between degenerate vacua is allowed and we study here the energetic consequences. Involving tunnelling in the quantisation of a system automatically takes into account the different vacua and is known to lead to a convex effective action [1]. This is not the case in the situation of SSB, where the different vacua are decoupled and quantisation over a single vacuum does not necessarily lead to convexity. Taking into account several degenerate vacua in the partition function comes with a remarkable energetic feature, generated dynamically: the effective action is non-extensive, as was shown in [2; 3] with a semi-classical approximation for the partition function. The latter works were done in an \(O(4)\)-symmetric Euclidean spacetime though, and to account for a full description of tunnelling one needs a finite spatial volume \(V\) and an independent large Euclidean time \(\beta\). The natural context for these studies is therefore equilibrium field theory at finite temperature \(T=1/\beta\). The corresponding Quantum Mechanics study was done in [4], involving a gas of instantons/anti-instantons which dominates the partition function in the limit of small temperature. It is shown there that the Null Energy Condition (NEC - see [5] for reviews) is violated, as a consequence of a non-extensive effective action induced by tunnelling. The present article extends this study to full 4-dimensional quantum fluctuations, and we find that NEC violation occurs in any finite volume for sufficiently low temperatures. Our study does not however deal with high-temperature symmetry restoration, as seen in the Kibble-Zurek mechanism [6]. We are instead interested in the low-temperature regime, where tunnelling dominates over thermal fluctuations, providing an opportunity to violate the NEC, which the Kibble-Zurek mechanism does not. We first evaluate quantum corrections with continuous momentum for fluctuations above each saddle point, to describe the fundamental dynamical mechanism induced by tunnelling. We then take into account the modification arising from discrete momentum in a finite volume, using results known from studies of the Casimir effect (see [7] for a review). The latter is known to be either attractive or repulsive, depending on the geometry of the box containing the field, as well as the boundary conditions the field satisfies on the walls of the box. As a consequence, as far as NEC violation is concerned, the difference between discrete and continuous momentum can play an important role. 
In Section II we describe the semi-classical approximation in which the partition function is derived, to take into account the different saddle points which are relevant to tunnelling: static saddle points and the instanton/anti-instanton dilute gas. Details of the calculations with continuous momentum are given in Appendices A and B. Section III focuses on the ground state of the effective action, with a non-extensive energy density providing the origin of NEC violation. The maximum effect occurs at zero temperature, the regime in which we introduce corrections arising from discrete momentum in Section IV via the Casimir energy. We find that tunnelling and the Casimir effect compete when the typical size of the box containing the field is of the order of the Compton wavelength of the corresponding particle. For a larger box, the Casimir effect seems to be dominant. To summarise our results: in the low-temperature regime, the sum of energy density \(\rho\) and pressure \(p\) can be written in the form \[\rho+p\simeq\ A_{\rm finite\mbox{-}T}\ +\ B_{\rm tunnelling}\ +\ C_{\rm Casimir}\,\] where * the finite-temperature contribution \(A\) is always positive (and vanishes exponentially for \(T\to 0\)); * the tunnelling contribution \(B\), calculated with continuous momentum, is always negative (and vanishes exponentially for \(V\to\infty\)); * the discrete momentum correction \(C\) has a sign which depends on the geometry and topology of the finite box containing the field (and vanishes for \(V\to\infty\)). As expected, the NEC is satisfied at zero temperature and for infinite volume, where \(\rho+p=0\) for a homogeneous vacuum. ## II Semi-Classical Approximation ### Model Consider a single real scalar field \(\phi(t,\mathbf{x})\) in Euclidean space, at finite temperature \(T=1/\beta\) and in a three-dimensional spatial volume \(V\), described by the Euclidean action \[\int_{0}^{\beta}dt\int_{V}d^{3}x\left(\frac{1}{2}(\partial\phi)^{2}+\frac{\lambda}{24}(\phi^{2}-v^{2})^{2}+j\phi\right). \tag{1}\] The finite volume is represented by a physical box containing the scalar field, in which we assume continuous momenta to calculate quantum corrections. Section IV discusses corrections arising from discrete momentum and the boundary conditions the field satisfies at the walls of the box. Finite temperature requires field configurations to have periodic boundary conditions in Euclidean time and, as later discussed, has an impact on the saddle point configurations which are allowed in the partition function. Introducing the dimensionless variables \(\tau\equiv\omega t\) and \[\omega\equiv v\sqrt{\frac{\lambda}{6}}\ ,\quad\varphi\equiv\sqrt{\frac{\lambda}{6}}\frac{\phi}{\omega}\ ,\quad k\equiv\sqrt{\frac{\lambda}{6}}\frac{j}{\omega^{3}}\, \tag{2}\] leads to the bare action \[S[\varphi]=\frac{\lambda v^{4}}{12\omega}\int_{0}^{\omega\beta}d\tau\int_{V}\!d^{3}x\Big{(}(\varphi^{\prime})^{2}+\frac{1}{\omega^{2}}(\nabla\varphi)^{2}+\frac{1}{2}(\varphi^{2}-1)^{2}+2k\varphi\Big{)}\. \tag{3}\] With a source and periodic boundary conditions, the fundamental saddle point we consider behaves as the function (11). Assuming the jumps occur over a short time in comparison to \(\beta\), the instanton/anti-instanton pair spends the same time \(\beta/2\) exponentially close to each static saddle point, resulting in an action for such a pair \(\varphi_{\rm pair}\) of \[S_{\rm pair}\simeq\frac{1}{2}S_{L}+\frac{1}{2}S_{R}+2S_{\rm inst}. 
\tag{12}\] Revisiting the analogy of classical mechanics in the upside-down potential \(V(\varphi)\), the other possible saddle points consist of periodic oscillations made of \(n\) instanton/anti-instanton pairs, where the value of \(n\) depends on how "exponentially close" to a static saddle point the oscillations begin. An example of an exact saddle point is given in Fig. 1(a). Assuming the total Euclidean time \(\beta\) is large enough to leave the structure of pairs intact, the time spent close to one static saddle point is the same as the time spent close to the other and the total action for \(n\) pairs is \[S_{\rm n\ pairs}\simeq\frac{1}{2}S_{L}+\frac{1}{2}S_{R}+2nS_{\rm inst}. \tag{13}\] The latter "crystalline" structure, with \(n\) periodic oscillations, corresponds to an exact solution of the equation of motion. For large \(\beta\), where the average distance between instantons and anti-instantons remains large compared to their width, a translation of each jump leaves the action \(S_{\rm inst}\) invariant and the resulting highly degenerate "gas" of instanton/anti-instanton pairs dominates the partition function. An example of an approximate saddle point is given in Fig. 1(b). In this "dilute gas" approximation, the \(n\) instanton/anti-instanton pair configurations spend on average an equal time \(\beta/2\) close to each static saddle point, with the same total action (13) as for an exact \(n\)-pair configuration as a result of the translational invariance of jumps. ### Partition function The partition function is evaluated in the semi-classical approximation via a sum over the two static saddle points, \(\varphi_{L}\) and \(\varphi_{R}\), and the dilute gas of \(n\) instanton/anti-instanton pairs for all possible values of \(n\). Together with the corresponding one-loop fluctuation factors \(F_{L,R}\) and \(F_{n}\), the semi-classical approximation of the partition function reads \[Z[k] \simeq F_{L}(\beta)\exp(-S_{L})+F_{R}(\beta)\exp(-S_{R})\] \[+ \sum_{n=1}^{\infty}\left(\prod_{i=1}^{2n}\int_{\tau_{i-1}}^{\omega\beta}d\tau_{i}\right)F_{n}\exp(-S_{\rm n\ pairs})\. \tag{14}\] In the latter expression, the product of integrals over the times \(\tau_{i}\) where the jumps occur corresponds to the zero-mode of the fluctuation factor for the saddle point made of \(n\) instanton/anti-instanton pairs. Indeed, the translational invariance of the action means that the \(i_{\rm th}\) jump can happen at any time \(\tau_{i}\in[\tau_{i-1},\omega\beta]\). For finite temperature though, there is a maximum number of instanton/anti-instanton pairs; however, the error made in the summation for \(n\to\infty\) is negligible since each term is suppressed by \(\exp(-nS_{\rm inst})\). With the fluctuation factors derived in Appendix A and Appendix B, we can write \[Z[k] = \exp\Big{(}-\Sigma_{L}(\beta)\Big{)}+\exp\Big{(}-\Sigma_{R}(\beta)\Big{)} \tag{15}\] \[+\exp\Big{(}-\Sigma_{\rm gas}(\beta)\Big{)}\,\] where \(\Sigma_{L},\Sigma_{R}\) and \(\Sigma_{\rm gas}\) are the connected graphs generating functionals for the static saddle points and the gas of instanton/anti-instanton pairs respectively. Figure 1: Examples of exact and approximate saddle points. In the dilute gas approximation, the difference between the corresponding actions is of order \(B\omega\beta\exp(-\omega\beta)\ll 1\), and the partition function is dominated by the whole set of approximate saddle points. We 
note that an instanton or anti-instanton does not lead to any imaginary part in the partition function, unlike a bounce, since the former are monotonic functions of the Euclidean time, such that the fluctuation operator does not have negative eigenvalues [10]. #### II.2.1 Static saddle points One-loop quantum corrections can be split into two contributions: the zero-temperature corrections, containing all the divergences, and the divergence-free, temperature-dependent corrections. The zero-temperature contribution is calculated in [3] and is expressed in terms of the renormalised parameters. It is mentioned here that, in the case of several saddle points and in order to avoid confusion between loop orders, renormalisation should be done at the level of the individual connected graphs generating functionals before performing the Legendre transform. The finite-temperature contribution can be calculated using the Schwinger proper time representation - see Appendix A - and the overall contribution is \[\Sigma_{L,R}(\beta)\] \[= B_{r}\omega_{r}\beta\Bigg{(}(\varphi_{L,R}^{2}-1)^{2}+4k\varphi_{L,R}\] \[+\frac{\lambda_{r}}{96\pi^{2}}(3\varphi_{L,R}^{2}-1)^{2}\ln\left(\frac{3}{2}\varphi_{L,R}^{2}-\frac{1}{2}\right)\] \[-\frac{\lambda_{r}(3\varphi_{L,R}^{2}-1)}{3\pi^{2}}\sum_{l=1}^{\infty}\frac{K_{2}\left(l\omega_{r}\beta\sqrt{3\varphi_{L,R}^{2}-1}\right)}{(l\omega_{r}\beta)^{2}}\Bigg{)}\. \tag{16}\] In the previous expression, the renormalised parameters are \[\lambda_{r} \equiv \lambda-\frac{3\lambda^{2}}{32\pi^{2}}\log\left(\frac{\Lambda^{2}}{\lambda v^{2}}\right)\, \tag{17}\] \[v_{r}^{2} \equiv v^{2}-\frac{3\Lambda^{2}}{16\pi^{2}}+\frac{\lambda v^{2}}{16\pi^{2}}\log\left(\frac{\Lambda^{2}}{\lambda v^{2}}\right)\,\] \[B_{r} \equiv \frac{\lambda_{r}v_{r}^{4}V}{24\omega_{r}}\,\] \[\omega_{r} \equiv v_{r}\sqrt{\frac{\lambda_{r}}{6}}\,\] and \(K_{2}(z)\) is a modified Bessel function of the second kind with asymptotic behaviour \[K_{2}(z\rightarrow\infty)\simeq e^{-z}\sqrt{\frac{\pi}{2z}}. \tag{18}\] We note that \(l\) does not correspond to Matsubara modes. Also, the temperature-independent part of the expression (16) reproduces the zero-temperature result derived in [3]. #### II.2.2 Gas of instanton/anti-instanton pairs The evaluation of \(\Sigma_{\rm gas}\) involves the fluctuation factor above each jump and includes a summation over the allowed jump positions in the interval \(\tau\in[0,\omega\beta]\)[10]. The additional contribution of quantum fluctuations arises from the "flat" parts of the instanton/anti-instanton configurations, which are exponentially close to each static saddle point for the approximate average time of \(\beta/2\) when neglecting the width of each jump compared to \(\beta\). Performing the resummation over instantons/anti-instantons, we show in Appendix B that the corresponding connected graphs generating functional is then \[\Sigma_{\rm gas}(\beta) \simeq \Sigma_{L}(\beta/2)+\Sigma_{R}(\beta/2)\] \[-\ln\Big{(}\cosh(\bar{N})-1\Big{)}\, \tag{19}\] where \[\bar{N}\equiv\omega_{r}\beta\sqrt{\frac{6}{\pi}S_{\rm inst}}\ e^{-S_{\rm inst}}\, \tag{20}\] corresponding to the average number of instanton/anti-instanton pairs at temperature \(T=1/\beta\). In this article we are interested in the limit \(\omega_{r}\beta\gg 1\) for a fixed volume - and thus fixed action \(S_{\rm inst}\) - such that we consider the situation where \(\bar{N}\gg 1\), corresponding to the full tunnelling regime. In the situation where \(\beta\) is fixed and \(V\) becomes large we have \(\bar{N}\ll 1\), where tunnelling is suppressed and the system is better approximated by SSB [4]. 
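To make these two regimes concrete, the following is a minimal numerical sketch of eq. (20). The explicit form of \(S_{\rm inst}\) is not reproduced in this excerpt, so we simply treat it as an input that grows with volume, as suggested by the scaling \(S_{\rm inst}\sim(mL)^{3}/\lambda\) quoted in Section IV:

```python
import numpy as np

def n_bar(omega_beta, s_inst):
    """Average number of instanton/anti-instanton pairs, eq. (20)."""
    return omega_beta * np.sqrt(6.0 * s_inst / np.pi) * np.exp(-s_inst)

# Fixed volume (fixed S_inst), lowering the temperature: N_bar grows
# linearly with omega*beta, and tunnelling eventually dominates.
for ob in (1e2, 1e4, 1e6):
    print(f"omega*beta = {ob:.0e}, S_inst = 10  -> N_bar = {n_bar(ob, 10.0):.3g}")

# Fixed temperature, growing volume: S_inst grows with V, so N_bar is
# exponentially suppressed and SSB is recovered.
for s in (10.0, 30.0, 100.0):
    print(f"omega*beta = 1e4, S_inst = {s:5.0f} -> N_bar = {n_bar(1e4, s):.3g}")
```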
## III Non-extensive ground state ### One-particle-irreducible effective action From the partition function evaluated for a constant source \(j\), the classical field is obtained as \[\phi_{c}\equiv-\frac{1}{Z}\frac{\delta Z}{\delta j}\ \rightarrow\ -\frac{1}{V\beta Z}\frac{\partial Z}{\partial j}\, \tag{21}\] which, in terms of the dimensionless quantities previously introduced, can be written as \[\varphi_{c}=-\frac{1}{4B_{r}\omega_{r}\beta Z}\frac{\partial Z[k]}{\partial k}. \tag{22}\] From the expression (15) for the partition function, together with the expressions (16) and (19), the classical field is expanded in powers of the source \(k\) \[\varphi_{c}=\left(-f_{0}+\frac{\lambda_{r}}{128\pi^{2}}f_{1}\right)k+{\cal O}(k^{3})\, \tag{23}\] where \[f_{0} \equiv \frac{1+16B_{r}\omega_{r}\beta+\cosh(\bar{N})}{2\Big{(}1+\cosh(\bar{N})\Big{)}}\] \[f_{1} \equiv \frac{7+32B_{r}\omega_{r}\beta+7\cosh(\bar{N})}{1+\cosh(\bar{N})}\. \tag{24}\] Consistently with the symmetry of the bare potential, the classical field \(\phi_{c}\) is an odd function of \(k\): the even powers of \(k\) cancel out in the expression for \(\varphi_{c}\) after adding the contribution of the different saddle points, leading to the mapping \(k=0\Leftrightarrow\varphi_{c}=0\). We then perform the Legendre transform, after expressing the source as a function of the classical field \[k(\varphi_{c})=-\frac{1}{2}\left(g_{0}+\frac{\lambda}{16\pi^{2}}g_{1}\right)\varphi_{c}+\mathcal{O}(\varphi_{c}^{3})\, \tag{25}\] where \[g_{0} \equiv\frac{4\Big{(}1+\cosh(\bar{N})\Big{)}}{1+16B_{r}\omega_{r}\beta+\cosh(\bar{N})} \tag{26}\] \[g_{1} \equiv\frac{\Big{(}1+\cosh(\bar{N})\Big{)}\Big{(}7+32B_{r}\omega_{r}\beta+7\cosh(\bar{N})\Big{)}}{\Big{(}1+16B_{r}\omega_{r}\beta+\cosh(\bar{N})\Big{)}^{2}}\.\] The effective action for a constant configuration is finally \[\Gamma(\varphi_{c}) =-\ln Z\Big{(}k(\varphi_{c})\Big{)}-4B_{r}\omega_{r}\beta\int k(\varphi_{c})\ d\varphi_{c} \tag{27}\] \[=\Gamma(0)+B_{r}\omega_{r}\beta\left(g_{0}+\frac{\lambda}{16\pi^{2}}g_{1}\right)\varphi_{c}^{2}+\mathcal{O}(\varphi_{c}^{4})\,\] where \[\Gamma(0) =-\ln Z(0) \tag{28}\] \[=-\ln\left(2e^{-\Sigma_{0}(\beta)}+e^{-2\Sigma_{0}(\beta/2)}\Big{(}\cosh(\bar{N})-1\Big{)}\right)\,\] and \(\Sigma_{0}\equiv\Sigma_{L}|_{k=0}=\Sigma_{R}|_{k=0}\). The effective potential \(U_{\text{eff}}\) is then given by \[\Gamma(\phi_{c})=V\beta U_{\text{eff}}(\phi_{c})\, \tag{29}\] and, as expected, it satisfies the following properties: * it is a convex function of \(\phi_{c}\), since the mass term is positive; * the ground state is at \(\varphi_{c}=0\), or equivalently \(k=0\); * it has a non-trivial volume-dependence and is thus non-extensive. For the following studies of NEC violation we focus on the ground state \(\varphi_{c}=0\). ### NEC violation The ground state density \(\rho\) and pressure \(p\) are obtained from the free energy \[F=\frac{1}{\beta}\Gamma(0)=-\frac{1}{\beta}\ln Z(0)\, \tag{30}\] and their sum can be written as [4] \[\rho+p =\frac{1}{V}\left(F-T\frac{\partial F}{\partial T}\right)-\frac{\partial F}{\partial V}\] \[=-T\frac{\partial U_{\text{eff}}(0)}{\partial T}-V\frac{\partial U_{\text{eff}}(0)}{\partial V}. 
\tag{31}\] From the expression (28), we obtain for \(\omega_{r}\beta\gg 1\) \[\rho+p \simeq\frac{4\omega_{r}^{5/2}}{(\sqrt{2}\pi\beta)^{3/2}}\ e^{-\omega_{r}\beta/\sqrt{2}} \tag{32}\] \[-\frac{\omega_{r}}{V}\left(S_{\text{inst}}+\frac{1}{2}\right)\sqrt{\frac{6}{\pi}S_{\text{inst}}}\ e^{-S_{\text{inst}}}\.\] On the right-hand side, the first term corresponds to thermal fluctuations and the second term corresponds to tunnelling. These terms compete for the overall sign of \(\rho+p\), leading to the following cases: * _Infinite volume:_\(\rho+p\geq 0\) In the limit of infinite volume tunnelling is suppressed, as seen via the vanishing of the average number (20) of instanton/anti-instanton pairs for any fixed temperature: \(\lim_{V\to\infty}\bar{N}=0\) for fixed \(\beta\). Hence only thermal fluctuations contribute and \[\rho+p=\frac{4\omega_{r}^{5/2}}{(\sqrt{2}\pi\beta)^{3/2}}\ e^{-\omega_{r}\beta/\sqrt{2}}\,\] (33) with \(\rho+p\to 0\) as the temperature goes to \(0\) or equivalently \(\beta\to\infty\). This result is not surprising: the limit of infinite volume corresponds to SSB and, as expected, the NEC is satisfied; * _Finite volume and zero temperature:_\(\rho+p<0\) In this situation, only the tunnelling term contributes and \[\rho+p=-\frac{\omega_{r}}{V}\left(S_{\text{inst}}+\frac{1}{2}\right)\sqrt{\frac{6}{\pi}S_{\text{inst}}}\ e^{-S_{\text{inst}}}\.\] (34) The NEC is violated as a consequence of the explicit volume-dependence of the effective potential; * _Boundary \(\rho+p=0\)_ We sketch in Fig. 2 the boundary \(V(T)\) between the region where the NEC is satisfied and the region where the NEC is violated. Figure 2: The boundary between the regions where the NEC is satisfied and where it is violated due to the competition of tunnelling and thermal fluctuations. The plot shows the curve \(V(T)\) in terms of the dimensionless variables used in this article. Finally, we note that NEC violation is suppressed exponentially with the volume, unlike the power law suppression which is found with \(O(4)\)-symmetric Euclidean spacetime coordinates [3; 19]. ## IV Discrete momentum corrections We focus here on the ground state obtained for \(k=0\) in the case of zero temperature, where NEC violation arising from tunnelling is maximal. The previous sections ignore quantisation of momentum when calculating the connected graphs generating functional for each static saddle point in a finite volume. As we explain below, the evaluation of \(\Sigma_{L,R}\) with discrete momentum consists of taking into account the relevant Casimir energy. There is no such contribution from the jumps in the instantons/anti-instantons since the corresponding one-loop corrections do not depend on momentum. ### Vacuum energy The Casimir contribution to the connected graphs generating functional is defined as \[\Sigma_{\rm Cas}\equiv\Sigma_{L,R}|_{\rm discrete}-\Sigma_{L,R}|_{\rm continuum}\, \tag{35}\] where the ultraviolet divergences cancel out since they are identical in the discrete and continuum cases. For zero temperature and vanishing source, the expression (16) gives \[\Sigma_{L,R}(k=0,T=0)|_{\rm continuum}=\lim_{\beta\to\infty}\Sigma_{0}(\beta)=0\, \tag{36}\] such that, instead of eq.(28), one-loop corrections obtained with discrete momentum lead to \[\Gamma(0) = -\ln\Big{(}2e^{-\Sigma_{\rm Cas}}+e^{-\Sigma_{\rm Cas}}\Big{(}\cosh(\bar{N})-1\Big{)}\Big{)} \tag{37}\] \[= \Sigma_{\rm Cas}-\ln\Big{(}\cosh(\bar{N})+1\Big{)}\.\] The above expression takes advantage of the proportionality between \(\Sigma_{0}\) and \(\beta\) in the limit of vanishing temperature, such that \[2\Sigma_{0}(\beta/2)\to\Sigma_{0}(\beta), \tag{38}\] as \(\beta\to\infty\). 
In the situation of one saddle point, and therefore no tunnelling, \(\Gamma(0)=\Sigma_{\rm Cas}=\beta E_{\rm Cas}\), where \(E_{\rm Cas}\) is the Casimir energy corresponding to quantum fluctuations about a single vacuum at \(\pm v\) (where one has approximately quadratic fluctuations with mass \(m=\sqrt{2}\omega_{r}\)). Hence \[U_{\rm eff}(0)=\frac{E_{\rm Cas}}{V}-\frac{1}{V\beta}\ln\Big{(}\cosh(\bar{N})+1\Big{)}\, \tag{39}\] and we see the additive nature of the Casimir effect and tunnelling contributions, similarly to the finite-temperature contribution. The sum of density and pressure reads finally \[\rho+p = \frac{E_{\rm Cas}}{V}-\frac{\partial E_{\rm Cas}}{\partial V}\] \[-\frac{\omega_{r}}{V}\left(S_{\rm inst}+\frac{1}{2}\right)\sqrt{\frac{6S_{\rm inst}}{\pi}}\ e^{-S_{\rm inst}}\. \tag{40}\] ### Casimir contribution to the NEC The Casimir energy is highly sensitive to the geometry of the box containing the field, as well as the boundary conditions used on the corresponding surfaces [7]. For a scalar field \(\varphi(t,x)\) in the interval \(x\in[0,L]\), for example, the possible choices of boundary conditions are defined as follows: Dirichlet: \[\varphi(t,0)=\varphi(t,L)=0\] (41) Neumann: \[\partial_{x}\varphi(t,0)=\partial_{x}\varphi(t,L)=0\] Periodic: \[\varphi(t,0)=\varphi(t,L)\.\] (42) For the cases we consider, the asymptotic form of the Casimir effect is identical for both Dirichlet and Neumann boundary conditions. We thus consider mixed boundary conditions, where different subsets of the boundary can possess either Dirichlet or Neumann conditions. For the case of mixed boundary conditions, the Casimir energy is dependent on the size/curvature of the material boundaries, and for the case of periodic boundary conditions, it is dependent on the period length/curvature of the non-trivial spacetime. A 'general rule' states that flat geometries lead to exponential suppression of the Casimir energy for \(mL\gg 1\), where \(L\) is the length scale of the relevant boundaries, and that curved geometries lead to power-law suppression of the Casimir energy for \(mR\gg 1\), where \(R\) is the radius of curvature of the relevant surfaces. There are exceptions to this general rule though, which are highlighted in the following examples. \(\bullet\)_Dirichlet boundary conditions, flat boundaries_ The original Casimir configuration consists of a scalar field constrained between two parallel, flat mirrors with surface area \(A\) and separation \(a\), with the scalar field satisfying Dirichlet conditions on the boundaries. The corresponding Casimir energy is [7] \[E_{\rm Cas}\simeq\left\{\begin{array}{ll}-\frac{A\pi^{2}}{1440a^{3}}&\quad{\rm for}\quad am\ll 1\\ -\frac{A}{8\sqrt{2}}\left(\frac{m}{\pi a}\right)^{3/2}\ e^{-2ma}&\quad{\rm for}\quad am\gg 1\end{array}\right. \tag{43}\] and is always negative. \(\bullet\)_Dirichlet boundary conditions, curved boundaries_ For dimensional reasons, the Casimir energy for a scalar field confined within the curved boundary of a 2-sphere of radius \(R\) with Dirichlet boundary conditions is given in terms of the dimensionless function \[E_{\rm Cas}=\frac{1}{R}f(mR)\, \tag{44}\] and is found to obey power law suppression in \(mR\), for \(mR\gg 1\)[12]. 
\(\bullet\)_Periodic boundary conditions, flat spacetime_ For a scalar field confined to the surface of a 3-torus (a rectangular box with periodic boundary conditions), the sign of the Casimir energy depends on the ratio of the lengths of the box and we have [13] \[E_{\rm Cas}\simeq-\frac{(mL)^{3/2}}{L}\exp(-mL)\quad\mbox{ for }\quad mL\gg 1\, \tag{45}\] where \(L\) is the typical period length. \(\bullet\)_Periodic boundary conditions, curved spacetime_ For a scalar field confined to the surface of a 3-sphere with radius \(R\), we would expect the asymptotic form to be a power law in \(R\). However, this special case is an exception to the general rule as a consequence of the accidental vanishing of the heat-kernel coefficients (see Sec. 3 of [7] for details). The resulting Casimir energy has instead an exponential asymptotic form, as in the case of flat geometries [14] \[E_{\rm Cas}\simeq+\frac{(mR)^{5/2}}{R}\exp(-2\pi mR)\quad\mbox{ for}\quad mR\gg 1. \tag{46}\] The above examples display how the Casimir effect for a massive scalar field is at most suppressed by the exponential \(e^{-mL}\), where \(L\) is a typical size of the boundary containing the field. On the other hand, the tunnelling contribution to the NEC, calculated with continuous momentum, is proportional to \[e^{-S_{\rm inst}}\sim\exp\left(-\frac{(mL)^{3}}{\lambda}\right)\, \tag{47}\] and is therefore negligible compared to the Casimir contribution in the regime \(mL\gg\sqrt{\lambda}\). For \(mL\sim\sqrt{\lambda}\) though, tunnelling competes with the Casimir effect and can change the sign of \(\rho+p\) in the situation where the Casimir energy is positive. As an example, we sketch in Fig. 3 the boundary \(R(\lambda)\) between the region where the NEC is satisfied and the region where it is violated, due to the competition between tunnelling and the Casimir effect on a 3-sphere. Figure 3: The boundary between the regions where the NEC is satisfied and where it is violated, due to the competition of the Casimir energy and tunnelling at zero temperature on a 3-sphere. The plot shows the curve \(R(\lambda)\) in terms of the dimensionless variables used in this article. We note however two important points regarding the Casimir examples cited here: _(i)_ they are valid for ideal surfaces only, and a realistic confining mechanism for the scalar field would lead to a modification of the Casimir vacuum energies, especially if the field is confined by an external potential instead of a physical box [15]; _(ii)_ they assume free scalar fields and ignore their self-interactions. On the other hand, the tunnelling mechanism described here: _(i)_ requires the field to be self-interacting; _(ii)_ is not sensitive to the geometry/topology of the box containing the field. Hence the conclusions regarding which effect dominates could be modified by a more thorough study, depending on the situation which is considered. Finally, the Average Null Energy Condition is not violated by the present mechanism. Indeed, if we take into account the energy necessary to maintain the confining mechanism, the overall ground state of the system does not violate the NEC [16], consistently with what is expected from causality [17]. ## V Conclusions Tunnelling between degenerate vacua is exponentially suppressed with the volume of the box containing the field, but nevertheless allows the possibility of NEC violation at low temperatures. Taking into account discrete momentum of fluctuations in a finite volume implies this effect is mainly relevant for situations where the typical size of the box is not too large compared to the Compton wavelength of the particle, and where tunnelling can lead to an overall NEC violation. 
A potential application lies in axion physics, where the de Broglie wavelength can be of order 1 kpc [18] with the confinement provided by a gravitational well. Exponential suppression in the volume could potentially be avoided by a consideration of non-degenerate vacua, where other saddle points with a volume-independent action become relevant, as in the original study of false vacuum decay [8]. The resulting effective action would be non-extensive in a certain regime of the classical field, but more studies need to be done for the status of NEC violation in the corresponding vacuum. Finally, NEC violation could play an important role in Early Universe Cosmology, where tunnelling could provide a dynamical mechanism for a cosmological bounce, as explained in [19]: as the Universe contracts, tunnelling switches on and violates the NEC, which induces a bounce after which tunnelling is suppressed as the Universe expands. This scenario necessitates the study of tunnelling in a Friedmann-Lemaître-Robertson-Walker background though, and is left for future work. ## Acknowledgements The authors would like to thank Klaus Kirsten for valuable correspondence regarding the Casimir effect, and JA would like to thank Janos Polonyi for enlightening discussions. This work is supported by the Leverhulme Trust (grant RPG-2021-299) and the Science and Technology Facilities Council (grant STFC-ST/T000759/1). For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. ## Appendix A Fluctuation factor for a static saddle point The fluctuation factors for the static saddle points are calculated with continuous 3-dimensional momenta, introducing the cut-off \(\Lambda\) in the Schwinger proper time representation of the propagator. Introducing the dimensionless Matsubara frequency \(\nu_{n}\equiv 2\pi n/\omega\beta\), we have \[\mathrm{Tr}\left\{\ln\left(\delta^{2}S[\varphi_{i}]\right)\right\} \tag{10}\] \[= V\int\frac{d^{3}p}{(2\pi)^{3}}\sum_{n=-\infty}^{\infty}\int_{1/\Lambda^{2}}^{\infty}\frac{\mathrm{d}s}{s}\mathrm{e}^{-4B\omega\beta s(p^{2}/\omega^{2}+\nu_{n}^{2}+3\varphi_{i}^{2}-1)}\] \[= \frac{V\omega^{3}}{2\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{1/X^{2}}^{\infty}\frac{\mathrm{d}x}{x}\int_{0}^{\infty}\mathrm{d}q\;q^{2}\mathrm{e}^{-x(q^{2}+\nu_{n}^{2}+3\varphi_{i}^{2}-1)}\] \[= \frac{V\omega^{3}}{8\pi^{3/2}}\sum_{n=-\infty}^{\infty}\int_{1/X^{2}}^{\infty}\frac{\mathrm{d}x}{x^{5/2}}\mathrm{e}^{-x(\nu_{n}^{2}+3\varphi_{i}^{2}-1)}\] \[= \frac{V\omega^{3}}{8\pi^{3/2}}\int_{1/X^{2}}^{\infty}\frac{\mathrm{d}x}{x^{5/2}}\mathrm{e}^{-x(3\varphi_{i}^{2}-1)}\vartheta_{0}\left(\frac{4\pi x}{\omega^{2}\beta^{2}}\right)\;,\] where the dimensionless variables are \[q\equiv\frac{p}{\omega}\quad,\quad x\equiv 4B\omega\beta s\quad,\quad X^{2}\equiv\frac{\Lambda^{2}}{4B\omega\beta}\, \tag{11}\] and \(\vartheta_{0}(y)\) is the Jacobi function \[\vartheta_{0}(y)\equiv\sum_{n=-\infty}^{\infty}\mathrm{e}^{-\pi yn^{2}}. 
\tag{12}\] Making use of the following property \[\vartheta_{0}(y)=y^{-1/2}\vartheta_{0}(1/y)\, \tag{13}\] the above becomes \[\mathrm{Tr}\left\{\ln\left(\delta^{2}S[\varphi_{i}]\right)\right\} \tag{14}\] \[= \frac{V\omega^{4}\beta}{16\pi^{2}}\int_{1/X^{2}}^{\infty}\frac{\mathrm{d}x}{x^{3}}\mathrm{e}^{-x(3\varphi_{i}^{2}-1)}\vartheta_{0}\left(\frac{\omega^{2}\beta^{2}}{4\pi x}\right)\] \[= \frac{V\omega^{4}\beta}{16\pi^{2}}\int_{1/X^{2}}^{\infty}\frac{\mathrm{d}x}{x^{3}}\mathrm{e}^{-x(3\varphi_{i}^{2}-1)}\sum_{n=-\infty}^{\infty}e^{-\omega^{2}\beta^{2}n^{2}/4x}\] \[= \lambda\frac{B\omega\beta}{24\pi^{2}}\Big{(}I_{\Lambda}(\varphi_{i})+I_{T}(\varphi_{i})\Big{)}\,\] where \[I_{\Lambda}(\varphi_{i}) \equiv\int_{1/\Lambda^{2}}^{\infty}\frac{\mathrm{d}x}{x^{3}}\ \mathrm{e}^{-x(3\varphi_{i}^{2}-1)} \tag{15}\] \[I_{T}(\varphi_{i}) \equiv 2\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{\mathrm{d}x}{x^{3}}\ \mathrm{e}^{-x(3\varphi_{i}^{2}-1)-\omega^{2}\beta^{2}n^{2}/4x}\.\] The first integral \(I_{\Lambda}\) is the temperature-independent divergent integral which, after renormalisation, produces the same results as in the zero-temperature case [3]. The second integral \(I_{T}\) is the temperature-dependent contribution corresponding to the finite-temperature corrections. It is finite, which is why the cut-off is taken to infinity in this specific term. This temperature-dependent integral can be written in terms of the modified Bessel functions of the second kind \(K_{2}(z)\) as \[I_{T}(\varphi_{i})=\sum_{n=1}^{\infty}\frac{16(3\varphi_{i}^{2}-1)}{(n\omega\beta)^{2}}K_{2}(n\omega\beta\sqrt{3\varphi_{i}^{2}-1}). \tag{16}\] Together with the integral \(I_{\Lambda}\), the connected graphs generating functional for homogeneous saddle points is given by eq.(16). ## Appendix B Fluctuation factor for the instanton/anti-instanton gas We calculate here the contribution \(\exp(-\Sigma_{\mathrm{gas}})\) to the partition function (15), following the known approach in studies of tunnelling effects [10]. The invariance of the action for \(n\) instanton/anti-instanton pairs under the translation of the jumps leads to the degeneracy factor in the partition function \[\left(\prod_{i=1}^{2n}\int_{\tau_{i-1}}^{\omega\beta}\mathrm{d}\tau_{i}\right)=\frac{(\omega\beta)^{2n}}{(2n)!}\, \tag{17}\] where \(\tau_{i}\in[\tau_{i-1},\omega\beta]\) and \(\tau_{0}=0\), since successive instanton jumps can only occur after previous ones. Each jump has an associated fluctuation factor \(\sqrt{6S_{\rm inst}/\pi}\) and thus the total fluctuation factor is given by the product of the contributions of the "flat" parts of the \(n\) pairs of instantons/anti-instantons and the \(n\) pairs of jumps. On average, each configuration of \(n\) instanton/anti-instanton pairs spends the same time \(\simeq\beta/2\) close to each static saddle point, such that the expression for \(F_{n}\) is finally \[F_{n}=F_{L}(\beta/2)F_{R}(\beta/2)\left(\frac{6S_{\rm inst}}{\pi}\right)^{n}. 
\tag{10}\] Substituting the above results into the partition function (14), along with the total action (13) for \(n\) pairs, yields the total contribution to the partition function due to instanton/anti-instanton pairs \[\exp(-\Sigma_{\rm gas})\] \[=e^{-\Sigma_{L}(\beta/2)}e^{-\Sigma_{R}(\beta/2)}\sum_{n=1}^{\infty}\frac{(\omega\beta)^{2n}}{(2n)!}\left(\frac{6S_{\rm inst}}{\pi}\right)^{n}e^{-2nS_{\rm inst}}\] \[=\exp\Big{(}-\Sigma_{L}(\beta/2)-\Sigma_{R}(\beta/2)\Big{)}\] \[\quad\times\left(\cosh\left(\omega\beta\sqrt{\frac{6S_{\rm inst}}{\pi}}\ e^{-S_{\rm inst}}\right)-1\right)\.\] This leads to the expression (19), where the parameters can be replaced by their renormalised versions, since the overall expression is already at one-loop.
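The resummation in the last step is just the even-power exponential series, \(\sum_{n\geq 1}z^{2n}/(2n)!=\cosh z-1\) with \(z=\bar{N}\). A minimal numerical check (parameter values chosen arbitrarily for illustration):

```python
import math

def n_bar(omega_beta, s_inst):
    """Average number of instanton/anti-instanton pairs, eq. (20)."""
    return omega_beta * math.sqrt(6.0 * s_inst / math.pi) * math.exp(-s_inst)

def gas_sum(nbar, n_max=100):
    """Term-by-term sum over n pairs: each term reduces to nbar^(2n)/(2n)!,
    built up iteratively to avoid overflowing factorials."""
    total, term = 0.0, 1.0
    for n in range(1, n_max + 1):
        term *= nbar * nbar / ((2 * n - 1) * (2 * n))
        total += term
    return total

nb = n_bar(50.0, 5.0)
print(gas_sum(nb))          # ~0.5927
print(math.cosh(nb) - 1.0)  # identical to machine precision
```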
2306.08861
Motion Capture Dataset for Practical Use of AI-based Motion Editing and Stylization
In this work, we proposed a new style-diverse dataset for the domain of motion style transfer. The motion dataset uses an industry-standard human bone structure and thus is industry-ready to be plugged into 3D characters for many projects. We identify the challenges in motion style transfer and encourage future work in this domain by releasing the proposed motion dataset both to the public and the market. We conduct a comprehensive study on motion style transfer in the experiment using the state-of-the-art method, and the results show the proposed dataset's validity for the motion style transfer task.
Makito Kobayashi, Chen-Chieh Liao, Keito Inoue, Sentaro Yojima, Masafumi Takahashi
2023-06-15T05:12:54Z
http://arxiv.org/abs/2306.08861v2
# Motion Capture Dataset for Practical Use of AI-based Motion Editing and Stylization ###### Abstract. In this work, we proposed a new style-diverse dataset for the domain of motion style transfer. The motion dataset uses an industry-standard human bone structure and thus is industry-ready to be plugged into 3D characters for many projects (Figure 1). We identify the challenges in motion style transfer and encourage future work in this domain by releasing the proposed motion dataset both to the public and the market. We conduct a comprehensive study on motion style transfer in the experiment using the state-of-the-art method, and the results show the proposed dataset's validity for the motion style transfer task. **Computing methodologies \(\rightarrow\) Motion capture**, _Machine learning_. ## 1. Introduction Skeletal animation, or rigging, is a widespread method in computer animation where a hierarchical set of interconnected bones represents a character. This high-level representation of the motion makes it more intuitive for animators and engineers to control and revise the motion of characters in games and movies. A way to perform skeletal animation is to explicitly specify the position and rotation of each body joint of a character. Professional artists usually do this in character rigging and motion design. On the other hand, motion capture systems have been widely used to capture high-quality source motion, usually performed by a human actor, which is then retargeted to the target virtual character. Recently, video games, movies, and other CG-based media have used motion capture to capture many human movements. However, motion capture systems are costly, and post-processing, such as cleaning noisy motion, is time-consuming. Therefore, the diversity of motion databases is usually limited, and it remains hard to access motion data that contains various human personalities and emotions. In this paper, we define content as the base of a motion and define style as the human mood and personality tied to the motion. Due to the limited access to motion capture data with various styles, there has been a long-standing research interest in making diverse stylized motions for games and movies that pursue realistic and expressive character animation. However, it is difficult to create new movements that include all the various styles using classical methods, where feature values are manually tuned, such as [12] and [1], because it is hard to express styles mathematically. Recently, motion-to-motion translation methods using neural networks have been actively researched, and these methods help perform random motion generation, motion interpolation, and motion style transfer. Motion style transfer is a neural network-based method that can transfer a motion style to other motions, making the other motions act like the style while retaining their own motion content. To enrich the variety of motion datasets in the industry and take advantage of motion-to-motion translation, in this work, we propose an industry-ready dataset that fits well for the task of motion style transfer. This high-fidelity dataset is created with professional actors using a high-quality motion capture system. The hierarchical structure of the human posture in the dataset follows a general standard in the industry; therefore, it is easy for researchers and developers to plug the motion into their works and products. 
In this paper, we first propose an open-sourced motion dataset that contains general motion, such as walking and running, along with various styles, such as active and exhausted. Next, we conduct a comprehensive experiment using a state-of-the-art motion style transfer method to evaluate the suitability of the proposed dataset. Finally, we introduce an off-the-shelf industry-ready motion dataset comprising a large amount of Emote 1 motions where the actors move their bodies dramatically to express motion styles and emotions. Footnote 1: Emote is a common phrase in video games, meaning quick actions that include hand and body gestures and dances used for character communication. The main contributions of this work can be summarized as follows: Figure 1. Motions included in our dataset applied to the Mirai Komachi character model. 
In this paper, we first propose an open-sourced motion dataset that contains general motion, such as walking and running, along with various styles, such as active and exhausted. Next, we conduct a comprehensive experiment using a state-of-the-art motion style transfer method to evaluate the suitability of the proposed dataset. Finally, we introduce an on-the-shelf industry-ready motion dataset comprising a large amount of Emote 1 motions where the actors move their bodies dramatically to express motion styles and emotions. Footnote 1: Emote is a common phrase in video games, meaning quick actions that include hand and body gestures and dancers used for character communications. The main contribution of this work can be summarized as follow: Figure 1. Motions including in our dataset applied to Mirai Komachi character model. * We open-source a large motion dataset that has a diversity of styles of motion * The proposed motion dataset uses an industrial standard human bone structure and is industry-ready to be plugged into any project 1 * A comprehensive study using a state-of-the-art motion style transfer method verifies the suitability of the proposed dataset on motion style transfer tasks * We have an on-the-shelf emote motion dataset that contains more emoting and motion with various dramatical styles and emotions ## 2. Related Work This chapter reviews recent works for motion datasets and methods of motion style transfer, which inspire this paper proposing a motion dataset with superiority in motion style transfer. ### Motion Dataset Motion data depicts the kinematic information of moving objects, which can be represented using a sequence of poses. In this work, we focus on human motion, where the human poses are usually constructed with positions and rotations of the body joints, linked as a hierarchical bone structure. In previous works, one way to record human motion is to take videos of the person. Human motion is described as sequential frames of images where the pixel region of the human changes over time when the human is moving. These datasets are usually used for general pose estimation methods. Recently, neural network-based methods have been proposed to retrieve the position and rotation information (Cao et al., 2021; Wang et al., 2021), which can be further used for actual applications (Liao et al., 2022; Moryossef et al., 2020; Shiro et al., 2019). Despite the convenience of using existing videos, the fidelity of the motion prediction may greatly reflect the quality of the actual application, such as commercial movies and games and high-resolution motion analysis in sports and rehabilitation (Chan et al., 2011; Liao et al., 2023; Rincon et al., 2016; Zhu et al., 2022). Therefore, to obtain more precise motion data for a particular motion analysis domain, motion capture systems with optical motion capture cameras or inertial sensors are used to capture high-resolution and high-fidelity motion (Guerra-Filho, 2005; Roetenberg et al., 2009). Research groups and institutes have recently built human motion datasets using these motion capture methods (Lakher and Black, 2015; CMU, 2000; Hoyet et al., 2012; Ionescu et al., 2014; Mandery et al., 2015; Muller et al., 2009; Troje, 2002; Xia et al., 2015). With these high-quality motion data, recent works using motion as input have been actively researched. 
However, due to the time-consuming process of creating high-quality motion data using motion capture systems, motion-to-motion methods such as motion style transfer, which expand existing motion datasets, play an important role in the motion dataset research domain (Ma et al., 2010; Xia et al., 2015). In order to generalize motion-to-motion methods, it is important to have access to motion datasets that contain both diverse motion contents and various styles, the lack of which has been a limitation in previous works. In this work, we solve this problem by proposing a new motion dataset that greatly suits the task of motion style transfer. ### Motion Style Transfer Recently, with the development of machine learning technology and with the ease of accessing large image datasets, image-to-image translation methods such as image restoration and style transfer have been well-studied in previous works. However, compared to images, creating high-quality motion data is time-consuming, resulting in a limited amount of motion data with various styles, such as personality and emotion. Therefore, the use of machine learning for motion style transfer is not yet well studied, though it is an active field. In (Holden et al., 2015), the authors introduce CNNs, which have achieved great success in the image domain, to motion style transfer and significantly improve transformation quality. This method solves the difficulty of mathematical representation of style by extracting features from motion via convolutional autoencoders. While this progress has pushed research in motion style transfer further ahead, the above methods need very large datasets because they require paired motions during training, and they can only perform style transfer with the learned content and style. However, the difficulty of collecting motion data makes it hard to prepare a large number of paired motions or sufficient data for all styles to be transferred. (Aberman et al., 2020) proposes a framework in which styles can be extracted from a small sample if a well-trained network is used. This framework achieves motion style transfer by encoding content and style in separate networks and combining them using AdaIN. AdaIN was developed in the context of style transfer technology for images in (Huang and Belongie, 2017), and Aberman et al. are the first to apply it to motion style transfer. This method dramatically improves the quality of the transformation and clearly produces more natural motion outputs. Following the breakthrough of AdaIN in the motion domain, motion style transfer has been actively researched. To tackle the lack of motion datasets and limited style labels, (Wen et al., 2021) proposes an autoregressive flow-based generative model that can be trained without style labels. (Park et al., 2021) addresses the challenge of learning spatial relationships between joints by performing spatial-temporal convolution rather than convolving simply along the time axis. In addition, the application of random noise allows for generating a wide variety of styles. In other directions, (Wang et al., 2021) introduces kinematic-constraint losses into machine learning models in order to generate natural human motions, and (Mason et al., 2022) proposes a framework that can perform real-time style transformation. 
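The AdaIN operation at the core of these approaches is compact: the content features are normalized per channel and then re-scaled and re-shifted with the style features' statistics. Below is a minimal sketch over motion feature maps of shape (channels, frames); in the actual frameworks this operation is applied inside deep encoder-decoder networks, not to raw features.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: make each channel of the content
    features match the mean/std of the corresponding style channel.
    content, style: arrays of shape (channels, frames)."""
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True) + eps
    s_mean = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# Toy check: 4 feature channels over 60 frames
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(4, 60))
style = rng.normal(3.0, 0.5, size=(4, 60))
out = adain(content, style)
print(out.mean(axis=1).round(2), out.std(axis=1).round(2))  # ~3.0, ~0.5
```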
propose Motion Puzzle (Jang et al., 2022), a framework that can be trained on a dataset without style labels and can reflect time-varying styles on specific body parts. These characteristics benefit generalization and provide more precise control of the motion, which is suitable for real applications. We expect that the quality of style transfer will be improved by using this framework; therefore, we decided to use this motion style transfer method for our research and application development. Despite the great achievements in motion style transfer, many limitations still need to be addressed. To encourage future research on motion style transfer, we list the following challenges and research questions found in previous works. A more effective and general style transfer framework should be able to: * Retrieve style features and transfer styles from long motions or from motions where styles change over time * Transfer styles between different motion categories * Adapt different bone structures to a network model (to reuse the network for other motion datasets) * Edit style/content that is unseen during training * Help increase the creativity of the output motion * Let users control the output of the style transfer network so that the output can match their demand ## 3. Dataset This section introduces the proposed motion dataset. Our dataset consists of two sub-datasets: "Bandai-Namco-Research-Motiondataset-1" (Ours-1) and "Bandai-Namco-Research-Motiondataset-2" (Ours-2). Both sub-datasets are available on GitHub. ### Dataset Collection The proposed motion dataset was collected in a motion capture studio of Bandai Namco Studios Inc., which is generally used to collect data for commercial products. The motion capture system (Vicon [Vicon n.d.]) measures the 3D positions of infrared markers attached all over the actor's body. The 3D position data of the infrared markers were analyzed by software from Vicon to reproduce the actor's motion. Motions in the proposed motion dataset were performed by professional motion actors, one female and one male actor for each sub-dataset. The content of the motions was selected from motions that are collected relatively frequently in game production. The style of a motion is defined by properties that affect the behavior of the content, such as characterization, emotion, or state. The motion for each style was decided using the knowledge of motion actors who are experts in human motion. The motion datasets (especially Ours-2) were recorded under the following careful direction to make them suitable for learning motion style conversion: * Motion must be consistent within the same content and style * The differences in motion for the same content between styles should be clear * Style representations should be consistent across contents The recording was scheduled so that the quality of the motion would not vary due to the actors' physical and mental exhaustion. The collected actor motion data was smoothed to remove noise and retargeted to specific character proportions to eliminate differences in motion data due to body shape differences between actors. The proportions of motion data in the proposed motion dataset are consistent with those of the character model Mirai Komachi (Bandai Namco Research Inc.) [1]. In addition, motion data was trimmed so that non-acting scenes such as idling were not included, and content and style annotations were created based on the intent at the time of collection.
Note that the frame rate is reduced to 30 FPS for the publicly available dataset. ### Our Dataset We have published two datasets on GitHub: one is "Bandai-Namco-Research-Motiondataset-1" (Ours-1), and the other is "Bandai-Namco-Research-Motiondataset-2" (Ours-2). Ours-1 contains 17 types of wide-ranging content, including daily activities, fighting, and dancing, and 15 styles, for a total of 36,673 frames. This dataset has a wide variety of contents, as well as expression variety per style. Ours-1 is a pilot dataset collected to estimate which and how much data is needed for Ours-2. Ours-2 contains 10 types of content, mainly focusing on locomotion and hand actions, and 7 styles, for a total of 384,931 frames. Figure 2 shows typical motions from Ours-2. Compared with Ours-1, this dataset contains a single, uniform expression per style and a rich assortment of data per content. The characteristics of our dataset are as follows. * The total number of frames is large, exceeding 400,000 frames across the two sub-datasets. * It uses a skeleton with the same joint structure as humans, which is the standard in the gaming industry. Therefore, the results of style transfer can easily be applied to other humanoid character models by motion retargeting. * It includes various motions, mainly locomotion and hand motions. * Especially in Ours-2, the proposed motion dataset is abundant for all styles in each content, making it easy to use for research and development of motion style transfer. * All data can be downloaded with the unstyled portions already removed, eliminating the need for preprocessing when used for training. ### Dataset Comparison Training a motion style transfer model requires large amounts of motion data. [14] published the Xia dataset of 79,829 frames, including 12 contents and 8 styles, which has been used in many studies [12, 13, 14]. [1] published the BFA dataset of 696,117 frames, including 9 contents and 16 styles. The Ubisoft La Forge Animation dataset (LAFAN dataset) was published in [15]. This dataset contains a total of 496,672 frames, with 15 contents but only one style. More recently, a dataset named 100STYLE was published [16]. This dataset contains 10 contents and 100 styles of locomotion for a total of 4,779,750 frames, which is huge. These datasets have the following issues. * The bone structure used by Xia and BFA, which follows the bone structure of the CMU mocap dataset, and the bone structure used by 100STYLE differ from the skeletons generally used in game creation, making them difficult to use for motion generation in games and other applications. * There are few data that share the same set of content and style, making models difficult to train well. We designed and created the proposed motion dataset to solve these issues. Table 1 compares our dataset with existing datasets, where "same set of content and style" denotes the average amount of data per content-style pair. "General skeleton" indicates whether the motions in a dataset are cleaned for use with a humanoid rig that can be plugged into common game engines in the industry, such as Unity and Unreal Engine. The differences between the datasets' skeletons are shown in Fig. 3. We observe that our dataset has a natural humanoid skeleton in terms of the connection of the joints at the waist and the sides and the length of the limbs. In addition, our skeletons have most of the offsets of each joint aligned with the coordinate axes, making it more intuitive for humans to modify the motion data.
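Because the datasets are cleaned and industry-ready, inspecting them programmatically is straightforward. Assuming the motions are distributed as standard BVH files (the usual interchange format for such mocap releases), the following minimal sketch reads the joint hierarchy and frame information from a file; the path is hypothetical, and a production pipeline would use a full BVH parser.

```
def inspect_bvh(path):
    # BVH files store the joint hierarchy in a HIERARCHY section and
    # per-frame channel values in a MOTION section; we only read the header.
    joints, num_frames, frame_time = [], None, None
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] in ("ROOT", "JOINT"):
                joints.append(tokens[1])        # joint name from the hierarchy
            elif tokens[0] == "Frames:":
                num_frames = int(tokens[1])     # number of recorded frames
            elif tokens[0] == "Frame" and tokens[1] == "Time:":
                frame_time = float(tokens[2])   # seconds per frame
    return joints, num_frames, frame_time

joints, n, dt = inspect_bvh("dataset/walk_active_01.bvh")  # hypothetical file name
print(f"{len(joints)} joints, {n} frames at {1.0 / dt:.0f} FPS")
```

For the public release, the reported frame rate should come out to the 30 FPS noted above.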
### Emote motion dataset To contribute to both the research community and the industry, we have collected an additional dataset that includes complex motions, such as dancing, for more practical motion style transfer. The emote motion dataset is collected using a different setting from the open-sourced dataset. See Appendix A.1 for more details on this dataset. ## 4. Experiments and Results We conducted experiments to demonstrate the applicability of our dataset for motion style transfer. ### Experiments We adopted the algorithm suggested by Jang et al. (Jang et al., 2022). The algorithm is an advanced data-driven approach to motion style transfer; it can perform style transfer for individual body parts and is highly robust in preserving content. A motion style transfer model has two inputs: the content motion \(\mathbf{m}^{s}\) with style \(s\), and the style motion \(\mathbf{n}^{t}\) with style \(t\). The output motion \(\hat{\mathbf{m}}^{t}\) consists of the content of \(\mathbf{m}^{s}\) and represents style \(t\). The model architecture for inference consists of three networks, a content encoder \(E_{C}\), a style encoder \(E_{S}\), and a decoder \(D\), expressed as: \[\hat{\mathbf{m}}^{t}=D\left(E_{C}(\mathbf{m}^{s})|E_{S}(\mathbf{n}^{t})\right). \tag{1}\] We refactored and partially modified the authors' code to apply it to our motion data skeleton, which differs from the skeleton used by the authors in their paper. Details of the modifications to the code are provided in Appendix A.2. The representation of styles, such as characteristics or emotions, depends on the content of the motion. For example, on the one hand, 'active' is expressed in locomotion as vertical movement of the entire body and swinging arms with the elbows bent; on the other hand, in a hand-raising motion it is expressed as the acceleration of the arms and the following motion of the entire body. This is why it is difficult to learn a general representation of a style from limited content motions. Therefore, we trained a model for each content in this paper to improve transfer quality from a limited set of motion data. Figure 3. The bone structure of each dataset. Figure 2. Sample motions of our dataset. Each row of motion is the same style: normal, active, and elderly. Each column of motion is the same content: (a) walk, (b) run, (c) raise-up-right-hand, (d) wave-right-hand. Each skeleton is arranged in time sequence from left (blue) to right (green). ### Results Figure 4 shows the results of motion style transfer using our dataset. Each row corresponds to one transfer set. The model generates the output motion (right column) from the content input motion (left column) and the style input motion (center column). The motion is successfully transferred, preserving the content of the content input and reflecting the style of the style input. In addition, we have experimented with the proposed motion dataset using motion interpolation and style transfer on different body parts. The network learns a smooth latent style space in its embedding layer and can thus generate intermediate motion results that are unseen in the dataset (Figure 5). Moreover, taking advantage of Motion Puzzle's strengths, we are able to produce various stylized motions that have different styles at different body parts (Figure 6). Based on these results, we have confirmed the suitability of the dataset for the task of motion style transfer.
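To make the formulation in Eq. (1) concrete, the following is a minimal PyTorch sketch of the content-encoder/style-encoder/decoder structure, with AdaIN as the combiner. It is an illustration only: the layer choices, channel counts, and AdaIN placement are assumptions for exposition, not the actual Motion Puzzle implementation.

```
import torch
import torch.nn as nn

def adain(content_feat, style_feat, eps=1e-5):
    # Adaptive Instance Normalization: normalize content features over time,
    # then re-scale/shift them with the style features' per-channel statistics.
    c_mean, c_std = content_feat.mean(-1, keepdim=True), content_feat.std(-1, keepdim=True)
    s_mean, s_std = style_feat.mean(-1, keepdim=True), style_feat.std(-1, keepdim=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean

class StyleTransferNet(nn.Module):
    def __init__(self, channels=63, hidden=128):
        super().__init__()
        # E_C and E_S: 1-D convolutions over time on joint-channel features
        self.content_enc = nn.Conv1d(channels, hidden, kernel_size=3, padding=1)
        self.style_enc = nn.Conv1d(channels, hidden, kernel_size=3, padding=1)
        self.decoder = nn.Conv1d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, content_motion, style_motion):
        # m_hat^t = D(E_C(m^s) | E_S(n^t)), with AdaIN as the combiner
        c = self.content_enc(content_motion)
        s = self.style_enc(style_motion)
        return self.decoder(adain(c, s))

net = StyleTransferNet()
m_s = torch.randn(1, 63, 120)   # content motion: (batch, channels, frames)
n_t = torch.randn(1, 63, 120)   # style motion
m_hat = net(m_s, n_t)           # stylized output, same shape as the content input
```

Here each motion clip is a (channels x frames) tensor of joint features; the stylized output keeps the content motion's structure while adopting the style motion's feature statistics.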
Finally, we have performed motion retargeting from the motion data to 3D characters using Unreal Engine 5. As shown in Figure 7, the proposed motion data can be easily plugged into both the original character (Mirai Komachi) and other characters from different public databases. ## 5. Conclusion In this paper, a motion dataset is proposed for use in motion style transfer. The stylized motions were captured from professional actors with a large-scale optical motion capture system. We compare the proposed motion dataset to other datasets used in recent motion style transfer works. A comprehensive experiment demonstrates the suitability of the dataset for motion-to-motion style transfer. To contribute to both the research community and the industry, we release the motion dataset on GitHub for public usage. In addition, another motion dataset, which contains advanced motions with more emote styles, is released to the market for professional usage. All motions in the two datasets use an industry-standard human bone structure and are cleaned so that they can be used directly in any application. In the future, we plan to expand the motion dataset by adding more actors and motion categories. Moreover, we are building a motion editing pipeline for generating various motions for character control in real applications. The pipeline includes fixing raw motion data, rigging, and retargeting to the target character for animations and games. Furthermore, to address the problem of limited motion data, we urge the development of a universal motion style transfer method that learns a general representation from an entire motion dataset and can thus transfer styles among different motion contents. We plan to conduct further experiments on motion style transfer and find the best implementation for industry usage. When ready, we will embed motion stylization into the motion pipeline and release an industry-ready application.
2305.11860
Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs
A popular approach for improving the correctness of output from large language models (LLMs) is Self-Consistency - poll the LLM multiple times and output the most frequent solution. Existing Self-Consistency techniques always generate a constant number of samples per question, whereas a better approach would be to non-uniformly distribute the available budget based on the amount of agreement in the samples generated so far. In response, we introduce Adaptive-Consistency, a cost-efficient, model-agnostic technique that dynamically adjusts the number of samples per question using a lightweight stopping criterion. Our experiments over 17 reasoning and code generation datasets and three LLMs demonstrate that Adaptive-Consistency reduces sample budget by up to 7.9 times with an average accuracy drop of less than 0.1%. Our code and data are available at https://www.sample-step-by-step.info
Pranjal Aggarwal, Aman Madaan, Yiming Yang, Mausam
2023-05-19T17:49:25Z
http://arxiv.org/abs/2305.11860v2
# Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs ###### Abstract A popular approach for improving the correctness of output from large language models (LLMs) is Self-Consistency - poll the LLM multiple times and output the most frequent solution. Existing Self-Consistency techniques always draw a _constant_ number of samples per question, whereas a better approach would be to non-uniformly distribute the available budget based on the amount of agreement in the samples drawn so far. In response, we introduce Adaptive-Consistency, a cost-efficient, model-agnostic technique that _dynamically_ adjusts the number of samples per question using a lightweight stopping criterion. Our experiments over 13 datasets and two LLMs demonstrate that Adaptive-Consistency reduces the sample budget by up to 6.0 times with an average accuracy drop of less than 0.1%.1 Footnote 1: Code and LLM outputs are available at [http://sample-step-by-step.info/](http://sample-step-by-step.info/). ## 1 Introduction The increasing adoption of large language models (LLMs) across various tasks, such as text generation and reasoning (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Mishra et al., 2022), mathematical reasoning (Lewkowycz et al., 2022; Gao et al., 2022), and code generation (Li et al., 2022; Madaan et al., 2023), has underscored the importance of improving the correctness of their outputs. A popular method for achieving this goal is _Self-Consistency_ (Wang et al., 2022), a majority voting technique where multiple output samples are generated for a given input, and the final decision is based on the most frequently occurring output among the samples. Current Self-Consistency methods typically employ a fixed-budget approach, wherein a predetermined number of samples (e.g., 40) are drawn to make a decision. However, as LLMs continue to grow in size and complexity, the sampling time and computational costs associated with majority voting become increasingly challenging. This challenge is particularly evident in high-stakes applications like competition-level code generation (Li et al., 2022), where generating a large number of programs, sometimes up to a million, is essential for maximizing performance. To address this challenge, we introduce _Adaptive-Consistency_, a cost-efficient, model-agnostic majority voting technique. Adaptive-Consistency employs a lightweight stopping criterion that dynamically adjusts the number of samples (\(n\)) for each input, as opposed to using a fixed budget (\(k\)). The intuition is that if a clear majority is established with high confidence after sampling fewer than \(k\) answers (\(n<k\)), there is no need to draw additional samples. Consequently, our method reduces computational costs without compromising output quality. Our experiments show that Adaptive-Consistency achieves \(n<k\) in most cases (on average, \(n\sim 0.25k\) for code-davinci-002 and \(n\sim 0.55k\) for Vicuna-13B) compared to the fixed-budget approach of Self-Consistency. Adaptive-Consistency models the probability distribution over unique samples using a Dirichlet distribution, allowing us to quantify the confidence in the lead of the majority element over the other elements. For instance, if the majority element has a count of 9 out of the first 10 samples, the likelihood of it remaining the majority element even after 40 samples is very high (\(>99\%\)). This allows Adaptive-Consistency to stop sampling at this point, reducing the cost by 30 samples, while Self-Consistency would continue to sample all 40 answers.
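This intuition is easy to check numerically with the Beta-based stopping criterion developed later in Section 2.2. The following SciPy snippet is a sanity check of the 9-out-of-10 example, not code from the paper's release:

```
from scipy.stats import beta

# Counts after 10 samples: the leading answer has 9 votes, the runner-up 1.
# With a Beta(1, 1) prior, the chance the lead holds is P(p2 < 0.5) for
# p2 ~ Beta(v2 + 1, v1 + 1).
v1, v2 = 9, 1
p_majority_holds = beta.cdf(0.5, v2 + 1, v1 + 1)
print(f"{p_majority_holds:.4f}")  # ~0.9941, i.e. > 99%
```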
As a fast inference-time technique requiring no additional training, Adaptive-Consistency provides a convenient off-the-shelf option for all pre-trained language models, offering the flexibility to balance computational cost and performance. We evaluate Adaptive-Consistency on 13 diverse tasks and two LLMs of different scales (Vicuna-13B and code-davinci-002). Our experimental results show that Adaptive-Consistency outperforms Self-Consistency regarding cost efficiency while maintaining comparable output quality. On code-davinci-002, Adaptive-Consistency reduces the number of samples required by a factor of \(3.7\times\), with an average drop in accuracy of 0.08%. On Vicuna-13B, Adaptive-Consistency requires sampling \(2\times\) fewer samples, with almost no drop in accuracy. In summary, our contributions are: * We propose Adaptive-Consistency, a novel and cost-efficient sampling technique for large language models that dynamically adjusts the number of samples using a lightweight stopping criterion based on the stability of the majority element. * We develop different _stopping criteria_ for our algorithm, each with its own advantages and disadvantages, such as providing probabilities, adaptability to various scenarios, and ease of implementation. * We present experimental results demonstrating that Adaptive-Consistency requires fewer samples in most cases compared to the fixed-budget approach of Self-Consistency, while maintaining comparable output quality across various natural language reasoning tasks. ## 2 Methodology ### Background **In-Context Few-Shot Prompting** In-context few-shot prompting is a technique employed by large language models (LLMs) to learn and generalize from a limited number of examples provided within the input of a given task. The model can quickly adapt to novel tasks without fine-tuning or additional training by conditioning the model on a few examples. Specifically, a prompt \(p\) is constructed by concatenating multiple input-answer example pairs \(<x_{i},a_{i}>\). The prompt is then prepended to the test input \(x_{test}\), and the model generates the corresponding answer \(a_{test}\). Figure 1: An overview of _Adaptive-Consistency_: Self-Consistency samples a predetermined number of answers, whereas Adaptive-Consistency iteratively samples until a lightweight stopping criterion decides to report the majority answer. In the figure, we illustrate an example where _Adaptive-Consistency_ achieves a significant reduction in sampling cost/time from 40 to 10 samples by reporting the majority answer after sampling only ten answers. The bottom-left graph compares _Adaptive-Consistency_ with Self-Consistency on three representative reasoning datasets, demonstrating that _Adaptive-Consistency_ consistently reduces the sample budget significantly (Average: \(2.8\times\)) with a negligible drop in accuracy (Average: \(-0.03\%\)). **Listing 1** Comparison of Adaptive-Consistency (top) and Self-Consistency (bottom). Self-Consistency always draws a fixed number of samples. In contrast, Adaptive-Consistency uses a lightweight stopping criterion, allowing it to adaptively halt the sampling process, which can lead to improved efficiency and performance.
```
def adaptive_consistency(max_gens, stop_criterion):
    observations = []
    for k in range(1, max_gens):
        observations.append(sample_from_llm())
        if stop_criterion(observations):
            break
    return majority(observations)

def stop_criterion(observations, threshold):
    # Implement your lightweight stopping criterion
    ...

def self_consistency(max_gens):
    observations = []
    for k in range(1, max_gens):
        observations.append(sample_from_llm())
    return majority(observations)
```
**Self-Consistency** Wang et al. (2022b) proposed Self-Consistency, which improved performance over CoT reasoning by sampling multiple diverse reasoning chains and aggregating their outputs using a simple majority voting mechanism. However, the higher accuracy is a trade-off for increased computational cost, since the model must be prompted multiple times for the same question. ### Adaptive-Consistency As discussed, the Self-Consistency method always samples a predetermined number of answers (\(k\)) from the language model before returning the majority answer. In contrast, the Adaptive-Consistency method takes an incremental approach to sampling outputs from the language model. After generating each sample, Adaptive-Consistency employs a _stopping criterion_ to determine whether it should 1.) produce an additional sample or 2.) cease sampling and report the current majority answer. This flexible strategy enables Adaptive-Consistency to dynamically adjust the number of samples generated so far (\(n\)) for each input. As our experiments demonstrate, \(n\) is typically less than \(k\) (up to \(4\times\) less in some cases), which suggests that the Adaptive-Consistency method may offer greater cost-efficiency compared to the fixed-budget approach employed by Self-Consistency. The Adaptive-Consistency algorithm (Listing 1) differs from Self-Consistency only in terms of the stopping criterion. The design of the stopping criterion is crucial to our method, as it aims to minimize the average number of samples drawn from the LLM while maximizing accuracy. The simplicity of our algorithm allows for the use of various stopping criteria interchangeably, each with its own advantages and disadvantages. We expand on a particular choice of stopping function next. **Dirichlet Stopping Criterion** Let \(n\) be the number of samples drawn so far, with \(m\) unique elements. Let \(v=[v_{1},v_{2},\ldots,v_{m}]\) be the counts of each element, and \(p_{i}=\frac{v_{i}}{n}\) be the normalized count. Considering \(n=10\) and \(m=3\) (3 unique elements drawn), if \(v=[8,1,1]\), then we can be more confident that the first element is the answer. On the other hand, if \(v=[4,4,2]\), then more samples need to be drawn. Our goal is to formalize and quantify this intuition. By convention, let \(p_{1}=\max(p_{i})\). We want to assess the _stability_ of \(p_{1}\) as the majority element. Specifically, we want to ask the following question: what is the probability that \(p_{1}\) will be the majority element if we repeat the process of drawing \(n\) samples again? Intuitively, if this probability is higher than some predetermined threshold \(C_{thresh}\), then we can be more confident in our decision to stop sampling and return \(p_{1}\) as the majority element: \[P(p_{1}>\max_{i=2}^{m}p_{i}\mid v)>C_{thresh}\] To answer this question, we establish a connection with the Dirichlet distribution. Specifically, we note that the counts \(v\) parameterize a Dirichlet distribution, \(\text{Dir}(V)\).2 Also, \(p\) represents a multinomial distribution and is the expected value of \(v\). That is, \(p=\mathbb{E}(\text{Dir}(V))\).
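Before formalizing this probability exactly, note that it can be approximated by straightforward Monte Carlo simulation: draw candidate answer distributions from \(\text{Dir}(V)\) and measure how often the current leader stays on top. The sketch below is our own illustration (with an assumed add-one prior), not code from the paper:

```
import numpy as np

def majority_stability(counts, prior=1.0, trials=10_000, seed=0):
    # Draw candidate answer distributions from Dir(counts + prior) and
    # measure how often the current leader remains the majority element.
    rng = np.random.default_rng(seed)
    alpha = np.asarray(counts, dtype=float) + prior
    draws = rng.dirichlet(alpha, size=trials)     # (trials, m) points on the simplex
    leader = int(np.argmax(counts))               # index of the current majority
    return float(np.mean(np.argmax(draws, axis=1) == leader))

print(majority_stability([8, 1, 1]))  # close to 1: safe to stop sampling
print(majority_stability([4, 4, 2]))  # near 0.5: keep sampling
```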
This connection allows us to explore the behavior of the sampling process by drawing more samples from \(\text{Dir}(V)\) and observing the stability of \(p_{1}\) as the majority element. To compute the probability of \(p_{1}\) being the majority element, we can integrate the joint probability density function of the Dirichlet distribution over the appropriate region of the probability simplex. The integral can be expressed as follows: \[P(p_{1}>\max_{i=2}^{m}p_{i}\mid V)=\int_{0}^{1}\int_{\mathcal{S}(p_{1}^{\prime})}f(p_{1}^{\prime},p_{2},\dots,p_{m}\mid V)\,dp_{2}\cdots dp_{m}\,dp_{1}^{\prime},\] where \[\mathcal{S}(p_{1}^{\prime})=\{(p_{2},\dots,p_{m})\mid p_{1}^{\prime}>\max_{i=2}^{m}p_{i},\ \sum_{i=2}^{m}p_{i}=1-p_{1}^{\prime}\}. \tag{1}\] Footnote 2: Dirichlet is a distribution over multinomials; each draw from a Dirichlet is a multinomial distribution. See details in Appendix D. The function \(f(p_{1}^{\prime},p_{2},\dots,p_{m}\mid V)\) represents the joint probability density function of the Dirichlet distribution conditioned on the counts \(V\). In Equation (1), the bounds on the integral for \(p_{1}^{\prime}\) range from 0 to 1. The probability simplex \(\mathcal{S}(p_{1}^{\prime})\) is defined for each \(p_{1}^{\prime}\) value, such that \(p_{1}^{\prime}>\max_{i=2}^{m}p_{i}\), and the remaining \(p_{i}\) values sum to \(1-p_{1}^{\prime}\). This constraint ensures that we are considering all possible values of \(p_{1}^{\prime}\) that would maintain its majority status. Here we assume that the number of possible unique answers (\(m\)) is known, based on the current set of observations (\(V\)). In the analysis (§4.2), we further evaluate the Chinese Restaurant Process (CRP) stopping criterion, which relaxes this assumption by not requiring the number of possible unique answers (\(m\)) to be known in advance. **Beta Stopping Criterion** Since the number of unique answers in the observation set can be large, Equation (1) is computationally expensive to solve. As an approximation, we observe that establishing the majority of \(p_{1}\) over the next largest probability, \(p_{2}\), is sufficient for our purpose. In this setting, the probability in Equation (1) simplifies to a Beta distribution with parameters \((v_{1}+1,v_{2}+1)\) (with a Beta\((1,1)\) prior), and Equation (1) is replaced by Equation (2). This approximation allows us to efficiently compute the confidence in \(p_{1}\) being the majority, enabling early stopping decisions without incurring substantial computational overhead. \[\int_{0}^{0.5}p_{2}^{v_{2}}\cdot(1-p_{2})^{v_{1}}\,dp_{2} \tag{2}\] Empirically, we show the performance to be similar to the Dirichlet stopping criterion but significantly faster (see Section 4.2). Throughout the experiments, we refer to this Beta stopping criterion as Adaptive-Consistency. Intuitively, the stopping criterion in Equation (1) ensures that we generate only a few samples for questions where the majority answer can be inferred from a small number of samples with high confidence. For example, if nine out of the first ten answers generated by the model are the same answer \(a_{1}\), then with \(>99\%\) probability, the majority answer will remain the same even over infinite generations. Thus we can save more sample generations at the cost of a minimal loss in accuracy. ## 3 Experiments In this section, we describe the experimental setup for evaluating our proposed method on various reasoning benchmarks.
We use three diverse categories of reasoning benchmarks, evaluate the performance on two different language models, and use prompts from PAL (Gao et al., 2022) and Self-Consistency (Wang et al., 2022). We demonstrate marked improvements in terms of reduced sample generations when compared to Self-Consistency. **Benchmarks** We evaluate our method on a diverse set of reasoning benchmarks, encompassing 13 datasets across three distinct categories: 1. **Mathematical Reasoning:** To assess mathematical reasoning capabilities, we utilize the following datasets: GSM-8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), and ASDIV (Miao et al., 2020). These datasets consist of grade-school-level algebra word problems necessitating arithmetic operations and problem-solving based on contextual information. 2. **Commonsense Reasoning Tasks:** We evaluate Adaptive-Consistency on four commonsense reasoning tasks. **1.) StrategyQA** (Geva et al., 2021) comprises questions that demand the model to infer a multi-hop strategy with reasoning steps implicitly embedded in the questions. **2.) Date Understanding** entails questions that require the model to deduce dates from natural language descriptions and perform arithmetic operations accordingly. **3.) Snarks** and **4.) Ruin Names** both focus on emotional understanding tasks. 3. **Logical Reasoning Tasks:** We examine the performance of our method on six diverse logical reasoning tasks. **1.) Tracking Shuffled Objects** is a tracking task that requires the model to infer the final state of a system, given its initial state and a sequence of modifications. **2.) Logical Deduction** is a logical deduction task that demands the model to deduce the order of a sequence of objects based on a minimal set of conditions. **3.) Boolean Expressions** is a boolean expressions task that evaluates whether a language model has learned the rules of deductive reasoning, i.e., formal (zeroth-order) logic associated with the words "and," "or," "not," etc. **4.) Salient Translation** is a salient translation error detection task that requires the model to identify the type of error in a translation. **5.) Disambiguation QA** is a disambiguation task that requires the model to select the person to whom the pronoun refers. **6.) Penguins** describes a table of penguins and requires the model to answer questions about the penguins' attributes. **Models** We evaluate our method on two different language models: **1. code-davinci-002:** a GPT-3-based publicly available model (Brown et al., 2020) which is part of the Codex series (Chen et al., 2021) and has 175 billion parameters.3 **2. Vicuna-13B:** (Chiang et al., 2023) an open-source transformer model fine-tuned on an instruction-following dataset (Taori et al., 2023) from the base Llama series (Touvron et al., 2023). Footnote 3: We have access to Codex models through OpenAI’s researcher access program. However, since we only need access to the model outputs for our purposes, we will release them for reproducibility. **Prompting and Sampling** We use similar prompts as employed in PAL (Gao et al., 2022) and Chain of Thought (Wei et al., 2022). Specifically, for mathematical reasoning and Date Understanding tasks, we use the prompts from PAL. For other commonsense and logical reasoning tasks, we use prompts from CoT (Wei et al., 2022). In terms of sampling, we follow the scheme suggested in Self-Consistency (Wang et al., 2022).
Specifically, we use a temperature of 0.7 for sampling and limit the number of generations to a maximum of 40. **Hyperparameters** The only hyperparameters in Adaptive-Consistency are those related to the stopping criteria (\(C_{thresh}\)). We use a high \(C_{thresh}=0.95\) for Adaptive-Consistency. By using a high threshold, we aim to maintain high accuracy and prevent the algorithm from stopping too early. For the other stopping criteria, we tune the parameters on the training set of GSM-8K and use the same thresholds across all the datasets. The impact of the chosen threshold on the performance of our method is further analyzed in the Analysis Section (§4.1). **Baselines** We compare our method against Self-Consistency, which is the current state-of-the-art method. Further, in the analysis (§4.2), we evaluate Adaptive-Consistency against different stopping criteria, such as Random stopping and Majority (stopping once a majority is reached). **Evaluation Metrics** We evaluate the performance of our method and the baselines using two metrics: the average number of generations sampled from the large language models (LLMs) and the overall reasoning accuracy. Our results show that Adaptive-Consistency achieves similar performance to Self-Consistency while often reducing the sample budget by up to \(6\times\). ### Results We compare Adaptive-Consistency with Self-Consistency in Table 1. Our proposed method consistently reduces the computational budget while maintaining negligible drops in accuracy across benchmarks and models. Delta improvements over Self-Consistency are reported in the two rightmost columns. On mathematical and logical reasoning tasks, Adaptive-Consistency significantly reduces the sample budget by factors ranging from \(1.4\times\) to \(6\times\) compared to Self-Consistency, depending on the task and model, with an average decrease in accuracy of less than \(0.1\%\). This trend holds for both the Vicuna-13B and code-davinci-002 models, which differ significantly in model size and base families. Adaptive-Consistency also demonstrates efficiency across a diverse set of tasks, such as emotional understanding tasks (Snarks and Ruin Names), translation error detection (Salient Translation), and ambiguity identification (Disambiguation QA). In these tasks, Adaptive-Consistency results in sample budget reductions of \(1.2\times\) to \(2.5\times\), with a maximum drop in accuracy of \(0.4\%\). Our experiments demonstrate that Adaptive-Consistency effectively enhances the performance of large language models on diverse reasoning tasks by adaptively adjusting the stopping criterion. With comparable accuracy to Self-Consistency and significantly reduced sample budgets, Adaptive-Consistency showcases its potential for more efficient and effective reasoning across various domains, output responses, and task difficulties. \begin{table} \begin{tabular}{l c|c|c c|c c|} \hline \hline & \multicolumn{2}{c|}{**Self-Consistency**} & \multicolumn{2}{c|}{**Adaptive-Consistency**} & \multicolumn{2}{c|}{\(\Delta\)} \\ \cline{3-8} & \multicolumn{1}{c|}{**Accuracy**} & \multicolumn{1}{c|}{**Avg. Gen.**} & \multicolumn{1}{c|}{**Accuracy**} & \multicolumn{1}{c|}{**Gen. Reduc.**} & \multicolumn{1}{c|}{**Acc. Diff.
\(\uparrow\)**} \\ \hline \multirow{2}{*}{**GSM-8K**} & Vicuna-13B & **31.6** & **26.8** & \(31.5\) & \(1.4\times\) & \(-0.1\) \\ & code-davinci-002 & **81.1** & **13.8** & \(81.0\) & \(2.9\times\) & \(-0.1\) \\ \hline \multirow{2}{*}{**SVAMP**} & Vicuna-13B & **63.0** & **18.8** & \(62.8\) & \(2.1\times\) & \(-0.2\) \\ & code-davinci-002 & **85.1** & **9.5** & \(85.0\) & \(4.2\times\) & \(-0.1\) \\ \hline \multirow{2}{*}{**ASDIV**} & Vicuna-13B & **64.0** & **16.5** & **64.0** & \(2.4\times\) & \(0.0\) \\ & code-davinci-002 & **83.2** & **10.0** & \(83.2\) & \(4.0\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**Date Understanding**} & Vicuna-13B & \(59.8\) & **17.3** & **60.2** & \(2.3\times\) & \(+0.4\) \\ & code-davinci-002 & **80.3** & **10.7** & \(79.5\) & \(3.7\times\) & \(-0.8\) \\ \hline \multirow{2}{*}{**Tracking Shuffled Objects**} & Vicuna-13B & \(31.8\) & **20.3** & **32.0** & \(2.0\times\) & \(+0.2\) \\ & code-davinci-002 & **77.2** & **9.7** & \(77.1\) & \(4.1\times\) & \(-0.1\) \\ \hline \multirow{2}{*}{**Logical Deduction**} & Vicuna-13B & \(51.2\) & **18.1** & **51.4** & \(2.2\times\) & \(+0.2\) \\ & code-davinci-002 & **89.4** & **8.5** & **89.4** & \(4.7\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**StrategyQA**} & Vicuna-13B & \(65.8\) & **16.3** & **65.8** & \(2.5\times\) & \(0.0\) \\ & code-davinci-002 & **79.0** & **11.9** & \(78.8\) & \(3.4\times\) & \(-0.2\) \\ \hline \multirow{2}{*}{**Boolean Expressions**} & Vicuna-13B & **79.2** & **16.2** & \(78.4\) & \(2.5\times\) & \(-0.8\) \\ & code-davinci-002 & **94.5** & **6.6** & **94.5** & \(6.0\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**Snarks**} & Vicuna-13B & \(73.2\) & **23.2** & **73.6** & \(1.7\times\) & \(+0.4\) \\ & code-davinci-002 & **74.0** & **12.7** & **74.0** & \(3.1\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**Ruin Names**} & Vicuna-13B & **43.6** & **33.8** & **43.6** & \(1.2\times\) & \(0.0\) \\ & code-davinci-002 & **78.0** & **17.2** & **78.0** & \(2.3\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**Salient Translation**} & Vicuna-13B & **28.9** & **28.7** & \(28.7\) & \(1.2\times\) & \(-0.3\) \\ & code-davinci-002 & **64.3** & **11.8** & **64.3** & \(3.4\times\) & \(0.0\) \\ \hline \multirow{2}{*}{**Disambiguation QA**} & Vicuna-13B & **63.7** & **22.8** & \(63.5\) & \(1.8\times\) & \(-0.3\) \\ & code-davinci-002 & \(74.9\) & **13.5** & **75.1** & \(3.0\times\) & \(+0.1\) \\ \hline \multirow{2}{*}{**Penguins**} & Vicuna-13B & \(46.8\) & **22.9** & **47.3** & \(1.7\times\) & \(+0.5\) \\ & code-davinci-002 & \(83.8\) & **11.0** & **84.0** & \(3.6\times\) & \(+0.2\) \\ \hline \multirow{2}{*}{**Average**} & & **67.2** & **16.5** & **67.2** & \(2.8\times\) & **-0.03** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of Adaptive-Consistency with Self-Consistency on 13 diverse reasoning datasets. The table presents the accuracy of Self-Consistency, the average number of generations (Avg. Gen.) for Adaptive-Consistency, and the accuracy of Adaptive-Consistency. Self-Consistency always draws 40 samples. The \(\Delta\) columns display the reduction in generations (Gen. Reduc.) and the difference in accuracy (Acc. Diff.) between Self-Consistency and Adaptive-Consistency. The last row shows the average values across all datasets: Adaptive-Consistency achieves a 2.8 times reduction in sample budget (_Gen. Reduc._) compared to Self-Consistency, while maintaining a minimal average accuracy drop of 0.03% (_Acc. Diff._). 
In summary, our experiments show that Adaptive-Consistency enhances large language models' performance on diverse reasoning tasks by adaptively adjusting the stopping criterion. Adaptive-Consistency achieves accuracy comparable to Self-Consistency while significantly reducing the sample budget, highlighting Adaptive-Consistency's potential for more efficient and effective reasoning in large language models. ## 4 Analysis ### Effect of Confidence Threshold in Adaptive-Consistency The confidence threshold, \(C_{thresh}\), is a crucial hyperparameter for Adaptive-Consistency, as it determines when to stop sampling based on the desired level of confidence in the majority element. While we set the threshold to a stringent value of 0.95 for all experiments, in this section, we analyze the impact of varying \(C_{thresh}\) from 0.5 to 1 to understand the trade-offs between model accuracy and cost-efficiency. In Figure 2, we present a visualization that examines the relationship between the confidence threshold, \(C_{thresh}\), and the performance of our adaptive consistency method in terms of both accuracy and cost-efficiency. The x-axis represents the confidence threshold, varying from 0.5 to 1. The left y-axis displays the model's accuracy, while the right y-axis shows the average number of samples drawn. The plot shows the expected behavior of two curves: the blue curve (accuracy) increases gradually and then plateaus, while the red curve (average number of samples) initially increases linearly and then climbs more steeply. The plateau in accuracy signifies that the model has reached its maximum achievable accuracy, and further sampling doesn't improve it. Meanwhile, the red curve's climbing rate indicates that the model requires more samples to meet increasingly stringent confidence thresholds, highlighting the trade-off between accuracy and cost efficiency. ### Evaluation of Different Stopping Functions Adaptive-Consistency allows a flexible choice of stopping criteria, based on the intended objective and requirements. Here we evaluate various such stopping functions. Specifically, we evaluate six different functions: 1.) Random: randomly stopping with a probability \(p\); 2.) Majority: stopping after the most common answer has a majority above a threshold; 3.) Entropy: stopping after the entropy of the answers falls below a threshold; 4.) Beta: the main stopping criterion used in Adaptive-Consistency, based on Equation (2); 5.) Dirichlet: the stopping criterion based on Equation (1); 6.) Chinese Restaurant Process (CRP): unlike Dirichlet, CRP makes no assumption on the number of possible unique answers. Based on the available observations, we first model the concentration parameter (\(\alpha\)), then perform a Monte Carlo simulation to estimate the stability of the current majority (see Appendix C.2). We show results in Table 4. The parameters for all these methods are tuned as discussed in Section 3. We note that while Random allows controlling the sample budget consistently (fixed at 10), it is the least effective of all the stopping criteria evaluated. Majority almost consistently underperforms both Beta and Entropy. While Entropy is competitive with Beta, Entropy lacks a human-interpretable stopping rationale. Moreover, Beta can provide guarantees on the accuracy. Dirichlet has similar performance to Beta but is almost \(1200\times\) slower than Beta, because of the expensive multivariate integral calculation.
This motivates the simplification of Dirichlet into Beta as used in Adaptive-Consistency. CRP is also as effective as Beta, but suffers from the same issues as Dirichlet. We refer readers to Appendix C.1 for more details. ## 5 Related Work ### Crowdsourcing and Adaptive Consistency _Adaptive-Consistency_ finds inspiration in techniques from crowdsourcing (Lin et al., 2012; Dai et al., 2013). Traditionally, crowdsourcing involves aggregating diverse human judgments, which presents challenges in managing resource allocation--knowing when to query additional contributors or stop based on the consistency of responses (Doan et al., 2011; Quinn and Bederson, 2011). Early research concentrated on probabilistic models estimating the 'true' answer and worker reliability (Dawid and Skene, 1979; Whitehill et al., 2009), later considering factors like worker expertise, task complexity, and answer quality (Raykar et al., 2010; Welinder et al., 2010). However, rather than addressing these issues with multiple human contributors, _Adaptive-Consistency_ is tailored specifically for LLMs, optimizing for computational efficiency and output accuracy. **Architectures for adaptive computation** A related body of work on adaptive computation aims to preempt computation based on intermediate representations (Liu et al., 2020; Zhou et al., 2020; Schuster et al., 2021; Geng et al., 2021; Xin et al., 2020). Schuster et al. (2022) present CALM, a language model that performs language generation adaptively. Hou et al. (2020) propose DynaBERT, which can adapt the depth and width of the transformer to satisfy various computational constraints. Xing et al. (2020) propose a dynamic deep neural network with an embedded early-exit strategy for enhancing the quality of compressed images. Another direction of work focuses on pruning model weights or training sparse weights (Fan et al., 2019; Jayakumar et al., 2021) to reduce training and inference time. In contrast to these methods, our approach completely obviates making any architectural modifications. **Inference-time adaptive computation** These methods focus on adaptive computation at inference time without making architectural modifications to the models. Schwarzschild et al. (2021a,b) focus on three different generalization tasks. They observe that increasing the number of test iterations (which corresponds to the network depth in their setting) helps the models in generalizing better to difficult problems. Madaan and Yang (2022) leverage two different networks trained for the same task, a larger variant (slow) and a smaller variant (fast). The switch from fast to slow happens during inference, based on the complexity of generation at the current step. Xue et al. (2023) train language models to adaptively read tokens from a tape bank for each input. Different from these works, our focus is on tasks where multiple samples are drawn from a model (vs. iteratively solving a task, which is the focus of these works). **Adaptive Sampling in Training and Active Learning** Another line of work focuses on importance-based sampling of input instances during training (Bengio and Senecal, 2008; Prabhu et al., 2019; Berger et al., 2017). For instance, Bengio and Senecal (2008) introduce an adaptive importance sampling technique for expediting the training of neural probabilistic language models. Prabhu et al. (2019) investigate sampling bias in deep active classification, proposing an empirical method to mitigate it.
Berger et al. (2017) present an adaptive sampling scheme for training fully convolutional networks efficiently in semantic segmentation tasks involving large and imbalanced datasets. These approaches primarily focus on sampling input instances during the training phase. In contrast to the aforementioned methods, our approach centers on adaptively sampling multiple outputs per input instance during the inference phase, without soliciting additional labels. Figure 2: Impact of Confidence Threshold (\(C_{thresh}\)) on Adaptive-Consistency: As \(C_{thresh}\) varies, the accuracy of Adaptive-Consistency increases gradually, eventually plateauing. Initially, the average number of generations also increases gradually but then sharply climbs, reflecting the accuracy-confidence trade-off. Our method is crafted to efficiently obtain reliable predictions from pretrained language models by adaptively sampling their outputs, distinguishing it from both adaptive sampling in training and active learning, which focus on the training phase. ## 6 Conclusion and Future Work This paper presented Adaptive-Consistency, a cost-efficient and model-agnostic technique for improving the correctness of output from large language models (LLMs) using dynamic sampling. Our approach builds upon the Self-Consistency method and introduces a lightweight stopping criterion that allows for adaptive sampling based on the amount of agreement in the samples drawn so far. Adaptive-Consistency is effective across 13 datasets and two LLMs, reducing the required sample budget by up to 6.0 times while maintaining comparable accuracy, with an average drop of less than 0.1%. Our work opens up several avenues for future research. First, investigating alternative stopping criteria or combining multiple criteria could lead to even more efficient sampling techniques. Finally, in our current approach, the majority decision relies on using exact matches to determine the most common answer. However, this may not always capture the true majority in cases where the output has minor variations that do not affect the overall correctness or relevance of the answer. As a future direction, we plan to investigate the use of approximate matching techniques to treat elements that are _close enough_ as identical. ## Limitations Despite the promising results of our proposed Adaptive-Consistency method, it comes with several limitations and opportunities for future improvement. * **Stopping criterion sensitivity:** Our current stopping criterion is based on the stability of the majority element in the set of samples. While this has proven to be effective in our experiments, it may not always be the best indicator of agreement in the samples. There could be cases where the majority element is not stable enough, but the stopping criterion still triggers, potentially leading to suboptimal decisions. Exploring alternative or more robust stopping criteria could help mitigate this issue. * **Generalizability:** Although we have tested our method across a diverse range of 13 datasets and two different LLMs of contrasting scales, there may still be tasks or models where the Adaptive-Consistency approach might not be as effective. Notably, Adaptive-Consistency is expected to fail where Self-Consistency fails. * **Task-specific adaptations:** The current implementation of Adaptive-Consistency is task-agnostic, which might limit its performance on specific tasks that could benefit from task-specific adaptations.
Designing specialized versions of Adaptive-Consistency for particular tasks or domains could potentially lead to better performance in those areas. * **Reliance on the pretrained LLM:** Our method builds upon the Self-Consistency technique and relies on the pretrained LLM to generate multiple samples. Consequently, any limitations or biases present in the underlying LLM would still be carried over to the Adaptive-Consistency method. Addressing these issues might require improvements in the LLM training process itself or the incorporation of external knowledge sources.
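For reference, the pieces described in this paper combine into a few lines of code. The following is a minimal, self-contained sketch of Adaptive-Consistency with the Beta stopping criterion of Equation (2); `sample_from_llm` is a hypothetical stand-in for an actual model call, and the snippet is our illustration rather than the released implementation.

```
import random
from collections import Counter
from scipy.stats import beta

def sample_from_llm(question):
    # Hypothetical stand-in: replace with a real LLM call (temperature 0.7).
    return random.choice(["42", "42", "42", "7"])

def beta_stop(observations, c_thresh=0.95):
    # Stop once the probability that the top answer keeps its majority
    # exceeds c_thresh (Beta stopping criterion, Eq. 2).
    top_two = Counter(observations).most_common(2) + [(None, 0)]
    (_, v1), (_, v2) = top_two[0], top_two[1]
    return beta.cdf(0.5, v2 + 1, v1 + 1) > c_thresh

def adaptive_consistency(question, max_gens=40):
    observations = []
    for _ in range(max_gens):
        observations.append(sample_from_llm(question))
        if beta_stop(observations):
            break
    return Counter(observations).most_common(1)[0][0], len(observations)

answer, n_samples = adaptive_consistency("What is 6 * 7?")
print(answer, n_samples)  # typically stops well before the 40-sample budget
```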
2308.11644
Synergistic Signal Denoising for Multimodal Time Series of Structure Vibration
Structural Health Monitoring (SHM) plays an indispensable role in ensuring the longevity and safety of infrastructure. With the rapid growth of sensor technology, the volume of data generated from various structures has seen an unprecedented surge, bringing forth challenges in efficient analysis and interpretation. This paper introduces a novel deep learning algorithm tailored for the complexities inherent in multimodal vibration signals prevalent in SHM. By amalgamating convolutional and recurrent architectures, the algorithm adeptly captures both localized and prolonged structural behaviors. The pivotal integration of attention mechanisms further enhances the model's capability, allowing it to discern and prioritize salient structural responses from extraneous noise. Our results showcase significant improvements in predictive accuracy, early damage detection, and adaptability across multiple SHM scenarios. In light of the critical nature of SHM, the proposed approach not only offers a robust analytical tool but also paves the way for more transparent and interpretable AI-driven SHM solutions. Future prospects include real-time processing, integration with external environmental factors, and a deeper emphasis on model interpretability.
Yang Yu, Han Chen
2023-08-17T00:41:50Z
http://arxiv.org/abs/2308.11644v1
# Synergistic Signal Denoising for Multimodal Time Series of Structure Vibration ###### Abstract Structural Health Monitoring (SHM) plays an indispensable role in ensuring the longevity and safety of infrastructure. With the rapid growth of sensor technology, the volume of data generated from various structures has seen an unprecedented surge, bringing forth challenges in efficient analysis and interpretation. This paper introduces a novel deep learning algorithm tailored for the complexities inherent in multimodal vibration signals prevalent in SHM. By amalgamating convolutional and recurrent architectures, the algorithm adeptly captures both localized and prolonged structural behaviors. The pivotal integration of attention mechanisms further enhances the model's capability, allowing it to discern and prioritize salient structural responses from extraneous noise. Our results showcase significant improvements in predictive accuracy, early damage detection, and adaptability across multiple SHM scenarios. In light of the critical nature of SHM, the proposed approach not only offers a robust analytical tool but also paves the way for more transparent and interpretable AI-driven SHM solutions. Future prospects include real-time processing, integration with external environmental factors, and a deeper emphasis on model interpretability. ## 1 Introduction Structural health monitoring (SHM) has emerged as a vital field of research, geared towards preserving the longevity and safety of civil infrastructure [1]. A critical component of SHM is the analysis of vibration time series data, which offers insights into the behavior, health, and performance of structures [2]. As infrastructure, especially in urban regions, is subject to a myriad of dynamic forces--ranging from wind to traffic loads--it becomes pivotal to extract clear and meaningful data from the complex vibration signatures that these forces induce. However, one of the significant challenges plaguing SHM practitioners is the interference of noise in these vibration signals, which can distort interpretations and lead to unreliable conclusions. The dynamic response of structures is often manifested as multimodal vibrations, meaning multiple modes or patterns of vibration coexist. These modes, each characterized by its frequency and shape, provide a fingerprint of the structure's health and dynamic properties. For example, a bridge might have one mode that represents a swaying motion, while another might represent a bouncing motion [3]. Distinguishing between these modes and their individual characteristics is a nuanced task, made increasingly difficult when the signals are clouded by noise. Noise in the context of vibration time series can arise from multiple sources: instrumental noise from sensors, environmental interferences, or other unpredictable external factors [4, 5]. Traditional denoising methods, while effective to a degree, sometimes struggle to cater to the unique intricacies of multimodal structural vibrations [6]. Such challenges underscore the necessity for novel denoising techniques tailored to the specific nature of these signals [7]. Enter the realm of synergistic signal denoising--a cutting-edge approach that promises a more holistic treatment of multimodal vibration data [8]. Unlike conventional methods that treat each mode of vibration as a separate entity, the synergistic approach recognizes the interconnectedness of these modes [9].
By leveraging the relationships between different modal responses, this methodology aims to achieve superior denoising outcomes, preserving the integrity of the structural response while effectively mitigating the noise. The importance of an effective denoising technique cannot be overstated [10, 11]. Clear, noise-free signals allow for accurate modal identification, which in turn can be pivotal in detecting anomalies or damage in structures. Furthermore, with the increasing integration of machine learning and artificial intelligence in SHM, the quality of the input data, i.e., the vibration signals, directly influences the quality of predictions and insights [12]. This paper ventures into the depths of synergistic signal denoising for multimodal structure vibration time series. We explore its foundational principles, contrast its performance with traditional techniques, and demonstrate its potential in real-world applications. The ultimate objective is to illuminate the path forward in harnessing clean and precise vibration data, a cornerstone in the edifice of modern structural health monitoring. ## 2 Prior Art and Methods Vibration-based Structural Health Monitoring (SHM) has been extensively studied over the past few decades, primarily due to its potential to detect anomalies and ensure structural safety. Central to SHM's efficacy is the clarity of the acquired vibration signals. However, these signals, particularly from civil structures, are often contaminated with noise from various sources, complicating their analysis [9, 13]. Traditional denoising methods like wavelet decomposition and empirical mode decomposition have been applied to structure-borne signals with varying degrees of success. While these techniques can reduce noise, they sometimes struggle to retain the integrity of multimodal signals, leading to potential information loss [14]. A shift in the paradigm is noted in recent years towards more integrated denoising techniques. A synergistic approach, treating multimodal vibrations as interrelated rather than isolated events, suggests improved retention of signal characteristics post-denoising compared to conventional methods. Despite its promise, synergistic denoising in the context of SHM remains an underexplored territory, warranting further research to validate its full potential and broader applicability [15]. Structural responses to dynamic loads often manifest in the form of vibration signals that are complex and multifaceted. Analyzing these signals is essential to understand the underlying structural behavior, particularly when it is governed by multiple modes of vibration. In this section, we delve into the fundamentals of multimodal vibrations and discuss common noise sources and characteristics inherent to these signals. Every structure possesses a set of natural frequencies at which it tends to vibrate when subjected to external stimuli. These frequencies correspond to different modes of vibration, with each mode showcasing a distinct deformation shape or pattern. This phenomenon is widely recognized in the context of structural dynamics as 'modal analysis'. For instance, a simple beam might primarily deflect in a singular arc-like shape at its fundamental frequency, a mode often termed the first mode. However, when vibrated at higher frequencies, the same beam might exhibit complex deformation patterns like forming two or more arcs--these represent the higher modes of vibration. In real-world scenarios, structures don't vibrate in a singular mode.
Instead, they display a combination of these modes, giving rise to what is termed 'multimodal vibrations'. The presence of multiple vibration modes, especially in large and complex structures, makes signal analysis more intricate. Each mode carries unique information about the structure's health and behavior. Disentangling these concurrent modes from vibration time series and accurately identifying their individual frequencies and shapes is of paramount importance in structural health monitoring. Multimodal vibration signals, while rich in information, are often marred by noise, which can impede accurate signal interpretation [16, 17]. The noise sources contaminating these signals can be broadly categorized as follows [18]:

* Instrumental Noise: Every measurement tool, be it an accelerometer, a strain gauge, or any other sensor, has an inherent level of noise. Factors like the sensor's sensitivity, its electronic components, and even its placement can introduce unwanted signals, often drowning the subtle nuances of certain vibration modes.
* Environmental Interferences: Structures are not isolated entities. They interact with their surroundings, which means their recorded vibration signals are a cumulative effect of their intrinsic response and the myriad of environmental factors. Wind-induced vibrations, seismic activities, or even nearby vehicular traffic can introduce extraneous noise components into the signals (Doebling et al., 1996).
* Operational Loads: For operational structures, like bridges or buildings, the regular loads they bear--whether from vehicles, occupants, or machinery--can mask or distort their natural vibration patterns, complicating the analysis.

The characteristics of noise in multimodal signals are often unpredictable. They might manifest as random spikes in the time domain, irregular fluctuations in amplitude, or unexpected frequency components in the frequency domain. Separating this noise, especially without compromising the integrity of the original vibration modes, remains a significant challenge in the realm of structural health monitoring. ## 3 Results and Discussion Designing a deep learning algorithm tailored for multiple time series, especially one that incorporates attention mechanisms, necessitates a thorough approach that ensures the model can effectively capture underlying temporal patterns and dependencies among series. When dealing with multiple time series, the inherent relationships and dependencies between different sequences can be intricate. Traditional analytical methods sometimes fall short in accurately deciphering these complexities. This is where modern deep learning, fortified by attention mechanisms, offers promising avenues. Before diving into the architecture, it's worth emphasizing the importance of preprocessing. As with most data-centric models, the quality and structure of the input can profoundly impact the performance. Time series data, given its temporal nature, often needs to be normalized, usually between a scale of 0 and 1, using methods like Min-Max scaling. This step ensures that all time series are on a consistent scale, an essential aspect for neural networks to function optimally. After normalization, the time series data should be divided into overlapping windows. This segmentation aids in feeding the network with structured sequences of data. If, for example, a time series has a length of T and you choose a window size of W, the model will predict the subsequent point by taking into account the preceding W points. A sketch of this preprocessing step is shown below.
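The following minimal sketch makes the preprocessing concrete. It is our own illustration, not code from the paper: the window size of 64, the placeholder random data, and the function names are illustrative assumptions.

```python
import numpy as np

def min_max_scale(series):
    """Rescale a 1-D series to the [0, 1] range (Min-Max scaling)."""
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo + 1e-12)  # epsilon guards a constant series

def make_windows(series, window_size):
    """Cut a series of length T into overlapping (input, target) pairs:
    the model sees W consecutive points and predicts the next one."""
    X, y = [], []
    for t in range(len(series) - window_size):
        X.append(series[t:t + window_size])
        y.append(series[t + window_size])
    return np.asarray(X), np.asarray(y)

# Placeholder data: three sensor channels, 1000 samples each.
signals = np.random.randn(3, 1000)
scaled = np.stack([min_max_scale(s) for s in signals])
X, y = make_windows(scaled[0], window_size=64)   # X: (936, 64), y: (936,)
```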
With the data preprocessed, the architectural design of the deep learning model becomes the next focus.

Figure 1: Mechanical motor shaft vibrational signals. Three noisy signals, all originating from the same vibration source, are displayed.

The starting point is an input layer designed to accommodate segments of the time series. Its size is dictated by the chosen window size, W. While traditional time series methods, such as ARIMA or Exponential Smoothing, operate on predefined statistical rules, deep learning offers a more adaptive, data-driven approach. A series of convolutional layers can follow the input layer, although this is optional. Convolutional Neural Networks (CNNs) have proven their merit in detecting local patterns within a sequence. By deploying one-dimensional convolutions, these layers can sieve out salient features from the input series, providing a richer representation for the subsequent layers. Recurrent layers, particularly those using Long Short-Term Memory (LSTM) cells or Gated Recurrent Units (GRU), are integral for models dealing with temporal data. Their design inherently allows them to understand sequences, making them apt for time series data. These layers are adept at capturing dependencies and patterns over extended durations, crucial when dealing with multiple time series. Therefore, incorporating two or more LSTM or GRU layers that sequentially pass data can provide depth to the model. The pivotal aspect of this architecture is the attention mechanism. When dealing with multiple sequences, not all segments or points within a series are equally significant. The attention mechanism facilitates the model's ability to focus on specific sequences, allowing it to allocate different attention scores to various segments. These scores are calculated based on the hidden states of the recurrent layers, determining the relevance of each time step in the series. By employing a softmax function, these scores are then normalized to fit between 0 and 1, which subsequently determine the weights of each step in the sequence. The culmination of this process results in the derivation of a context vector. This vector is essentially a sum of the recurrent layers' hidden states weighted by the attention scores, summarizing the most influential segments of the time series. After the attention layer, dense (fully connected) layers can be added to the model. The context vector obtained from the attention layer, infused with the distilled essence of the most pertinent parts of the series, is processed through these dense layers, enabling the model to further learn from the data [19]. The tail end of this architecture concludes with an output layer. Depending on the task, if the aim is forecasting just one step ahead, a singular neuron is used. If multiple steps are to be predicted, multiple neurons corresponding to the number of steps are integrated. The final stages involve training the model. It's essential to choose an appropriate loss function, with Mean Squared Error (MSE) being apt for regression-based tasks. The choice of optimizer is also critical. The Adam optimizer, with its adaptive learning capabilities, is often a favorable choice for deep learning models. To counteract the pitfalls of overfitting, early stopping can be employed. By consistently monitoring the validation loss and halting training when there's no discernible improvement, the model can be kept in check. 
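The PyTorch sketch below assembles the pieces just described: an optional one-dimensional convolutional front end, stacked LSTM layers, an attention layer that scores and softmax-normalizes the hidden states into a context vector, and a dense head. It is a minimal rendering of the recipe above, not the authors' implementation; all layer sizes, the kernel width, and the learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentiveForecaster(nn.Module):
    def __init__(self, n_channels=1, conv_filters=32, hidden=64, horizon=1):
        super().__init__()
        # Optional convolutional front end: 1-D convolutions pick up local patterns.
        self.conv = nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2)
        # Two stacked LSTM layers capture long-range temporal dependencies.
        self.lstm = nn.LSTM(conv_filters, hidden, num_layers=2, batch_first=True)
        # Attention: one score per time step, computed from the hidden states.
        self.score = nn.Linear(hidden, 1)
        # Dense head maps the context vector to the forecast (one neuron per step).
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, horizon))

    def forward(self, x):                              # x: (batch, W, n_channels)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(z)                            # h: (batch, W, hidden)
        weights = torch.softmax(self.score(h), dim=1)  # (batch, W, 1), sums to 1
        context = (weights * h).sum(dim=1)             # attention-weighted sum
        return self.head(context)                      # (batch, horizon)

model = AttentiveForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive optimizer
loss_fn = nn.MSELoss()                                     # regression objective
```

A full training loop would minimize this MSE over mini-batches of windows, monitor the loss on a validation split and stop early once it plateaus, and then report RMSE and MAE on held-out data.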
Lastly, the model's performance should be assessed on unseen data using metrics like the Root Mean Squared Error (RMSE) or the Mean Absolute Error (MAE) to gauge the model's predictive accuracy. In conclusion, a deep learning model embedded with attention mechanisms for multiple time series marks a paradigm shift in time series analysis. The ability to prioritize certain segments over others allows for a nuanced understanding of the data, leading to more robust and accurate predictions. The designed deep learning algorithm for multiple time series, fortified with attention mechanisms, brings forth several significant advantages that cater to the complexities inherent to temporal data. Firstly, by going beyond traditional statistical methods, it offers a data-driven approach that is highly adaptive, learning directly from the unique patterns and intricacies embedded within the data. This adaptability ensures that the model isn't confined by pre-defined rules, allowing it to evolve with the nuances of different datasets. Secondly, the incorporation of attention mechanisms marks a significant stride in handling multiple sequences. In most time series, certain segments or points are inherently more significant than others, and the attention mechanism ensures the model recognizes and prioritizes these pivotal sequences. By assigning different attention scores to different segments, the model can focus on crucial sequences while simultaneously downplaying less relevant segments. This focused approach not only enhances prediction accuracy but also aids in interpreting which segments of the data the model deems most influential, offering valuable insights into the underlying patterns of the series. Additionally, the blend of convolutional and recurrent layers ensures a holistic understanding of the data. While convolutional layers effectively detect local patterns, recurrent layers, especially LSTM or GRU cells, retain memory of past sequences, capturing long-term dependencies, a crucial aspect in time series analysis. This fusion guarantees that the model is well-equipped to understand both immediate and historical patterns. Moreover, the model's modular architecture ensures flexibility. Depending on the data's nature, one can adjust the depth of convolutional or recurrent layers or even refine the attention mechanism. This flexibility means that the algorithm can be tailored to suit various datasets with varying complexities. The training methodologies, such as the use of adaptive optimizers and early stopping, further enhance the model's robustness, ensuring it doesn't overfit and generalizes well on unseen data. In essence, this designed algorithm represents a culmination of some of the best practices in deep learning, tailored explicitly for multiple time series. Its ability to learn from data adaptively, prioritize significant sequences, and capture both local and global patterns positions it as a formidable tool in the realm of time series analysis. ## 4 Conclusion and Future Work In the realm of Structural Health Monitoring (SHM), our proposed deep learning algorithm, complemented by attention mechanisms, offers a leap forward in efficiently analyzing and interpreting multimodal vibration signals. The challenges presented by the vast and intricate temporal data inherent in SHM demand solutions that can adapt, learn, and most importantly, prioritize salient structural responses over extraneous noise [20]. 
This algorithm, with its blend of convolutional and recurrent architectures, adeptly captures both instantaneous and prolonged structural behaviors, crucial for early damage detection and predictive maintenance. Moving forward, there's substantial potential in refining this algorithm for SHM-specific challenges. Given the criticality of timely damage detection in infrastructure, real-time processing and analysis can be explored further, possibly integrating edge computing for immediate on-site assessments. Another avenue lies in fusing SHM data with external environmental factors like weather conditions, which often play a role in structural health. As the world of SHM continually advances, there will also be an increasing need to ensure our deep learning models are not just accurate but also transparent. Interpretability, especially in safety-critical applications like SHM, is paramount. Future work will need to delve deeper into making these complex models more understandable, ensuring engineers and stakeholders can trust and act upon the insights provided.
2308.00991
Wall And Chamber Structure For A Special Biserial Algebra Coming From Perverse Sheaves on $\mathbb{P}^n$
We describe the wall and chamber structure of a special biserial algebra whose module category is equivalent to the category of (middle) perverse sheaves on the complex projective space $\mathbb{P}^n$. In particular, by the well known classification of indecomposable modules for special biserial algebras, we deduce that the algebra of interest is of finite representation type and we provide an explicit description of the walls of the structure. By a result of Bridgeland this wall and chamber structure coincides with the chamber structure in an open subset of the space of stability conditions on the bounded derived category of constructible sheaves on $\mathbb{P}^n$.
Alessio Cipriani, Martina Lanini
2023-08-02T07:51:49Z
http://arxiv.org/abs/2308.00991v1
Wall and Chamber Structure for a Special Biserial Algebra coming from perverse sheaves on \(\mathbb{P}^{n}\) ###### Abstract. We describe the wall and chamber structure of a special biserial algebra whose module category is equivalent to the category of (middle) perverse sheaves on the complex projective space \(\mathbb{P}^{n}\). In particular, by the well-known classification of indecomposable modules for special biserial algebras, we deduce that the algebra of interest is of finite representation type and we provide an explicit description of the walls of the structure. By a result of Bridgeland this wall and chamber structure coincides with the chamber structure in an open subset of the space of stability conditions on the bounded derived category of constructible sheaves on \(\mathbb{P}^{n}\). ## 1. Introduction This work stems from the desire to understand Bridgeland's space of stability conditions for the bounded derived category of complexes on complex projective spaces that are constructible with respect to the Schubert stratification. Stability spaces for triangulated categories were introduced by Bridgeland in [8], inspired by the work of Douglas on string theory [12]. Much progress has been made in the study of stability spaces for derived categories of coherent sheaves, while the case of bounded derived categories of constructible complexes, treated in the \(\mathbb{P}^{1}\) case in [20], has received less attention so far. One of the reasons to look at the bounded derived category \(\mathrm{D}_{c}(X)\) of constructible sheaves on some variety \(X\) is that, if the stratification considered on \(X\) is by affine subvarieties whose cohomology is concentrated in one degree (e.g. flag varieties with Schubert stratification), then the abelian category of middle perverse sheaves on \(X\) is a faithful heart of \(\mathrm{D}_{c}(X)\), that is a heart whose derived category is \(\mathrm{D}_{c}(X)\) itself. By [11] the category of middle perverse sheaves can be realised as a module category for a quiver with relations, so that our triangulated category of interest is now the derived category of a quiver module category, in particular of modules for a finite dimensional algebra. Stability conditions for the category of modules over a finite dimensional algebra have been defined and investigated since the Nineties [15]. The definition of wall and chamber structure for a finite dimensional algebra is due to Bridgeland [7] and links the geometric and algebraic notions of stability condition spaces in the following sense: understanding the wall and chamber structure for the category \((Q,I)-\mathrm{rep}\) of representations of a quiver with relations is equivalent to understanding wall crossing phenomena in an open subset of Bridgeland stability space for the derived category of \((Q,I)-\mathrm{rep}\). In fact in some cases it is enough to understand the wall and chamber structure to obtain all the needed information to control Bridgeland's space of stability conditions [6]. The abelian category of middle perverse sheaves on the complex projective space \(\mathbb{P}^{n}\) with Borel stratification is known to be equivalent to the category of modules for a particularly nice quiver with relations \((Q(n),I(n))\) (see for instance [19, Example 1.1]) whose corresponding algebra \(B(n)\) is special biserial. Special biserial algebras have been an object of study for many years, starting from [18], where they were defined. 
This is certainly due to the fact that their representation theory is extremely well understood and the classification of their indecomposable modules is purely combinatorial, as explained in [10, 21]. In particular, their indecomposable modules are subdivided into three classes: string modules, band modules and non-uniserial injective-projective modules. We show that our special biserial algebra \(B(n)\) does not have any band module (Proposition 3.7). Since the other two classes of modules are finite, we can deduce that the algebra of interest is of finite representation type. As for the wall and chamber structure, the stability spaces corresponding to non thin indecomposable \(B(n)\)-modules are contained in the walls corresponding to thin modules, so that we are left with a wall and chamber structure which only involves indecomposable thin modules (Theorem 5.8). Walls for string modules have been explicitly described in the acyclic case by [1] via a facial description and in [17] via the dual description by the set of ray generators of any polyhedral cone. It would have been possible to invoke [17] to prove Theorem 5.8 and to deduce the wall and chamber structure of the category of middle perverse sheaves on the complex projective space from [17] once the indecomposable thin \(B(n)\)-modules had been interpreted as thin modules for an acyclic quiver with relations. Nevertheless, since the combinatorics of indecomposable modules for our algebra is very explicit, we tried to keep this paper as self-contained as possible (apart from the classification of indecomposable modules) and hence we provide independent proofs of the non-relevance of non thin modules, as well as the wall facial description, both relying on the nice combinatorics of strings on \((Q(n),I(n))\) (see Section 3). Since our motivation came from geometry, we did not investigate the consequences of our main theorem for silting theory. By [1] and [9] the chambers in the wall and chamber structure of a finite dimensional algebra are in bijection with its \(2\)-silting complexes (up to isomorphism), and two \(2\)-silting complexes corresponding to adjacent chambers are related by a mutation. Therefore, determining the number of isomorphism classes of \(2\)-silting complexes for \(B(n)\) becomes the purely combinatorial problem of counting the maximal cones of the polyhedral fan complex resulting from Theorem 5.11. ### Structure of the paper In Section 2 we collect some basics on quivers with relations and walks on them; we also define our main character \((Q(n),I(n))\). In the following Section 3 we investigate the combinatorial behaviour of strings on \((Q(n),I(n))\) and prove that there are no bands on \((Q(n),I(n))\). Section 4 deals with the representation theory of the finite dimensional algebra \(B(n)\) corresponding to \((Q(n),I(n))\). In particular, we recall the parameterisation of isomorphism classes of indecomposable modules and we apply the string combinatorics to deduce that \(B(n)\) is of finite representation type with a very explicit realisation of the indecomposable objects. Finally, in Section 5 we recall King's definition of semistable objects for a finite dimensional algebra and we describe the stability spaces (via inequalities) of any indecomposable module. The facial description of the wall and chamber structure of \(B(n)\) is given in Theorem 5.11. ### Acknowledgements We thank Jon Woolf for many motivating and enlightening discussions, and Catharina Stroppel for very useful correspondence. 
We also thank Lidia Angeleri Hugel for her feedback on a first version of this paper. The work of A.C. was funded by MUR PNRR-Seal of Excellence, CUP B37G22000800006, and the project REDCOM: _Reducing complexity in algebra, logic, combinatorics_, financed by the programme Ricerca Scientifica di Eccellenza 2018 of the Fondazione Cariverona. M.L. was partially funded by the Fondi di Ricerca Scientifica di Ateneo 2021 CUP E853C22001680005, and the MUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C23000330006. ## 2. Reminder on special biserial algebras and walks on their quivers We recall here some basics on special biserial algebras and their representation theory, see for example [18, 21, 10]. Throughout the paper we will denote the interval of natural numbers \(\{i,i+1,\ldots,j\}\) by \([i,j]\), and use the shorthand notation \([m]=[1,m]\). A quiver \(Q=(Q_{0},Q_{1})\) is an oriented graph, with vertex set \(Q_{0}\) and arrow set \(Q_{1}\). For any algebraically closed field \(\mathds{k}\), we denote by \(\mathds{k}Q\) its path algebra, that is the algebra obtained by concatenating paths on the graph, see for instance [2, Chapter 2, SS1]. If \(F\) is the ideal of \(\mathds{k}Q\) generated by the arrows, we say that a further ideal \(I\) is _admissible_ whenever there is an \(m\) such that \(F^{m}\subseteq I\subseteq F^{2}\). The algebra \(\mathds{k}Q/I\) is said to be _special biserial_ if \(I\) is admissible and the following two extra conditions hold: * every vertex is source of at most two arrows and is target of at most two arrows, * for any arrow \(\gamma\) there is at most one arrow \(\delta\) such that the composition \(\gamma\circ\delta\not\in I\) and at most one arrow \(\delta^{\prime}\) such that \(\delta^{\prime}\circ\gamma\not\in I\). If \(\mathds{k}Q/I\) is special biserial, then \(I\) is generated by monomials (that is paths on the quiver) and binomials (that is relations such as \(a\gamma-b\delta\), with \(a,b\in\mathds{k}^{\times}\) and \(\gamma,\delta\) paths on \(Q\)), as explained in [18, Corollary of Lemma 1]. In this paper we deal with the quiver \(Q(n)\), that is the quiver with * vertices \(i\in[0,n]\), * arrows \(\alpha_{i}\colon i+1\to i\) and \(\beta_{i}\colon i\to i+1\) with \(i\in[0,n-1]\), and we consider the ideal of relations \(I(n)\) generated by \[\beta_{i}\alpha_{i}-\alpha_{i+1}\beta_{i+1},\quad\alpha_{i}\alpha_{i+1},\quad \beta_{i+1}\beta_{i}\qquad(i\in[0,n-2]),\quad\alpha_{0}\beta_{0}.\] **Remark 2.1**.: The quiver with relations \((Q(n),I(n))\) coincides with the one appearing in [19, Example 1.1.]. For instance, when \(n=2\) we have the quiver \[Q(2)=\quad 2\ \overset{\alpha_{1}}{\underset{\beta_{1}}{\rightleftarrows}}\ 1\ \overset{\alpha_{0}}{\underset{\beta_{0}}{\rightleftarrows}}\ 0 \tag{1}\] with relations \[I(2)=\langle\beta_{0}\alpha_{0}-\alpha_{1}\beta_{1},\alpha_{0}\alpha_{1}, \beta_{1}\beta_{0},\alpha_{0}\beta_{0}\rangle. \tag{2}\] We denote by \(B(n)\) the quotient algebra \(\mathds{k}Q(n)/I(n)\). It is immediate to see that \(B(n)\) is special biserial. Given a quiver \(Q=(Q_{0},Q_{1})\), we denote by \(Q_{1}^{*}\) the set of formal inverses of the arrows of \(Q\). If \(\delta\in Q_{1}^{*}\), we write \(\delta^{*}\) to denote the arrow in \(Q_{1}\) whose formal inverse is \(\delta\) (that is \((\gamma^{*})^{*}=\gamma\)). By convention, if \(\gamma\in Q_{1}\) has source \(s(\gamma)\) and target \(t(\gamma)\) then we set \(s(\gamma^{*})=t(\gamma)\) and \(t(\gamma^{*})=s(\gamma)\). 
**Definition 2.2**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver. 1. A _walk of length \(m\geq 1\) on \(Q\)_ is a sequence \(w=\gamma_{1}\gamma_{2}\ldots\gamma_{m}\) with \(\gamma_{k}\in Q_{1}\cup Q_{1}^{*}\) for any \(k\) and \(t(\gamma_{k+1})=s(\gamma_{k})\) for all \(k\leq m-1\). 2. A _walk on \(Q\) of length 0 on the vertex \(i\)_ is the trivial path which stays on such a vertex. We denote it by \(\epsilon_{i}\). 3. A _path on \(Q\)_ is a walk \(w=\gamma_{1}\gamma_{2}\ldots\gamma_{m}\) with all \(\gamma_{k}\in Q_{1}\). 4. A walk is said to be _reduced_ if either \(w\) is of length 0, or \(w=\gamma_{1}\ldots\gamma_{m}\) and \(\gamma_{k}\neq\gamma_{k+1}^{*}\) for any \(k\in[m-1]\). **Example 2.3**.: Let us consider \(Q(2)\) as defined in (1). 1. The sequence \(w=\alpha_{1}^{*}\beta_{0}\) is a walk on \(Q(2)\) (and it is not a path). 2. The sequence \(w=\beta_{0}^{*}\alpha_{1}\beta_{1}\) is a reduced walk on \(Q(2)\) (and it is not a path). 3. The sequence \(w=\beta_{1}\beta_{0}\) is a path on \(Q(2)\). If \(w=\gamma_{1}\ldots\gamma_{m}\) is a walk of length \(m>0\), we denote by \(w^{*}\) the walk \(\gamma_{m}^{*}\ldots\gamma_{1}^{*}\). By convention, \(\epsilon_{i}^{*}=\epsilon_{i}\) for any \(i\in Q_{0}\). **Definition 2.4**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver. A walk \(w\) on \(Q\) is said to _contain a non trivial path_ \(p\) if \(p\) or \(p^{*}\) is contained in \(w\) (that is, \(p\) or \(p^{*}\) can be obtained from the word of \(w\) by removing a suffix and/or a prefix). **Example 2.5**.: Let us consider \(Q(2)\) as defined in (1). 1. The walk \(w=\alpha_{1}^{*}\beta_{0}\) contains the length 1 paths \(u_{1}=\alpha_{1}\) and \(u_{2}=\beta_{0}\). 2. The walk \(w=\beta_{0}^{*}\alpha_{1}\beta_{1}\) contains the path \(\alpha_{1}\beta_{1}\). **Definition 2.6**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver and let \(I\) be an admissible ideal of \(\Bbbk Q\) such that \(\Bbbk Q/I\) is special biserial. A _string on \((Q,I)\)_ is a reduced walk \(w\) on \(Q\) such that if \(p\) is a non trivial path contained in \(w\) then \(p\) is not a monomial appearing in any of the generators of \(I\). **Example 2.7**.: Let us consider \(Q(2)\) as defined in (1). 1. The walk \(w=\alpha_{1}^{*}\beta_{0}\) is a string as none of its subpaths appear in \(I(2)\). 2. The walk \(w=\beta_{0}^{*}\alpha_{1}\beta_{1}\) is not a string as the path \(\alpha_{1}\beta_{1}\) is a monomial in the binomial relation \(\beta_{0}\alpha_{0}-\alpha_{1}\beta_{1}\in I(2)\). 3. The walk \(\beta_{1}\beta_{0}\) lies in \(I(2)\) and hence it is not a string. **Definition 2.8**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver. 1. The _source_ \(s(w)\), resp. the _target_ \(t(w)\), of a walk \(w=\gamma_{1}\ldots\gamma_{m}\) of length \(m>0\) on \(Q\) is \(s(\gamma_{m})\), resp. \(t(\gamma_{1})\). The trivial walk \(\epsilon_{i}\) has source and target equal to \(i\). 2. A _cycle_ is a walk \(w\) of strictly positive length such that \(s(w)=t(w)\). 3. A _reduced cycle_ is a cycle \(w=\gamma_{1}\ldots\gamma_{m}\) with \(\gamma_{k}\neq\gamma_{k+1}^{*}\) for any \(k\), where the indices are taken modulo \(m\). 4. If \(w\) is a reduced cycle we write \(w^{h}\) for the walk obtained by concatenating \(w\) with itself \(h\) times. **Example 2.9**.: Let \(Q\) be the equioriented quiver of extended Dynkin type \(\tilde{A}_{2}\), that is the quiver given by \(Q_{0}=\{0,1,2\}\) and \(Q_{1}=\{\lambda_{0}:0\to 1,\lambda_{1}:1\to 2,\lambda_{2}:2\to 0\}\). Let \(I=\langle(\lambda_{2}\lambda_{1}\lambda_{0})^{3}\rangle\). We consider walks on \((Q,I)\): 1. 
The source of \(w=\lambda_{2}\lambda_{1}\) is \(s(w)=1\), while its target is \(t(w)=0\). Therefore \(w\) is not a cycle. 2. Let \(w=\lambda_{2}\lambda_{1}\lambda_{0}\). Then \(s(w)=t(w)=0\) and so \(w\) is a (reduced) cycle on \(Q\). **Definition 2.10**.: Let \(Q=(Q_{0},Q_{1})\) be a quiver and let \(I\) be an admissible ideal of \(\Bbbk Q\) such that \(\Bbbk Q/I\) is special biserial. A _band on \((Q,I)\)_ is a non trivial reduced cycle \(w\) on \(Q\) such that \(w^{h}\) is a string on \((Q,I)\) for any \(h\geq 1\) and such that there does not exist a string \(u\) of length strictly less than the length of \(w\) with \(u^{k}=w\) for some \(k>1\). **Example 2.11**.: Let \((Q,I)\) be as in Example 2.9. Then the path \(\lambda_{2}\lambda_{1}\lambda_{0}\) is a band on \((Q,I)\), while \(\lambda_{2}\lambda_{1}\lambda_{0}\lambda_{2}\lambda_{1}\lambda_{0}\) is not. ## 3. Combinatorics of strings on \((Q(n),I(n))\) As we will see in the next section, the representation theory of a special biserial algebra is controlled by string and band combinatorics on the corresponding quiver with relations. We collect in this section some results on strings on our quiver \((Q(n),I(n))\). **Lemma 3.1**.: _Let \(w=\gamma_{1}\ldots\gamma_{m}\) be a string on \((Q(n),I(n))\) of length \(m\geq 2\). Then either \(w\in\{\beta_{n-1}\alpha_{n-1},\alpha_{n-1}^{*}\beta_{n-1}^{*}\}\) or \(w\) does not contain any path of length \(\geq 2\)._ Proof.: Let \(k\in[1,m-1]\). Since \(\gamma_{k}\gamma_{k+1}\) is a subwalk of the walk \(w\) it has to hold that \(s(\gamma_{k})=t(\gamma_{k+1})\), and since \(w\) is reduced \(\gamma_{k}\neq\gamma_{k+1}^{*}\). We assume without loss of generality that \(\gamma_{k}\in Q(n)_{1}\) (if it is not we replace \(w\) by \(w^{*}\)). If \(\gamma_{k}\gamma_{k+1}\) is a path then there exists an \(i\) such that either \(\gamma_{k}=\alpha_{i}\) or \(\gamma_{k}=\beta_{i}\). Note that \(\alpha_{i}\) can be (post)concatenated only with the honest arrows \(\alpha_{i+1}\) and \(\beta_{i}\): the first concatenation gives \(\alpha_{i}\alpha_{i+1}\in I(n)\), while the second gives \(\alpha_{i}\beta_{i}\), which is a monomial in the binomial relation \(\beta_{i-1}\alpha_{i-1}-\alpha_{i}\beta_{i}\) if \(i>0\) and lies in \(I(n)\) if \(i=0\). We deduce that a subpath of length \(>1\) cannot start with \(\alpha_{i}\). Also if \(\gamma_{k}=\beta_{i}\) and \(i<n-1\) we obtain either a monomial in \(I(n)\) or a monomial in some binomial relation in \(I(n)\). If \(\gamma_{k}=\beta_{n-1}\) we instead have to be more careful. In this case \(\gamma_{k+1}\in\{\alpha_{n-1},\beta_{n-2}\}\), and since \(\beta_{n-1}\beta_{n-2}\in I(n)\) it must hold \(\gamma_{k+1}=\alpha_{n-1}\); note that \(\beta_{n-1}\alpha_{n-1}\) is a string on \((Q(n),I(n))\). If \(w\neq\beta_{n-1}\alpha_{n-1}\) then \(w\) should contain a subwalk \(\gamma_{k-1}\beta_{n-1}\alpha_{n-1}\) or \(\beta_{n-1}\alpha_{n-1}\gamma_{k+2}\): in the first case the only honest arrow with source \(n\) is \(\alpha_{n-1}\), which produces the forbidden monomial \(\alpha_{n-1}\beta_{n-1}\), while the formal inverse \(\beta_{n-1}^{*}\) is excluded by reducedness; in the second case the only candidates are \(\gamma_{k+2}=\beta_{n-1}\), which again produces \(\alpha_{n-1}\beta_{n-1}\), and \(\gamma_{k+2}=\alpha_{n-1}^{*}\), which is excluded by reducedness. **Corollary 3.2**.: _There are three types of strings on \((Q(n),I(n))\):_ 1. _the trivial paths_ \(\epsilon_{i}\)_, for_ \(i\in[0,n]\)_,_ 2. _the cycles_ \(\beta_{n-1}\alpha_{n-1}\) _and_ \(\alpha_{n-1}^{*}\beta_{n-1}^{*}\)_,_ 3. 
_the alternating walks, that is walks_ \(\gamma_{1}\ldots\gamma_{m}\) _such that_ \(\gamma_{k}\in Q(n)_{1}\) _if and only if_ \(\gamma_{k+1}\in Q(n)_{1}^{*}\)_, for any_ \(k<m\)_._ **Lemma 3.3**.: _Let \(w=\gamma_{1}\ldots\gamma_{m}\) be an alternating string of length \(>0\) on \((Q(n),I(n))\). Let \(f_{w}:[0,m]\to[0,n]\) be the function defined as_ \[f_{w}(0)=s(\gamma_{m}),\quad f_{w}(j)=t(\gamma_{m-j+1}),\ j\in[m].\] _Then \(f_{w}\) is strictly monotone. Moreover \(f_{w}\) is increasing if and only if \(f_{w^{*}}\) is decreasing._ Proof.: First of all we note that it follows from the definition of the \(*\)-operation that \(f_{w}\) is increasing if and only if \(f_{w^{*}}\) is decreasing. We prove the result by induction on \(m\). If \(m=1\) there is nothing to show, as \(\gamma_{1}\) is either an arrow or a formal inverse of an arrow and, since there are no loops in \(Q(n)\), source and target of \(\gamma_{1}\) are different. Let \(m>1\) and consider the walk \(w^{\prime}=\gamma_{2}\ldots\gamma_{m}\). This is also an alternating string and we deduce by induction that \(f_{w^{\prime}}\) is strictly monotone (and hence also the restriction of \(f_{w}\) to \([0,m-1]\) is so). Assume first that \(f_{w^{\prime}}\) is increasing. Then there exists an \(i\) such that \(i=f_{w^{\prime}}(m-2)=t(\gamma_{3})<f_{w^{\prime}}(m-1)=t(\gamma_{2})\). Since \(Q(n)\) only has arrows between \(i\) and \(i\pm 1\) then \(s(\gamma_{2})=t(\gamma_{2})\pm 1\) and it must hold \(t(\gamma_{2})=i+1\). Therefore either \(\gamma_{2}=\beta_{i}\) or \(\gamma_{2}=\alpha_{i}^{*}\). In the first case we know that \(\gamma_{1}\in Q(n)_{1}^{*}\) and by definition of walk \(s(\gamma_{1})=t(\gamma_{2})=i+1\), so that \(\gamma_{1}\in\{\alpha_{i+1}^{*},\beta_{i}^{*}\}\). Since by definition of string \(w\) has to be reduced we conclude that \(\gamma_{1}=\alpha_{i+1}^{*}\) and hence \(f_{w}(m)=t(\alpha_{i+1}^{*})=i+2>i+1=f_{w^{\prime}}(m-1)=f_{w}(m-1)\). The case \(f_{w^{\prime}}\) strictly decreasing is treated in the same manner. **Lemma 3.4**.: _Let \(w=\gamma_{1}\ldots\gamma_{m}\) be an alternating string of length \(>0\) on \((Q(n),I(n))\) and let \(f_{w}\) be defined as in Lemma 3.3. If \(f_{w}\) is increasing, then \(\mathrm{im}f_{w}=[s(w),t(w)]\) and hence \(f_{w}:[0,m]\to[s(w),t(w)]\) is bijective._ Proof.: By Lemma 3.3 we know that \(f_{w}\) is strictly increasing. By the definition of \(Q(n)\) all arrows \(\gamma\in Q(n)_{1}\) have the property that \(t(\gamma)=s(\gamma)\pm 1\), so that \(f_{w}(j)=f_{w}(j-1)\pm 1\). Since \(f_{w}\) is strictly increasing it must hold \(f_{w}(j)=f_{w}(0)+j\) for any \(j\in[0,m]\). The claim now follows immediately. We have already observed that if \(w\) is a string also \(w^{*}\) is a string. The _\(*\)-class of a string_ is the set \(\{w,w^{*}\}\). The following lemma parameterises the \(*\)-classes of alternating strings on \((Q(n),I(n))\). **Lemma 3.5**.: _The set of \(*\)-classes of alternating strings on \((Q(n),I(n))\) is in bijection with the following set_ \[\mathscr{S}=\left\{(a,b,\eta)\mid 0\leq a<b\leq n,\ \eta\in\{\pm 1\}\right\}.\] Proof.: We denote by \(\mathscr{X}\) the set of \(*\)-classes of alternating strings on \((Q(n),I(n))\) (all of which have length \(>0\)). We will define a bijective function \(\varphi:\mathscr{X}\to\mathscr{S}\). Let \(w=\gamma_{1}\ldots\gamma_{m}\) be such a string. Then \(f_{w}\) is strictly monotone and it is increasing if and only if \(f_{w^{*}}\) is decreasing. Since we only care about \(*\)-classes we can assume that \(f_{w}\) is strictly increasing. 
We define \[\varphi(w)=(f_{w}(0),f_{w}(m),\eta),\ \text{with}\ \eta=\begin{cases}1&\text{ if } \gamma_{m}\in Q(n)_{1},\\ -1&\text{ if }\gamma_{m}\in Q(n)_{1}^{*}.\end{cases}\] Vice versa, if \((a,b,\eta)\in\mathscr{S}\) we set \[\psi((a,b,\eta))=\begin{cases}\alpha_{b-1}^{*}\beta_{b-2}\ldots\alpha_{a+1}^{* }\beta_{a}&\text{ if }\eta=1,\text{ and }a\equiv_{2}b\\ \beta_{b-1}\alpha_{b-2}^{*}\ldots\alpha_{a+1}^{*}\beta_{a}&\text{ if }\eta=1,\text{ and }a \not\equiv_{2}b\\ \beta_{b-1}\alpha_{b-2}^{*}\ldots\beta_{a+1}\alpha_{a}^{*}&\text{ if }\eta=-1,\text{ and }a \equiv_{2}b\\ \alpha_{b-1}^{*}\beta_{b-2}\ldots\beta_{a+1}\alpha_{a}^{*}&\text{ if }\eta=-1,\text{ and }a \not\equiv_{2}b,\end{cases}\] where \(a\equiv_{2}b\), resp. \(a\not\equiv_{2}b\), indicates that \(a\) and \(b\) have, resp. do not have, the same parity. Since \(t(\beta_{i})=i+1=t(\alpha_{i+1})=s(\alpha_{i+1}^{*})\) and \(t(\alpha_{j}^{*})=s(\alpha_{j})=s(\beta_{j+1})\) then \(\psi((a,b,\eta))\) is certainly a reduced walk. It is a string since it does not contain any path of length \(\geq 2\), nor any monomial in \(I(n)\) or summand of a binomial generating \(I(n)\). Note that \(s(\psi((a,b,\eta)))=a\) and \(t(\psi((a,b,\eta)))=b\). At this point it is an easy check to see that \(\varphi\) and \(\psi\) are mutually inverse. **Corollary 3.6**.: _The number of \(*\)-classes of strings on \((Q(n),I(n))\) is \((n+1)^{2}+1\)._ Proof.: Corollary 3.2 tells us that we have three types of strings. As for the first type of strings, the trivial paths, they are fixed by the \(*\)-operation and there are \(n+1\) of them, one for each vertex. We have only two strings of the second type, which are interchanged by the \(*\)-operation, and this gives us another \(*\)-class. By the previous lemma, we are hence left to compute the cardinality of \(\mathscr{S}\), which coincides with twice the number of intervals \([a,b]\subseteq[0,n]\) with \(a<b\), that is \(n(n+1)\). **Proposition 3.7**.: _There are no bands on \((Q(n),I(n))\) for any \(n\geq 1\)._ Proof.: Assume that \(w=\gamma_{1}\ldots\gamma_{m}\) is a band on \((Q(n),I(n))\). Since it has to be a cycle and there are no loops in \(Q(n)\), the walk \(w\) has length \(m\geq 2\). It follows from Corollary 3.2 that either \(w\in\{\beta_{n-1}\alpha_{n-1},\alpha_{n-1}^{*}\beta_{n-1}^{*}\}\) or \(w\) is an alternating walk. Assume \(w\in\{\beta_{n-1}\alpha_{n-1},\alpha_{n-1}^{*}\beta_{n-1}^{*}\}\). By definition of band also \(w^{2}\) has to be a string, but \((\beta_{n-1}\alpha_{n-1})^{2}\) and \((\alpha_{n-1}^{*}\beta_{n-1}^{*})^{2}\) both contain \(\alpha_{n-1}\beta_{n-1}\), which is a term in a binomial relation in \(I(n)\) for \(n\geq 2\) and lies in \(I(1)\) for \(n=1\). This implies that \(w^{2}\) is not a string, and hence this case can never happen. Assume now that \(w\) is an alternating walk. Since we require that \(w^{h}\) is a string for any \(h\geq 1\), in particular \(w\) itself is a string and by Lemma 3.3 \(f_{w}\) is strictly monotone. On the other hand we require that \(f_{w}(m)=t(\gamma_{1})=s(\gamma_{m})=f_{w}(0)\), but the strict monotonicity of \(f_{w}\) tells us that this is not possible. ## 4. Classification of indecomposable \(B(n)\)-modules We devote this section to the classification of indecomposable \(B(n)\)-modules. In particular, we will see that the algebra \(B(n)\) is of finite representation type, that is, there exists only a finite number of isomorphism classes of indecomposable \(B(n)\)-modules. We start by recalling what a representation is in this setting. 
Let \(Q=(Q_{0},Q_{1})\) be a quiver and \(I\) be an admissible ideal of its path algebra \(\Bbbk Q\). A _\((Q,I)\)-representation over \(\Bbbk\)_ is given by * a collection of finite dimensional \(\Bbbk\)-vector spaces \((M_{i})_{i\in Q_{0}}\), * a collection of \(\Bbbk\)-linear maps \((\varphi_{\gamma}:M_{i}\to M_{j})_{\gamma:i\to j\in Q_{1}}\), such that the maps verify the relations in \(I\): if \(\sum_{i}c_{i}(\gamma_{1}^{(i)}\ldots\gamma_{m_{i}}^{(i)})\in I\) then \(\sum_{i}c_{i}(\varphi_{\gamma_{1}^{(i)}}\circ\ldots\circ\varphi_{\gamma_{m_{i}}^{(i)}})=0\). The category of (finite dimensional) \((Q,I)\)-representations is equivalent to the category of (finite dimensional) \(\Bbbk Q/I\)-modules (see, for instance, [2, Chapter II, Theorem 3.7]), therefore we will very often consider \((Q(n),I(n))\)-representations and refer to them as \(B(n)\)-modules. Let \(Q\) be a quiver and let \(I\) be an admissible ideal such that \(\Bbbk Q/I\) is special biserial. As already recalled, \(I\) is generated by monomials and binomials in paths on \(Q\). Assume that we have a minimal set of generators of \(I\). If a path \(p\) appears as a monomial (rescaled by some non zero scalar) in some binomial relation in \(I\) and \(p\not\in I\), then this binomial relation is unique. The following theorem provides a parameterisation of \(\Bbbk Q/I\)-modules for any special biserial algebra: **Theorem 4.1**.: _[_21_, Proposition 2.3]_ _Let \(\Bbbk Q/I\) be a special biserial algebra. The indecomposable \(\Bbbk Q/I\)-modules are organised into three families: one parameterised by strings on \((Q,I)\), one parameterised by bands on \((Q,I)\), and a third one parameterised by (non redundant) binomial relations in \(I\). Moreover, the three families are pairwise disjoint._ By Proposition 3.7 we know that there are no bands on \((Q(n),I(n))\) and since we are interested in classifying indecomposable \(B(n)\)-modules we can restrict our attention only to the class of modules corresponding to strings and to binomial relations. Let \(w=\gamma_{1}\ldots\gamma_{m}\) be a string of strictly positive length on a quiver with relations \((Q,I)\) for a special biserial algebra. We can define a function \(f_{w}:[0,m]\to[0,n]\) in the same way as in Lemma 3.3. If \(w=\epsilon_{i}\) for some \(i\) we define \(f_{w}:\{0\}\to[0,n]\) as \(f_{w}(0)=i\). Let \(\gamma\in Q_{1}\cup Q_{1}^{*}\). We denote by \(\tilde{\gamma}\in Q_{1}\) the unique honest arrow in the set \(\{\gamma,\gamma^{*}\}\). **Definition 4.2**.: [21, SS2] Let \(w=\gamma_{1}\ldots\gamma_{m}\) be a string on \((Q(n),I(n))\). The _string representation \(M(w)\) of \(B(n)\)_ is defined as follows: \[M(w)_{j}=\bigoplus_{h\in f_{w}^{-1}(j)}\Bbbk\qquad j\in[0,n],\] \[M(w)_{\delta}=\bigoplus_{i:\gamma_{i}\in\{\delta,\delta^{*}\}}\big(\Bbbk(s( \tilde{\gamma_{i}}))\xrightarrow{\ \mathbf{1}\ }\Bbbk(t(\tilde{\gamma_{i}}))\big)\qquad\delta\in Q(n)_{1},\] where in the definition of the maps we denote by \(\Bbbk(h)\) the \(h\)-th copy of \(\Bbbk\) in the direct sum \(\bigoplus_{h\in f_{w}^{-1}(j)}\Bbbk\). Note that the above defined representation is indecomposable by construction. Moreover, \(M(w)\) is isomorphic to \(M(v)\) if and only if \(v=w\) or \(v=w^{*}\). Thus isomorphism classes of string modules are parameterised by \(*\)-classes of strings on \((Q(n),I(n))\). Since \(M(w)\) is simple if and only if \(w=\epsilon_{i}\) for some \(i\in[0,n]\), we will sometimes use the notation \(S_{i}\) for \(M(\epsilon_{i})\). 
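As an illustration of Definition 4.2 (this computation is ours, included as a sanity check), take the string \(w=\alpha_{1}^{*}\beta_{0}\) on \((Q(2),I(2))\) from Example 2.7. Here \(f_{w}(0)=0\), \(f_{w}(1)=1\), \(f_{w}(2)=2\), so \(M(w)_{0}=M(w)_{1}=M(w)_{2}=\Bbbk\), with \(\beta_{0}\) and \(\alpha_{1}\) acting as the identity and \(\alpha_{0}=\beta_{1}=0\):

\[M(w)=\quad\Bbbk\xrightarrow{\ \alpha_{1}=1\ }\Bbbk\xleftarrow{\ \beta_{0}=1\ }\Bbbk.\]

All relations in \(I(2)\) are visibly satisfied, and under the bijection of Lemma 3.5 the \(*\)-class of \(w\) corresponds to the triple \((0,2,1)\in\mathscr{S}\).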
If \(w\) is a string on \((Q(n),I(n))\) of length \(\geq 1\) different from \(\beta_{n-1}\alpha_{n-1},\alpha_{n-1}^{*}\beta_{n-1}^{*}\) and \((a,b,\eta)\) is the element of \(\mathscr{S}\) corresponding to the \(*\)-class of \(w\), we will often write \(M(a,b,\eta)\) instead of \(M(w)\). Next we need to give the definition of the indecomposable modules corresponding to binomial relations in \(I(n)\). Recall that the (non redundant) binomial relations in \(I(n)\) are \[\beta_{i}\alpha_{i}-\alpha_{i+1}\beta_{i+1},\qquad i\in[0,n-2]\] and are therefore in bijection with the set \([0,n-2]\). **Definition 4.3**.: Let \(n\geq 2\). Let \(i\in[0,n-2]\). We define the _(non uniserial) projective-injective \(B(n)\)-module \(R(i)\)_ as follows: \[R(i)_{j}=\begin{cases}\Bbbk&\text{ if }j\in\{i,i+2\},\\ \Bbbk\oplus\Bbbk&\text{ if }j=i+1,\\ (0)&\text{ otherwise},\end{cases}\qquad j\in[0,n],\] \[R(i)_{\gamma}=\begin{cases}\left[\begin{smallmatrix}1&0\end{smallmatrix}\right]&\text{ if }\gamma\in\{\alpha_{i},\beta_{i+1}\},\\ \left[\begin{smallmatrix}0\\ 1\end{smallmatrix}\right]&\text{ if }\gamma\in\{\beta_{i},\alpha_{i+1}\},\\ (0)&\text{ otherwise},\end{cases}\qquad\gamma\in Q(n)_{1}.\] Recall that by Proposition 3.7 we know that there are no band modules. A more precise version of Theorem 4.1 for our algebra \(B(n)\) is hence the following: **Theorem 4.4**.: _Let \(M\) be an indecomposable \(B(n)\)-module. Then either there exists an \(i\in[0,n-2]\) such that \(M\simeq R(i)\) or there exists a string \(w\) on \((Q(n),I(n))\) such that \(M\simeq M(w)\)._ In particular, \(B(n)\) is of finite representation type: by Corollary 3.6 it admits, up to isomorphism, exactly \((n+1)^{2}+1\) string modules and \(n-1\) modules of the form \(R(i)\), hence \((n+1)^{2}+n\) indecomposable modules in total. **Example 4.5**.: In the case \(n=1\), the indecomposable \(B(1)\)-modules are all string modules: the simples \(S_{0}\) and \(S_{1}\), with \(\Bbbk\) placed at the vertex \(0\), resp. \(1\), and all maps zero; the modules \(M(\alpha_{0})\) and \(M(\beta_{0})\), both with \(M_{1}=M_{0}=\Bbbk\), where \(\alpha_{0}\), resp. \(\beta_{0}\), acts as the identity and the other arrow acts as zero; and the module \(M(\beta_{0}\alpha_{0})\), with \(M_{1}=\Bbbk^{2}\), \(M_{0}=\Bbbk\), \(\alpha_{0}=\left[\begin{smallmatrix}1&0\end{smallmatrix}\right]\) and \(\beta_{0}=\left[\begin{smallmatrix}0\\ 1\end{smallmatrix}\right]\), so that \(\alpha_{0}\beta_{0}=0\). We call a \(B(n)\)-module \(M\) _thin_ if \(\dim M_{i}\leq 1\) for any \(i\in[0,n]\). **Lemma 4.8**.: _Let \(w\) be a string on \((Q(n),I(n))\). The string module \(M(w)\) is thin if and only if \(w\not\in\{\beta_{n-1}\alpha_{n-1},\alpha_{n-1}^{*}\beta_{n-1}^{*}\}\)._ Proof.: If \(w\) is a trivial path or an alternating walk, then \(f_{w}\) is injective (by Lemma 3.3), so every vertex space of \(M(w)\) has dimension at most one. By Corollary 3.2 the only remaining strings are \(\beta_{n-1}\alpha_{n-1}\) and \(\alpha_{n-1}^{*}\beta_{n-1}^{*}\), whose string module has a two dimensional space at the vertex \(n\). **Lemma 4.10**.: _Let \(M\) be a non thin indecomposable \(B(n)\)-module. Then for any proper nonzero submodule \(N\) of \(M\) there exists a proper nonzero quotient \(Q\) of \(M\) such that \(\dim N=\dim Q\)._ Proof.: Since \(M\) is an 
indecomposable \(B(n)\)-module, by Theorem 4.4 \(M\) is either a string module or comes from a binomial relation in \(I(n)\). Assume first that \(M\) is a string module. By the proof of Lemma 4.8 we know that all string modules but \(M(\beta_{n-1}\alpha_{n-1})\) are thin. Since \(M\) is assumed not to be thin it must be isomorphic to \(M(\beta_{n-1}\alpha_{n-1})\). Via direct inspection we see that the only two proper submodules of \(M(\beta_{n-1}\alpha_{n-1})\) are (isomorphic to) \(S_{n}\) and \(M(\beta_{n-1})\). On the other hand, its (proper) quotients are isomorphic to \(S_{n}\) and \(M(\alpha_{n-1})\). Since \(\dim M(\beta_{n-1})=\dim M(\alpha_{n-1})\) the lemma is proven in the case \(M\simeq M(\beta_{n-1}\alpha_{n-1})\). We can hence assume that \(M\) is not a string module, so that \(M\simeq R(i)\) for some \(i\in[0,n-2]\). In this case by definition of \(R(i)\) we have a non thin module. By direct inspection we see that its proper subobjects are isomorphic to \(S_{i+1}\), \(M(\alpha_{i+1})\), \(M(\beta_{i})\), \(M(\alpha_{i+1}^{*}\beta_{i})\), while its proper quotients are isomorphic to \(S_{i+1}\), \(M(\alpha_{i})\), \(M(\beta_{i+1})\), \(M(\beta_{i+1}\alpha_{i}^{*})\). As \(\dim M(\alpha_{j})=\dim M(\beta_{j})\) for any \(j\in[0,n-1]\) and \(\dim M(\beta_{i+1}\alpha_{i}^{*})=\dim M(\alpha_{i+1}^{*}\beta_{i})\) the claim follows. ## 5. Wall and chamber structure We recall here the definition of wall and chamber structure, following [14, §1.2, §1.3]. Since we only deal with our very special quiver with relations \((Q(n),I(n))\), we give the various definitions adapted to our case and do not discuss the more general setting. Firstly, we denote by \(\langle\cdot,\cdot\rangle\) the usual scalar product on \(\mathbb{R}^{n+1}\). We start by recalling King's notion of \(v\)-semistable modules for an algebra with relations [15]. **Definition 5.1**.: Let \(v\in\mathbb{R}^{n+1}\). We say that a \((Q(n),I(n))\)-module \(M\) is \(v\)_-semistable_ if \(\langle v,\dim M\rangle=0\) and \(\langle v,\dim N\rangle\leq 0\) for any non zero proper subobject \(N\) of \(M\). Equivalently, we say that \(M\) is \(v\)_-semistable_ if \(\langle v,\dim M\rangle=0\) and \(\langle v,\dim Q\rangle\geq 0\) for any non zero proper quotient \(Q\) of \(M\). **Remark 5.2**.: Let \(M\) be a non zero \((Q(n),I(n))\)-module and let \(v\in\mathbb{R}^{n+1}\). Observe that if \(N\), \(N^{\prime}\) are subobjects of \(M\) such that \(N\cap N^{\prime}=(0)\), \(\langle v,\dim N\rangle\leq 0\) and \(\langle v,\dim N^{\prime}\rangle\leq 0\), then \(N\oplus N^{\prime}\) is a subobject of \(M\) and automatically \(\langle v,\dim(N\oplus N^{\prime})\rangle\leq 0\). Therefore it is enough to consider indecomposable subobjects in Definition 5.1. **Definition 5.3**.: ([14, Definition 11]) 1. Let \(M\) be a non zero \(B(n)\)-module. Its _stability space_ is \[\mathcal{D}(M)=\{v\in\mathbb{R}^{n+1}\mid\text{ $M$ is $v$-semistable}\}.\] 2. We say that the stability space \(\mathcal{D}(M)\) of a non-zero module is a _wall_ if it has codimension 1. 3. The _chambers_ are the open connected components of \[\mathbb{R}^{n+1}\setminus\overline{\bigcup\mathcal{D}(M)},\] where the union runs over the set of indecomposable \(B(n)\)-modules. 4. The _wall and chamber structure for \(B(n)\)_ is the combination of all \(\mathcal{D}(M)\) for indecomposable modules \(M\) and all chambers. 
**Remark 5.4**.: In [14] to define the chambers the complement is taken with respect to the closure of the union of the stability spaces of all non-zero modules, but then [14, Proposition 12] shows that it is enough to consider the indecomposable ones. **Example 5.5**.: In the case \(n=1\) we have five indecomposable modules. We keep the same notation as in Example 4.5 and describe the wall and chamber structure of \(B(1)\). Determining the stability spaces of simple modules is immediate, as by definition they do not have any proper submodule: \[\mathcal{D}(S_{0})=\{v\in\mathbb{R}^{2}\mid\langle v,(0,1)\rangle=0\}=x\text{ axis},\] \[\mathcal{D}(S_{1})=\{v\in\mathbb{R}^{2}\mid\langle v,(1,0)\rangle=0\}=y\text{ axis}.\] We are left with three other indecomposable modules: \(M(\alpha_{0})\) and \(M(\beta_{0})\) (dual to each other) and the (injective-projective) indecomposable object \(M(\beta_{0}\alpha_{0})\). Observe that the only proper submodule of \(M(\alpha_{0})\) is \(S_{0}\), which is also the unique proper quotient of \(M(\beta_{0})\), so that \[\mathcal{D}(M(\alpha_{0}))=\left\{v\in\mathbb{R}^{2}\;\middle|\;\begin{array}{l}\langle v,(1,1)\rangle=0,\\ \langle v,(0,1)\rangle\leq 0\end{array}\right\}=\left\{(t,-t)\mid t\in\mathbb{R}_{\geq 0}\right\},\] \[\mathcal{D}(M(\beta_{0}))=\left\{v\in\mathbb{R}^{2}\;\middle|\;\begin{array}{l}\langle v,(1,1)\rangle=0,\\ \langle v,(0,1)\rangle\geq 0\end{array}\right\}=\left\{(t,-t)\mid t\in\mathbb{R}_{\leq 0}\right\}.\] Finally, \(M(\beta_{0}\alpha_{0})\) has only two proper subobjects, which are \(M(\beta_{0})\) and \(S_{1}\), and hence its stability space is \[\mathcal{D}(M(\beta_{0}\alpha_{0}))=\left\{v\in\mathbb{R}^{2}\;\middle|\;\begin{array}{l}\langle v,(2,1)\rangle=0,\\ \langle v,(1,1)\rangle\leq 0,\\ \langle v,(1,0)\rangle\leq 0\end{array}\right\}=\left\{(0,0)\right\}.\] The wall and chamber structure in this case is represented in Figure 1. If we compare our picture with [20, Figure 2] we notice that the wall and chamber structure for \(B(1)\) encodes the whole wall crossing phenomena in Bridgeland's stability space of the bounded derived category of constructible sheaves on \(\mathbb{P}^{1}\). **Lemma 5.6**.: _Let \(M\) be a \((Q(n),I(n))\)-module and assume that there exists a proper nonzero submodule \(N\) and a proper nonzero quotient \(Q\) such that \(\dim N=\dim Q\). Then \(\mathcal{D}(M)\subseteq\mathcal{D}(N)\)._

Figure 1. Wall and chamber structure of \(B(1)\)

Proof.: Recall that \[\mathcal{D}(M)=\left\{v\in\mathbb{R}^{n+1}\;\middle|\;\begin{array}{c}\langle v,\dim M\rangle=0,\\ \langle v,\dim N^{\prime}\rangle\leq 0\text{ for any subobject }N^{\prime}\subset M\end{array}\right\}=\left\{v\in\mathbb{R}^{n+1}\;\middle|\;\begin{array}{c}\langle v,\dim M\rangle=0,\\ \langle v,\dim Q^{\prime}\rangle\geq 0\text{ for any quotient }M\twoheadrightarrow Q^{\prime}\end{array}\right\}.\] Let \(v\in\mathcal{D}(M)\). Since \(\dim N=\dim Q\), we have both \(\langle v,\dim N\rangle\leq 0\) and \(\langle v,\dim N\rangle=\langle v,\dim Q\rangle\geq 0\), so it must hold \(\langle v,\dim N\rangle=0\). Moreover, since \(N\) is a subobject of \(M\), every subobject of \(N\) is also a subobject of \(M\), and hence \(\langle v,\dim N^{\prime}\rangle\leq 0\) for any subobject \(N^{\prime}\subseteq N\). Thus we conclude that \(\mathcal{D}(M)\subseteq\mathcal{D}(N)\). **Lemma 5.7**.: _Let \(M\) be an indecomposable \(B(n)\)-module. Then there exists a thin indecomposable \(L\) such that \(\mathcal{D}(M)\subseteq\mathcal{D}(L)\)._ Proof.: If \(M\) is thin the statement is trivial, so that we can assume that \(M\) is not thin. 
In this case, by the proof of Lemma 4.10 there exists an \(i\in[0,n-1]\) such that \(S_{i+1}\) is both a submodule and a quotient of \(M\). Thus by Lemma 5.6 we have that \(\mathcal{D}(M)\subseteq\mathcal{D}(S_{i+1})\). The following theorem is now an immediate consequence of Lemma 5.7 and the definition of wall and chamber structure. **Theorem 5.8**.: _The wall and chamber structure of \(B(n)\) is uniquely determined by conditions coming from thin indecomposables._ In the proof of Lemma 4.10 we have classified the submodules of any non thin module. This allows us to find an explicit inequality description for the stability spaces of indecomposable non thin modules. Notice that by the previous theorem this is not needed for the wall and chamber structure of \(B(n)\), but we decided to include it in our paper for completeness, since it is easy to deduce from what we have already seen. **Proposition 5.9**.: _The stability spaces for the non thin indecomposable modules of \(B(n)\) are as follows:_ \[\mathcal{D}(M(\beta_{n-1}\alpha_{n-1}))=\left\{v=(v_{n},\ldots,v_{0})\in\mathbb{R}^{n+1}\mid v_{n}=v_{n-1}=0\right\},\] \[\mathcal{D}(R(i))=\left\{v=(v_{n},\ldots,v_{0})\in\mathbb{R}^{n+1}\mid v_{i}=v_{i+1}=v_{i+2}=0\right\},\quad i\in[0,n-2].\] _In particular, if \(M\) is not thin then \(\mathcal{D}(M)\) is not a wall._ Proof.: First let \(M\simeq M(\beta_{n-1}\alpha_{n-1})\). If \(v\in\mathcal{D}(M)\) then \(\langle v,\dim M\rangle=0\), that is \(v_{n-1}+2v_{n}=0\). But there is a subobject and a quotient of \(M\) which are isomorphic to \(S_{n}\) (cf. proof of Lemma 4.10). Thus \(v_{n}=0\) and hence also \(v_{n-1}=0\). This is also a sufficient condition for \(v\) to belong to \(\mathcal{D}(M)\), as all submodules of \(M\) have dimension vectors whose entries different from the \(n\)-th and \((n-1)\)-st are zero, so that we do not have any condition on \(v_{j}\) for \(j\neq n,n-1\). Assume now that \(M\simeq R(i)\) for some \(i\in[0,n-2]\). Then all submodules and quotients of \(M\) have dimension vectors whose non zero entries are indexed by some subset of \([i,i+2]\). This means that the conditions on a vector \(v\in\mathbb{R}^{n+1}\) to belong to \(\mathcal{D}(M)\) involve only \(v_{i},v_{i+1},v_{i+2}\), while all the others are free. If \(v\) is in the stability space of \(M\), it has to verify \[v_{i}+2v_{i+1}+v_{i+2}=0. \tag{3}\] Moreover (cf. proof of Lemma 4.10) \(M\) has submodules (among others) isomorphic to \(S_{i+1}\) and \(M(\beta_{i})\), and has quotients \(S_{i+1}\) and \(M(\alpha_{i})\). We deduce that \[\langle v,\dim S_{i+1}\rangle=v_{i+1}=0,\quad\langle v,\dim M(\beta_{i})\rangle=v_{i}+v_{i+1}=0.\] The first equality together with the second one tells us that \(v_{i+1}=v_{i}=0\), and this combined with (3) tells us that also \(v_{i+2}=0\). By the above considerations these are also sufficient conditions. **Example 5.10**.: We apply Theorem 5.8 to determine the wall and chamber structure in the \(n=2\) case. In principle, Corollary 3.6 and Theorem 4.4 tell us that we need to compute \(3^{2}+2=11\) stability spaces, but by the previous theorem we know that it is enough to determine 9 of them. Note that there are three stability spaces given by simples and three other pairs of stability spaces given by (dual) thin indecomposables (this is in fact a general feature, as explained in the proof of the next Theorem 5.11). Recall that the thin indecomposable \(B(2)\)-modules are parameterised by the set \(\mathscr{S}^{\prime}=[0,2]\cup\{(a,b,\eta)\mid 0\leq a<b\leq 2,\ \eta\in\{\pm 1\}\}\). 
We denote by \(M(c)\) the indecomposable module corresponding to \(c\in\mathscr{S}^{\prime}\). \[\mathcal{D}(M(0))=xy\text{-hyperplane},\quad\mathcal{D}(M(1))=xz\text{-hyperplane},\quad\mathcal{D}(M(2))=yz\text{-hyperplane},\] \[\mathcal{D}(M(0,1,-1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}y+z=0,\\ z\leq 0\end{array}\right\},\] \[\mathcal{D}(M(0,1,1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}y+z=0,\\ z\geq 0\end{array}\right\},\] \[\mathcal{D}(M(1,2,-1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}x+y=0,\\ y\leq 0\end{array}\right\},\] \[\mathcal{D}(M(1,2,1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}x+y=0,\\ y\geq 0\end{array}\right\},\] \[\mathcal{D}(M(0,2,-1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}x+y+z=0,\\ x\leq 0,\ z\leq 0\end{array}\right\},\] \[\mathcal{D}(M(0,2,1))=\left\{(x,y,z)\in\mathbb{R}^{3}\middle|\begin{array}[] {l}x+y+z=0,\\ x\geq 0,\ z\geq 0\end{array}\right\}.\] Figure 2 illustrates the wall and chamber structure of \(B(2)\). In Figure 2, the three hyperplanes \(\mathcal{D}(M(0)),\mathcal{D}(M(1))\) and \(\mathcal{D}(M(2))\) appear in pink, purple and blue respectively. Moreover, \(\mathcal{D}(M(0,1,1))\) and \(\mathcal{D}(M(0,1,-1))\) appear in black and green, \(\mathcal{D}(M(1,2,-1))\) and \(\mathcal{D}(M(1,2,1))\) appear in black and red, and finally \(\mathcal{D}(M(0,2,-1))\) and \(\mathcal{D}(M(0,2,1))\) are in dark blue. The following theorem generalises the previous example, providing an explicit description of all the relevant stability spaces: **Theorem 5.11**.: _Let \(\mathscr{S}\) be the set from Lemma 3.5. Then_ 1. \(\mathcal{D}(M(\epsilon_{i}))=\{(v_{n},v_{n-1},\ldots,v_{0})\in\mathbb{R}^{n+1} \ |\ v_{i}=0\}\)_,_ 2. _if_ \(c=(a,b,-1)\in\mathscr{S}\) _for some_ \(0\leq a<b\leq n\)_, then_ \[\mathcal{D}(M(a,b,-1))=\left\{(v_{n},v_{n-1},\ldots,v_{0})\in\mathbb{R}^{n+1} \left|\begin{array}{l}v_{a}+v_{a+1}+\ldots+v_{b}=0,\\ \sum_{i=a^{\prime}}^{b^{\prime}}v_{i}\leq 0,\ [a^{\prime},b^{\prime}]\subseteq[a,b],\ a^{\prime}\equiv_{2}a,\\ (b^{\prime}=b\ or\ b^{\prime}\equiv_{2}a)\end{array}\right.\right\},\] 3. _if_ \(c=(a,b,1)\in\mathscr{S}\) _for some_ \(0\leq a<b\leq n\)_, then_ \[\mathcal{D}(M(a,b,1))=\left\{(v_{n},v_{n-1},\ldots,v_{0})\in\mathbb{R}^{n+1} \left|\begin{array}{l}v_{a}+v_{a+1}+\ldots+v_{b}=0,\\ \sum_{i=a^{\prime}}^{b^{\prime}}v_{i}\geq 0,\ [a^{\prime},b^{\prime}]\subseteq[a,b],\ a^{\prime}\equiv_{2}a,\\ (b^{\prime}=b\ or\ b^{\prime}\equiv_{2}a)\end{array}\right.\right\},\] _where \(x\equiv_{2}y\) indicates that \(x\) and \(y\) have the same parity._ Figure 2. Wall and chamber structure of \(B(2)\). Proof.: Claim _(1)_ follows immediately from the fact that \(M(\epsilon_{i})\) is nothing but a simple module \(\simeq S_{i}\), whose dimension vector has only one nonzero entry, in position \(i\). Let \(M\simeq M(a,b,\eta)\). Observe that any submodule or subquotient of a thin module is itself thin; therefore a submodule \(N\), resp. a subquotient \(Q\), of \(M\) is uniquely determined by the set of \(i\in[a,b]\) such that \(N_{i}\neq 0\), resp. \(Q_{i}\neq 0\). Let \(w=\gamma_{1}\ldots\gamma_{m}\) be the string \(\psi((a,b,\eta))\) (see the proof of Lemma 3.5). 
By definition of string representations it is clear that \(N\) is a submodule if and only if, whenever there is an \(i\in[n]\) such that \(N_{i}\neq(0)\), the following two conditions hold: * there exists \(j\in[0,m]\) such that \(f_{w}(j)=i\), * if there is a \(k\in[m]\) such that \(s(\tilde{\gamma_{k}})=i\), then \(N_{t(\tilde{\gamma_{k}})}\neq 0\). Dually, \(Q\) is a quotient if and only if, whenever there is an \(i\in[n]\) such that \(Q_{i}\neq(0)\), the following two conditions hold: * there exists \(j\in[0,m]\) such that \(f_{w}(j)=i\), * if there is a \(k\in[m]\) such that \(t(\tilde{\gamma_{k}})=i\), then \(Q_{s(\tilde{\gamma_{k}})}\neq 0\). We also observe that if \(M_{i}\neq(0)\) then either \(i\) is a source for all adjacent \(\tilde{\gamma}\) or a target for all adjacent \(\tilde{\gamma}\). Therefore _(2)_ and _(3)_ are equivalent: \(\dim M(a,b,-1)=\dim M(a,b,1)\), and it follows from what we just discussed that a \(B(n)\)-module \(N\) is a subobject of \(M(a,b,-1)\) if and only if it is a quotient of \(M(a,b,1)\). This implies that the defining inequalities of \(\mathcal{D}(M(a,b,1))\) are obtained by reversing the ones of \(\mathcal{D}(M(a,b,-1))\), and vice versa. Thus we are reduced to showing _(2)_. Let \(M=M(a,b,-1)\). Since \(\dim M_{i}=1\) for any \(i\in[a,b]\) and \(\dim M_{i}=0\) otherwise, we have that \(\langle(v_{n},v_{n-1},\ldots,v_{0}),\dim M\rangle=0\) if and only if \(\sum_{i=a}^{b}v_{i}=0\). Note that \(S_{i}\) is a submodule if and only if there exists a \(k\in[m]\) such that \(t(\tilde{\gamma_{k}})=i\) (and hence \(i\in[a,b]\)). Let \(w=\gamma_{1}\ldots\gamma_{m}=\psi((a,b,-1))\). In this case we have \(\gamma_{m}=\alpha_{a}^{*}\) and hence \(a\) is a target of \(\tilde{\alpha_{a}^{*}}\). Therefore the targets are the vertices labelled by elements \(c\equiv_{2}a\). We conclude that \(S_{i}\) is a submodule of \(M\) if and only if \(i\equiv_{2}a\). By the classification of indecomposable thin modules we also know that all indecomposable submodules of \(M\) corresponding to alternating walks have to be parameterised by a subset of \(\{(a^{\prime},b^{\prime},\eta^{\prime})\mid a\leq a^{\prime}<b^{\prime}\leq b, \ \eta^{\prime}\in\{\pm 1\}\}\). By the same reasoning as before, we conclude that \(a^{\prime}\equiv_{2}a\). If \(b^{\prime}\neq b\), once again it has to satisfy \(b^{\prime}\equiv_{2}a\), as otherwise it would be the source of some honest arrow \(\tilde{\gamma_{k}}\). Instead, the vertex \(b=f_{w}(m)\) cannot be the source of any arrow not contained in \([a^{\prime},b]\), and hence \(M(a^{\prime},b,-1)\) is (isomorphic to) a submodule of \(M\) for any \(a^{\prime}\equiv_{2}a\). The claim now follows by taking dimension vectors and calculating the scalar products with \(v\). ## Appendix A Perverse sheaves on the complex projective space and \(B(n)\) ### Generalities on perverse sheaves In this section we recall some definitions and important facts on perverse sheaves; see [3, 13]. #### A.1.1. Topologically stratified spaces We recall the inductive definition of a topologically stratified space in the sense of [13]. A \(0\)-dimensional topologically stratified space is a discrete union of points. 
Let \(d\geq 1\); a \(d\)-dimensional topologically stratified space \(X\) is a paracompact Hausdorff topological space with a finite filtration by closed subsets \(X=X_{d}\supset\ldots\supset X_{0}\supset X_{-1}=\emptyset\) such that \(X_{i}\smallsetminus X_{i-1}\) is a (possibly empty) \(i\)-dimensional topological manifold, and such that each \(x\in X_{i}\smallsetminus X_{i-1}\) has an open neighbourhood which is filtration-preserving homeomorphic to \(\mathbb{R}^{i}\times C(L)\) for some \((d-i-1)\)-dimensional topologically stratified space \(L\). The (non-empty) connected components of \(X_{i}\smallsetminus X_{i-1}\) are called _strata_, and the space \(L\) is called the _link_ of \(X_{i}\smallsetminus X_{i-1}\) at \(x\), where \(C(L)=L\times[0,1)/L\times\{0\}\) denotes the open cone on \(L\) with the induced filtration by the vertex and the subsets \(L_{i}\times[0,1)/L_{i}\times\{0\}\). #### A.1.2. The constructible derived category Let \(\mathds{k}\) be a field. Let \(X\) be a topologically stratified space and let \(\mathrm{D}^{b}(X)\) be the bounded derived category of sheaves of \(\mathds{k}\)-vector spaces on \(X\). The _constructible derived category of \(X\)_, which we will denote by \(\mathrm{D}_{c}(X)\), is the full subcategory of \(\mathrm{D}^{b}(X)\) whose objects are the complexes whose cohomology sheaves are locally constant when restricted to each stratum of \(X\). Note that \(\mathrm{D}_{c}(X)\) is a triangulated category. Let \(Z\) be a closed union of strata of \(X\). The complementary maps \(\imath\colon Z\to X\) and \(\jmath\colon U=X\smallsetminus Z\to X\) induce exact functors \[\mathrm{D}_{c}(Z)\xrightarrow{\imath_{*}}\mathrm{D}_{c}(X)\xrightarrow{\jmath^{*}}\mathrm{D}_{c}(U)\] which give rise to a triangulated recollement, that is, they admit left and right adjoints that satisfy some well-known identities; see for instance [3, §1.4.1]. #### A.1.3. Perverse t-structures Let \(X\) be a topologically stratified space with strata \(S_{i}\). A _perversity \(p\) on \(X\)_ is a \(\mathbb{Z}\)-valued function on the set of strata of \(X\). Let \(\imath_{S}\colon S\to X\) be the inclusion of a stratum into \(X\). The pair of subcategories \[{}^{p}D^{\leq 0} =\{\mathcal{A}\in\mathrm{D}_{c}(X)\mid\mathcal{H}^{k}(\imath_{S}^ {*}\mathcal{A})=0\text{ for all strata }S\text{ and all }k>p(S)\}\] \[{}^{p}D^{\geq 0} =\{\mathcal{A}\in\mathrm{D}_{c}(X)\mid\mathcal{H}^{k}(\imath_{S}^ {!}\mathcal{A})=0\text{ for all strata }S\text{ and all }k<p(S)\}, \tag{4}\] where \(\mathcal{H}^{k}(\mathcal{E})\) denotes the \(k^{\text{th}}\) cohomology sheaf of the complex of sheaves \(\mathcal{E}\), is a bounded t-structure on \(\mathrm{D}_{c}(X)\) for any perversity \(p\) on \(X\), see [3, 2.1.4]. The t-structure \(({}^{p}D^{\leq 0},{}^{p}D^{\geq 0})\) is called the _\(p\)-perverse t-structure on \(\mathrm{D}_{c}(X)\)_. The category of \(p\)-perverse sheaves on \(X\) is defined as the _heart_ of this t-structure, that is, \({}^{p}\mathrm{Perv}(X)={}^{p}D^{\leq 0}\cap{}^{p}D^{\geq 0}\). It follows from the general theory of t-structures that \({}^{p}\mathrm{Perv}(X)\) is an abelian subcategory of \(\mathrm{D}_{c}(X)\). #### A.1.4. Perverse sheaves as representations of a quiver with relations Under suitable topological assumptions on \(X\), if \(\mathds{k}\) is algebraically closed we have an exact equivalence between categories of perverse sheaves and representations of a quiver with relations by [2, Chapter II, Theorem 3.7]. 
**Theorem A.1**.: _[_11_, Corollary 5.3 and §5.1]_ _Let \(\mathds{k}\) be an algebraically closed field and \(X\) a topologically stratified space with finitely many strata, each with finite fundamental group. For any perversity \(p\) on \(X\) the category \({}^{p}\mathrm{Perv}(X)\) is equivalent to a category of representations of a quiver with relations._ ### Perverse sheaves on the complex projective space Let \(n\geq 1\) and consider \(X=\mathbb{P}^{n}\) with the filtration \[X=X_{2n}\supset X_{2n-1}=X_{2n-2}=\mathbb{P}^{n-1}\supset\ldots\supset X_{3}=X_{2}=\mathbb{P}^{1}\supset X_{1}=X_{0}=\mathbf{pt}\supset X_{-1}=\emptyset.\] There are \(n+1\) (non-empty) strata \[S_{i}=\mathbb{P}^{i}\smallsetminus\mathbb{P}^{i-1}\cong\mathbb{C}^{i},\quad i \in[0,n],\] such that \(\pi_{1}(S_{i})\) is trivial for all \(i\in[0,n]\). Let \(m\) be the middle perversity on \(X\) (that is, \(m(S_{i})=-i\) for all \(i\in[0,n]\)) and let \(\mathds{k}\) be an algebraically closed field. By Theorem A.1, there is an exact equivalence between the category of \(m\)-perverse sheaves on \(X\) and the category of representations of the quiver \((Q(n),I(n))\) defined in §2, that is, \[{}^{m}\mathrm{Perv}(\mathbb{P}^{n})\simeq\Bbbk Q(n)/I(n){-}\mathrm{mod}.\] The category of middle perversity perverse sheaves on \(X=\mathbb{P}^{n}\) constructible with respect to the Schubert stratification is equivalent to a regular block of the parabolic category \(\mathcal{O}\) for \(\mathfrak{sl}_{n+1}\) and (maximal) parabolic \(\mathfrak{q}\simeq\mathfrak{sl}_{n}\), as in [5, Proposition 3.5.1]. On the other hand, such a block is equivalent to the category of finite dimensional modules for \(\Bbbk Q(n)/I(n)\) which we investigated in this paper (see [19, Example 1.1]). ### Back to Bridgeland stability conditions Recall that we are aiming at getting a better understanding of Bridgeland's space of stability conditions \(\mathrm{Stab}(\mathrm{D}_{c}(\mathbb{P}^{n}))\) on the triangulated category \(\mathrm{D}_{c}(\mathbb{P}^{n})\). By [4, §1.5], middle perverse sheaves on \(\mathbb{P}^{n}\) form a faithful heart, since we are stratifying the projective space by affine subvarieties whose cohomology is concentrated in one degree. This means that if we take the derived category of middle perverse sheaves on \(\mathbb{P}^{n}\) we get back \(\mathrm{D}_{c}(\mathbb{P}^{n})\). On the other hand, by [7, Lemma 7.3], we know that the wall and chamber structure of the category \(A{-}\mathrm{mod}\) of modules over a finite dimensional algebra \(A\) is an open subset of \(\mathrm{Stab}(\mathrm{D}(A{-}\mathrm{mod}))\). Since \(B(n){-}\mathrm{mod}\) is equivalent to \({}^{m}\mathrm{Perv}(\mathbb{P}^{n})\), the wall and chamber structure of \(B(n)\) is nothing but an open piece of \(\mathrm{Stab}(D(B(n){-}\mathrm{mod}))=\mathrm{Stab}(\mathrm{D}({}^{m}\mathrm{ Perv}(\mathbb{P}^{n})))=\mathrm{Stab}(\mathrm{D}_{c}(\mathbb{P}^{n}))\).
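For instance, in the case \(n=1\) this chain of identifications says that the wall and chamber structure of \(B(1)\) computed in Example 5.5 and pictured in Figure 1 is an open piece of \(\mathrm{Stab}(\mathrm{D}_{c}(\mathbb{P}^{1}))\), consistently with the comparison with [20, Figure 2] made there.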
2306.16487
Moments of Artin-Schreier L-functions
We compute moments of $L$-functions associated to the polynomial family of Artin--Schreier covers over $\mathbb{F}_q$, where $q$ is a power of a prime $p>2$, when the size of the finite field is fixed and the genus of the family goes to infinity. More specifically, we compute the $k^{\text{th}}$ moment for a large range of values of $k$, depending on the sizes of $p$ and $q$. We also compute the second moment in absolute value of the polynomial family, obtaining an exact formula with a lower order term, and confirming the unitary symmetry type of the family.
Alexandra Florea, Edna Jones, Matilde Lalin
2023-06-28T18:18:24Z
http://arxiv.org/abs/2306.16487v2
# Moments of Artin-Schreier \(L\)-functions ###### Abstract. We compute moments of \(L\)-functions associated to the polynomial, odd polynomial and ordinary families of Artin-Schreier covers over \(\mathbb{F}_{q}\), where \(q\) is a power of a prime \(p\), when the size of the finite field is fixed and the genus of the family goes to infinity. In the polynomial family we compute the \(k^{\text{th}}\) moment for a large range of values of \(k\), depending on the sizes of \(p\) and \(q\). We also compute the second moment in absolute value of the polynomial family, obtaining an exact formula with a lower order term, and confirming the unitary symmetry type of the family. For the odd polynomial family, we obtain asymptotic formulas for the first two moments, in agreement with the symplectic random matrix theory model, and identifying a lower order term in the case of the first moment. We finally obtain an asymptotic formula for the first moment in the ordinary family of Artin-Schreier \(L\)-functions, again explicitly computing a lower order term. Key words and phrases: Artin-Schreier curves; moments of \(L\)-functions; distribution of values of \(L\)-functions 2020 Mathematics Subject Classification: 11G20, 11M50, 14G15 ## 1. Introduction In this paper, we are interested in evaluating moments of \(L\)-functions associated to Artin-Schreier covers of \(\mathbb{P}^{1}\). Computing moments in families of \(L\)-functions has a long history. For example, the moments of the Riemann zeta-function \(\zeta(s)\) were introduced by Hardy and Littlewood [11], who obtained an asymptotic formula for the second moment. The fourth moment was studied in [14, 15]. There has been a wealth of literature on moments in various other families of \(L\)-functions; for a (non-exhaustive) list, see for example [16, 17, 18, 19]. Here, we focus on the moments of \(L\)-functions of Artin-Schreier curves. These form an interesting family with a rich arithmetic structure. Their zeta functions are expressed in terms of additive characters of \(\mathbb{F}_{p}\), not in terms of multiplicative characters (as in the case of hyperelliptic curves or cyclic \(\ell\)-covers, for example). The terms corresponding to a fixed additive character can be expressed as exponential sums. The extra arithmetic structure can be used to refine the Weil bound on Artin-Schreier curves [11]. Statistics of zeros of Artin-Schreier \(L\)-functions have been extensively studied. When the size of the finite field goes to infinity, one can use deep equidistribution results of Katz [19], building on work of Katz-Sarnak [20], to show that the _local_ statistics are given by the corresponding statistics of eigenvalues of random matrices in certain ensembles, depending on the specific family under consideration. When considering Artin-Schreier curves, the \(p\)-rank introduces a stratification of the moduli space of covers of genus \(g\) [21]. For example, \(p\)-rank \(0\) corresponds to the family of _polynomial Artin-Schreier curves_, while, when \((p-1)\) divides the genus, the maximal \(p\)-rank corresponds to the family of _ordinary Artin-Schreier curves_. Using the Katz-Sarnak results, one can show that in the large finite field limit, the local statistics in the polynomial and ordinary families follow the local statistics of the unitary group of random matrices, while the local statistics in the case of the _odd polynomial family_ (which we will define later on) follow those of the symplectic group of random matrices. 
One can also consider the same statistics in the regime when the base finite field is fixed, and the genus of the family goes to infinity, in which case one cannot make use of the equidistribution results. Entin [10] considered the local statistics for the polynomial Artin-Schreier family and showed agreement with the random matrix model; these results were further improved and extended to the ordinary and odd polynomial families in the recent work of Entin and Pirani [11]. The _mesoscopic_ statistics for the ordinary and polynomial families (as well as other \(p\)-rank strata) were considered in [1, BDFL16], where the authors showed that the number of zeros with angles in a prescribed subinterval \(I\) of \([-\pi,\pi]\), whose length is either fixed or goes to \(0\) while \(g|I|\to\infty\) (where \(g\) denotes the genus of the family), has a standard Gaussian distribution. One notices that the mesoscopic scale does not distinguish between the various Artin-Schreier families; hence, the local scale is a finer detector of the family structure. We note that the local statistics of zeros in the fixed finite field limit have been studied over function fields in the case of hyperelliptic curves [12, 13, 14, 15], cyclic \(\ell\)-covers [1, 16], non-cyclic cubic covers of \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\) [1, 16, 17] and Dirichlet \(L\)-functions [1]. The distribution of zeros in the global and mesoscopic regimes was considered in [13] for hyperelliptic curves, in [18] for cyclic \(\ell\)-covers and in [18] for abelian covers of algebraic curves. In this work we compute moments in the families of polynomial, odd polynomial and ordinary Artin-Schreier \(L\)-functions, and show that the moments for the polynomial family behave like the moments of the characteristic polynomials of random matrices in the unitary group, while the moments of the odd polynomial family behave like the moments of random matrices in the symplectic group. Moreover, we check that our answers agree with conjectures about moments [1, 2]. Our results further support the Katz-Sarnak philosophy and agree with the behavior observed in Entin [10] and Entin-Pirani [11] regarding the local statistics of zeros. To describe our results, we first introduce some notation. Let \(p>2\) be an odd prime, and \(q\) a power of \(p\). An Artin-Schreier curve is given by the affine equation \[C_{f}:y^{p}-y=f(x),\] where \(f(x)\in\mathbb{F}_{q}(x)\) is a rational function, together with the automorphism \(y\mapsto y+1\). Let \(p_{1},\ldots,p_{r+1}\) be the poles of \(f(x)\) and let \(d_{j}\) be the order of the pole \(p_{j}\). Then the genus of \(C_{f}\) is given by \[\mathfrak{g}(C_{f})=\frac{p-1}{2}\Big{(}-2+\sum_{j=1}^{r+1}(d_{j}+1)\Big{)}= \frac{p-1}{2}\Big{(}r-1+\sum_{j=1}^{r+1}d_{j}\Big{)}.\] To an Artin-Schreier curve one also associates its \(p\)-rank, which is defined to be the \(\mathbb{Z}/p\)-rank of \(\operatorname{Jac}(C_{f}\times\overline{\mathbb{F}_{q}})[p]\). A curve with \(p\)-rank \(0\) is in the polynomial family, which corresponds to the case in which \(f(x)\) is a polynomial. If we impose the extra condition that \(f(-x)=-f(x)\), then the curve is in the odd polynomial family. When \(p-1\) divides \(\mathfrak{g}(C_{f})\), a curve with \(p\)-rank equal to \(\mathfrak{g}(C_{f})\) is in the ordinary family. 
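For instance, if \(f\in\mathbb{F}_{q}[x]\) is a polynomial of degree \(d\) with \((d,p)=1\), then \(f\) has a single pole, at \(\infty\), of order \(d\), so that \(r=0\) and \(d_{1}=d\) in the formula above, and

\[\mathfrak{g}(C_{f})=\frac{p-1}{2}\Big{(}0-1+d\Big{)}=\frac{(p-1)(d-1)}{2},\]

which is the genus of the curves in the polynomial family defined below.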
The zeta function of \(C_{f}\) is given by \[Z_{C_{f}}(u)=\exp\Big{(}\sum_{k=1}^{\infty}N_{k}(C_{f})\frac{u^{k}}{k}\Big{)},\] where \(N_{k}(C_{f})\) is the number of points on \(C_{f}\) over \(\mathbb{F}_{q^{k}}\). By the Weil conjectures, it follows that \[Z_{C_{f}}(u)=\frac{\mathcal{L}(u,C_{f})}{(1-u)(1-qu)}, \tag{1}\] where \(\mathcal{L}(u,C_{f})\) is the \(L\)-function associated to \(C_{f}\), which is a polynomial of degree \(2\mathfrak{g}(C_{f})\). It further follows that \[\mathcal{L}(u,C_{f})=\prod_{\psi\neq 1}\mathcal{L}(u,f,\psi),\] where \(\psi\) varies over the non-trivial additive characters of \(\mathbb{F}_{p}\) and where \[\mathcal{L}(u,f,\psi)=\exp\Big{(}\sum_{n=1}^{\infty}S_{n}(f,\psi)\frac{u^{n}}{n}\Big{)},\] with \[S_{n}(f,\psi)=\sum_{\alpha\in\mathbb{F}_{q^{n}}}\psi(\operatorname{tr}_{q^{n} /p}(f(\alpha))).\] Here, \(\operatorname{tr}_{q^{n}/p}:\mathbb{F}_{q^{n}}\to\mathbb{F}_{p}\) denotes the absolute trace map. Here, we will consider three families of Artin-Schreier \(L\)-functions. * The _polynomial_ Artin-Schreier family, denoted by \(\mathcal{AS}_{d}^{0}\), is defined for \((d,p)=1\) by (2) \[\mathcal{AS}_{d}^{0}=\Big{\{}f\in\mathbb{F}_{q}[x]\,:\,f(x)=\sum_{j=0}^{d}a_{ j}x^{j},a_{d}\neq 0,a_{j}=0\text{ if }j>0\text{ and }p\mid j\Big{\}}.\] Each curve \(C_{f}\) with \(f\in\mathcal{AS}_{d}^{0}\) has genus \(\mathfrak{g}=(p-1)(d-1)/2\) and \(p\)-rank \(0\). * The _odd polynomial_ Artin-Schreier family, denoted by \(\mathcal{AS}_{d}^{0,\text{odd}}\), is defined for \((d,2p)=1\) by (3) \[\mathcal{AS}_{d}^{0,\text{odd}}=\Big{\{}f\in\mathcal{AS}_{d}^{0}\,:\,f(x)=-f(- x)\Big{\}}.\] * The _ordinary_ Artin-Schreier family, denoted by \(\mathcal{AS}_{d}^{\text{ord}}\), is defined by \[\mathcal{AS}_{d}^{\text{ord}}=\Big{\{}f=h/g:h,g\in\mathbb{F}_{q}[x],g\text{ monic square-free},(h,g)=1,\] (4) \[(\deg(g)=d,\deg(h)\leq d)\text{ or }(\deg(g)=d-1,\deg(h)=d)\Big{\}}.\] Each curve \(C_{f}\) with \(f\in\mathcal{AS}_{d}^{\text{ord}}\) has genus \(\mathfrak{g}=(p-1)(d-1)\) and has \(p\)-rank equal to \(\mathfrak{g}(C_{f})\). We will compute moments in each of the families above, and show that their behavior is given by that of random unitary matrices in the cases of the polynomial family and the ordinary family, and that of symplectic matrices in the case of the odd polynomial family. More precisely, we will prove the following theorems. Throughout, fix a non-trivial additive character \(\psi\) of \(\mathbb{F}_{p}\). Our results do not depend on the choice of \(\psi\). In the case of the polynomial family, we prove the following. **Theorem 1.1**.: _Let \(d\) be such that \((d,p)=1\). For an integer \(k\geq 2\), we have_ \[\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}}\mathcal{L}\Big{(} \frac{1}{\sqrt{q}},f,\psi\Big{)}^{k}=\prod_{P}\left(\frac{1}{p}\sum_{\ell=0}^{p -1}\left(1-\frac{\xi_{p}^{\ell}}{\sqrt{|P|}}\right)^{-k}\right)+O\Big{(}q^{ \frac{d}{2}\left(\frac{k+1}{p}-1\right)}d^{k+1}k^{d(k+1)+k}\Big{)},\] _where \(\xi_{p}\) denotes a primitive \(p^{\text{th}}\) root of unity._ _When \(k=1\), we have_ \[\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}}\mathcal{L} \Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=(1-q^{-1})\frac{1-q^{\left(1-\frac{p} {2}\right)\left(\left\lfloor\frac{d}{p}\right\rfloor+1\right)}}{(1-q^{\left(1 -\frac{p}{2}\right)})(1-q^{-\frac{p}{2}})}.\] **Remark 1.2**.: _In order to get an asymptotic formula above for \(k\geq 2\), we need_ \[(k+1)\Big{(}\log_{q}k+\frac{1}{2p}\Big{)}<\frac{1}{2}-\epsilon,\] _for any \(\epsilon>0\). 
Note that in the expression above, the greater \(\log_{p}q\) is, the more moments we can compute. When \(q\) is a very large power of \(p\), one would be able to compute roughly \(p\) moments, while the case \(q=p\) would allow for a more restricted range of moments._ We also consider the second moment in absolute value for the polynomial family, which is the more standard moment to consider in the case of a family with unitary symmetry. In this case, we obtain an exact formula with a lower order term of size \(dq^{\frac{d}{p}-\frac{d}{2}}\) as follows. For \(m\in\mathbb{Z}\), let \([m]_{p}\) denote the element of \(\{0,1,\ldots,p-1\}\) such that \(m\equiv[m]_{p}\,(\operatorname{mod}p)\). **Theorem 1.3**.: _Let \(d\) be such that \((d,p)=1\). We have_ \[\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}} \left|\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)\right|^{2}= \frac{(1-q^{1-p})d}{(1-q^{1-\frac{p}{2}})^{2}}-\frac{2pq^{1-\frac{ p}{2}}(1-q^{-\frac{p}{2}})}{(1-q^{1-\frac{p}{2}})^{3}}\] \[+dq^{(d+p-[d]_{p})\left(\frac{1}{p}-\frac{1}{2}\right)}\left(1- \frac{1}{q}\right)\left(\frac{1}{(1-q^{1-\frac{p}{2}})^{2}}-\frac{[d]_{p}}{p( 1-q^{1-\frac{p}{2}})}\right)\] \[+C_{d}\frac{q^{(d-1)\left(\frac{1}{p}-\frac{1}{2}\right)}}{p},\] _where \(C_{d}\) is a constant depending solely on \([d]_{p}\). More precisely,_ \[C_{d}= \frac{1-q^{-\frac{p}{2}}}{1-q^{1-\frac{p}{2}}}S_{2}(d-1,q^{\frac{ 1}{2}-\frac{1}{p}})+S_{2}(d-1,q^{\frac{1}{2}-\frac{1}{p}})+\frac{q-1}{pq}S_{2} (d-1,q^{\frac{1}{2}-\frac{1}{p}})(-1-[d]_{p})\] \[-\frac{q-1}{pq}S_{2}(-1,q^{\frac{1}{2}-\frac{1}{p}})S_{1}(d,q^{ \frac{1}{2}-\frac{1}{p}})+\frac{q-1}{pq}S_{3}(d-1,q^{\frac{1}{2}-\frac{1}{p}} )-\frac{q-1}{pq}S_{3}(d-2,q^{\frac{1}{2}-\frac{1}{p}}),\] _where the formula for \(S_{\ell}(n,x)\) is given by Lemma 2.8._ For the odd polynomial family, we prove the following results. **Theorem 1.4**.: _Let \(d\) be such that \((d,2p)=1\). We have_ \[\frac{1}{|\mathcal{AS}_{d}^{0,odd}|}\sum_{f\in\mathcal{AS}_{d}^{0, odd}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}= \frac{d\left(1-\frac{1}{q}\right)\left(1-q^{-\frac{p}{2}}\right)}{2 \left(1-q^{1-\frac{p}{2}}\right)}-\frac{pq^{1-\frac{p}{2}}\left(1-\frac{1}{q} \right)^{2}}{2\left(1-q^{1-\frac{p}{2}}\right)^{2}}\] \[+\frac{\left(1+\frac{1}{q}\right)+2\left(1-\frac{1}{q}\right)q^{ 1-\frac{p}{2}}-\left(1+\frac{1}{q}\right)q^{1-p}}{2\left(1-q^{2-p}\right)}+D_ {d}q^{d\left(\frac{1}{p}-\frac{1}{2}\right)},\] _where \(D_{d}\) is a constant depending solely on \([d]_{p}\). More precisely,_ \[D_{d}= \frac{q-1}{pq}\left(S_{2}\left(\frac{d+p}{2},q^{1-\frac{2}{p}} \right)-q^{-\frac{2}{p}}S_{2}\left(\frac{d+p-2}{2},q^{1-\frac{2}{p}}\right)\right.\] \[\left.+q^{\frac{1}{2}-\frac{1}{p}}S_{2}\left(\frac{d-1}{2},q^{1- \frac{2}{p}}\right)-q^{\frac{1}{2}-\frac{3}{p}}S_{2}\left(\frac{d-3}{2},q^{1- \frac{2}{p}}\right)\right),\] _where the formula for \(S_{\ell}(n,x)\) is given by Lemma 2.8._ Finally, we obtain an asymptotic formula for the second moment in the odd polynomial family. **Theorem 1.5**.: _Let \(d\) be such that \((d,2p)=1\). For any \(\epsilon>0\), we have_ \[\frac{1}{|\mathcal{AS}_{d}^{0,odd}|}\sum_{f\in\mathcal{AS}_{d}^{0,odd}} \mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}^{2}=P(d)+O\Big{(}q^{\frac{d }{p}-\frac{d}{2}+\epsilon d}\Big{)},\] _where \(P(d)\) is an explicit degree \(3\) polynomial, whose leading coefficient is given by equation (33)._ We will also prove the following about the ordinary family. 
**Theorem 1.6**.: _We have_ \[\frac{1}{|\mathcal{AS}_{d}^{ord}|}\sum_{f\in\mathcal{AS}_{d}^{ord}}\mathcal{L }\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=\frac{\mathcal{E}(1/q,1)}{\mathcal{ G}(1/q^{2})(1-q^{-\frac{p}{2}})(1-q^{1-\frac{p}{2}})}+c_{0}q^{d(\frac{2}{p}-1)}+O \Big{(}q^{d(\frac{2}{p}-\frac{3}{2}+\epsilon)}\Big{)},\] _with \(\mathcal{E}(w,u)\) given in (50), \(\mathcal{G}(u)\) given in (13), and \(c_{0}\) a specific constant (see equation (55))._ Proving Theorems 1.1, 1.3, 1.4, 1.5, and 1.6 relies on using different techniques depending on the family or moment under consideration. The proofs of Theorems 1.1, 1.4, 1.5, and 1.6 have as starting point the relationship between the \(L\)-function of an Artin-Schreier curve and the \(L\)-function of a multiplicative character associated to each such curve, described explicitly in [10] and [EP]. However, the details vary significantly depending on the specific subfamily one is working with. In the case of the polynomial family, one can roughly express the \(k^{\text{th}}\) moment in terms of the \(k^{\text{th}}\) moment of multiplicative characters _of order \(p\)_ modulo \(x^{d}\). Computing moments of \(L\)-functions associated to fixed order characters is generally a difficult problem; moments of quadratic \(L\)-functions over function fields are relatively well-understood (see, for example, [1, 12, 13, 14]). Some partial results are known towards moments of cubic \(L\)-functions (see [10]), and much less is known about higher order characters. However, in the case of Artin-Schreier \(L\)-functions, the multiplicative characters of order \(p\) under consideration are modulo \(x^{d}\), which is special. In this case, one can use the dichotomy exploited by Keating and Rudnick in [10] to express sums of the \(k^{\text{th}}\) divisor function in arithmetic progressions in terms of short interval sums over function fields. One can then use strong results about the sum of the generalized divisor function in short intervals over function fields due to Sawin [11]. We note that both the Artin-Schreier polynomial family and the Artin-Schreier ordinary family are expected to have unitary symmetry; hence one might expect the results in these two settings to be of similar quality. However, one notices that in the case of the ordinary family, we only compute the first moment, while many moments are computed in the case of the polynomial family. This is explained by the fact that the multiplicative characters associated to the Artin-Schreier curves in these two different subfamilies have a different structure. In the case of the polynomial family, as mentioned before, one deals with characters modulo \(x^{d}\) of order \(p\); however, for the ordinary family, the associated characters are no longer modulo powers of \(x\), so one cannot use strong results about the divisor function in short intervals. Proving Theorem 1.3 requires different techniques, as one cannot rely on results about the divisor function in short intervals in this case either. Instead, we use the approximate functional equation to write the absolute value squared of the \(L\)-function in terms of sums of length roughly \(d\) (note that the \(L\)-function is a polynomial of degree approximately \(d\)), and then one uses orthogonality relations for the sums over additive characters as in the work in [1]. 
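To fix ideas, we note that the \(k^{\text{th}}\) divisor function \(d_{k}\) (defined in Section 2) enters all of these computations through the expansion

\[\mathcal{L}(u,\chi)^{k}=\prod_{P}\Big{(}1-\chi(P)u^{\deg(P)}\Big{)}^{-k}=\sum_{F\text{ monic}}d_{k}(F)\chi(F)u^{\deg(F)},\]

valid for any Dirichlet character \(\chi\), where \(P\) runs over monic irreducible polynomials; the \(k^{\text{th}}\) moment thus reduces to averaging \(d_{k}\) twisted by the characters attached to the family, and sums of exactly this shape are treated in Section 3.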
We remark that in Theorems 1.3, 1.4, and 1.6 we computed specific lower order terms of size \(X^{\frac{1}{p}-\frac{1}{2}}\), where \(X\) is roughly the size of the family (in the case of the polynomial and odd polynomial families, \(X=q^{d}\) and in the case of the ordinary family, \(X=q^{2d}\)). With some extra work, one could compute a lower order term of similar size for the second moment in the odd polynomial family as well, but the computations are already laborious, so we have decided not to do that here. We remark that one could consider the moments with absolute value in the family of ordinary Artin-Schreier curves, i.e., one could consider the moments of \(|\mathcal{L}(q^{-1/2},f,\psi)|^{2k}\). However, that would require obtaining an analogue of Lemma 4.2 in the case of the ordinary family. We note that Lemma 4.2 follows from Lemma 9.1 in [1]; an analogue of that result is proven in Lemma 8.5 in [1] for the case of the ordinary family. However, the range of parameters in that result is not strong enough to allow us to compute the moments with the absolute value. We plan to return to this question in the future. The paper is organized as follows. In Section 2 we provide some background on Artin-Schreier \(L\)-functions and gather the results we will need from the work in [EP]. In Sections 3 and 4 we prove Theorems 1.1 and 1.3 respectively. In Section 5 we prove Theorems 1.4 and 1.5, and we prove Theorem 1.6 in Section 6. We finally check that our results match the Random Matrix Theory predictions in Section 7. ## Acknowledgements M. Lalin thanks Julio Andrade, Alina Bucur, Chantal David, Brooke Feigon, and Jonathan Keating for preliminary discussions (which motivated this paper) about moments of Artin-Schreier \(L\)-functions which took place mainly at the American Institute of Mathematics during the workshop "Arithmetic Statistics over finite fields and function fields" in 2014. The study of this problem was further motivated by a talk Alexei Entin gave at the workshop "Moments of \(L\)-functions" held at the University of Northern British Columbia in August 2022, which the first and third authors attended. A substantial part of this work was completed during the "Women in Numbers 6" workshop at the Banff International Research Station for Mathematical Innovation and Discovery, and the authors would like to thank the organizers and BIRS for the excellent working conditions. This work is supported by the National Science Foundation [DMS-2101769 to AF], the Natural Sciences and Engineering Research Council of Canada [Discovery Grant 355412-2022 to ML], the Fonds de recherche du Quebec - Nature et technologies [Projet de recherche en equipe 300951 to ML]. ## 2. Artin-Schreier curves and \(L\)-functions Here, we will give some basic properties of Artin-Schreier \(L\)-functions and their associated characters. ### Some generalities of function field arithmetic We first introduce some notation and basic objects of study. Let \(\mathcal{M}\) denote the set of monic polynomials in \(\mathbb{F}_{q}[x]\), \(\mathcal{M}_{n}\) the set of monic polynomials of degree \(n\) in \(\mathbb{F}_{q}[x]\) and \(\mathcal{M}_{\leq n}\) the set of monic polynomials of degree less than or equal to \(n\). For \(f\) a polynomial in \(\mathbb{F}_{q}[x]\), let \(d_{k}(f)\) denote the \(k^{\text{th}}\) divisor function (i.e., \(d_{k}(f)=\sum_{f_{1}\cdots f_{k}=f}1\)). The zeta-function of \(\mathbb{F}_{q}[x]\) is defined by \[\zeta_{q}(s)=\sum_{f\in\mathcal{M}}\frac{1}{|f|^{s}},\] for \(\text{Re}(s)>1\). 
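Indeed, there are exactly \(q^{n}\) monic polynomials of degree \(n\), each of norm \(|f|=q^{n}\), so that

\[\zeta_{q}(s)=\sum_{n=0}^{\infty}q^{n}\cdot q^{-ns}=\sum_{n=0}^{\infty}q^{n(1-s)}.\]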
Summing the geometric series, it follows that \[\zeta_{q}(s)=\frac{1}{1-q^{1-s}},\] and this provides a meromorphic continuation of \(\zeta_{q}(s)\), with a simple pole at \(s=1\). Making the change of variables \(u=q^{-s}\), the zeta-function becomes \[\mathcal{Z}(u)=\sum_{f\in\mathcal{M}}u^{\deg(f)}=\prod_{P}\Big{(}1-u^{\deg(P) }\Big{)}^{-1},\] for \(|u|<1/q\), where the Euler product above is over monic, irreducible polynomials. One then obtains the expression \[\mathcal{Z}(u)=\frac{1}{1-qu}\] for the zeta-function in the whole complex plane, with a simple pole at \(u=\frac{1}{q}\). Note that throughout the paper, we will often use Perron's formula over function fields. Namely, if \(\mathcal{A}(u)=\sum_{f\in\mathcal{M}}a(f)u^{\deg(f)}\) is absolutely convergent in \(|u|\leq r<1\), then \[\sum_{f\in\mathcal{M}_{n}}a(f)=\frac{1}{2\pi i}\oint\frac{\mathcal{A}(u)}{u^{ n+1}}\,du,\ \ \sum_{f\in\mathcal{M}_{\leq n}}a(f)=\frac{1}{2\pi i}\oint\frac{\mathcal{A}(u)}{u^ {n+1}(1-u)}\,du. \tag{5}\] ### Artin-Schreier \(L\)-functions Recall that we are considering curves given by the affine equation \[C_{f}:y^{p}-y=f(x),\] where \(f(x)\in\mathbb{F}_{q}(x)\) is a rational function, together with the automorphism \(y\mapsto y+1\). Recall the definition (1) of the \(L\)-function \(\mathcal{L}(u,f,\psi)\), where \(\psi\) is a non-trivial additive character of \(\mathbb{F}_{p}\). We also have the following Euler product for the \(L\)-function: \[\mathcal{L}(u,f,\psi)=\prod_{P}\Big{(}1-\psi_{f}(P)u^{\deg(P)}\Big{)}^{-1},\] where \[\psi_{f}(P)=\psi\Big{(}\operatorname{tr}_{q/p}\Big{(}\sum_{\begin{subarray}{c}\alpha\in\mathbb{F}_{q^{\deg P}}\\ P(\alpha)=0\end{subarray}}f(\alpha)\Big{)}\Big{)}=\psi(\operatorname{tr}_{q^{\deg P}/p}f(\alpha))\text{ for any root $\alpha$ of $P$},\] and where \(\operatorname{tr}_{q^{n}/p}:\mathbb{F}_{q^{n}}\to\mathbb{F}_{p}\) is the absolute trace map. We extend \(\psi_{f}\) to a completely multiplicative function on \(\mathbb{F}_{q}(x)\). On the other hand, we also have that the \(L\)-function is a polynomial of degree \(d-1\), so we can write \[\mathcal{L}(u,f,\psi)=\sum_{j=0}^{d-1}a_{j}(f,\psi)u^{j},\] where \[a_{j}(f,\psi)=\sum_{F\in\mathcal{M}_{j}}\psi_{f}(F).\] The functional equation of \(\mathcal{L}(u,f,\psi)\) is given by \[\mathcal{L}(u,f,\psi)=\epsilon(f)(qu^{2})^{\frac{d-1}{2}}\mathcal{L}\Big{(} \frac{1}{qu},f,\overline{\psi}\Big{)}. \tag{6}\] (See [11, Section 3].) We will now explain how to associate a multiplicative character to each Artin-Schreier \(L\)-function. Before that, we quickly recall some basic facts about Dirichlet characters over \(\mathbb{F}_{q}[x]\) and their \(L\)-functions. ### Multiplicative characters and their \(L\)-functions Let \(Q(x)\) denote a monic polynomial in \(\mathbb{F}_{q}[x]\). A Dirichlet character modulo \(Q\) is defined to be a character of the multiplicative group \((\mathbb{F}_{q}[x]/Q)^{\times}\), extended to a completely multiplicative function by \(\chi(g)=0\) for any \((g,Q)\neq 1\) and \(\chi(g)=\chi(g\,(\operatorname{mod}Q))\) if \((g,Q)=1\). A Dirichlet character is even if \(\chi(cF)=\chi(F)\) for any \(0\neq c\in\mathbb{F}_{q}\), and odd otherwise. A character is primitive if there is no proper divisor \(Q_{1}|Q\) such that \(\chi(g)=1\) whenever \((g,Q)=1\) and \(g\equiv 1\,(\operatorname{mod}Q_{1})\). If \(Q\in\mathcal{M}\), and \(H\) denotes a subset of the group of characters modulo \(Q\), we denote by \(H^{\operatorname{pr}}\) the set of primitive characters in \(H\). 
The \(L\)-function associated to a Dirichlet character \(\chi\) modulo \(Q\) is given by \[\mathcal{L}(u,\chi)=\sum_{F\in\mathcal{M}}\chi(F)u^{\deg(F)}=\prod_{P}\Big{(}1 -\chi(P)u^{\deg(P)}\Big{)}^{-1},\] where the product is over monic irreducible polynomials \(P\). Using orthogonality of characters, it follows that if \(\chi\) is non-principal, \(\mathcal{L}(u,\chi)\) is a polynomial of degree at most \(\deg(Q)-1\). If \(\chi\) is a primitive character, then the Riemann hypothesis for \(\mathcal{L}(u,\chi)\) is true, and we can write \[\mathcal{L}(u,\chi)=(1-u)^{\epsilon(\chi)}\prod_{j=1}^{\deg(Q)-1-\epsilon(\chi)} \Big{(}1-u\sqrt{q}\rho_{j}\Big{)},\] where \(|\rho_{j}|=1\) are the normalized inverse roots of \(\mathcal{L}(u,\chi)\), and where \(\epsilon(\chi)=1\) if \(\chi\) is even and \(\epsilon(\chi)=0\) otherwise. ### Multiplicative characters associated to Artin-Schreier curves Here, we gather a few results we need from the work of Entin [12] and Entin-Pirani [EP], which relate the Artin-Schreier \(L\)-functions to Dirichlet \(L\)-functions. For \((d,p)=1\), let \[\mathcal{F}_{d}=\Big{\{}f\in\mathbb{F}_{q}[x]\,:\,f(x)=\sum_{j=0}^{d}a_{j}x^{ j},a_{d}\neq 0,a_{j}=0\text{ if }p\mid j\Big{\}}.\] Note that we have \[|\mathcal{F}_{d}|=(q-1)q^{d-\lfloor\frac{d}{p}\rfloor-1}.\] Using (2), we can write \[\mathcal{AS}_{d}^{0}=\bigsqcup_{b\in\mathbb{F}_{q}}\{f+b\,:\,f\in\mathcal{F}_{ d}\}. \tag{7}\] We have the following result from [EP]. **Lemma 2.1**.: _[_EP_, Lemma 2.1]_ _For \(f\in\mathbb{F}_{q}(x)\setminus\mathbb{F}_{q}\) which is not of the form \(f=h^{p}-h\) for \(h\in\mathbb{F}_{q}(x)\), we have \(\mathcal{L}(u,f+b,\psi)=\mathcal{L}(\psi(\operatorname{tr}_{q/p}(b))\cdot u,f,\psi)\)._ For \(c\in\mathbb{F}_{q}[x]\), if \(x|c\), we put \(\chi_{f}(c)=0\); otherwise, let \[\chi_{f}(c):=\psi\Big{(}\operatorname{tr}_{q/p}\Big{(}\sum_{c(\alpha)=0}f( \alpha)\Big{)}\Big{)}.\] We also have the following. **Proposition 2.2**.: _[_12_, Lemmata 7.1, 7.2]_ _Assume that \((d,p)=1\), and \(f\in\mathcal{F}_{d}\). Then_ * \(\chi_{f}(c)\) _is a primitive Dirichlet character modulo_ \(x^{d+1}\) _of order_ \(p\)_. (In particular_ \(\chi_{f}\) _is even since_ \((p,q-1)=1\)_, and_ \(|\mathbb{F}_{q}^{*}|=q-1\)_.)_ * _The map_ \[\mathcal{F}_{d}\to\{\text{primitive Dirichlet characters modulo $x^{d+1}$ of order $p$}\}\] _given by_ \(f\mapsto\chi_{f}\) _is a bijection._ * \(\mathcal{L}(u,\chi_{f})=(1-u)\mathcal{L}(u,f,\psi)\)_._ Now we consider the odd polynomial family. Recall that in this case, \((d,2p)=1\). We have \[|\mathcal{AS}_{d}^{0,\text{odd}}|=(q-1)q^{\frac{d-1}{2}-\lfloor\frac{d-1}{p} \rfloor+\lfloor\frac{d-1}{2p}\rfloor}. \tag{8}\] **Lemma 2.3**.: _[_EP_, Lemma 2.4]_ _For \(f\in\mathcal{AS}_{d}^{0,\text{odd}}\), we have \(\mathcal{L}(u,f,\psi)\in\mathbb{R}[u]\)._ Following [EP], for an abelian group \(A\) and a group of characters \(B\subseteq A^{*}\), let \[B^{\perp}=\{a\in A\,:\,\chi(a)=1\text{ for all }\chi\in B\}.\] The orthogonality relations imply that \[\frac{1}{|B|}\sum_{\chi\in B}\chi(a)=\begin{cases}1&\text{ if }a\in B^{\perp}, \\ 0&\text{ if }a\notin B^{\perp}.\end{cases} \tag{9}\] Let \[H_{\text{odd}}=\{\chi_{f}\,:\,f\in\mathcal{AS}_{e}^{0,\text{odd}},1\leq e\leq d \}\cup\{1\}.\] By Proposition 2.5 in [EP], we have that \(H_{\text{odd}}\) is a subgroup of the group of characters of order \(p\) modulo \(x^{d+1}\). 
Moreover, \[H_{\text{odd}}^{\text{pr}}=\{\chi_{f}\,:\,f\in\mathcal{AS}_{d}^{0,\text{odd}},\deg(f)=d\}.\] Now let \(H_{\text{odd},d}\) denote the set of characters modulo \(x^{d}\) which induce a character in \(H_{\text{odd}}\). We have the following statement. **Lemma 2.4**.: _[_EP_, Lemma 2.6]_ _Let \(Q\) denote \(x^{d}\) or \(x^{d+1}\), and let \(H_{\text{odd},Q}\) denote the set of characters modulo \(Q\) which induce a character in \(H_{\text{odd}}\). Then \(H_{\text{odd},Q}^{\perp}=RS\), where_ \[R =\{f\in\mathbb{F}_{q}[x]\,:\,f(0)\neq 0,f(x)\equiv g(x^{2})\,( \operatorname{mod}Q),\text{ for some }g\in\mathbb{F}_{q}[x]\},\] \[S =\{f\in\mathbb{F}_{q}[x]\,:\,f(0)\neq 0,f(x)\equiv g(x^{p})\,( \operatorname{mod}Q),\text{ for some }g\in\mathbb{F}_{q}[x]\}.\] We finally consider the ordinary Artin-Schreier family (4). For any monic square-free \(g\) of degree \(d\) or \(d-1\), we define \[\mathcal{AS}_{d,g}^{\text{ord}}=\{f=h/g\,:\,h\in\mathbb{F}_{q}[x],(h,g)=1,f\in \mathcal{AS}_{d}^{\text{ord}}\}.\] Following [EP], for \(g\) square-free of degree \(d\) or \(d-1\), let \[\mathcal{H}_{g}=\{f=h/g\,:\,h\in\mathbb{F}_{q}[x],(h,g)=1,\deg(h)<\deg(g)\} \subset\mathcal{AS}_{d,g}^{\text{ord}}.\] Note that \[|\mathcal{H}_{g}|=\phi(g). \tag{10}\] First assume that \(\deg(g)=d\). We then have that \[\mathcal{AS}_{d,g}^{\text{ord}}=\bigsqcup_{b\in\mathbb{F}_{q}}\{f+b\,:\,f\in \mathcal{H}_{g}\}, \tag{11}\] so from (10), we have that \[|\mathcal{AS}_{d,g}^{\text{ord}}|=q\phi(g).\] Now assume that \(\deg(g)=d-1\). As in [EP], we have \[\mathcal{AS}_{d,g}^{\text{ord}}=\bigsqcup_{\begin{subarray}{c}a\in\mathbb{F}_{ q}^{*}\\ b\in\mathbb{F}_{q}\end{subarray}}\{f+b\,:\,f\in\mathcal{H}_{g}^{a}\},\] where \[\mathcal{H}_{g}^{a}=\{f+ax\,:\,f\in\mathcal{H}_{g}\}.\] Then note that in this case, \[|\mathcal{AS}_{d,g}^{\text{ord}}|=q(q-1)\phi(g).\] **Lemma 2.5**.: _We have that_ \[|\mathcal{AS}_{d}^{\text{ord}}|=\mathcal{G}(1/q^{2})(q^{2}+q-1)q^{2d-1}+O\left(q^{ \frac{3d}{2}+\epsilon d}\right),\] _where \(\mathcal{G}(u)\) is given by (13)._ Proof.: Note that \[|\mathcal{AS}_{d}^{\text{ord}}|=\sum_{\begin{subarray}{c}g\in\mathcal{M}_{d} \\ g\text{ square-free}\end{subarray}}q\phi(g)+\sum_{\begin{subarray}{c}g\in \mathcal{M}_{d-1}\\ g\text{ square-free}\end{subarray}}q(q-1)\phi(g). \tag{12}\] The generating function for the sum over \(g\) is \[\sum_{\begin{subarray}{c}g\in\mathcal{M}\\ g\text{ square-free}\end{subarray}}\phi(g)u^{\deg(g)}=\prod_{P}\left(1+(|P|-1)u^{ \deg(P)}\right)=\mathcal{Z}(qu)\mathcal{G}(u),\] where \[\mathcal{G}(u):=\prod_{P}\left(1-u^{\deg(P)}-|P|(|P|-1)u^{2\deg(P)}\right) \tag{13}\] converges absolutely for \(|u|<q^{-\frac{3}{2}}\). Then applying Perron's formula (5), we have \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\phi(g)=\frac{1}{2\pi i}\oint\frac{ \mathcal{G}(u)}{(1-q^{2}u)u^{n+1}}du,\] where the integral is taken over a small circle around the origin. Shifting the contour of integration over to \(|u|=q^{-\frac{3}{2}-\epsilon}\), we pick up the pole at \(u=q^{-2}\) and get \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\phi(g)=\mathcal{G}(1/q^{2})q^{2n}+O\left(q^{ \frac{3n}{2}+\epsilon n}\right).\] The result follows by substituting the above into equation (12). 
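Explicitly, the sums over \(g\in\mathcal{M}_{d}\) and \(g\in\mathcal{M}_{d-1}\) in (12) contribute \(q\cdot\mathcal{G}(1/q^{2})q^{2d}\) and \(q(q-1)\cdot\mathcal{G}(1/q^{2})q^{2d-2}\) respectively, and

\[q\cdot q^{2d}+q(q-1)\cdot q^{2d-2}=\big{(}q^{2}+q-1\big{)}q^{2d-1},\]

which is the main term of Lemma 2.5.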
For \(f=h/g\in\mathcal{AS}_{d}^{\text{ord}}\), and \(c\in\mathbb{F}_{q}[x]\), define \[\chi_{f}(c)=\begin{cases}\psi\Big{(}\operatorname{tr}_{q/p}\Big{(}\sum_{c( \alpha)=0}f(\alpha)\Big{)}\Big{)}&\text{ if }(c,g)=1,\\ 0&\text{ if }(c,g)\neq 1,\end{cases} \tag{14}\] as in equation (2.7) in [EP]. We have the following result ([EP, Proposition 2.8]). **Proposition 2.6**.: _Let \(f=h/g\in\mathcal{AS}_{d}^{\text{ord}}\). Then \(\mathcal{L}(u,\chi_{f})=(1-\delta(f)u)\mathcal{L}(u,f,\psi)\), where_ \[\delta(f)=\begin{cases}\psi(\operatorname{tr}_{q/p}f(\infty))&\text{ if }\deg(h)=\deg(g)=d,\\ 1&\text{ otherwise.}\end{cases}\] Propositions 2.9 and 2.10 from [EP] can be summarized as follows. **Proposition 2.7**.: _The map \(f\mapsto\chi_{f}\) gives a bijection between \(\mathcal{H}_{g}\) and the set of primitive Dirichlet characters modulo \(g^{2}\) of order \(p\). In particular, the character \(\chi_{f}\) is even._ We remark that, while the above statement in [EP] is given in the context of \(\deg(g)=d\), the argument for the proof works similarly for the case \(\deg(g)=d-1\). However, while the statement itself is still true, it remains to understand the nature of the characters that are defined for \(f=\frac{h}{g}\) when \(\deg(h)=d\) and \(\deg(g)=d-1\). Recall that we have claimed that such an \(f\) can be described as an element of \(\mathcal{H}_{g}+ax+b\) with \(a\in\mathbb{F}_{q}^{*}\) and \(b\in\mathbb{F}_{q}\). To understand this, consider the division algorithm \(h(x)=(ax+b)g(x)+r(x)\), where \(r(x)=0\) or \(\deg(r)<\deg(g)=d-1\). Dividing by \(g(x)\), we obtain \(f(x)=ax+b+f_{0}(x)\), where \(f_{0}(x)=\frac{r(x)}{g(x)}\). In order to better describe the character arising from \(f\), it is convenient to work in \(\mathbb{P}^{1}(\mathbb{F}_{q})\) and with homogeneous polynomials in two variables \(X\) and \(Z\). To this end, we consider the homogenization of the terms in the above equation by making the change \(x\mapsto\frac{X}{Z}\), namely \[F(X,Z)=\frac{aX}{Z}+b+F_{0}(X,Z), \tag{15}\] where \[F(X,Z)=\frac{h(X/Z)}{g(X/Z)}=\frac{H(X,Z)}{ZG(X,Z)},\quad H(X,Z)=h(X/Z)Z^{d}, G(X,Z)=g(X/Z)Z^{d-1},\] and similarly for \(F_{0}(X,Z)\). In this context, the associated character can be described as \[\chi_{F}(C)=\begin{cases}\psi\Big{(}\operatorname{tr}_{q/p}\Big{(}\sum_{[ \alpha_{X}:\alpha_{Z}]\in\mathbb{P}^{1}(\overline{\mathbb{F}_{q}})\atop C( \alpha_{X},\alpha_{Z})=0}F(\alpha_{X},\alpha_{Z})\Big{)}\Big{)}&\text{ if }(C,G)=1,\\ 0&\text{ if }(C,G)\neq 1,\end{cases} \tag{16}\] for \(C(X,Z)\in\mathbb{F}_{q}[X,Z]\). By (15), we have that \[\chi_{F}=\chi_{\frac{aX}{Z}}\chi_{b}\chi_{F_{0}}.\] Note that Proposition 2.7 can be applied to view \(\chi_{F_{0}}=\chi_{f_{0}}\) as a character modulo \(g^{2}\) (where \(\deg(g)=d-1\) in this case). We also have that \(\chi_{b}\) is defined as usual. It remains to understand \(\chi_{\frac{aX}{Z}}\). To this end, we look at the definition given in (16). Notice that we have two cases, according to whether \((C,Z)=1\) or not. If \((C,Z)\neq 1\), then \(C\) does not arise from homogenizing a polynomial \(c\in\mathbb{F}_{q}[x]\), since it has an extra \(Z\) factor, and in this case, the character takes the value zero. The polynomials arising from \(c\in\mathbb{F}_{q}[x]\) yield \(C(X,Z)\) that are coprime to \(Z\). In this case, we can proceed to evaluate the first line of (16), and we can go back to the original definition given in (14). 
This gives \[\chi_{ax}(c)=\psi\Big{(}\operatorname{tr}_{q/p}\Big{(}\sum_{c(\alpha)=0}a \alpha\Big{)}\Big{)}.\] We remark that the character \(\chi_{\frac{aX}{Z}}\) has conductor a power of \(Z\) (that is, the conductor is a power of \(\infty\) when looking in \(\mathbb{F}_{q}[x]\)). To understand this power, we can dehomogenize by setting \(X=1\) instead of \(Z=1\). In this case, \(\chi_{\frac{aX}{Z}}\) corresponds to \(\chi_{\frac{a}{z}}\) and it becomes a character modulo \(z^{2}\), again by Proposition 2.7. ### Sums involving roots of unity Here, we will prove certain results about sums involving roots of unity, which we will use repeatedly throughout the paper. Let \[S_{\ell}(n,x)=\sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j}x)^{\ell}},\] where \(\xi_{p}\) is a non-trivial \(p^{\text{th}}\) root of unity in \(\mathbb{C}\). We will obtain formulas for \(S_{\ell}(n,x)\) for \(\ell=1,2,3\). **Lemma 2.8**.: _For \(|x|>1\) and \(n\in\mathbb{Z}\), we have_ \[S_{1}(n,x)=\frac{px^{[n]_{p}}}{1-x^{p}},\] \[S_{2}(n,x)=px^{[n+1]_{p}-1}\left(\frac{[n+1]_{p}}{1-x^{p}}+\frac{px^{p}}{(1-x^ {p})^{2}}\right),\] _and_ \[S_{3}(n,x)=\frac{px^{[n+2]_{p}-2}}{2}\left(\frac{[n+2]_{p}([n+2]_{p}-1)}{1-x^{ p}}+\frac{p(2[n+2]_{p}+p-1)x^{p}}{(1-x^{p})^{2}}+\frac{2p^{2}x^{2p}}{(1-x^{p})^{ 3}}\right).\] Proof.: We have \[S_{1}(n,x)= \sum_{j=0}^{p-1}\frac{1}{(1-\xi_{p}^{j}x)\xi_{p}^{nj}}=-\frac{1}{ x}\sum_{k=0}^{\infty}x^{-k}\sum_{j=0}^{p-1}\xi_{p}^{-(n+1+k)j}\] \[= -\frac{p}{x}\sum_{\begin{subarray}{c}k=0\\ k\equiv-n-1\,(\text{mod}\,p)\end{subarray}}^{\infty}x^{-k}=-\frac{px^{-[-n-1]_ {p}-1}}{1-x^{-p}},\] since the inner sum in the first line is equal to \(p\) when \(n+1+k\) is divisible by \(p\), and \(0\) otherwise. Notice that \([-n-1]_{p}+1=p-[n]_{p}\), giving \[S_{1}(n,x)=-\frac{px^{[n]_{p}}}{x^{p}-1}.\] The expressions of \(S_{2}(n,x)\) and \(S_{3}(n,x)\) can be obtained by using the fact that \[S_{\ell+1}(n,x)=\frac{1}{\ell}\frac{\partial}{\partial x}S_{\ell}(n+1,x)\] when \(\ell>0\). We will also need the case \(x=1\). More precisely, we prove the following result. **Lemma 2.9**.: _We have_ \[\sum_{j=1}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j})}=\frac{p-1}{2}-[n]_{p}.\] Proof.: Notice that for \(0<a\leq p-1\), \[\sum_{j=1}^{p-1}\frac{1-\xi_{p}^{aj}}{(1-\xi_{p}^{j})}=\sum_{j=1}^{p-1}(1+\xi_ {p}^{j}+\cdots+\xi_{p}^{(a-1)j})=p-a.\] By looking at the logarithmic derivative at \(x=1\) of \(x^{p-1}+\cdots+x+1=\prod_{j=1}^{p-1}(x-\xi_{p}^{j})\), we have \[\sum_{j=1}^{p-1}\frac{1}{(1-\xi_{p}^{j})}=\frac{p-1}{2}.\] We conclude by combining the above equations. ## 3. Proof of Theorem 1.1 Before proving Theorem 1.1, we first state some results we will need and prove some preliminary lemmas. The following estimate for sums of the generalized divisor function in short intervals is due to Sawin [11], and will be crucial in our computations. **Proposition 3.1**.: _([11, Theorem 1.1.]) For natural numbers \(n\), \(h\), \(k\) with \(h<n\) and \(f\) a monic polynomial of degree \(n\) in \(\mathbb{F}_{q}[T]\), we have_ \[\left|\sum_{\begin{subarray}{c}g\in\mathbb{F}_{q}[T]\\ \deg g<h\end{subarray}}d_{k}(f+g)-\binom{n+k-1}{k-1}q^{h}\right|\leq 3\binom{n +k-1}{k-1}(k+2)^{2n-h}q^{\frac{1}{2}\left(h+\left\lfloor\frac{n}{p}\right\rfloor -\left\lfloor\frac{n-h}{p}\right\rfloor+1\right)}.\] We will now prove the following lemma. **Lemma 3.2**.: _Let \(n>d\) and let \(A(x)\in\mathbb{F}_{q}[x]\) such that \(A(0)\neq 0\) and \(\deg(A)\leq d\). 
Then_ \[\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv A(x)\,(\bmod x^{d+1})\end{subarray}}d_{k}(F)=\binom{n+k-1}{ k-1}q^{n-d}+O\left(\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(n-d+ \left\lfloor\frac{n}{p}\right\rfloor-\left\lfloor\frac{d}{p}\right\rfloor \right)}\right).\] Proof.: Let \[A(x)=\sum_{j=0}^{d}a_{j}x^{j},\] where \(a_{j}\in\mathbb{F}_{q}\), \(a_{0}\neq 0\) (note that \(a_{d}\) can be \(0\)). Also note that we can assume, without loss of generality, that \(a_{0}=1\). Otherwise, we can rewrite the sum as \[\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv A(x)\,(\mathrm{mod}\,x^{d+1})\end{subarray}}d_{k}(F)=\sum_{ \begin{subarray}{c}\deg(F)=n\\ a_{0}^{-1}F(x)\equiv A_{1}(x)\,(\mathrm{mod}\,x^{d+1})\end{subarray}}d_{k}(F),\] where \(A_{1}(x)=a_{0}^{-1}A(x)\). Since \(d_{k}(F)=d_{k}(cF)\) for \(c\in\mathbb{F}_{q}^{*}\), we easily see that \[\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv A(x)\,(\mathrm{mod}\,x^{d+1})\end{subarray}}d_{k}(F)=\sum_{ \begin{subarray}{c}\deg(F)=n\\ F(x)\equiv A_{1}(x)\,(\mathrm{mod}\,x^{d+1})\end{subarray}}d_{k}(F),\] and \(A_{1}(x)\) has the property that its constant coefficient is \(1\). Now since \(F(x)\equiv A(x)\,(\mathrm{mod}\,x^{d+1})\), we write \[F(x)=f_{n}x^{n}+\cdots+f_{d+1}x^{d+1}+a_{d}x^{d}+\cdots+a_{0},\] where \(f_{n}\neq 0\). Now let \[F^{*}(x)=x^{\deg(F)}F\Big{(}\frac{1}{x}\Big{)}\] be the reverse polynomial of \(F\). Then we have \[F^{*}(x)=a_{0}x^{n}+\cdots+a_{d}x^{n-d}+f_{d+1}x^{n-d-1}+\cdots+f_{n},\] and \(\deg(F^{*})=n\), since \(a_{0}\neq 0\). Note that we can write \[F^{*}(x)=x^{n-\deg(A)}A^{*}(x)+g(x),\] where \(g(x)\) varies over polynomials of degree less than \(n-d\). Also note that for \(F(x)\) such that \(F(0)\neq 0\), we have \(d_{k}(F)=d_{k}(F^{*})\). Hence we rewrite \[\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv A(x)\,(\operatorname{mod}x^{d+1})\end{subarray}}d_{k}(F)=\sum_{ \deg(g)<n-d}d_{k}(g(x)+A^{*}(x)x^{n-\deg(A)}).\] Using Proposition 3.1 with \(h=n-d\) and the fact that \(A^{*}(x)x^{n-\deg(A)}\) is monic, the conclusion follows. Now for \(\ell\,(\operatorname{mod}p)\), we let \[\alpha_{k}(\ell)=\sum_{b\in\mathbb{F}_{q}}\frac{\psi(\operatorname{tr}_{q/p}( b))^{\ell}}{\Big{(}1-\frac{\psi(\operatorname{tr}_{q/p}(b))}{\sqrt{q}}\Big{)}^{k}}. \tag{17}\] We will prove the following. **Lemma 3.3**.: _For \(\ell\,(\operatorname{mod}p)\), we have_ \[\alpha_{k}(\ell)=\frac{q}{p}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j\ell}}{\big{(}1-q^ {-1/2}\xi_{p}^{j}\big{)}^{k}}.\] Proof.: We have \[\alpha_{k}(\ell) =\sum_{b\in\mathbb{F}_{q}}\psi(\operatorname{tr}_{q/p}(b))^{\ell} \left(1-\frac{\psi(\operatorname{tr}_{q/p}(b))}{\sqrt{q}}\right)^{-k}\] \[=\sum_{b\in\mathbb{F}_{q}}\psi(\operatorname{tr}_{q/p}(b))^{\ell} \sum_{h=0}^{\infty}\binom{-k}{h}\left(-\frac{\psi(\operatorname{tr}_{q/p}(b)) }{\sqrt{q}}\right)^{h}\] by the binomial theorem. Therefore, \[\alpha_{k}(\ell)=\sum_{h=0}^{\infty}\binom{-k}{h}\left(-q^{-1/2}\right)^{h}\sum_{b\in\mathbb{ F}_{q}}\psi(\operatorname{tr}_{q/p}(b))^{\ell+h}.\] The inner sum above is equal to \(q\) when \(\ell+h\) is divisible by \(p\). Otherwise, the inner sum is equal to zero since \(\psi\) is a nontrivial additive character. 
Therefore, \[\alpha_{k}(\ell)=\frac{q}{p}\sum_{h=0}^{\infty}\binom{-k}{h}\left(-q^{-1/2}\right)^{h}\sum_{j =0}^{p-1}\xi_{p}^{j(\ell+h)}.\] We switch the order of summation and use the binomial theorem again to conclude that \[\alpha_{k}(\ell)=\frac{q}{p}\sum_{j=0}^{p-1}\xi_{p}^{j\ell}\sum_{h=0}^{\infty} \binom{-k}{h}\left(-q^{-1/2}\xi_{p}^{j}\right)^{h}=\frac{q}{p}\sum_{j=0}^{p-1} \frac{\xi_{p}^{j\ell}}{\left(1-q^{-\frac{1}{2}}\xi_{p}^{j}\right)^{k}}.\] We are now ready to begin the proof of Theorem 1.1. Proof of Theorem 1.1.: Using (7), Lemma 2.1, and Proposition 2.2, we write \[\frac{1}{|\mathcal{A}\mathcal{S}_{d}^{0}|}\sum_{f\in\mathcal{A} \mathcal{S}_{d}^{0}}\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)^{k}= \frac{1}{q|\mathcal{F}_{d}|}\sum_{b\in\mathbb{F}_{q}}\sum_{f\in \mathcal{F}_{d}}\mathcal{L}\left(\frac{1}{\sqrt{q}},f+b,\psi\right)^{k}\] \[= \frac{1}{q|\mathcal{F}_{d}|}\sum_{b\in\mathbb{F}_{q}}\sum_{f\in \mathcal{F}_{d}}\mathcal{L}\left(\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}, f,\psi\right)^{k}\] \[= \frac{1}{q|\mathcal{F}_{d}|}\sum_{b\in\mathbb{F}_{q}}\frac{1}{ \left(1-\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}\right)^{k}}\sum_{f\in \mathcal{F}_{d}}\mathcal{L}\left(\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}, \chi_{f}\right)^{k}\] \[= \frac{1}{q|\mathcal{F}_{d}|}\sum_{b\in\mathbb{F}_{q}}\frac{1}{ \left(1-\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}\right)^{k}}\sum_{f\in \mathcal{F}_{d}}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(0)\neq 0\end{subarray}}\frac{d_{k}(F)\chi_{f}(F)\psi(\mathrm{tr}_{q/p}(b))^{ \mathrm{deg}(F)}}{\sqrt{|F|}}\] \[= \frac{1}{q|\mathcal{F}_{d}|}\sum_{f\in\mathcal{F}_{d}}\sum_{ \begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(0)\neq 0\end{subarray}}\frac{\alpha_{k}(\deg(F))d_{k}(F)\chi_{f}(F)}{\sqrt{|F|}}. \tag{18}\] Interchanging the sums over \(f\) and \(F\), we then need to study \(\sum_{f\in\mathcal{F}_{d}}\chi_{f}(F)\) for \(F\) fixed. Let \[H_{n}=\{\chi\left(\operatorname{mod}x^{n}\right)\,:\,\chi^{p}=1\}\] and \[H_{n}^{\mathrm{pr}}=\{\chi\in H_{n}\,:\,\chi\text{ primitive}\}.\] (We will work with both \(n=d\) and \(n=d+1\).) Using Proposition 2.2, there is a bijection between \(\mathcal{F}_{d}\) and \(H_{d+1}^{\mathrm{pr}}\). Moreover, a character in \(H_{d+1}\) that is not primitive is necessarily a character in \(H_{d}\). Thus we have \[H_{d+1}^{\mathrm{pr}}=H_{d+1}\setminus H_{d}.\] It follows that \[\sum_{f\in\mathcal{F}_{d}}\chi_{f}(F)=\sum_{\chi\in H_{d+1}^{\mathrm{pr}}}\chi (F)=\sum_{\chi\in H_{d+1}}\chi(F)-\sum_{\chi\in H_{d}}\chi(F).\] Now, using (9), we have \[\sum_{f\in\mathcal{F}_{d}}\chi_{f}(F)=\begin{cases}0&F\not\in H_{d}^{\perp},\\ -|H_{d}|&F\in H_{d}^{\perp}\setminus H_{d+1}^{\perp},\\ |H_{d+1}|-|H_{d}|&F\in H_{d+1}^{\perp}.\end{cases} \tag{19}\] Notice that \(H_{d}\subseteq H_{d+1}\) implies that \(H_{d+1}^{\perp}\subseteq H_{d}^{\perp}\). Let us compute the order of \(H_{n}\) with \(p\nmid n\). Following the proof of [1, Lemma 7.1], \(|H_{n}|\) equals \(|(\mathbb{F}_{q}[x]/x^{n})^{\times}[p]|\), which corresponds to counting polynomials \(g(x)=\sum_{j=0}^{n-1}c_{j}x^{j}\) such that \(g(x)^{p}\equiv 1\,(\operatorname{mod}x^{n})\) and \(c_{0}\neq 0\) (so that \(g(x)\) is a unit). Taking the \(p\)th power, we see that this imposes the conditions \(c_{0}^{p}=1\) (implying \(c_{0}=1\)) and \(c_{1}^{p}=\dots=c_{\left\lfloor\frac{n-1}{p}\right\rfloor}^{p}=0\) (implying \(c_{1}=\dots=c_{\left\lfloor\frac{n-1}{p}\right\rfloor}=0\)). 
The total count is then \[|H_{n}|=\frac{q^{n-1}(q-1)}{q^{\left\lfloor\frac{n-1}{p}\right\rfloor}(q-1)}=q^{n-1-\left\lfloor\frac{n}{p}\right\rfloor},\] where we have used that, since \(p\nmid n\), we have \(\left\lfloor\frac{n-1}{p}\right\rfloor=\left\lfloor\frac{n}{p}\right\rfloor\). By [EP, Lemma 4.1], \(F(x)\in\mathbb{F}_{q}[x]\) with \(F(0)\neq 0\) satisfies \(\chi(F)=1\) for all \(\chi\in H_{d+1}\) if and only if \(F(x)\equiv R(x^{p})\,(\operatorname{mod}x^{d+1})\) for some \(R(x)\in\mathbb{F}_{q}[x]\) with \(R(0)\neq 0\) and \(\deg(R)\leq\left\lfloor\frac{d}{p}\right\rfloor\). A similar result applies for \(F(x)\in\mathbb{F}_{q}[x]\) with \(F(0)\neq 0\) that satisfies \(\chi(F)=1\) for all \(\chi\in H_{d}\). Putting (18), (19) and the observation above together, we get that \[\frac{1}{|\mathcal{A}\mathcal{S}_{d}^{0}|}\sum_{f\in\mathcal{A}\mathcal{S}_{d}^{0}}\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)^{k}= \frac{|H_{d+1}|}{q|\mathcal{F}_{d}|}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(0)\neq 0\\ F(x)\in H_{d+1}^{\perp}\end{subarray}}\frac{\alpha_{k}(\deg(F))d_{k}(F)}{\sqrt{|F|}}\] \[-\frac{|H_{d}|}{q|\mathcal{F}_{d}|}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(0)\neq 0\\ F(x)\in H_{d}^{\perp}\end{subarray}}\frac{\alpha_{k}(\deg(F))d_{k}(F)}{\sqrt{|F|}}\] \[= \frac{|H_{d+1}|}{q|\mathcal{F}_{d}|}S_{k,d+1}-\frac{|H_{d}|}{q|\mathcal{F}_{d}|}S_{k,d} \tag{20}\] \[= \frac{S_{k,d+1}}{q-1}-\frac{S_{k,d}}{q(q-1)},\] where \[S_{k,d+1}=\sum_{\begin{subarray}{c}\deg(R)\leq\left\lfloor\frac{d}{p}\right\rfloor\\ R(0)\neq 0\end{subarray}}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(x)\equiv R(x^{p})\,(\operatorname{mod}x^{d+1})\end{subarray}}\frac{\alpha_{k}(\deg(F))d_{k}(F)}{\sqrt{|F|}},\] and \[S_{k,d}=\sum_{\begin{subarray}{c}\deg(R)\leq\left\lfloor\frac{d-1}{p}\right\rfloor\\ R(0)\neq 0\end{subarray}}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq kd}\\ F(x)\equiv R(x^{p})\,(\operatorname{mod}x^{d})\end{subarray}}\frac{\alpha_{k}(\deg(F))d_{k}(F)}{\sqrt{|F|}}.\] Notice that since \(p\nmid d\), the condition \(\deg(R)\leq\left\lfloor\frac{d}{p}\right\rfloor\) is equivalent to the condition \(\deg(R)\leq\left\lfloor\frac{d-1}{p}\right\rfloor\). Now note that the terms in the inner sum that satisfy \(\deg(F)<d\) for \(S_{k,d}\) (resp. \(\deg(F)<d+1\) for \(S_{k,d+1}\)) have the property that \(F(x)=R(x^{p})\), and we can write \(R(x^{p})=R_{0}(x)^{p}\) by applying the Frobenius automorphism. When \(k=1\), note that we have \(S_{1,d+1}=S_{1,d}\), and using (20), we have that the moment under consideration equals \[\sum_{\begin{subarray}{c}R\in\mathcal{M}_{\leq\left\lfloor\frac{d}{p}\right\rfloor}\\ R(0)\neq 0\end{subarray}}\frac{\alpha_{1}(\deg(R)p)}{|R|^{\frac{p}{2}}}=\alpha_{1}(0)\sum_{n=0}^{\left\lfloor\frac{d}{p}\right\rfloor}\frac{q^{n-1}(q-1)}{q^{\frac{np}{2}}}=\alpha_{1}(0)(1-q^{-1})\frac{1-q^{\left(1-\frac{p}{2}\right)\left(\left\lfloor\frac{d}{p}\right\rfloor+1\right)}}{1-q^{\left(1-\frac{p}{2}\right)}}.\] Note that Lemmas 3.3 and 2.8 imply \[\alpha_{1}(0)=\frac{q}{p}\sum_{n=0}^{p-1}\frac{1}{1-\frac{\xi_{p}^{n}}{\sqrt{q}}}=-\frac{q^{\frac{3}{2}}}{p}\sum_{m=0}^{p-1}\frac{\xi_{p}^{m}}{1-\xi_{p}^{m}\sqrt{q}}=-\frac{q^{\frac{3}{2}}}{p}S_{1}\Big{(}-1,\sqrt{q}\Big{)}=\frac{q}{1-q^{-\frac{p}{2}}}.\] Putting the above together finishes the proof of Theorem 1.1 in the case \(k=1\). Now we consider \(k>1\). We evaluate the terms of \(S_{k,d+1}\) with \(\deg(F)=n>d\).
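Before carrying out the \(k>1\) evaluation, here is a quick brute-force check of the count \(|H_{n}|=q^{n-1-\left\lfloor\frac{n}{p}\right\rfloor}\) obtained above. This is purely illustrative and not part of the proof; the sketch assumes \(q=p\) prime, so that elements of \(\mathbb{F}_{q}\) are just integers modulo \(p\).

```python
from itertools import product

def mul_mod_xn(a, b, p, n):
    # multiply two coefficient lists over F_p, truncating modulo x^n
    res = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < n:
                    res[i + j] = (res[i + j] + ai * bj) % p
    return res

def count_H(p, n):
    # count units g modulo x^n over F_p with g^p = 1 (mod x^n)
    one = [1] + [0] * (n - 1)
    count = 0
    for coeffs in product(range(p), repeat=n):
        if coeffs[0] == 0:
            continue  # g must be a unit, so its constant term is nonzero
        g, gp = list(coeffs), one
        for _ in range(p):
            gp = mul_mod_xn(gp, g, p, n)
        count += gp == one
    return count

for p, n in [(2, 3), (2, 5), (3, 4), (5, 2)]:  # cases with p not dividing n
    assert count_H(p, n) == p ** (n - 1 - n // p)
print("count matches q^(n - 1 - floor(n/p)) in all test cases")
```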
Using Lemma 3.2, we have that \[\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv R(x^{p})\,(\bmod x^{d+1})\end{subarray}}d_{k}(F)=\binom{n+k-1}{k-1}q^{n-d}+O\left(\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(n-d+\left\lfloor\frac{n}{p}\right\rfloor-\left\lfloor\frac{d}{p}\right\rfloor\right)}\right).\] Notice that the above expression is independent of \(R(x^{p})\). Moreover, \(d_{k}(F)=d_{k}(cF)\) for any \(c\in\mathbb{F}_{q}^{*}\) and similarly \(|F|=|cF|\). Thus, we get the same value if we sum over non-monic \(F\) and divide by \(q-1\) to account for the leading coefficient. Putting all of this together, we have \[S_{k,d+1}= \alpha_{k}(0)\sum_{\begin{subarray}{c}R_{0}\in\mathcal{M}_{\leq\left\lfloor\frac{d}{p}\right\rfloor}\\ R_{0}(0)\neq 0\end{subarray}}\frac{d_{k}(R_{0}^{p})}{|R_{0}|^{\frac{p}{2}}}+\frac{1}{q-1}\sum_{\begin{subarray}{c}\deg(R)\leq\left\lfloor\frac{d}{p}\right\rfloor\\ R(0)\neq 0\end{subarray}}\sum_{d<n\leq kd}\frac{\alpha_{k}(n)}{q^{\frac{n}{2}}}\sum_{\begin{subarray}{c}\deg(F)=n\\ F(x)\equiv R(x^{p})\,(\bmod x^{d+1})\end{subarray}}d_{k}(F)\] \[= \alpha_{k}(0)\sum_{\begin{subarray}{c}R_{0}\in\mathcal{M}_{\leq\left\lfloor\frac{d}{p}\right\rfloor}\\ R_{0}(0)\neq 0\end{subarray}}\frac{d_{k}(R_{0}^{p})}{|R_{0}|^{\frac{p}{2}}}+q^{\left\lfloor\frac{d}{p}\right\rfloor}\sum_{d<n\leq kd}\frac{\alpha_{k}(n)}{q^{\frac{n}{2}}}\binom{n+k-1}{k-1}q^{n-d}\] \[+O\left(q^{\left\lfloor\frac{d}{p}\right\rfloor}\sum_{d<n\leq kd}\frac{1}{q^{\frac{n}{2}}}\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(n-d+\left\lfloor\frac{n}{p}\right\rfloor-\left\lfloor\frac{d}{p}\right\rfloor\right)}\right).\] Similarly, \[S_{k,d}= \alpha_{k}(0)\sum_{\begin{subarray}{c}R_{0}\in\mathcal{M}_{\leq\left\lfloor\frac{d}{p}\right\rfloor}\\ R_{0}(0)\neq 0\end{subarray}}\frac{d_{k}(R_{0}^{p})}{|R_{0}|^{\frac{p}{2}}}+q^{\left\lfloor\frac{d}{p}\right\rfloor}\sum_{d\leq n\leq kd}\frac{\alpha_{k}(n)}{q^{\frac{n}{2}}}\binom{n+k-1}{k-1}q^{n-d+1}\] \[+O\left(q^{\left\lfloor\frac{d}{p}\right\rfloor}\sum_{d\leq n\leq kd}\frac{1}{q^{\frac{n}{2}}}\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(n-d+1+\left\lfloor\frac{n}{p}\right\rfloor-\left\lfloor\frac{d}{p}\right\rfloor\right)}\right).\] Finally \[\frac{S_{k,d+1}}{q-1}-\frac{S_{k,d}}{q(q-1)}= \frac{\alpha_{k}(0)}{q}\sum_{\begin{subarray}{c}R_{0}\in\mathcal{M}_{\leq\left\lfloor\frac{d}{p}\right\rfloor}\\ R_{0}(0)\neq 0\end{subarray}}\frac{d_{k}(R_{0}^{p})}{|R_{0}|^{\frac{p}{2}}}-\frac{q^{\left\lfloor\frac{d}{p}\right\rfloor-\frac{d}{2}}}{q-1}\alpha_{k}(d)\binom{d+k-1}{k-1}\] \[+O\left(\sum_{d\leq n\leq kd}\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(-d+\left\lfloor\frac{n}{p}\right\rfloor+\left\lfloor\frac{d}{p}\right\rfloor\right)}\right).\] The second term above is bounded by \(\ll q^{\frac{d}{p}-\frac{d}{2}}d^{k-1}\).
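As a sanity check on the main term produced by Lemma 3.2, note that summing over all congruence classes recovers the exact identity \(\sum_{F\in\mathcal{M}_{n}}d_{k}(F)=\binom{n+k-1}{k-1}q^{n}\), which follows from expanding \(\mathcal{Z}(u)^{k}\). The following is a hedged brute-force verification of that identity for tiny parameters (illustrative only; the factorization routine is naive trial division, not part of the argument).

```python
from itertools import product
from math import comb

def poldivmod(a, b, p):
    # divide a by the monic polynomial b over F_p
    # (coefficient lists, lowest degree first); returns (quotient, remainder)
    a, db = a[:], len(b) - 1
    q = [0] * (len(a) - db)
    for i in range(len(a) - db - 1, -1, -1):
        c = a[i + db]
        q[i] = c
        if c:
            for j, bj in enumerate(b):
                a[i + j] = (a[i + j] - c * bj) % p
    return q, a[:db]

def monics(deg, p):
    # all monic polynomials of the given degree over F_p
    for tail in product(range(p), repeat=deg):
        yield list(tail) + [1]

p, n, k = 3, 3, 2

# monic irreducibles of degree <= n: f is irreducible iff no irreducible
# of degree <= deg(f)/2 divides it
irr = []
for deg in range(1, n + 1):
    for f in monics(deg, p):
        if all(any(poldivmod(f, P, p)[1]) for P in irr if 2 * (len(P) - 1) <= deg):
            irr.append(f)

def d_k(f, k):
    # d_k is multiplicative, with d_k(P^e) = C(e + k - 1, k - 1)
    val = 1
    for P in irr:
        e = 0
        while len(f) >= len(P):
            quo, rem = poldivmod(f, P, p)
            if any(rem):
                break
            f, e = quo, e + 1
        val *= comb(e + k - 1, k - 1)
    return val

total = sum(d_k(f, k) for f in monics(n, p))
assert total == comb(n + k - 1, k - 1) * p ** n
print("sum of d_k over M_n matches binom(n+k-1, k-1) * q^n")
```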
Now we also bound \[\sum_{d\leq n\leq kd}\binom{n+k-1}{k-1}(k+2)^{n+d}q^{\frac{1}{2}\left(-d+\left \lfloor\frac{n}{p}\right\rfloor+\left\lfloor\frac{d}{p}\right\rfloor\right)} \ll q^{\frac{d}{2}\left(\frac{k+1}{p}-1\right)}d^{k+1}k^{d(k+1)+k},\] and note that the second error term above dominates the first error term. Hence we get that \[\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}} \mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)^{k}= \frac{\alpha_{k}(0)}{q}\sum_{\begin{subarray}{c}R\in\mathcal{M }\\ R(0)\neq 0\end{subarray}}\frac{d_{k}(R^{p})}{|R|^{\frac{p}{2}}}+O\Bigg{(}q^{ \frac{d}{2}\left(\frac{k+1}{p}-1\right)}d^{k+1}k^{d(k+1)+k}\Bigg{)}.\] We further write the main term above as \[\frac{\alpha_{k}(0)}{q}\sum_{\begin{subarray}{c}R\in\mathcal{M }\\ R(0)\neq 0\end{subarray}}\frac{d_{k}(R^{p})}{|R|^{\frac{p}{2}}}= \frac{1}{p}\sum_{\ell=0}^{p-1}\frac{1}{\left(1-\frac{\xi_{p}^{ \ell}}{\sqrt{q}}\right)^{k}}\prod_{P\neq x}\Bigg{(}\frac{1}{p}\sum_{\ell=0}^{ p-1}\frac{1}{\left(1-\frac{\xi_{p}^{\ell}}{\sqrt{|P|}}\right)^{k}}\Bigg{)}\] \[= \prod_{P}\Bigg{(}\frac{1}{p}\sum_{\ell=0}^{p-1}\frac{1}{\left(1- \frac{\xi_{p}^{\ell}}{\sqrt{|P|}}\right)^{k}}\Bigg{)}.\] Combining the two equations above finishes the proof of Theorem 1.1 in the case \(k>1\). ## 4. Proof of Theorem 1.3 We first need to prove the following approximate functional equation. **Lemma 4.1** (Approximate Functional Equation).: _For \(f\in\mathcal{AS}_{d}^{0}\) and \(k\in\mathbb{N}\), we have_ \[\left|\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)\right|^{2 k} =\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq k(d-1)\end{subarray}}\frac{d_{k}(F)d_{k}(H)\psi_{f}(F)\overline{ \psi_{f}}(H)}{\sqrt{|FH|}}\] \[+\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq k(d-1)-1\end{subarray}}\frac{d_{k}(F)d_{k}(H)\psi_{f}(F)\overline{ \psi_{f}}(H)}{\sqrt{|FH|}}.\] Proof.: Using (6), we have \[\left|\mathcal{L}(u,f,\psi)\right|^{2k}=(qu^{2})^{k(d-1)}\Big{|}\mathcal{L} \Big{(}\frac{1}{qu},f,\overline{\psi}\Big{)}\Big{|}^{2k}.\] Now \[\left|\mathcal{L}(u,f,\psi)\right|^{2k}=\sum_{n=0}^{2k(d-1)}u^{n}\sum_{ \begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)=n\end{subarray}}d_{k}(F)d_{k}(H)\psi_{f}(F)\overline{\psi_{f}}(H)=\sum_ {n=0}^{2k(d-1)}a(n)u^{n}.\] From the functional equation above, we get that \[a(n)=q^{n-k(d-1)}\overline{a(2k(d-1)-n)}.\] Using this, and plugging in \(u=\frac{1}{\sqrt{q}}\), we conclude. The following result allows us to compute averages of \(\psi_{f}(F)\) with \(f\) varying over the family \(\mathcal{AS}_{d}^{0}\). Let \[\langle\psi_{f}(F)\rangle_{d}=\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in \mathcal{AS}_{d}^{0}}\psi_{f}(F).\] **Lemma 4.2**.: _Let \(P_{1},\ldots,P_{s}\) be distinct monic irreducible polynomials in \(\mathbb{F}_{q}[x]\) such that \(\deg(P_{1})+\cdots+\deg(P_{s})\leq d\), and \(h_{1},\ldots,h_{s}\) integers. Then_ \[\langle\psi_{f}(P_{1})^{h_{1}}\ldots\psi_{f}(P_{s})^{h_{s}}\rangle_{d}=\left\{ \begin{array}{ll}1&\text{if $p\mid h_{i}$ for $1\leq i\leq s$},\\ 0&\text{otherwise}.\end{array}\right.\] Proof.: This is a simple case of [1, Lemma 9.1]. We are now ready to begin the proof of Theorem 1.3. 
Proof of Theorem 1.3.: Using Lemma 4.1 for \(k=1\), we have \[\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}}\left|\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)\right|^{2}\] \[= \frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}}\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq d-1\end{subarray}}\frac{\psi_{f}(F)\overline{\psi_{f}}(H)}{\sqrt{|FH|}}+\frac{1}{|\mathcal{AS}_{d}^{0}|}\sum_{f\in\mathcal{AS}_{d}^{0}}\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq d-2\end{subarray}}\frac{\psi_{f}(F)\overline{\psi_{f}}(H)}{\sqrt{|FH|}}\] \[=\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq d-1\end{subarray}}\frac{\langle\psi_{f}(F/H)\rangle_{d}}{\sqrt{|FH|}}+\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq d-2\end{subarray}}\frac{\langle\psi_{f}(F/H)\rangle_{d}}{\sqrt{|FH|}}.\] Thus consider the general sum \[S(n):=\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)\leq n\end{subarray}}\frac{\langle\psi_{f}(F/H)\rangle_{d}}{\sqrt{|FH|}}.\] By Lemma 4.2, \(\langle\psi_{f}(F/H)\rangle_{d}\) vanishes unless \(F/H\) is a \(p\)-power, in which case it equals \(1\). Write \(R=(F,H)\) and \(F=F_{1}R\), \(H=H_{1}R\) so that \((F_{1},H_{1})=1\). Thus \(F/H=F_{1}/H_{1}\) and we must have \(F_{1}=F_{0}^{p}\), \(H_{1}=H_{0}^{p}\). We then have to evaluate the term \[S(n)=\sum_{R\in\mathcal{M}_{\leq\frac{n}{2}}}\frac{1}{|R|}\sum_{\begin{subarray}{c}F_{0},H_{0}\in\mathcal{M}\\ p\deg(F_{0}H_{0})\leq n-2\deg(R)\\ (F_{0},H_{0})=1\end{subarray}}\frac{1}{|F_{0}H_{0}|^{\frac{p}{2}}}.\] First we consider the inner sum. Its generating series is given by \[\mathcal{F}(u,v)=\sum_{\begin{subarray}{c}F_{0},H_{0}\in\mathcal{M}\\ (F_{0},H_{0})=1\end{subarray}}\frac{u^{p\deg(F_{0})}v^{p\deg(H_{0})}}{|F_{0}H_{0}|^{\frac{p}{2}}}.\] Then we have \[\mathcal{F}(u,v)=\prod_{Q}\Big{(}1+\sum_{j=1}^{\infty}\frac{u^{pj\deg(Q)}}{|Q|^{\frac{pj}{2}}}+\sum_{j=1}^{\infty}\frac{v^{pj\deg(Q)}}{|Q|^{\frac{pj}{2}}}\Big{)}=\frac{\mathcal{Z}\left(u^{p}/q^{\frac{p}{2}}\right)\mathcal{Z}\left(v^{p}/q^{\frac{p}{2}}\right)}{\mathcal{Z}\left(u^{p}v^{p}/q^{p}\right)}=\frac{1-q^{1-p}u^{p}v^{p}}{(1-q^{1-\frac{p}{2}}u^{p})(1-q^{1-\frac{p}{2}}v^{p})}.\] Now using Perron's formula (5) for the sums over \(F_{0}\) and \(H_{0}\) we get that \[\sum_{\begin{subarray}{c}F_{0},H_{0}\in\mathcal{M}\\ p\deg(F_{0}H_{0})\leq n-2\deg(R)\\ (F_{0},H_{0})=1\end{subarray}}\frac{1}{|F_{0}H_{0}|^{\frac{p}{2}}}=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{\mathcal{F}(u,uv)}{(1-u)(1-v)(uv)^{n-2\deg(R)}}\,\frac{du}{u}\,\frac{dv}{v},\] where the integral takes place over small circles around the origin. Introducing the sum over \(R\) as well and using Perron's formula (5), we get that \[S(n)=\frac{1}{(2\pi i)^{3}}\oint\oint\oint\frac{\mathcal{F}(u,uv)\mathcal{Z}(u^{2}v^{2}z^{2}/q)}{(1-u)(1-v)(1-z)(uvz)^{n}}\,\frac{dz}{z}\,\frac{du}{u}\,\frac{dv}{v}\] \[= \frac{1}{(2\pi i)^{3}}\oint\oint\oint\frac{1-q^{1-p}u^{2p}v^{p}}{(1-q^{1-\frac{p}{2}}u^{p})(1-q^{1-\frac{p}{2}}u^{p}v^{p})(1-u^{2}v^{2}z^{2})(1-u)(1-v)(1-z)(uvz)^{n}}\,\frac{dz}{z}\,\frac{du}{u}\,\frac{dv}{v},\] where the integral takes place over small circles around the origin. Since we need to consider \[S(d-1)+S(d-2),\] we will sum the integral expressions for \(S(n)\) and \(S(n-1)\) and later set \(n=d-1\).
Thus we get \[S(n)+S(n-1)=\] \[\frac{1}{(2\pi i)^{3}}\oint\oint\oint\frac{1-q^{1-p}u^{2p}v^{p}} {(1-q^{1-\frac{p}{2}}u^{p})(1-q^{1-\frac{p}{2}}u^{p}v^{p})(1-uvz)(1-u)(1-v)(1- z)(uvz)^{n}}\,\frac{dz}{z}\,\frac{du}{u}\,\frac{dv}{v}.\] In the integral above, we can choose the contour to be \(|u|=|v|=|z|=q^{-\epsilon}\). In the integral over \(z\), we shift the contour of integration to \(|z|=R\) and \(R\to\infty\). Then the integral over \(z\) is given by the residues at \(z=1\) and \(z=\frac{1}{uv}\). We write \[S(n)+S(n-1)=A+B,\] where \(A\) corresponds to the residue at \(z=1\) and \(B\) corresponds to the residue at \(z=1/(uv)\). We have that \[A=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{1-q^{1-p}u^{2p}v^{p}}{(1-q^{1-\frac{p }{2}}u^{p})(1-q^{1-\frac{p}{2}}u^{p}v^{p})(1-uv)(1-u)(1-v)(uv)^{n}}\,\frac{du} {u}\,\frac{dv}{v},\] and \[B=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{(1-q^{1-p}u^{2p}v^{p})}{(1-q^{1-\frac{ p}{2}}u^{p})(1-q^{1-\frac{p}{2}}u^{p}v^{p})(uv-1)(1-u)(1-v)}\,du\,dv.\] Note that in the integral for \(B\), there are no poles of the integrand inside the contour of integration, so \(B=0\). Hence we have \[S(n)+S(n-1)=A.\] In the expression for \(A\), we shift the contour over \(u\) to \(|u|=R\) and let \(R\to\infty\). We encounter poles at \(u=1,u=\frac{1}{v}\), \(u^{p}v^{p}=q^{\frac{p}{2}-1}\) and \(u^{p}=q^{\frac{p}{2}-1}\). Then we have poles at \(u=1,u=\frac{1}{v},u=q^{\frac{1}{2}-\frac{1}{p}}\xi_{p}^{j},u=q^{\frac{1}{2}- \frac{1}{p}}\xi_{p}^{j}v^{-1}\), for \(j=0,\dots,p-1\). Thus, we have that \[S(n)+S(n-1)=A_{1}+A_{v^{-1}}+\sum_{j=0}^{p-1}\Big{(}A_{\xi_{p}^{j}}+A_{\xi_{p}^{ j}v^{-1}}\Big{)},\] where \(A_{1},A_{v^{-1}}\) are the negative of the residues at \(u=1,u=\frac{1}{v}\) respectively, and \(A_{\xi_{p}^{j}},A_{\xi_{p}^{j}v^{-1}}\) are the negative of the residues at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}},u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1} {p}}v^{-1}\) respectively. We have that \[A_{1}=\frac{1}{2\pi i}\oint\frac{1-q^{1-p}v^{p}}{(1-q^{1-\frac{p}{2}})(1-q^{1 -\frac{p}{2}}v^{p})(1-v)^{2}v^{n}}\,\frac{dv}{v}.\] Now we have a double pole at \(v=1\) and poles at \(v=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\) for \(j=0,\dots,p-1\). We write \[A_{1}=A_{1,1}+\sum_{j=0}^{p-1}A_{1,\xi_{p}^{j}},\] where \(A_{1,1}\) corresponds to the pole at \(v=1\), and \(A_{1,\xi_{p}^{j}}\) corresponds to the pole at \(v=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\). We have \[A_{1,1}= \frac{(1-q^{1-p})(n+1)}{(1-q^{1-\frac{p}{2}})^{2}}-\frac{pq^{1- \frac{p}{2}}(1-q^{-\frac{p}{2}})}{(1-q^{1-\frac{p}{2}})^{3}},\] and \[A_{1,\xi_{p}^{j}}= \frac{1-q^{-\frac{p}{2}}}{(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{ p}})^{2}\xi_{p}^{nj}q^{n\left(\frac{1}{2}-\frac{1}{p}\right)}(1-q^{1-\frac{p}{2}} )\prod_{\ell\neq j}(1-\xi_{p}^{j-\ell})}\] \[= \frac{1-q^{-\frac{p}{2}}}{(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{ p}})^{2}\xi_{p}^{nj}q^{n\left(\frac{1}{2}-\frac{1}{p}\right)}(1-q^{1-\frac{p}{2}} )p}.\] The sum over these residues gives \[\sum_{j=0}^{p-1}A_{1,\xi_{p}^{j}}= q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}\frac{1-q^{-\frac{p}{2}}}{p (1-q^{1-\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{1}{(1-\xi_{p}^{j}q^{\frac{1}{2}- \frac{1}{p}})^{2}\xi_{p}^{nj}}\] \[= q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}\frac{1-q^{-\frac{p}{2 }}}{p(1-q^{1-\frac{p}{2}})}S_{2}(n,q^{\frac{1}{2}-\frac{1}{p}}).\] We now compute \(A_{v^{-1}}\), the negative of the residue of the pole in \(A\) coming from \(u=\frac{1}{v}\). 
We have that \[A_{v^{-1}}= -\frac{1}{(2\pi i)}\oint\frac{v^{p}-q^{1-p}}{(1-q^{1-\frac{p}{2}})(v ^{p}-q^{1-\frac{p}{2}})(1-v)^{2}}\,dv.\] In the expression above, we make the change of variables \(v\mapsto 1/v\). We get that \[A_{v^{-1}}= -\frac{1}{(2\pi i)}\oint\frac{1-v^{p}q^{1-p}}{(1-q^{1-\frac{p}{2}}) (1-v^{p}q^{1-\frac{p}{2}})(1-v)^{2}}\,dv.\] Recall that we are now integrating over \(|v|=q^{\epsilon}\). Hence the integral is equal to the residue of the pole at \(v=1\). This gives \[A_{v^{-1}}=-\frac{pq^{1-\frac{p}{2}}(1-q^{-\frac{p}{2}})}{(1-q^{1-\frac{p}{2}} )^{3}}.\] We consider the negative of the residues at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\) and at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}/v\). For \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\) we get \[A_{\xi_{p}^{j}}=\frac{1}{2\pi i}\oint\frac{(1-q^{-1}v^{p})v^{-n}}{p(1-\xi_{p} ^{j}q^{\frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}v)(1 -v^{p})(1-v)(\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{n}}\,\frac{dv}{v}.\] In the above, we shift the contour of integration to \(|v|=R\) and let \(R\to\infty\). We encounter poles at \(v=\xi_{p}^{k}\) for \(k=0,\dots,p-1\) (a double pole at \(v=1\) and simple poles at \(v=\xi_{p}^{k}\) and \(k=1,\dots,p-1\)). We then have that \[A_{\xi_{p}^{j}}=\sum_{k=0}^{p-1}A_{\xi_{p}^{j},\xi_{p}^{k}},\] where \(A_{\xi_{p}^{j},\xi_{p}^{k}}\) corresponds to the residue at \(v=\xi_{p}^{k}\). Computing the residue at \(v=\xi_{p}^{k}\) for \(k=1,\dots,p-1\) and \(v=1\) we get \[A_{\xi_{p}^{j}} =\sum_{k=1}^{p-1}\frac{(1-q^{-1})q^{n\left(\frac{1}{p}-\frac{1}{2 }\right)}}{p^{2}(1-\xi_{p}^{j+k}q^{\frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{j}q^{ \frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{k})(\xi_{p}^{k+j})^{n}}+\frac{q^{n\left( \frac{1}{p}-\frac{1}{2}\right)}}{p(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})( \xi_{p}^{j})^{n}}\Bigg{(}\frac{n(q-1)}{pq(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1} {p}})}\] \[+\frac{p+q-1+pq-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}(pq+3q+p-3) }{2pq(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{2}}\Bigg{)}.\] Now we want to sum over \(j=0,\ldots,p-1\) and then over \(k=1,\ldots,p-1\). 
Notice that \[\sum_{j=0}^{p-1}\sum_{k=1}^{p-1}\frac{\xi_{p}^{-n(k+j)}}{(1-\xi_{p} ^{j+k}q^{\frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})(1- \xi_{p}^{k})}\] \[= \sum_{j=0}^{p-1}\sum_{k=1}^{p-1}\frac{\xi_{p}^{-n(k+j)-k}}{(1-\xi _{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{2}(1-\xi_{p}^{k})}-\frac{\xi_{p}^{-n(k+j )-k}}{(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{2}(1-\xi_{p}^{j+k}q^{\frac{1 }{2}-\frac{1}{p}})}\] \[= \sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j}q^{\frac{1}{2 }-\frac{1}{p}})^{2}}\sum_{k=1}^{p-1}\frac{\xi_{p}^{-(n+1)k}}{(1-\xi_{p}^{k})}- \sum_{j=0}^{p-1}\sum_{k=0}^{p-1}\frac{\xi_{p}^{-(n+1)(k+j)+j}}{(1-\xi_{p}^{j}q ^{\frac{1}{2}-\frac{1}{p}})^{2}(1-\xi_{p}^{j+k}q^{\frac{1}{2}-\frac{1}{p}})}\] \[+\sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j}q^{\frac{1}{2 }-\frac{1}{p}})^{3}}\] \[= \sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j}q^{\frac{1}{2 }-\frac{1}{p}})^{2}}\sum_{k=1}^{p-1}\frac{\xi_{p}^{-(n+1)k}}{(1-\xi_{p}^{k})}- \sum_{j=0}^{p-1}\frac{\xi_{p}^{j}}{(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}) ^{2}}\sum_{\ell=0}^{p-1}\frac{\xi_{p}^{-(n+1)\ell}}{(1-\xi_{p}^{\ell}q^{\frac{ 1}{2}-\frac{1}{p}})}\] \[+\sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{j}q^{\frac{1}{2 }-\frac{1}{p}})^{3}}\] \[= S_{2}(n,q^{\frac{1}{2}-\frac{1}{p}})\sum_{k=1}^{p-1}\frac{\xi_{p }^{-(n+1)k}}{(1-\xi_{p}^{k})}-S_{2}(-1,q^{\frac{1}{2}-\frac{1}{p}})S_{1}(n+1,q ^{\frac{1}{2}-\frac{1}{p}})+S_{3}(n,q^{\frac{1}{2}-\frac{1}{p}}).\] We also have \[\sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}(pq+p+q-1-\xi_{p}^{j}q^{\frac{ 1}{2}-\frac{1}{p}}(pq+3q+p-3))}{2pq(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^ {3}}\] \[= \sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}(pq+p+q-1)(1-\xi_{p}^{j}q^{ \frac{1}{2}-\frac{1}{p}})}{2pq(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{3}}- \frac{\xi_{p}^{(1-n)j}q^{\frac{1}{2}-\frac{1}{p}}(2q-2)}{2pq(1-\xi_{p}^{j}q^{ \frac{1}{2}-\frac{1}{p}})^{3}}.\] Putting the above together, we have \[\sum_{j=0}^{p-1}A_{\xi_{p}^{j}}= \frac{q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q}\Big{[} S_{2}(n,q^{\frac{1}{2}-\frac{1}{p}})\left(\frac{p-1}{2}-[n+1]_{p}\right)-S_{2}(-1,q^{ \frac{1}{2}-\frac{1}{p}})S_{1}(n+1,q^{\frac{1}{2}-\frac{1}{p}})\] \[+S_{3}(n,q^{\frac{1}{2}-\frac{1}{p}})\Big{]}+\frac{q^{n\left(\frac {1}{p}-\frac{1}{2}\right)}}{p}\Big{[}\frac{n(q-1)}{pq}S_{2}(n,q^{\frac{1}{2}- \frac{1}{p}})+\frac{pq+p+q-1}{2pq}S_{2}(n,q^{\frac{1}{2}-\frac{1}{p}})\] \[-\frac{q^{\frac{1}{2}-\frac{1}{p}}(q-1)}{pq}S_{3}(n-1,q^{\frac{1} {2}-\frac{1}{p}})\Big{]}\] \[= \frac{q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}}{p}S_{2}(n,q^{ \frac{1}{2}-\frac{1}{p}})+\frac{q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1) }{p^{2}q}S_{2}(n,q^{\frac{1}{2}-\frac{1}{p}})(n-[n+1]_{p})\] \[-\frac{q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q}S_{ 2}(-1,q^{\frac{1}{2}-\frac{1}{p}})S_{1}(n+1,q^{\frac{1}{2}-\frac{1}{p}})+\frac {q^{n\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q}S_{3}(n,q^{\frac{1}{2} -\frac{1}{p}})\] \[-\frac{q^{(n-1)\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q }S_{3}(n-1,q^{\frac{1}{2}-\frac{1}{p}}).\] For \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}/v\) we get \[A_{\xi_{p}^{j}v^{-1}}=-\frac{1}{2\pi i}\oint\frac{(1-q^{-1})v^{p}}{p(1-\xi_{p} ^{j}q^{\frac{1}{2}-\frac{1}{p}})(v-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})(v^ {p}-1)(1-v)(\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{n}}dv.\] Note that there is no pole of the integrand inside the contour of integration with \(|v|=q^{-\epsilon}\), so this integral is equal to \(0\). 
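(Sanity check.) The roots-of-unity rearrangement above is easy to get wrong, so here is a small numerical verification, purely illustrative. It reads off from the displays above that \(S_{m}(n,x)=\sum_{j=0}^{p-1}\xi_{p}^{-nj}(1-\xi_{p}^{j}x)^{-m}\); the particular values \(p=5\) and \(x=7^{1/2-1/5}\) (playing the role of \(q^{\frac{1}{2}-\frac{1}{p}}\)) are arbitrary choices for the test.

```python
import cmath

p = 5
xi = [cmath.exp(2j * cmath.pi * a / p) for a in range(p)]

def S(m, n, x):
    # S_m(n, x) = sum_j xi_p^{-nj} / (1 - xi_p^j x)^m, matching the text
    return sum(xi[(-n * j) % p] / (1 - xi[j] * x) ** m for j in range(p))

x = 7 ** (0.5 - 1 / p)  # plays the role of q^{1/2 - 1/p} with q = 7
for n in range(2 * p):
    lhs = sum(
        xi[(-n * (k + j)) % p]
        / ((1 - xi[(j + k) % p] * x) * (1 - xi[j] * x) * (1 - xi[k]))
        for j in range(p)
        for k in range(1, p)
    )
    rhs = (
        S(2, n, x) * sum(xi[(-(n + 1) * k) % p] / (1 - xi[k]) for k in range(1, p))
        - S(2, -1, x) * S(1, n + 1, x)
        + S(3, n, x)
    )
    assert abs(lhs - rhs) < 1e-8, (n, lhs, rhs)
print("double-sum identity verified for p = 5")
```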
Finally, putting all the non-zero residues together and setting \(n+1=d\) gives \[S(d-1)+S(d-2)=A_{1,1}+A_{v^{-1}}+\sum_{j=0}^{p-1}A_{1,\xi_{p}^{j }}+\sum_{j=0}^{p-1}\sum_{k=1}^{p-1}A_{\xi_{p}^{j},\xi_{p}^{k}}\] \[= \frac{d(1-q^{1-p})}{(1-q^{1-\frac{p}{2}})^{2}}-\frac{2pq^{1-\frac {p}{2}}(1-q^{-\frac{p}{2}})}{(1-q^{1-\frac{p}{2}})^{3}}+\frac{q^{(d-1)\left( \frac{1}{p}-\frac{1}{2}\right)}(1-q^{-\frac{p}{2}})}{p(1-q^{1-\frac{p}{2}})}S_ {2}(d-1,q^{\frac{1}{2}-\frac{1}{p}})\] \[+\frac{q^{(d-1)\left(\frac{1}{p}-\frac{1}{2}\right)}}{p}S_{2}(d-1,q^{\frac{1}{2}-\frac{1}{p}})+\frac{q^{(d-1)\left(\frac{1}{p}-\frac{1}{2} \right)}(q-1)}{p^{2}q}S_{2}(d-1,q^{\frac{1}{2}-\frac{1}{p}})(d-1-[d]_{p})\] \[-\frac{q^{(d-1)\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q }S_{2}(-1,q^{\frac{1}{2}-\frac{1}{p}})S_{1}(d,q^{\frac{1}{2}-\frac{1}{p}})+ \frac{q^{(d-1)\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q}S_{3}(d-1,q^ {\frac{1}{2}-\frac{1}{p}})\] \[-\frac{q^{(d-2)\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{p^{2}q }S_{3}(d-2,q^{\frac{1}{2}-\frac{1}{p}}).\] ## 5. Proofs of Theorems 1.4 and 1.5 In this section we consider the case of the odd polynomial Artin-Schreier family defined by (3). Recall that \[H_{\rm odd}=\{\chi_{f}\,:\,f\in\mathcal{AS}_{e}^{0,\rm odd},1\leq e\leq d\}\cup\{1\},\] and by [EP, Proposition 2.5], we have that \(H_{\rm odd}\) is a subgroup of the group of characters of order \(p\) modulo \(x^{d+1}\). Moreover, \[H_{\rm odd}^{\rm pr}=\{\chi_{f}\,:\,f\in\mathcal{AS}_{d}^{0,\rm odd},\deg(f)=d\}.\] As proven in [EP, Proposition 5.1], we have \[|H_{\rm odd,d+1}|=q^{\frac{d+1}{2}-\lfloor\frac{d}{p}\rfloor+\lfloor\frac{d}{2 p}\rfloor},\,|H_{\rm odd,d}|=q^{\frac{d-1}{2}-\lfloor\frac{d}{p}\rfloor+ \lfloor\frac{d}{2p}\rfloor}.\] As before, since \((d,p)=1\), we have \(\left\lfloor\frac{d}{p}\right\rfloor=\left\lfloor\frac{d-1}{p}\right\rfloor\) and \(\left\lfloor\frac{d}{2p}\right\rfloor=\left\lfloor\frac{d-1}{2p}\right\rfloor\), so using (8), we have that \[|\mathcal{AS}_{d}^{0,\rm odd}|=(q-1)|H_{\rm odd,d}|=\frac{q-1}{q}|H_{\rm odd,d+1}|. \tag{21}\] As before, we have that \[H_{\rm odd,d+1}^{\rm pr}=H_{\rm odd,d+1}\setminus H_{\rm odd,d}. \tag{22}\] Now for \(F\) fixed, we have, using (22), \[\sum_{f\in\mathcal{AS}_{d}^{0,\rm odd}}\chi_{f}(F)=\sum_{\chi\in H_{\rm odd, d+1}^{\rm pr}}\chi(F)=\sum_{\chi\in H_{\rm odd,d+1}}\chi(F)-\sum_{\chi\in H_{ \rm odd,d}}\chi(F).\] Similarly as before, using the orthogonality relations (9), we have \[\sum_{f\in\mathcal{AS}_{d}^{0,\rm odd}}\chi_{f}(F)=\begin{cases}0&F\not\in H_ {\rm odd,d}^{\perp},\\ -|H_{\rm odd,d}|&F\in H_{\rm odd,d}^{\perp}\setminus H_{\rm odd,d+1}^{\perp}, \\ |H_{\rm odd,d+1}|-|H_{\rm odd,d}|&F\in H_{\rm odd,d+1}^{\perp}.\end{cases} \tag{23}\] ### The first moment Here, we will prove Theorem 1.4. 
Proof.: By the discussion of Section 2.4, we have that \[\sum_{f\in\mathcal{AS}_{d}^{0,\rm odd}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}= \frac{1}{1-q^{-\frac{1}{2}}}\sum_{f\in\mathcal{AS}_{d}^{0,\rm odd}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},\chi_{f}\Big{)}\] \[= \frac{1}{1-q^{-\frac{1}{2}}}\sum_{f\in\mathcal{AS}_{d}^{0,\rm odd}}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\end{subarray}}\frac{\chi_{f}(F)}{\sqrt{|F|}}.\] From (23), it then follows that \[\frac{1}{|\mathcal{AS}_{d}^{0,\mathrm{odd}}|}\sum_{f\in\mathcal{AS}_{d}^{0,\mathrm{odd}}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)} =\frac{1}{(1-q^{-\frac{1}{2}})|\mathcal{AS}_{d}^{0,\mathrm{odd}}|}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d+1}^{\perp}\end{subarray}}\frac{|H_{\mathrm{odd},d+1}|}{\sqrt{|F|}}-\frac{1}{(1-q^{-\frac{1}{2}})|\mathcal{AS}_{d}^{0,\mathrm{odd}}|}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d}^{\perp}\end{subarray}}\frac{|H_{\mathrm{odd},d}|}{\sqrt{|F|}}\] \[=\frac{1}{1-q^{-\frac{1}{2}}}\Bigg{(}\frac{q}{q-1}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d+1}^{\perp}\end{subarray}}\frac{1}{\sqrt{|F|}}-\frac{1}{q-1}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d}^{\perp}\end{subarray}}\frac{1}{\sqrt{|F|}}\Bigg{)},\] where we used equation (21). Note that also \[\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d+1}^{\perp}\end{subarray}}\frac{1}{\sqrt{|F|}}=\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq d}\\ F(0)\neq 0\\ F\in H_{\mathrm{odd},d}^{\perp}\end{subarray}}\frac{1}{\sqrt{|F|}}.\] Using Lemma 2.4, we get \[\frac{1}{|\mathcal{AS}_{d}^{0,\mathrm{odd}}|}\sum_{f\in\mathcal{AS}_{d}^{0,\mathrm{odd}}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=\frac{1}{(1-q^{-\frac{1}{2}})}\sum_{\begin{subarray}{c}g_{1}\in\mathcal{M}_{\leq\frac{d-1}{2}}\\ g_{1}(0)\neq 0\end{subarray}}\frac{1}{|g_{1}|}\sum_{\begin{subarray}{c}g_{2}\in\mathcal{M}_{\leq\frac{d-2\deg(g_{1})}{p}}\\ g_{2}(0)\neq 0\end{subarray}}\frac{1}{|g_{2}|^{\frac{p}{2}}}.\] Let \(S_{\mathrm{odd},1}\) denote the double sum over \(g_{1},g_{2}\) above. We now evaluate \(S_{\mathrm{odd},1}\). Using Perron's formula (5), we have for the inner sum \[\sum_{\begin{subarray}{c}g_{2}\in\mathcal{M}_{\leq\frac{d-2\deg(g_{1})}{p}}\\ g_{2}(0)\neq 0\end{subarray}}\frac{1}{|g_{2}|^{\frac{p}{2}}}=\frac{1}{2\pi i}\oint\frac{\mathcal{Z}\left(u^{p}/q^{\frac{p}{2}}\right)\left(1-u^{p}/q^{\frac{p}{2}}\right)}{(1-u)u^{d-2\deg(g_{1})}}\frac{du}{u}=\frac{1}{2\pi i}\oint\frac{u^{2\deg(g_{1})}\left(1-q^{-\frac{p}{2}}u^{p}\right)}{\left(1-q^{1-\frac{p}{2}}u^{p}\right)(1-u)u^{d}}\frac{du}{u},\] where we are integrating along a small circle around the origin. Introducing the sum over \(g_{1}\), we have \[\sum_{\begin{subarray}{c}g_{1}\in\mathcal{M}_{\leq\frac{d-1}{2}}\\ g_{1}(0)\neq 0\end{subarray}}\frac{1}{|g_{1}|}\sum_{\begin{subarray}{c}g_{2}\in\mathcal{M}_{\leq\frac{d-2\deg(g_{1})}{p}}\\ g_{2}(0)\neq 0\end{subarray}}\frac{1}{|g_{2}|^{\frac{p}{2}}}=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{\mathcal{Z}\left(u^{2}v^{2}/q\right)(1-u^{2}v^{2}/q)\left(1-q^{-\frac{p}{2}}u^{p}\right)}{(1-v)v^{d}\left(1-q^{1-\frac{p}{2}}u^{p}\right)(1-u)u^{d}}\frac{du}{u}\frac{dv}{v}\] \[= \frac{1}{(2\pi i)^{2}}\oint\oint\frac{(1-u^{2}v^{2}/q)\left(1-q^{-\frac{p}{2}}u^{p}\right)}{(1-u^{2}v^{2})\left(1-q^{1-\frac{p}{2}}u^{p}\right)(1-u)(1-v)(uv)^{d}}\frac{du}{u}\frac{dv}{v}.\] We start by integrating along \(|u|=|v|=q^{-\epsilon}\).
We move the integral over \(v\) to \(|v|=R\) with \(R\to\infty\) and we encounter poles at \(v=1\), and \(v=\pm\frac{1}{u}\). We write \[S_{\mathrm{odd},1}=B_{1}+B_{u^{-1}}+B_{-u^{-1}}, \tag{24}\] where \(B_{1},B_{u^{-1}}\), and \(B_{-u^{-1}}\) correspond to the residues at \(v=1\), \(v=\frac{1}{u}\), and \(v=-\frac{1}{u}\) respectively. For \(v=1\) we have \[B_{1}=\frac{1}{2\pi i}\oint\frac{\left(1-\frac{u^{2}}{q}\right)\left(1-q^{-\frac{p}{2}}u^{p}\right)}{\left(1+u\right)\left(1-u\right)^{2}\left(1-q^{1-\frac{p}{2}}u^{p}\right)u^{d}}\frac{du}{u}.\] Now we shift the integral over \(u\) to \(|u|=R\) with \(R\to\infty\) and we encounter poles at \(u=1\), \(u=-1\) and \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\). We write \[B_{1}=B_{1,1}+B_{1,-1}+\sum_{j=0}^{p-1}B_{1,\xi_{p}^{j}}, \tag{25}\] where \(B_{1,1}\) corresponds to the pole at \(u=1\), \(B_{1,-1}\) corresponds to the pole at \(u=-1\) and \(B_{1,\xi_{p}^{j}}\) corresponds to the pole at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\). For the pole at \(u=-1\), we have \[B_{1,-1}=(-1)^{d}\frac{\left(1-\frac{1}{q}\right)\left(1+q^{-\frac{p}{2}}\right)}{4\left(1+q^{1-\frac{p}{2}}\right)}=-\frac{\left(1-\frac{1}{q}\right)\left(1+q^{-\frac{p}{2}}\right)}{4\left(1+q^{1-\frac{p}{2}}\right)},\] since \(d\) is odd. For the double pole at \(u=1\) we have the contribution \[B_{1,1}= \frac{d\left(1-\frac{1}{q}\right)\left(1-q^{-\frac{p}{2}}\right)}{2\left(1-q^{1-\frac{p}{2}}\right)}+\frac{\left(3+\frac{1}{q}\right)\left(1-q^{-\frac{p}{2}}\right)}{4\left(1-q^{1-\frac{p}{2}}\right)}-\frac{pq^{1-\frac{p}{2}}\left(1-\frac{1}{q}\right)^{2}}{2\left(1-q^{1-\frac{p}{2}}\right)^{2}}.\] For the pole at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\), we get \[B_{1,\xi_{p}^{j}}=\frac{(1-\xi_{p}^{2j}q^{-\frac{2}{p}})\left(1-\frac{1}{q}\right)}{(1+\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{2}p(\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{d}}.\] The sum over these residues gives \[\sum_{j=0}^{p-1}B_{1,\xi_{p}^{j}}= \frac{q^{d\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{pq}\sum_{j=0}^{p-1}\frac{(1-\xi_{p}^{2j}q^{-\frac{2}{p}})}{(1+\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})(1-\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})^{2}\xi_{p}^{dj}}\] \[= \frac{q^{d\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{pq}\sum_{j=0}^{p-1}\frac{(1-\xi_{p}^{2j}q^{-\frac{2}{p}})(1+\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}})\xi_{p}^{-dj}}{(1-\xi_{p}^{2j}q^{1-\frac{2}{p}})^{2}}\] \[= \frac{q^{d\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{pq}\sum_{j=0}^{p-1}\frac{(1-\xi_{p}^{2j}q^{-\frac{2}{p}}+\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}-\xi_{p}^{3j}q^{\frac{1}{2}-\frac{3}{p}})\xi_{p}^{-dj}}{(1-\xi_{p}^{2j}q^{1-\frac{2}{p}})^{2}}.\] We split the above sum into a linear combination of four sums, each of the form \(\sum_{j=0}^{p-1}\frac{\xi_{p}^{-nj}}{(1-\xi_{p}^{2j}q^{1-\frac{2}{p}})^{2}}\). These sums can be rewritten in terms of \(S_{2}(n,q^{1-\frac{2}{p}})\) by modifying the exponent of \(\xi_{p}\) to be a multiple of \(2j\). (As \(j\) goes over the classes modulo \(p\), so does \(2j\).) For example, for the first sum, we write \(\xi_{p}^{-dj}=\xi_{p}^{-(d+p)j}=\xi_{p}^{-\left(\frac{d+p}{2}\right)(2j)}\), since \(d\) is odd.
We get \[\sum_{j=0}^{p-1}B_{1,\xi_{p}^{j}}= \frac{q^{d\left(\frac{1}{p}-\frac{1}{2}\right)}(q-1)}{pq}\left(S_ {2}\left(\frac{d+p}{2},q^{1-\frac{2}{p}}\right)-q^{-\frac{2}{p}}S_{2}\left( \frac{d+p-2}{2},q^{1-\frac{2}{p}}\right)\right.\] \[\left.+q^{\frac{1}{2}-\frac{1}{p}}S_{2}\left(\frac{d-1}{2},q^{1- \frac{2}{p}}\right)-q^{\frac{1}{2}-\frac{3}{p}}S_{2}\left(\frac{d-3}{2},q^{1- \frac{2}{p}}\right)\right).\] For \(v=\frac{1}{u}\), we have \[B_{u^{-1}}= -\frac{1}{2\pi i}\oint\frac{\left(1-\frac{1}{q}\right)\left(1-q^{ -\frac{p}{2}}u^{p}\right)}{2\left(1-q^{1-\frac{p}{2}}u^{p}\right)(1-u)^{2}}du.\] There are no poles of the integrand inside the contour of integration (\(|u|<1\)) so the integral above is \(0\). For \(v=-\frac{1}{u}\), we have \[B_{-u^{-1}}=(-1)^{d}\frac{1}{2\pi i}\oint\frac{\left(1-\frac{1}{q}\right)\left( 1-q^{-\frac{p}{2}}u^{p}\right)}{2\left(1-q^{1-\frac{p}{2}}u^{p}\right)(1-u^{2 })}du\] Similarly as before, this integral is equal to \(0\). Putting together the nonzero residues at \(\pm 1\), we obtain \[B_{1,-1}+B_{1,1}= \frac{d\left(1-\frac{1}{q}\right)\left(1-q^{-\frac{p}{2}}\right) }{2\left(1-q^{1-\frac{p}{2}}\right)}+\frac{\left(3+\frac{1}{q}\right)\left(1- q^{-\frac{p}{2}}\right)}{4\left(1-q^{1-\frac{p}{2}}\right)}-\frac{pq^{1-\frac{p}{2}} \left(1-\frac{1}{q}\right)^{2}}{2\left(1-q^{1-\frac{p}{2}}\right)^{2}}\] \[-\frac{\left(1-\frac{1}{q}\right)\left(1+q^{-\frac{p}{2}}\right) }{4\left(1+q^{1-\frac{p}{2}}\right)}\] \[= \frac{d\left(1-\frac{1}{q}\right)\left(1-q^{-\frac{p}{2}}\right) }{2\left(1-q^{1-\frac{p}{2}}\right)}-\frac{pq^{1-\frac{p}{2}}\left(1-\frac{1} {q}\right)^{2}}{2\left(1-q^{1-\frac{p}{2}}\right)^{2}}+\frac{\left(1+\frac{1} {q}\right)+2\left(1-\frac{1}{q}\right)q^{1-\frac{p}{2}}-\left(1+\frac{1}{q} \right)q^{1-p}}{2\left(1-q^{2-p}\right)}.\] Combining the above, (24), (25) and the observation that \(B_{u^{-1}}=B_{-u^{-1}}=0\) finishes the proof of Theorem 1.4. ### The second moment Here, we will prove Theorem 1.5. We first need the following approximate functional equation. **Lemma 5.1** (Approximate Functional Equation).: _For \(f\in\mathcal{AS}_{d}^{0,odd}\), we have_ \[\mathcal{L}\left(\frac{1}{\sqrt{q}},\chi_{f}\right)^{2}= 2\sum_{F\in\mathcal{M}_{\leq d-2}}\frac{d_{2}(F)\chi_{f}(F)}{ \sqrt{|F|}}+\left(1-q^{-\frac{1}{2}}\right)^{2}\sum_{F\in\mathcal{M}_{d-1}} \frac{d_{2}(F)\chi_{f}(F)}{\sqrt{|F|}}\] \[+q^{-\frac{d}{2}}\sum_{j=0}^{d-2}\left((d-j)\left(q^{-\frac{1}{2} }-q^{\frac{1}{2}}\right)-2\right)\sum_{F\in\mathcal{M}_{j}}d_{2}(F)\chi_{f}(F).\] Proof.: Using Lemma 2.3, for \(f\in\mathcal{AS}_{d}^{0,\text{odd}}\) the coefficients of \(\mathcal{L}(u,f,\psi)\) are real, hence \(\epsilon(f)=1\). Then from (6), we have \[\mathcal{L}(u,f,\psi)^{2}=(qu^{2})^{d-1}\mathcal{L}\Big{(}\frac{1}{qu},f,\psi \Big{)}^{2}.\] Let \(a(n)\) be the coefficients of \(\mathcal{L}(u,f,\psi)^{2}\) and \(b(n)\) those of \(\mathcal{L}(u,\chi_{f})^{2}\). 
We have \[\mathcal{L}(u,f,\psi)^{2}=\sum_{n=0}^{2(d-1)}u^{n}\sum_{\begin{subarray}{c}F,H\in\mathcal{M}\\ \deg(FH)=n\end{subarray}}\psi_{f}(FH)=\sum_{n=0}^{2(d-1)}a(n)u^{n}.\] From the functional equation, we get that \[a(n)=q^{n-(d-1)}a(2(d-1)-n).\] The relationship between \(\mathcal{L}(u,f,\psi)^{2}\) and \(\mathcal{L}(u,\chi_{f})^{2}\) implies that \[\mathcal{L}(u,\chi_{f})^{2}=\sum_{n=0}^{2d}u^{n}\sum_{\deg(F)=n}d_{2}(F)\chi_{f}(F)=\sum_{n=0}^{2d}b(n)u^{n}=(1-u)^{2}\sum_{n=0}^{2d-2}a(n)u^{n}.\] Therefore, we have \[b(n)=a(n)-2a(n-1)+a(n-2),\] where we extend the definition of \(a(n)\) to be zero outside the original definition (i.e., \(a(-1)=0\), etc.). From this, we deduce \[a(n)=\sum_{j=0}^{n}(n-j+1)b(j).\] Using the functional equation for \(\mathcal{L}(u,f,\psi)^{2}\), we have \[\mathcal{L}(u,f,\psi)^{2}= \frac{\mathcal{L}(u,\chi_{f})^{2}}{(1-q^{-\frac{1}{2}})^{2}}=\sum_{n=0}^{d-1}\frac{a(n)}{q^{\frac{n}{2}}}+\sum_{n=0}^{d-2}\frac{a(n)}{q^{\frac{n}{2}}}\] \[= \sum_{n=0}^{d-1}\frac{1}{q^{\frac{n}{2}}}\sum_{j=0}^{n}(n-j+1)b(j)+\sum_{n=0}^{d-2}\frac{1}{q^{\frac{n}{2}}}\sum_{j=0}^{n}(n-j+1)b(j)\] \[= \sum_{j=0}^{d-1}b(j)\sum_{n=j}^{d-1}\frac{n-j+1}{q^{\frac{n}{2}}}+\sum_{j=0}^{d-2}b(j)\sum_{n=j}^{d-2}\frac{n-j+1}{q^{\frac{n}{2}}}\] \[= \sum_{j=0}^{d-1}\frac{b(j)}{q^{\frac{j-1}{2}}}\sum_{\ell=1}^{d-j}\frac{\ell}{q^{\frac{\ell}{2}}}+\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j-1}{2}}}\sum_{\ell=1}^{d-j-1}\frac{\ell}{q^{\frac{\ell}{2}}}.\] Completing the inner sums, \[\frac{\mathcal{L}(u,\chi_{f})^{2}}{(1-q^{-\frac{1}{2}})^{2}}= \sum_{j=0}^{d-1}\frac{b(j)}{q^{\frac{j-1}{2}}}\frac{q^{-\frac{1}{2}}((d-j)q^{-\frac{d-j+1}{2}}-(d-j+1)q^{-\frac{d-j}{2}}+1)}{(1-q^{-\frac{1}{2}})^{2}}\] \[+\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j-1}{2}}}\frac{q^{-\frac{1}{2}}((d-j-1)q^{-\frac{d-j}{2}}-(d-j)q^{-\frac{d-j-1}{2}}+1)}{(1-q^{-\frac{1}{2}})^{2}}.\] By multiplying by \((1-q^{-\frac{1}{2}})^{2}\), we get \[\mathcal{L}(u,\chi_{f})^{2}= \sum_{j=0}^{d-1}\frac{b(j)}{q^{\frac{j}{2}}}((d-j)q^{-\frac{d-j+1}{2}}-(d-j+1)q^{-\frac{d-j}{2}}+1)\] \[+\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j}{2}}}((d-j-1)q^{-\frac{d-j}{2}}-(d-j)q^{-\frac{d-j-1}{2}}+1)\] \[= \sum_{j=0}^{d-1}\frac{b(j)}{q^{\frac{j}{2}}}+\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j}{2}}}\] \[+q^{-\frac{d}{2}}\sum_{j=0}^{d-1}b(j)((d-j)q^{-\frac{1}{2}}-(d-j+1))+q^{-\frac{d}{2}}\sum_{j=0}^{d-2}b(j)((d-j-1)-(d-j)q^{\frac{1}{2}})\] \[= \sum_{j=0}^{d-1}\frac{b(j)}{q^{\frac{j}{2}}}+\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j}{2}}}\] \[+q^{-\frac{d}{2}}\sum_{j=0}^{d-2}b(j)\left((d-j)(q^{-\frac{1}{2}}-q^{\frac{1}{2}})-2\right)+q^{-\frac{d}{2}}b(d-1)(q^{-\frac{1}{2}}-2)\] \[= 2\sum_{j=0}^{d-2}\frac{b(j)}{q^{\frac{j}{2}}}+(1-q^{-\frac{1}{2}})^{2}\frac{b(d-1)}{q^{\frac{d-1}{2}}}+q^{-\frac{d}{2}}\sum_{j=0}^{d-2}b(j)\left((d-j)(q^{-\frac{1}{2}}-q^{\frac{1}{2}})-2\right).\] Setting \(u=\frac{1}{\sqrt{q}}\) completes the proof. Proof of Theorem 1.5.: Using Lemma 5.1, we have \[\frac{1}{|\mathcal{AS}_{d}^{0,\text{odd}}|}\sum_{f\in\mathcal{AS}_{d}^{0,\text{odd}}}\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right)^{2}= \frac{1}{(1-q^{-\frac{1}{2}})^{2}|\mathcal{AS}_{d}^{0,\text{odd}}|}\sum_{f\in\mathcal{AS}_{d}^{0,\text{odd}}}\bigg{[}2\sum_{F\in\mathcal{M}_{\leq d-2}}\frac{d_{2}(F)\chi_{f}(F)}{\sqrt{|F|}}+\left(1-q^{-\frac{1}{2}}\right)^{2}\sum_{F\in\mathcal{M}_{d-1}}\frac{d_{2}(F)\chi_{f}(F)}{\sqrt{|F|}}\] \[+q^{-\frac{d}{2}}\sum_{j=0}^{d-2}\left((d-j)\left(q^{-\frac{1}{2}}-q^{\frac{1}{2}}\right)-2\right)\sum_{F\in\mathcal{M}_{j}}d_{2}(F)\chi_{f}(F)\bigg{]}.
\tag{26}\] Thus, evaluating the second moment reduces to evaluating \[\sum_{f\in\mathcal{AS}_{d}^{0,\text{odd}}}\sum_{F\in\mathcal{M}_{j}}d_{2}(F)\chi_{f}(F)= \sum_{\begin{subarray}{c}F\in\mathcal{M}_{j}\\ F(0)\neq 0\\ F\in H_{\text{odd},d+1}^{\perp}\end{subarray}}d_{2}(F)|H_{\text{odd},d+1}|-\sum_{\begin{subarray}{c}F\in\mathcal{M}_{j}\\ F(0)\neq 0\\ F\in H_{\text{odd},d}^{\perp}\end{subarray}}d_{2}(F)|H_{\text{odd},d}|\] \[= (|H_{\text{odd},d+1}|-|H_{\text{odd},d}|)\sum_{\begin{subarray}{c}g_{1},g_{2}\in\mathcal{M}\\ 2\deg(g_{1})+p\deg(g_{2})=j\\ g_{1}(0)g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x^{p})).\] Using (21), we have \((|H_{\mathrm{odd},d+1}|-|H_{\mathrm{odd},d}|)/|\mathcal{AS}_{d}^{0,\mathrm{odd}}|=1.\) In the above, we can replace \(g_{2}(x^{p})\) by \(g_{2}(x)^{p}\), since the Frobenius automorphism permutes the polynomials \(g_{2}\) of a given degree. Let \[\mathcal{G}(u,v)=\sum_{\begin{subarray}{c}g_{1},g_{2}\in\mathcal{M}\\ g_{1}(0)g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p})u^{2\deg(g_{1})}v^{p\deg(g_{2})}.\] We will prove the following lemma. **Lemma 5.2**.: _We have_ \[\mathcal{G}(u,v)=\mathcal{Z}(u^{2})^{3}\mathcal{L}(u^{2},\chi_{x})(1-u^{2})^{3}\mathcal{H}(u,v),\] _where \(\mathcal{L}(u,\chi_{x})\) is the \(L\)-function associated to the quadratic character \(\chi_{x}\), and where \(\mathcal{H}(u,v)\) is an explicit function given in equation (31) (and also (28), (29), (30)). Moreover, \(\mathcal{H}(u,v)\) has an analytic continuation for \(|u|<1,|v|<q^{-1/p}\)._ Proof.: We have \[\sum_{\begin{subarray}{c}g_{2}\in\mathcal{M}\\ g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p})v^{p\deg(g_{2})}=\prod_{\begin{subarray}{c}P\neq x\\ P|g_{1}(x^{2})\end{subarray}}\sum_{j=0}^{\infty}(jp+1+\operatorname{ord}_{P}(g_{1}(x^{2})))v^{jp\deg(P)}\prod_{\begin{subarray}{c}P\neq x\\ P\nmid g_{1}(x^{2})\end{subarray}}\sum_{j=0}^{\infty}(jp+1)v^{jp\deg(P)}, \tag{27}\] where the products over \(P\) above are over monic, irreducible polynomials. Let \(a(P)\) denote the Euler factor for \(P|g_{1}(x^{2})\). Now we introduce the sum over \(g_{1}\) and we have to evaluate \[\sum_{\begin{subarray}{c}g_{1}\in\mathcal{M}\\ g_{1}(0)\neq 0\end{subarray}}u^{2\deg(g_{1})}\prod_{\begin{subarray}{c}P\neq x\\ P|g_{1}(x^{2})\end{subarray}}a(P)=\prod_{Q\neq x}\Big{(}1+\sum_{k=1}^{\infty}u^{2k\deg(Q)}\prod_{P|Q(x^{2})}a(P)\Big{)},\] where the product is over monic irreducible \(Q\). The product above depends on whether \(Q\) splits or is inert in \(\mathbb{F}_{q}[x][\sqrt{x}]\).
It is indeed equal to \[\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\Big{(}1+\frac{(1-v^{p\deg(Q)})^{2}}{1-v^{p\deg(Q)}+pv^{p\deg(Q)}}\sum_{k=1}^{\infty}u^{2k\deg(Q)}\sum_{j=0}^{\infty}(jp+1+k)v^{2jp\deg(Q)}\Big{)}\] \[\times\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}\Big{(}1+\Big{(}\frac{(1-v^{p\deg(Q)})^{2}}{1-v^{p\deg(Q)}+pv^{p\deg(Q)}}\Big{)}^{2}\sum_{k=1}^{\infty}u^{2k\deg(Q)}\Big{(}\sum_{j=0}^{\infty}(jp+1+k)v^{jp\deg(Q)}\Big{)}^{2}\Big{)}.\] Combining the above and (27), we get \[\mathcal{G}(u,v)=\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}A_{Q}(u,v)\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}B_{Q}(u,v),\] where \[A_{Q}(u,v)=\frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}-\frac{u^{2d}\left(pu^{2d}v^{2dp}-u^{2d}v^{2dp}-pv^{2dp}+2v^{2dp}+u^{2d}-2\right)}{\left(u^{2d}-1\right)^{2}\left(v^{2dp}-1\right)^{2}}, \tag{28}\] and \[B_{Q}(u,v)=\frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}+\frac{u^{2d}}{(1-v^{pd}+pv^{pd})(1-v^{pd})^{2}(1-u^{2d})^{3}}\] \[\quad\times\Big{(}-2p^{2}u^{2d}v^{2dp}+p^{2}u^{4d}v^{2dp}+p^{2}v^{2dp}-6pu^{2d}v^{dp}+6u^{2d}v^{dp}+6pu^{2d}v^{2dp}-3u^{2d}v^{2dp}\] \[\quad+2pu^{4d}v^{dp}-2u^{4d}v^{dp}-2pu^{4d}v^{2dp}+u^{4d}v^{2dp}+4pv^{dp}-8v^{dp}-4pv^{2dp}+4v^{2dp}-3u^{2d}+u^{4d}+4\Big{)}, \tag{29}\] where \(d\) stands for \(\deg(Q)\) in the product. We further write \[\mathcal{G}(u,v)=\prod_{Q\neq x}\frac{1}{(1-u^{2\deg(Q)})^{3}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}(1-u^{2\deg(Q)})^{3}A_{Q}(u,v)\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}(1-u^{2\deg(Q)})^{3}B_{Q}(u,v),\] as well as \[(1-u^{2\deg(Q)})^{3}A_{Q}(u,v)=\frac{C_{Q}(u,v)}{1+u^{2\deg(Q)}},\,(1-u^{2\deg(Q)})^{3}B_{Q}(u,v)=\frac{D_{Q}(u,v)}{1-u^{2\deg(Q)}}. \tag{30}\] Since \[\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\frac{1}{1+u^{2\deg(Q)}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}\frac{1}{1-u^{2\deg(Q)}}=\mathcal{L}(u^{2},\chi_{x}),\] it follows that \[\mathcal{G}(u,v)=\mathcal{Z}(u^{2})^{3}\mathcal{L}(u^{2},\chi_{x})(1-u^{2})^{3}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}C_{Q}(u,v)\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}D_{Q}(u,v).\] Let \[\mathcal{H}(u,v)=\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}C_{Q}(u,v)\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ splits}\end{subarray}}D_{Q}(u,v). \tag{31}\] We claim that \(\mathcal{H}(u,v)\) has an analytic continuation for \(|u|<1,|v|<q^{-1/p}\). Indeed, from (28), (29), (30), note that we can write \[C_{Q}(u,v)= \frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}+\frac{(p+1)v^{pd}+(2p-1)v^{2pd}+(p-1)v^{3pd}}{(1-v^{2dp})^{2}}\Big{(}-2u^{2d}+2u^{6d}-u^{8d}\Big{)}\] \[-\frac{u^{4d}}{(1-v^{2dp})^{2}}+O\Big{(}|u|^{2d}|v|^{2dp}\Big{)}\] \[= \frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}-\frac{u^{4d}}{(1-v^{2pd})^{2}}+O\Big{(}|u|^{2d}|v|^{dp}\Big{)},\] \[D_{Q}(u,v)= \frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}-\frac{(p-1)v^{pd}(2+(p-1)v^{pd})}{(1-v^{pd})^{2}(1-v^{pd}+pv^{pd})}\Big{(}4u^{2d}+4u^{6d}-u^{8d}\Big{)}\] \[+\frac{u^{4d}}{(1-v^{pd})^{2}(1-v^{pd}+pv^{pd})}\Big{(}-1+12(p-1)v^{pd}+6(p-1)^{2}v^{2pd}\Big{)}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\] \[= \frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}-\frac{u^{4d}}{(1-v^{pd})^{2}(1-v^{pd}+pv^{pd})}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}.\] It follows that \(\mathcal{H}(u,v)\) converges absolutely for \(|u|<q^{-1/4},|v|<q^{-1/p}\).
We rewrite \[\mathcal{H}(u,v)= \prod_{Q\neq x}\frac{1-v^{pd}+pv^{pd}}{(1-v^{pd})^{2}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\Big{(}1-\frac{u^{4d}}{(1+v^{pd})^{2}(1-v^{pd}+pv^{pd})}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}\] \[\times\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ split}\end{subarray}}\Big{(}1-\frac{u^{4d}}{(1-v^{pd}+pv^{pd})^{2}}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}. \tag{32}\] Let \(\mathcal{R}(u,v)\) denote the product over \(Q\) inert and \(Q\) split. We have \[\mathcal{R}(u,v)= \prod_{Q\neq x}\frac{1-u^{4d}}{1-u^{8d}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\Big{(}1-\frac{u^{8d}}{(1+v^{pd})^{2}(1-v^{pd}+pv^{pd})}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}\] \[\times\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ split}\end{subarray}}\Big{(}1-\frac{u^{8d}}{(1-v^{pd}+pv^{pd})^{2}}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}\] \[= \frac{1}{(1-u^{4})\mathcal{Z}(u^{4})}\prod_{Q\neq x}\frac{1}{1-u^{8d}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\Big{(}1-\frac{u^{8d}}{(1+v^{pd})^{2}(1-v^{pd}+pv^{pd})}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}\] \[\times\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ split}\end{subarray}}\Big{(}1-\frac{u^{8d}}{(1-v^{pd}+pv^{pd})^{2}}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}.\] Using the above, and (32) it follows that \(\mathcal{H}(u,v)\) has an analytic continuation for \(|u|<q^{-1/8},|v|<q^{-1/p}\). We repeat the argument above, and we get that at step \(n\), \[\mathcal{R}(u,v)= \frac{1}{(1-u^{4})\mathcal{Z}(u^{4})}\prod_{Q\neq x}\frac{1}{1-u^{2^{n}d}}\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ inert}\end{subarray}}\Big{(}1-\frac{u^{2^{n}d}}{(1+v^{pd})^{2}(1-v^{pd}+pv^{pd})}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}\] \[\times\prod_{\begin{subarray}{c}Q\neq x\\ Q\text{ split}\end{subarray}}\Big{(}1-\frac{u^{2^{n}d}}{(1-v^{pd}+pv^{pd})^{2}}+O\Big{(}|u|^{2d}|v|^{pd}\Big{)}\Big{)}.\] Letting \(n\to\infty\), we get that \(\mathcal{H}(u,v)\) has an analytic continuation for \(|u|<1,|v|<q^{-1/p}\). This finishes the proof of Lemma 5.2. Recall that our goal is to evaluate \[\sum_{\begin{subarray}{c}g_{1},g_{2}\in\mathcal{M}\\ 2\deg(g_{1})+p\deg(g_{2})=j\\ g_{1}(0)g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p}).\] Using Perron's formula (5) for the sum over \(g_{2}\), we have \[\sum_{\begin{subarray}{c}g_{2}\in\mathcal{M}\\ p\deg(g_{2})=j-2\deg(g_{1})\\ g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p})=\frac{1}{2\pi i}\oint\frac{\sum_{g_{2}\in\mathcal{M},\,g_{2}(0)\neq 0}\,d_{2}(g_{1}(x^{2})g_{2}(x)^{p})v^{p\deg(g_{2})}}{v^{j-2\deg(g_{1})}}\,\frac{dv}{v}.\] Introducing the sum over \(g_{1}\) and using Perron's formula again, we have \[\sum_{\begin{subarray}{c}g_{1},g_{2}\in\mathcal{M}\\ 2\deg(g_{1})+p\deg(g_{2})=j\\ g_{1}(0)g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p})=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{\mathcal{G}(uv,v)}{u^{2\lfloor\frac{j}{2}\rfloor}(1-u^{2})v^{j}}\,\frac{du}{u}\,\frac{dv}{v}.\] Using Lemma 5.2, we get \[\sum_{\begin{subarray}{c}g_{1},g_{2}\in\mathcal{M}\\ 2\deg(g_{1})+p\deg(g_{2})=j\\ g_{1}(0)g_{2}(0)\neq 0\end{subarray}}d_{2}(g_{1}(x^{2})g_{2}(x)^{p})=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{(1-u^{2}v^{2})^{3}\mathcal{L}(u^{2}v^{2},\chi_{x})\mathcal{H}(uv,v)}{u^{2\lfloor\frac{j}{2}\rfloor}(1-qu^{2}v^{2})^{3}(1-u^{2})v^{j}}\,\frac{du}{u}\,\frac{dv}{v}.\] We proceed to evaluate the first term in (26). Let us denote it by \(S_{1}\).
We have \[S_{1}=\frac{2}{(1-q^{-\frac{1}{2}})^{2}}\frac{1}{(2\pi i)^{2}}\oint\oint\sum_{j=0}^{d-2}\frac{1}{u^{2\lfloor\frac{j}{2}\rfloor}v^{j}q^{\frac{j}{2}}}\frac{(1-u^{2}v^{2})^{3}\mathcal{L}(u^{2}v^{2},\chi_{x})\mathcal{H}(uv,v)}{(1-qu^{2}v^{2})^{3}(1-u^{2})}\,\frac{du}{u}\,\frac{dv}{v},\] and we initially integrate along small circles around the origin, say \(|u|=\frac{1}{q^{100}}\). (Here it suffices to require \(|u|<q^{\frac{1}{p}-\frac{1}{2}}\).) Computing the sum over \(j\) we get that \[S_{1}=\frac{2}{(1-q^{-\frac{1}{2}})^{2}}\frac{1}{(2\pi i)^{2}}\oint\oint\Big{(}1+\frac{1}{q^{\frac{1}{2}}v}\Big{)}\frac{(1-u^{2}v^{2})^{3}\mathcal{L}(u^{2}v^{2},\chi_{x})\mathcal{H}(uv,v)}{(1-qu^{2}v^{2})^{4}(1-u^{2})(qu^{2}v^{2})^{\frac{d-3}{2}}}\,\frac{du}{u}\,\frac{dv}{v}.\] Now we shift the contour over \(u\) to \(|u|=q^{-\epsilon}\) and encounter the poles at \(u=\pm\frac{q^{-\frac{1}{2}}}{v}\). The integral over the new contour will be bounded by \(q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\). We will compute the residues at \(u=\pm\frac{q^{-\frac{1}{2}}}{v}\). Let \(\mathcal{M}(u,v)=\mathcal{H}(uv,v)(1-u^{2}v^{2})^{3}\mathcal{L}(u^{2}v^{2},\chi_{x})\), which is an even function of \(u\). We write \[S_{1}=A_{1}+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)},\] and by computing the residues we have \[A_{1}= -\frac{2}{3!(1-q^{-\frac{1}{2}})^{2}}\frac{1}{2\pi i}\oint\Big{(}1+\frac{1}{q^{\frac{1}{2}}v}\Big{)}\frac{1}{q^{\frac{d+5}{2}}v^{d+5}}\] \[\times\Bigg{[}\left.\frac{\partial^{3}}{\partial u^{3}}\frac{\mathcal{M}(u,v)}{(\frac{q^{-\frac{1}{2}}}{v}+u)^{4}(1-u^{2})u^{d-2}}\right|_{u=q^{-1/2}v^{-1}}+\left.\frac{\partial^{3}}{\partial u^{3}}\frac{\mathcal{M}(u,v)}{(\frac{q^{-\frac{1}{2}}}{v}-u)^{4}(1-u^{2})u^{d-2}}\right|_{u=-q^{-1/2}v^{-1}}\Bigg{]}\,\frac{dv}{v}.\] Computing the derivatives above, we can write \(A_{1}\) in the form \[A_{1}=\frac{1}{2\pi i}\oint\frac{\sum_{j=0}^{3}d^{j}\sum_{k=0}^{3-j}\mathcal{M}^{(k,0)}\Big{(}\frac{1}{q^{1/2}v},v\Big{)}\sum_{\ell=j}^{3}(1-qv^{2})^{\ell}R_{j,k,\ell}(\sqrt{q}v)}{(1-qv^{2})^{3}(1-\sqrt{q}v)}\,dv,\] where \(R_{j,k,\ell}(x)\) are polynomials such that \(R_{j,k,\ell}(1)\neq 0\). Now note that there are no poles of the integrand inside the contour of integration, except for possible poles when \(qv^{2}=1\). Writing the integral as a sum of four integrals (corresponding to the powers of \(d\)), we can then compute each integral exactly by computing the sum of the residues at \(v=\pm q^{-1/2}\). For example, we get that the coefficient of \(d^{3}\) is equal to the integral \[\frac{1}{24(1-q^{-\frac{1}{2}})^{2}}\frac{1}{2\pi i}\oint\Big{(}1+\frac{1}{q^{\frac{1}{2}}v}\Big{)}\frac{qv^{2}(qv^{2}-1)^{3}\mathcal{M}(q^{-\frac{1}{2}}v^{-1},v)}{(qv^{2}-1)^{4}}\frac{dv}{v}\] \[= \frac{1}{24(1-q^{-\frac{1}{2}})^{2}}\frac{1}{2\pi i}\oint\frac{q^{\frac{1}{2}}\mathcal{M}(q^{-\frac{1}{2}}v^{-1},v)}{(q^{\frac{1}{2}}v-1)}dv.\] There is only one pole inside the contour of integration at \(v=q^{-1/2}\), so we get that the coefficient of \(d^{3}\) is equal to the residue of the pole at \(v=q^{-1/2}\), which equals \[\frac{1}{24(1-q^{-\frac{1}{2}})^{2}}\mathcal{M}(1,q^{-1/2})=\frac{1}{24(1-q^{-\frac{1}{2}})^{2}}\mathcal{H}(q^{-1/2},q^{-1/2})\left(1-\frac{1}{q}\right)^{3}\mathcal{L}(1/q,\chi_{x}). \tag{33}\] Similarly computing the coefficient of \(d^{i}\), \(i\leq 2\), leads to \[S_{1}=P_{1}(d)+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)}, \tag{34}\] for an explicit polynomial \(P_{1}\) whose leading coefficient is given by (33).
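All of the \(S_{i}\) here are handled by the same two moves: extract coefficients with Perron's formula over a small circle, then shift contours and collect residues. As a minimal, self-contained illustration of the first move (assuming only the geometric series \(\frac{1}{1-qu}=\sum_{n}q^{n}u^{n}\), not the specific integrands above), one can recover \(q^{n}\) by numerical contour integration:

```python
import cmath

def coefficient(f, n, r=0.1, N=4096):
    # [u^n] f(u) = (1 / 2 pi i) * contour integral of f(u) u^{-n-1} du
    # over |u| = r, discretized with N sample points
    total = 0.0
    for m in range(N):
        u = r * cmath.exp(2j * cmath.pi * m / N)
        total += f(u) / u ** n
    return total / N

q = 3
zeta_like = lambda u: 1 / (1 - q * u)  # [u^n] = q^n; any r < 1/q keeps the pole outside
for n in range(6):
    approx = coefficient(zeta_like, n)
    assert abs(approx - q ** n) < 1e-6, (n, approx)
print("Perron-style coefficient extraction recovers q^n")
```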
Now let \(S_{2}\) denote the term corresponding to the second term in (26). We have that \[S_{2}=\frac{1}{(2\pi i)^{2}}\oint\oint\frac{\mathcal{M}(u,v)}{(1-qu^{2}v^{2}) ^{3}(1-u^{2})(qu^{2}v^{2})^{\frac{d-1}{2}}}\,\frac{du}{u}\,\frac{dv}{v},\] and we are integrating over \(|u|=\frac{1}{q^{100}}\) and \(|v|=\frac{1}{q^{\frac{1}{p}+\epsilon}}\). In the above, we have poles of order \(3\) at \(u=\pm\frac{q^{-\frac{1}{2}}}{v}\). We shift the contour of integration to \(|u|=q^{-\epsilon},\) and the integral over the new contour will be bounded by \(q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\). We compute the residues at \(u=\pm\frac{q^{-\frac{1}{2}}}{v}\) and we then get that \[S_{2} =\frac{1}{4\pi i}\oint\frac{qv^{2}\left(d^{2}\left(qv^{2}-1\right)^{ 2}+4d\left(q^{2}v^{4}-3qv^{2}+2\right)+3q^{2}v^{4}-10qv^{2}+15\right)\mathcal{M} (q^{-\frac{1}{2}}v^{-1},v)}{4v(qv^{2}-1)^{3}}\] \[-\frac{\left(qv^{2}-1\right)\sqrt{q}v\left(2d\left(qv^{2}-1\right) +3qv^{2}-7\right)\mathcal{M}^{(1,0)}(q^{-\frac{1}{2}}v^{-1},v)+\left(1-qv^{2} \right)\mathcal{M}^{(2,0)}(q^{-\frac{1}{2}}v^{-1},v)}{4v\left(qv^{2}-1\right)^ {3}}\,dv\] \[+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)}.\] Inside the contour of integration, the only poles are at \(v^{2}=1/q\). We compute the residue of the poles at \(v^{2}=1/q\), and we get that \[S_{2}=P_{2}(d)+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)}, \tag{35}\] where \(P_{2}\) is an explicit degree 2 polynomial. Now we write the third term in (26) as a difference as \(S_{3}-S_{4}\). We have that \[S_{3} =-\frac{1+\sqrt{q}}{1-\frac{1}{\sqrt{q}}}q^{-\frac{d}{2}}\frac{1} {(2\pi i)^{2}}\oint\oint\frac{u^{2}v\mathcal{M}(u,v)}{u^{d}v^{d}(1-u^{2})(1-u^ {2}v^{2})^{2}(1-qu^{2}v^{2})^{3}}\Big{(}-u^{3}v^{4}\] \[+uv(2+3v)+u^{2+d}v^{2+d}(-1+d+dv)-u^{d}v^{d}(1+d+2v+dv)\Big{)}\, \frac{du}{u}\,\frac{dv}{v},\] and we are integrating over \(|u|=\frac{1}{q^{100}}\) and \(|v|=\frac{1}{q^{\frac{1}{p}+\epsilon}}\). Note that the terms involving \(u^{d}v^{d}\) in the parenthesis will not have any contribution to the integral because their corresponding integrals have no poles inside the contour of integration. Thus we have \[S_{3} =-\frac{1+\sqrt{q}}{1-\frac{1}{\sqrt{q}}}q^{-d/2}\frac{1}{(2\pi i )^{2}}\oint\oint\frac{u^{2}v\mathcal{M}(u,v)}{u^{d}v^{d}(1-u^{2})(1-u^{2}v^{2} )^{2}(1-qu^{2}v^{2})^{3}}\] \[\Big{(}-u^{3}v^{4}+uv(2+3v)\Big{)}\,\frac{du}{u}\,\frac{dv}{v}.\] As before, we shift the contour of integration to \(|u|=q^{-\epsilon}\) and encounter the poles at \(u^{2}=1/(qv^{2})\). The integral over the new contour is bounded by \(q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\). We compute the residue of the pole at \(u^{2}=1/(qv^{2})\), and as before, we obtain a single integral over \(v\). The integrand will have poles inside the contour of integration at \(v^{2}=1/q\). We get that \[S_{3}=P_{3}(d)+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)}, \tag{36}\] where \(P_{3}\) is an explicit degree 2 polynomial. 
Finally, we have \[S_{4}=\frac{2q^{-\frac{d}{2}}}{(1-q^{-\frac{1}{2}})^{2}}\frac{1}{(2\pi i)^{2} }\oint\oint\frac{u^{2}v(1+v)\mathcal{M}(u,v)}{u^{d}v^{d}(1-u^{2})(1-u^{2}v^{2} )(1-qu^{2}v^{2})^{3}}(uv-u^{d}v^{d})\,\frac{du}{u}\,\frac{dv}{v}.\] As before, the term corresponding to \(u^{d}v^{d}\) in the parenthesis is 0, so \[S_{4}=\frac{2q^{-\frac{d}{2}}}{(1-q^{-\frac{1}{2}})^{2}}\frac{1}{(2\pi i)^{2} }\oint\oint\frac{u^{2}v(1+v)\mathcal{M}(u,v)}{u^{d}v^{d}(1-u^{2})(1-u^{2}v^{2} )(1-qu^{2}v^{2})^{3}}\,du\,dv.\] We similarly get that \[S_{4}=P_{4}(d)+O\Big{(}q^{\frac{d}{p}-\frac{d}{2}+\epsilon d}\Big{)}, \tag{37}\] where \(P_{4}\) is a degree \(2\) polynomial. Combining (34), (35), (36) and (37) and setting \(P(x)=\sum_{i=0}^{4}P_{i}(x)\), we finish the proof of Theorem 1.5. ## 6. Proof of Theorem 1.6 We will first prove the following two propositions. **Proposition 6.1**.: _Suppose that \(g\) is a monic square-free polynomial of degree \(d\). Then_ \[\frac{1}{|\mathcal{AS}_{d,g}^{ord}|}\sum_{f\in\mathcal{AS}_{d,g}^{ord }}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=\frac{|g|}{\phi(g)(1-q^{- \frac{p}{2}})}\frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\frac{\prod_{P|g}\Big{(}1 -\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}}\Big{)}\Big{(}1-\frac{u^{\deg(P)}}{|P|} \Big{)}}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{2d}}\,du\] \[\qquad+\frac{q^{d(\frac{2}{p}-1)}}{p}\Bigg{[}\frac{1}{1-q^{- \frac{p}{2}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{q^{\frac{1}{p}-\frac{1} {2}}-\xi_{p}^{-j}}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1} {2}+\frac{1}{p}}}\Big{)}\] \[\qquad+\frac{q^{\frac{1}{2}}\left(q^{\frac{1+[2d-2]p}{2}}+q^{ \frac{p-1-[2d-2]p}{2}}\right)}{q^{\frac{p}{2}}-1}\sum_{j=0}^{p-1}\frac{\xi_{p} ^{j(2d-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\prod_{P|g}\Bigg{(}1-\frac{\xi_{p}^{- j\deg(P)}}{|P|^{\frac{1}{p}}}\Bigg{)}\] \[\qquad-\frac{q-1}{p}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(4d-1)}}{(1-q ^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi _{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-2j\deg(P)}}{|P|^{\frac{1}{p}+ \frac{1}{2}}}\Big{)}\Bigg{]}+O\Big{(}q^{-d+d\epsilon}\Big{)}.\] **Remark 6.2**.: _We note that the first term in the asymptotic formula above is of constant size, while the second term is a lower-order term of size \(q^{d(\frac{2}{p}-1)}\). We leave the main term in its integral form, and will evaluate its contribution once we introduce the sum over \(g\) at the end._ **Proposition 6.3**.: _Suppose that \(g\) is a monic square-free polynomial of degree \(d-1\). Then_ \[\frac{1}{|\mathcal{AS}_{d,g}^{ord}|}\sum_{f\in\mathcal{AS}_{d,g}^{ ord}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}= \frac{|g|}{\phi(g)(1-q^{-\frac{p}{2}})}\frac{1}{2\pi i}\oint_{|u|=q ^{-\epsilon}}\frac{\prod_{P|g}\Big{(}1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}} \Big{)}\Big{(}1-\frac{u^{\deg(P)}}{|P|}\Big{)}}{(1-q^{1-\frac{p}{2}}u^{p})(1-u) u^{2d-2}}\,du\] \[\qquad+\frac{q^{(d-1)(\frac{2}{p}-1)}}{p(1-q^{-\frac{p}{2}})}\sum _{j=0}^{p-1}\frac{\xi_{p}^{j(2d-3)}}{q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{-j}} \prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}} \Big{)}+O\Big{(}q^{-d+d\epsilon}\Big{)}.\] We first prove Proposition 6.1. Proof of Proposition 6.1.: Using (11), we have that \[\frac{1}{|\mathcal{AS}_{d,g}^{ord}|}\sum_{f\in\mathcal{AS}_{d,g}^{ord}}\mathcal{ L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=\frac{1}{|\mathcal{AS}_{d,g}^{ord}|}\sum_{b \in\mathbb{F}_{q}}\sum_{f\in\mathcal{H}_{g}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q }},f+b,\psi\Big{)}. 
\tag{38}\]
Using Lemma 2.1, it follows that
\[(38)=\frac{1}{q|\mathcal{H}_{g}|}\sum_{b\in\mathbb{F}_{q}}\sum_{f\in\mathcal{H}_{g}}\mathcal{L}\Big{(}\frac{\psi(\operatorname{tr}_{q/p}b)}{\sqrt{q}},f,\psi\Big{)}.\]
Applying Propositions 2.6 and 2.7 we have
\[=\frac{1}{q|\mathcal{H}_{g}|}\sum_{b\in\mathbb{F}_{q}}\frac{1}{(1-\frac{\psi(\operatorname{tr}_{q/p}b)}{\sqrt{q}})}\sum_{\begin{subarray}{c}\chi\,(\operatorname{mod}g^{2})\\ \chi\,\operatorname{primitive}\\ \chi^{p}=\chi_{0}\end{subarray}}\mathcal{L}\Big{(}\frac{\psi(\operatorname{tr}_{q/p}b)}{\sqrt{q}},\chi\Big{)}\]
\[=\frac{1}{q|\mathcal{H}_{g}|}\sum_{b\in\mathbb{F}_{q}}\frac{1}{(1-\frac{\psi(\operatorname{tr}_{q/p}b)}{\sqrt{q}})}\sum_{\begin{subarray}{c}\chi\,(\operatorname{mod}g^{2})\\ \chi\,\operatorname{primitive}\\ \chi^{p}=\chi_{0}\end{subarray}}\sum_{F\in\mathcal{M}_{\leq 2d-1}}\frac{\psi(\operatorname{tr}_{q/p}b)^{\deg(F)}\chi(F)}{\sqrt{|F|}}\]
\[=\frac{1}{q|\mathcal{H}_{g}|}\sum_{\begin{subarray}{c}\chi\,(\operatorname{mod}g^{2})\\ \chi\,\operatorname{primitive}\\ \chi^{p}=\chi_{0}\end{subarray}}\sum_{F\in\mathcal{M}_{\leq 2d-1}}\frac{\alpha_{1}(\deg(F))\chi(F)}{\sqrt{|F|}}, \tag{39}\]
where \(\alpha_{1}(\ell)\) is defined in (17). Now let
\[G=\{\chi\,(\operatorname{mod}g^{2})\,:\,\chi^{p}=\chi_{0}\}.\]
For \(Q|g^{2}\), we denote by \(G_{Q}\) the subgroup of characters in \(G\) of period \(Q\). Now using equation (2.14) in [EP] (which is inclusion-exclusion), we have
\[\langle\chi(F)\rangle_{\chi\in G^{\operatorname{pr}}}=\sum_{Q|g^{2}}\mu(g^{2}/Q)\frac{|G_{Q}|}{|G^{\operatorname{pr}}|}\langle\chi(F)\rangle_{\chi\in G_{Q}}.\]
Note that since \(g\) is square-free, we have that \(\mu(g^{2}/Q)=0\) unless \(g|Q\). Replacing \(Q\mapsto gQ\) we get
\[\langle\chi(F)\rangle_{\chi\in G^{\operatorname{pr}}}=\sum_{Q|g}\mu(g/Q)\frac{|G_{gQ}|}{|G^{\operatorname{pr}}|}\langle\chi(F)\rangle_{\chi\in G_{gQ}}.\]
Now from Proposition 2.10 in [EP], we also have
\[|G^{\operatorname{pr}}|=\phi(g),\,|G_{gQ}|=q^{\deg(Q)}.\]
Using the orthogonality relation (9), we get that \(\langle\chi(F)\rangle_{\chi\in G_{gQ}}=1\) if \(F\in G_{gQ}^{\perp}\), and it is \(0\) otherwise. Thus we have
\[\langle\chi(F)\rangle_{\chi\in G^{\operatorname{pr}}}=\sum_{Q|g}\mu(g/Q)\frac{|Q|}{\phi(g)}\mathds{1}\left(F\in G_{gQ}^{\perp}\right). \tag{40}\]
Recalling that
\[G_{gQ}^{\perp}=\{F\,(\operatorname{mod}g^{2})\,:\,F\,(\operatorname{mod}gQ)\in(\mathbb{F}_{q}[x]/gQ)^{\times p}\},\]
we rewrite (40) as
\[\langle\chi(F)\rangle_{\chi\in G^{\operatorname{pr}}}=\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\frac{|Q|}{\phi(g)}\mathds{1}\Big{(}F\,(\operatorname{mod}gQ)\in(\mathbb{F}_{q}[x]/gQ)^{\times p}\Big{)}. \tag{41}\]
Using (41) and (39), we get that
\[(38)=\frac{1}{q}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\frac{|Q|}{\phi(g)}\sum_{\begin{subarray}{c}R\in\mathbb{F}_{q}[x]\\ p\deg(R)<\deg(gQ)\\ (R,gQ)=1\end{subarray}}\sum_{\begin{subarray}{c}F\in\mathcal{M}_{\leq 2d-1}\\ F(x)\equiv R(x)^{p}\,(\operatorname{mod}gQ)\end{subarray}}\frac{\alpha_{1}(\deg(F))}{\sqrt{|F|}}.\]
Notice that since \(Q\mid g\), the condition \((R,gQ)=1\) below the second sum can be replaced by \((R,g)=1\). Now we split the sum above into a sum over \(\deg(F)<\deg(gQ)\) and a sum over \(\deg(F)\geq\deg(gQ)\). Let \(T_{1,d}\) and \(T_{2,d}\) denote these two terms.
We have
\[T_{1,d}=\frac{\alpha_{1}(0)}{q}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\frac{|Q|}{\phi(g)}\sum_{\begin{subarray}{c}R\in\mathcal{M}\\ p\deg(R)<\deg(gQ)\\ (R,g)=1\end{subarray}}\frac{1}{|R|^{\frac{p}{2}}},\]
where we have applied the fact that \(\alpha_{1}(p\deg(R))=\alpha_{1}(0)\), which follows from Lemma 3.3, and
\[T_{2,d} =\frac{1}{q}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\frac{|Q|}{\phi(g)}\sum_{\begin{subarray}{c}R\in\mathbb{F}_{q}[x]\\ p\deg(R)<\deg(gQ)\\ (R,g)=1\end{subarray}}\sum_{n=\deg(gQ)}^{2d-1}\frac{q^{\frac{n}{2}}\alpha_{1}(n)}{|gQ|}\]
\[=\frac{1}{q\phi(g)|g|}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\sum_{\begin{subarray}{c}R\in\mathbb{F}_{q}[x]\\ p\deg(R)<\deg(gQ)\\ (R,g)=1\end{subarray}}\sum_{n=\deg(gQ)}^{2d-1}q^{\frac{n}{2}}\alpha_{1}(n),\]
where the sum over \(R\) is over all the polynomials (and not necessarily monic). We first focus on evaluating \(T_{1,d}\). Using Perron's formula (5), we write
\[\sum_{\begin{subarray}{c}R\in\mathcal{M}\\ \deg(R)<\deg(gQ)/p\\ (R,g)=1\end{subarray}}\frac{1}{|R|^{\frac{p}{2}}}=\frac{1}{2\pi i}\oint\frac{\mathcal{Z}\left(\frac{u^{p}}{q^{\frac{p}{2}}}\right)\prod_{P|g}(1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}})}{(1-u)u^{\deg(gQ)}}\,du,\]
where we are integrating along a small circle around the origin. We shift the contour of integration to \(|u|=q^{-\epsilon}\). We encounter poles at \(u=\xi_{p}^{j}q^{\frac{1}{2}-\frac{1}{p}}\) and get that
\[\sum_{\begin{subarray}{c}R\in\mathcal{M}\\ \deg(R)<\deg(gQ)/p\\ (R,gQ)=1\end{subarray}}\frac{1}{|R|^{\frac{p}{2}}}= \frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\frac{\prod_{P|g}(1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}})}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{\deg(gQ)}}\,du\]
\[+\sum_{j=0}^{p-1}\frac{q^{(\frac{1}{p}-\frac{1}{2})(\deg(gQ)-1)}\xi_{p}^{j(\deg(gQ)-1)}}{p(1-q^{\frac{1}{2}-\frac{1}{p}}\xi_{p}^{-j})}\prod_{P|g}\left(1-\frac{1}{|P|}\right).\]
We plug this into the expression for \(T_{1,d}\), and write \(T_{1,d}=T_{11,d}+T_{12,d}\). We get that
\[\begin{split} T_{11,d}&=\frac{1}{\phi(g)(1-q^{-\frac{p}{2}})}\sum_{Q|g}\mu\Big{(}\frac{g}{Q}\Big{)}|Q|\frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\frac{\prod_{P|g}(1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}})}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{\deg(gQ)}}\,du\\ &=\frac{|g|}{\phi(g)(1-q^{-\frac{p}{2}})}\frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\sum_{Q|g}\frac{\mu(Q)}{|Q|}u^{\deg(Q)}\frac{\prod_{P|g}(1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}})}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{2\deg(g)}}\,du\\ &=\frac{|g|}{\phi(g)(1-q^{-\frac{p}{2}})}\frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\frac{1}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{2\deg(g)}}\prod_{P|g}\Big{(}1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}}\Big{)}\Big{(}1-\frac{u^{\deg(P)}}{|P|}\Big{)}\,du.\end{split} \tag{42}\]
We focus on \(T_{12,d}\). We have
\[\begin{split} T_{12,d}&=\frac{\alpha_{1}(0)}{q}\frac{|g|}{\phi(g)}\frac{q^{(\frac{1}{p}-\frac{1}{2})(2\deg(g)-1)}}{p}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{1-q^{\frac{1}{2}-\frac{1}{p}}\xi_{p}^{-j}}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\frac{\phi(g)}{|g|}\\ &=\frac{1}{1-q^{-\frac{p}{2}}}\frac{q^{(\frac{1}{p}-\frac{1}{2})(2\deg(g)-1)}}{p}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{1-q^{\frac{1}{2}-\frac{1}{p}}\xi_{p}^{-j}}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)},\end{split} \tag{43}\]
where we used Lemmas 2.8 and 3.3 to evaluate \(\alpha_{1}(0)\). Now we evaluate \(T_{2,d}\).
Using Perron's formula, we get that
\[\sum_{\begin{subarray}{c}p\deg(R)<\deg(gQ)\\ (R,gQ)=1\end{subarray}}1=(q-1)\frac{1}{2\pi i}\oint\frac{\mathcal{Z}(u^{p})\prod_{P|gQ}(1-u^{p\deg(P)})}{u^{\deg(gQ)}(1-u)}\,du,\]
where we are integrating along a small circle around the origin. Using Lemma 3.3 for \(\alpha_{1}(n)\), it follows that
\[\begin{split} T_{2,d}&=\frac{q-1}{p\phi(g)|g|}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\sum_{j=0}^{p-1}\frac{1}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)}\Big{[}(q^{\frac{1}{2}}\xi_{p}^{j})^{2d}-(q^{\frac{1}{2}}\xi_{p}^{j})^{\deg(gQ)}\Big{]}\\ &\times\frac{1}{2\pi i}\oint\frac{\prod_{P|g}(1-u^{p\deg(P)})}{u^{\deg(gQ)}(1-u)(1-qu^{p})}\,du.\end{split}\]
We write the above as \(T_{21,d}-T_{22,d}\) corresponding to the difference of the terms in square brackets. Notice that we have
\[\frac{1}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(1-q^{\frac{1}{2}}\xi_{p}^{j})}=\frac{1}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})\xi_{p}^{j}(1-q^{-\frac{1}{2}}\xi_{p}^{j})}-\frac{1}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})\xi_{p}^{j}(1-q^{\frac{1}{2}}\xi_{p}^{j})},\]
and therefore
\[\sum_{j=0}^{p-1}\frac{\xi_{p}^{j2d}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)}= -\frac{1}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})}+\frac{1}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{(1-q^{\frac{1}{2}}\xi_{p}^{j})}\]
\[= \frac{q^{\frac{1}{2}}}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-2)}}{(1-q^{\frac{1}{2}}\xi_{p}^{-j})}+\frac{1}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{(1-q^{\frac{1}{2}}\xi_{p}^{j})}\]
\[= \frac{q^{\frac{1}{2}}}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}S_{1}(2d-2,q^{\frac{1}{2}})+\frac{1}{q^{-\frac{1}{2}}-q^{\frac{1}{2}}}S_{1}(1-2d,q^{\frac{1}{2}})\]
\[= \frac{p\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})(1-q^{\frac{p}{2}})},\]
where we have used the identity \([-n-1]_{p}+[n]_{p}=p-1\) (which holds since both terms lie in \([0,p-1]\) and their sum is congruent to \(p-1\) modulo \(p\)). It follows that
\[T_{21,d}= (q-1)\frac{q^{d}}{\phi(g)|g|}\frac{\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})(1-q^{\frac{p}{2}})}\frac{1}{2\pi i}\oint\frac{1}{u^{d}(1-u)(1-qu^{p})}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\]
\[\times\frac{1}{u^{\deg(Q)}}\prod_{P|gQ}\left(1-u^{p\deg(P)}\right)du.\]
As before, the condition \(P|gQ\) is equivalent to \(P|g\). We shift the contour of integration to \(|u|=q^{-\epsilon}\) and encounter poles when \(u=\xi_{p}^{j}q^{-\frac{1}{p}}\) for \(j=0,\ldots,p-1\). The integral over \(|u|=q^{-\epsilon}\) is bounded by \(q^{-d+d\epsilon}\).
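Each pole \(u=\xi_{p}^{j}q^{-\frac{1}{p}}\) of \(1/(1-qu^{p})\) is simple; we record explicitly (a step that is implicit above) that
\[\operatorname*{Res}_{u=\xi_{p}^{j}q^{-\frac{1}{p}}}\frac{1}{1-qu^{p}}=\frac{1}{-pqu^{p-1}}\bigg{|}_{u=\xi_{p}^{j}q^{-\frac{1}{p}}}=-\frac{\xi_{p}^{j}q^{-\frac{1}{p}}}{p},\]
since \(qu^{p}=1\) at each such point; this accounts for the factors of \(1/p\) appearing in the formula below.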
By computing the residues, we get that
\[\begin{split} T_{21,d}&=(q-1)\frac{q^{d}}{p|g|^{2}}q^{\frac{d-1}{p}}\frac{\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})(1-q^{\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(d-1)}}{1-q^{-\frac{1}{p}}\xi_{p}^{-j}}\sum_{Q|g}\mu\left(\frac{g}{Q}\right)\xi_{p}^{j\deg(Q)}|Q|^{\frac{1}{p}}\\ &=(q-1)\frac{q^{\frac{2d-1}{p}}}{p|g|}\frac{\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})(1-q^{\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{1-q^{-\frac{1}{p}}\xi_{p}^{-j}}\sum_{Q|g}\frac{\mu\left(Q\right)}{|Q|^{\frac{1}{p}}}\xi_{p}^{-j\deg(Q)}\\ &=(q-1)\frac{q^{-d}}{p}q^{\frac{2d-1}{p}}\frac{\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{(q^{-\frac{1}{2}}-q^{\frac{1}{2}})(1-q^{\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{1-q^{-\frac{1}{p}}\xi_{p}^{-j}}\prod_{P|g}\left(1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\right)\\ &=\frac{q^{d(\frac{2}{p}-1)+\frac{1}{2}}}{p}\frac{\left(q^{\frac{1+[2d-2]_{p}}{2}}+q^{\frac{p-1-[2d-2]_{p}}{2}}\right)}{q^{\frac{p}{2}}-1}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2d-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\prod_{P|g}\left(1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\right).\end{split} \tag{44}\]
We now proceed to evaluate \(T_{22,d}\):
\[\begin{split} T_{22,d}&=\frac{q-1}{p^{2}|g|^{\frac{3}{2}-\frac{1}{p}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\sum_{Q|g}\mu\Big{(}\frac{g}{Q}\Big{)}|Q|^{\frac{1}{p}+\frac{1}{2}}\xi_{p}^{2j\deg(Q)}\\ &=\frac{q-1}{p^{2}|g|^{1-\frac{2}{p}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(4\deg(g)-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\sum_{Q|g}\frac{\mu(Q)}{|Q|^{\frac{1}{p}+\frac{1}{2}}}\xi_{p}^{-2j\deg(Q)}\\ &=\frac{q-1}{p^{2}|g|^{1-\frac{2}{p}}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(4\deg(g)-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-2j\deg(P)}}{|P|^{\frac{1}{p}+\frac{1}{2}}}\Big{)}\\ &=\frac{q-1}{p^{2}}q^{d(\frac{2}{p}-1)}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(4d-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-2j\deg(P)}}{|P|^{\frac{1}{p}+\frac{1}{2}}}\Big{)}.\end{split} \tag{45}\]
Now combining equations (42), (43), (44) and (45), the conclusion follows.

Next, we will prove Proposition 6.3.
Proof of Proposition 6.3.: Following the discussion after Proposition 2.7 in Section 2.4, we have
\[\frac{1}{|\mathcal{AS}_{d,g}^{\mathrm{ord}}|}\sum_{f\in\mathcal{AS}_{d,g}^{\mathrm{ord}}}\mathcal{L}\left(\frac{1}{\sqrt{q}},f,\psi\right) =\frac{1}{|\mathcal{AS}_{d,g}^{\mathrm{ord}}|}\sum_{a\in\mathbb{F}_{q}^{*}}\sum_{b\in\mathbb{F}_{q}}\sum_{f\in\mathcal{H}_{g}}\mathcal{L}\Big{(}\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}},f+ax,\psi\Big{)}\]
\[=\frac{1}{q(q-1)|\mathcal{H}_{g}|}\sum_{a\in\mathbb{F}_{q}^{*}}\sum_{b\in\mathbb{F}_{q}}\frac{1}{\Big{(}1-\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}\Big{)}}\sum_{f\in\mathcal{H}_{g}}\mathcal{L}\Big{(}\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}},\chi_{f+ax}\Big{)}.\]
Using Propositions 2.6 and 2.7,
\[=\frac{1}{q(q-1)|\mathcal{H}_{g}|}\sum_{a\in\mathbb{F}_{q}^{*}}\sum_{b\in\mathbb{F}_{q}}\frac{1}{\Big{(}1-\frac{\psi(\mathrm{tr}_{q/p}(b))}{\sqrt{q}}\Big{)}}\sum_{\begin{subarray}{c}\chi\,(\mathrm{mod}\,g^{2})\\ \chi\,\mathrm{primitive}\\ \chi^{p}=\chi_{0}\end{subarray}}\sum_{F\in\mathcal{M}_{\leq 2d-3}}\frac{\psi(\mathrm{tr}_{q/p}(b))^{\deg(F)}\chi(F)\chi_{ax}(F)}{\sqrt{|F|}}. \tag{46}\]
Now we have
\[\sum_{a\in\mathbb{F}_{q}^{*}}\chi_{ax}(F)=\sum_{a\in\mathbb{F}_{q}^{*}}\psi\Big{(}\,\mathrm{tr}_{q/p}\,\Big{(}\sum_{F(\alpha)=0}a\alpha\Big{)}\Big{)}.\]
If \(F(x)=x^{k}+c_{k-1}x^{k-1}+\cdots+c_{0}\), then \(\sum_{a\in\mathbb{F}_{q}^{*}}\chi_{ax}(F)=\sum_{a\in\mathbb{F}_{q}^{*}}\psi\left(\operatorname{tr}_{q/p}(ac_{k-1})\right)\). Note that this is equal to \(-1\) except when \(c_{k-1}=0\) (this is because the trace pairing is non-trivial on separable extensions), and in that case, it is equal to \(q-1\). For a polynomial \(F\), let \(c_{F}\) denote the coefficient of \(x^{\deg(F)-1}\). Proceeding as in the proof of Proposition 6.1, we rewrite (46) as
\[(46)=C_{1}+C_{2}-D, \tag{47}\]
where \(C_{1}\) and \(C_{2}\) denote the contributions of the polynomials \(F\) with \(c_{F}=0\), with \(\deg(F)<\deg(gQ)\) and \(\deg(F)\geq\deg(gQ)\) respectively, and \(D\) denotes the contribution coming from the term \(-1\) in the character sum above. For the sum over \(F\) appearing in \(C_{2}\), we have
\[\sum_{\begin{subarray}{c}F\in\mathcal{M}_{n}\\ F\equiv R(x)^{p}\,(\text{mod}\,gQ)\\ c_{F}=0\end{subarray}}1=\sum_{\begin{subarray}{c}B\in\mathcal{M}_{n-\deg(gQ)}\\ c_{B}=c_{0}\end{subarray}}1=\frac{q^{n-\deg(gQ)}}{q},\]
where \(c_{0}\) is fixed (more precisely, \(c_{0}=-r_{n-1}-c_{gQ}\), where \(r_{n-1}\) is the coefficient of \(x^{n-1}\) in \(R(x)^{p}\)).
It then follows that
\[C_{2}=\frac{1}{q(q-1)\phi(g)|g|}\sum_{Q|g}\mu\Big{(}\frac{g}{Q}\Big{)}\sum_{\begin{subarray}{c}R\in\mathbb{F}_{q}[x]\\ p\deg(R)<\deg(gQ)\\ (R,g)=1\end{subarray}}\sum_{n=\deg(gQ)}^{2d-3}q^{\frac{n}{2}}\alpha_{1}(n).\]
Note that we have
\[C_{2}=\frac{1}{q-1}T_{2,d-1}. \tag{48}\]
We also note that
\[D=\frac{1}{q-1}T_{1,d-1}+\frac{1}{q-1}T_{2,d-1}, \tag{49}\]
so combining equations (47), (48) and (49), and evaluating \(C_{1}\) in the same way as \(T_{1,d}\) was evaluated in the proof of Proposition 6.1, the conclusion of Proposition 6.3 follows.

We now sum the expressions in Propositions 6.1 and 6.3 over square-free \(g\); the contributions of the individual terms are evaluated in Lemmas 6.4, 6.5, 6.6 and 6.7. For the main term, the generating series of the sum over \(g\) can be written as \(\mathcal{Z}(w)\mathcal{E}(w,u)\), where \(\mathcal{E}(w,u)\) has an analytic continuation for \(|w|<1,|wu|<1,|wu^{p}|<q^{\frac{p}{2}-1}.\) Using Perron's formula (5) for the sum over \(g\), we get that
\[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{|g|}{(1-q^{-\frac{p}{2}})}\frac{1}{2\pi i}\oint_{|u|=q^{-\epsilon}}\frac{\prod_{P|g}\Big{(}1-\frac{u^{p\deg(P)}}{|P|^{\frac{p}{2}}}\Big{)}\Big{(}1-\frac{u^{\deg(P)}}{|P|}\Big{)}}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{2n}}\,du\]
\[=\frac{q^{n}}{(1-q^{-\frac{p}{2}})}\frac{1}{(2\pi i)^{2}}\oint_{|w|<\frac{1}{q}}\oint_{|u|=q^{-\epsilon}}\frac{\mathcal{E}(w,u)}{(1-q^{1-\frac{p}{2}}u^{p})(1-u)u^{2n}(1-qw)w^{n+1}}\,dw\,du. \tag{51}\]
We first shift the contour over \(w\) to \(|w|=q^{-\epsilon}\) and we encounter the pole at \(w=1/q\). The integral over the new contour will be bounded by \(q^{n\epsilon}\).
By computing the residue at \(w=1/q\), we obtain the evaluation of this main term; this is the content of Lemma 6.4.

We next evaluate the sum over square-free \(g\) corresponding to the second term in Propositions 6.1 and 6.3; this is the content of Lemma 6.5. As in the proofs of Lemmas 6.6 and 6.7 below, the generating series of the sum over \(g\) can be written as \(\mathcal{Z}(qw)\mathcal{F}_{1}(w)\), where \(\mathcal{F}_{1}(w)\) converges absolutely for \(|w|<q^{-\frac{3}{2}}\). Now using Perron's formula (5) for the sum over \(g\), we get that
\[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p(1-q^{-\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{-j}}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\]
\[=\frac{q^{n(\frac{2}{p}-1)}}{p(1-q^{-\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2n-1)}}{q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{-j}}\frac{1}{2\pi i}\oint_{|w|<q^{-2}}\frac{\mathcal{F}_{1}(w)}{(1-q^{2}w)w^{n+1}}\,dw.\]
We shift the contour to \(|w|=q^{-\frac{3}{2}+\epsilon}\) and encounter the pole at \(w=1/q^{2}\). We then get that
\[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p(1-q^{-\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{-j}}\prod_{P|g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\]
\[=\frac{q^{n(1+\frac{2}{p})}\mathcal{F}_{1}(1/q^{2})}{p(1-q^{-\frac{p}{2}})}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2n-1)}}{q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{-j}}=\frac{q^{n(1+\frac{2}{p})}\mathcal{F}_{1}(1/q^{2})}{(1-q^{-\frac{p}{2}})(1-q^{\frac{p}{2}-1})}q^{(\frac{1}{2}-\frac{1}{p})([2n-1]_{p}-1)}+O\Big{(}q^{n(\frac{1}{2}+\frac{2}{p}+\epsilon)}\Big{)},\]
where we used Lemma 2.8 to evaluate the sum over \(j\).
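For the reader's convenience, we record the contour-shift computation behind the last step (the same computation is used again in the proofs of Lemmas 6.6 and 6.7 below): since \(1-q^{2}w=-q^{2}(w-1/q^{2})\), moving the contour from a small circle around the origin to \(|w|=q^{-\frac{3}{2}+\epsilon}\) picks up
\[-\operatorname*{Res}_{w=1/q^{2}}\frac{\mathcal{F}_{1}(w)}{(1-q^{2}w)w^{n+1}}=q^{2n}\mathcal{F}_{1}(1/q^{2}),\]
while the integral over the new contour is bounded by \(q^{n(\frac{3}{2}-\epsilon)}\); multiplying by the prefactor \(q^{n(\frac{2}{p}-1)}\) then gives the main term of size \(q^{n(1+\frac{2}{p})}\) and the stated error term.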
**Lemma 6.6**.: _For \(n\) a nonnegative integer, we have_ \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p}\sum_{j =0}^{p-1}\frac{\xi_{p}^{j(2n-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\prod_{P|g} \Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\Big{)}=\frac{q^{n(1+ \frac{2}{p})}\mathcal{F}_{2}(1/q^{2})}{1-q}q^{\frac{[-2n]_{p}}{p}}+O\Big{(}q^{ n(\frac{1}{2}+\frac{2}{p}+\epsilon)}\Big{)},\] _where \(\mathcal{F}_{2}(w)\) is given in (53)._ Proof.: The generating series of the sum over \(g\) is \[\sum_{\begin{subarray}{c}g\text{ square-free}\end{subarray}}\phi(g) \prod_{P|g}\Bigg{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\Bigg{)}w^{ \deg(P)} =\prod_{P}\Bigg{(}1+w^{\deg(P)}(|P|-1)\Big{(}1-\frac{\xi_{p}^{-j \deg(P)}}{|P|^{\frac{1}{p}}}\Big{)}\Bigg{)} \tag{53}\] \[=\mathcal{Z}(qw)\mathcal{F}_{2}(w),\] where \(\mathcal{F}_{2}(w)\) converges absolutely for \(|w|<q^{-\frac{3}{2}}\). Now using Perron's formula (5) for the sum over \(g\), we get that \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p}\sum_{j =0}^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\prod_{P |g}\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\Big{)}\] \[=\frac{q^{n(\frac{2}{p}-1)}}{p}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(2 n-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\frac{1}{2\pi i}\oint_{|w|<q^{-2}}\frac{ \mathcal{F}_{2}(w)}{(1-q^{2}w)w^{n+1}}\,dw.\] We shift the contour to \(|w|=q^{-\frac{3}{2}+\epsilon}\) and encounter the pole at \(w=1/q^{2}\). We then get that \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p}\sum_{j=0 }^{p-1}\frac{\xi_{p}^{j(2\deg(g)-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}\prod_{P|g }\Big{(}1-\frac{\xi_{p}^{-j\deg(P)}}{|P|^{\frac{1}{p}}}\Big{)}\] \[=\frac{q^{n(1+\frac{2}{p})}\mathcal{F}_{2}(1/q^{2})}{p}\sum_{j=0} ^{p-1}\frac{\xi_{p}^{j(2n-1)}}{q^{\frac{1}{p}}-\xi_{p}^{-j}}=\frac{q^{n(1+ \frac{2}{p})}\mathcal{F}_{2}(1/q^{2})}{q-1}q^{\frac{[-2n]_{p}}{p}}+O\Big{(}q^{ n(\frac{1}{2}+\frac{2}{p}+\epsilon)}\Big{)},\] where we used Lemma 2.8 to evaluate the sum over \(j\). **Lemma 6.7**.: _For \(n\) a nonnegative integer, we have_ \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p^{2}}\sum_{j =0}^{p-1}\frac{\xi_{p}^{j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{ 2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_ {p}^{-2j\deg(P)}}{|P|^{\frac{1}{p}+\frac{1}{2}}}\Big{)}\] \[=\frac{q^{n(1+\frac{2}{p})}\mathcal{F}_{3}(1/q^{2})}{p^{2}}\sum_{ j=0}^{p-1}\frac{\xi_{p}^{j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{ 2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}+O\Big{(}q^{n(\frac{1}{2}+ \frac{2}{p}+\epsilon)}\Big{)},\] _where \(\mathcal{F}_{3}(w)\) is given in (54)._ Proof.: The generating series of the sum over \(g\) is \[\sum_{\begin{subarray}{c}g\text{ square-free}\end{subarray}}\phi(g) \prod_{P|g}\Bigg{(}1-\frac{\xi_{p}^{-2j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p} }}\Bigg{)}w^{\deg(P)} =\prod_{P}\Bigg{(}1+w^{\deg(P)}(|P|-1)\Big{(}1-\frac{\xi_{p}^{-2j \deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\Bigg{)} \tag{54}\] \[=\mathcal{Z}(qw)\mathcal{F}_{3}(w),\] where \(\mathcal{F}_{3}(w)\) converges absolutely for \(|w|<q^{-\frac{3}{2}}\). 
Now using Perron's formula (5) for the sum over \(g\), we get that \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p^{2}}\sum_{j =0}^{p-1}\frac{\xi_{p}^{j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1} {2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_ {p}^{-2j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\] \[=\frac{q^{n(\frac{2}{p}-1)}}{p^{2}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{ j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1} {p}}-\xi_{p}^{-j})}\frac{1}{2\pi i}\oint_{|w|<q^{-2}}\frac{\mathcal{F}_{3}(w)}{ (1-q^{2}w)w^{n+1}}\,dw.\] We shift the contour to \(|w|=q^{-\frac{3}{2}+\epsilon}\) and encounter the pole at \(w=1/q^{2}\). We then get that \[\sum_{\begin{subarray}{c}g\in\mathcal{M}_{n}\\ g\text{ square-free}\end{subarray}}\frac{\phi(g)|g|^{\frac{2}{p}-1}}{p^{2}}\sum_{j =0}^{p-1}\frac{\xi_{p}^{j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1} {2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\prod_{P|g}\Big{(}1-\frac{\xi_ {p}^{-2j\deg(P)}}{|P|^{\frac{1}{2}+\frac{1}{p}}}\Big{)}\] \[=\frac{q^{n(1+\frac{2}{p})}\mathcal{F}_{3}(1/q^{2})}{p^{2}}\sum_{ j=0}^{p-1}\frac{\xi_{p}^{j(4n-1)}}{(1-q^{-\frac{1}{2}}\xi_{p}^{j})(q^{\frac{1} {2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}+O\Big{(}q^{n(\frac{1}{2}+ \frac{2}{p}+\epsilon)}\Big{)},\] where we used Lemma 2.8 to evaluate the sum over \(j\). We are now ready to prove Theorem 1.6. Proof of Theorem 1.6.: Our goal is to evaluate \[\frac{1}{|\mathcal{AS}_{d}^{\mathrm{ord}}|}\Bigg{(}\sum_{\begin{subarray}{c}g\in \mathcal{M}_{d}\\ g\text{ square-free}\end{subarray}}\sum_{f\in\mathcal{AS}_{d,g}^{\mathrm{ord}}} \mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}+\sum_{\begin{subarray}{c}g \in\mathcal{M}_{d-1}\\ g\text{ square-free}\end{subarray}}\sum_{f\in\mathcal{AS}_{d,g}^{\mathrm{ord}}} \mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}\Bigg{)}.\] We combine the results of Propositions 6.1 and 6.3, together with Lemmas 6.4, 6.5, 6.6, 6.7 to obtain that the above equals \[\frac{1}{|\mathcal{AS}_{d}^{\mathrm{ord}}|}\left[\frac{q^{2d-1}(q^ {2}+q-1)\mathcal{E}(1/q,1)}{(1-q^{-\frac{p}{2}})(1-q^{1-\frac{p}{2}})}+\frac{q ^{d(1+\frac{2}{p})}(q+q^{1-\frac{2}{p}}-q^{-\frac{2}{p}})}{p(1-q^{-\frac{p}{2} })}\sum_{j=0}^{p-1}\frac{\mathcal{E}(1/q,q^{\frac{1}{2}-\frac{1}{p}}\xi_{p}^{j })}{(q^{\frac{1}{p}-\frac{1}{2}}-\xi_{p}^{j})\xi_{p}^{2jd}}\right.\] \[+\frac{q^{d(1+\frac{2}{p})+1}\mathcal{F}_{1}(1/q^{2})}{(1-q^{- \frac{p}{2}})(1-q^{\frac{p}{2}-1})}q^{(\frac{1}{2}-\frac{1}{p})([2d-1]_{p}+1) }+\frac{(1-q^{-1})q^{d(1+\frac{2}{p})-\frac{2}{p}+1}\mathcal{F}_{1}(1/q^{2})} {(1-q^{-\frac{p}{2}})(1-q^{\frac{p}{2}-1})}q^{(\frac{1}{2}-\frac{1}{p})([2d-3] _{p}+1)}\] \[+\frac{q^{d(1+\frac{2}{p})}\mathcal{F}_{2}(1/q^{2})}{1-q^{-1}} \frac{\Big{(}q^{\frac{2-p+[2d-2]_{p}}{2}}+q^{\frac{-[2d-2]_{p}}{2}}\Big{)}}{1- q^{-\frac{p}{2}}}q^{\frac{[-2d]_{p}}{p}}\] \[\left.-\frac{(1-q^{-1})q^{d(1+\frac{2}{p})+2}\mathcal{F}_{3}(1/q^ {2})}{p^{2}}\sum_{j=0}^{p-1}\frac{\xi_{p}^{j(4d-1)}}{(1-q^{-\frac{1}{2}}\xi_{ p}^{j})(q^{\frac{1}{2}}\xi_{p}^{j}-1)(q^{\frac{1}{p}}-\xi_{p}^{-j})}\right]+O \Big{(}q^{d(\frac{2}{p}-\frac{3}{2}+\epsilon)}\Big{)}. 
\tag{55}\] Finally, using Lemma 2.5, we find that \[\frac{1}{|\mathcal{AS}_{d}^{\mathrm{ord}}|}\sum_{f\in\mathcal{AS}_{d}^{\mathrm{ ord}}}\mathcal{L}\Big{(}\frac{1}{\sqrt{q}},f,\psi\Big{)}=\frac{\mathcal{E}(1/q,1)}{ \mathcal{G}(1/q^{2})(1-q^{-\frac{p}{2}})(1-q^{1-\frac{p}{2}})}+c_{0}q^{d( \frac{2}{p}-1)}+O\Big{(}q^{d(\frac{2}{p}-\frac{3}{2}+\epsilon)}\Big{)},\] for a specific coefficient \(c_{0}\) which can be obtained from the previous equation. ## 7. Agreement with the Random Matrix Models Starting with the observation of Montgomery and Dyson that the zeros of \(\zeta(s)\) seem to obey the same distribution patterns as the eigenvalues of large random unitary matrices, random matrix models have been given for families of \(L\)-functions and have been instrumental in the formulation of conjectures in number theory. Associating a random matrix group to each family of \(L\)-functions as suggested by the work of Katz and Sarnak [10], Keating and Snaith [10, 11] used random matrix theory computations to conjecture formulas for moments in families of \(L\)-functions. We reproduce here the expected conjectures in the cases of families of \(L\)-functions with unitary and symplectic symmetries respectively. Let \(\mathcal{F}\) denote a family of \(L\)-functions. For \(f\in\mathcal{F}\), let \(c(f)\) denote the conductor of the \(L\)-function associated to \(f\), denoted by \(L_{f}(s)\). Let \[X^{*}=\Big{|}\Big{\{}f\in\mathcal{F}\,:\,c(f)\leq X\Big{\}}\Big{|}.\] In the case of a unitary family, the Keating-Snaith conjecture states the following. **Conjecture 7.1**.: _For a family \(\mathcal{F}\) of \(L\)-functions with unitary symmetry and \(k\) a positive integer,_ \[\frac{1}{X^{*}}\sum_{\begin{subarray}{c}f\in\mathcal{F}\\ c(f)\leq X\end{subarray}}|L_{f}(1/2)|^{2k}\sim a(k)g_{\text{U}}(k)(\log X)^{k^{2 }},\] _where \(a(k)\) is an arithmetic factor depending on the specific family considered, and where_ \[g_{\text{U}}(k)=\prod_{j=0}^{k-1}\frac{j!}{(j+k)!}.\] In particular, in the case of the second moment (\(k=1\)), we have \(g_{\text{U}}(1)=1\). Note that Conjecture 7.1 above is stated for a family of \(L\)-functions over number fields, but a similar Conjecture can be stated in the function field setting. Namely, for a family \(\mathcal{F}\) of \(L\)-functions over function fields with expected unitary symmetry, one would expect \[\frac{1}{D^{*}}\sum_{\begin{subarray}{c}f\in\mathcal{F}\\ \log_{q}c(f)=d\end{subarray}}|L_{f}(1/2)|^{2k}\sim a(k)g_{\text{U}}(k)d^{k^{2 }},\] where \[D^{*}=\Big{|}\{f\in\mathcal{F}\,:\,\log_{q}|c(f)|=d\}\Big{|}.\] Note that the leading order term in Theorem 1.3 (apart from the arithmetic factor which depends on \(q\) and which corresponds to the factor \(a(1)\) in Conjecture 7.1) matches the conjecture above (it is equal to \(1\)), under the correspondence: \[\mathcal{F}=\mathcal{AS}_{d}^{0},\,L_{f}=\mathcal{L}(u,f,\psi),\,\log_{q}|c(f) |=d.\] Hence the polynomial family is expected to have unitary symmetry, as suggested both by Theorem 1.3 and by the local statistics results due to Entin [12, EP]. Now in the case of a family with symplectic symmetry, the Keating-Snaith conjecture states the following. 
**Conjecture 7.2**.: _For a family \(\mathcal{F}\) of \(L\)-functions with symplectic symmetry and \(k\) a positive integer,_
\[\frac{1}{X^{*}}\sum_{\begin{subarray}{c}f\in\mathcal{F}\\ c(f)\leq X\end{subarray}}L_{f}(1/2)^{k}\sim a(k)g_{\text{Sp}}(k)\Big{(}\frac{\log X}{2}\Big{)}^{\frac{k(k+1)}{2}},\]
_where \(a(k)\) is an arithmetic factor depending on the specific family considered, and where_
\[g_{\text{Sp}}(k)=\Big{(}\prod_{j=0}^{k}(2j-1)!!\Big{)}^{-1}.\]
Note that \(g_{\text{Sp}}(1)=1,g_{\text{Sp}}(2)=\frac{1}{3}\). Similarly, one could write the conjecture above for a family over function fields with symplectic symmetry. In this case, one expects
\[\frac{1}{D^{*}}\sum_{\begin{subarray}{c}f\in\mathcal{F}\\ \log_{q}|c(f)|=d\end{subarray}}L_{f}(1/2)^{k}\sim a(k)g_{\text{Sp}}(k)\Big{(}\frac{d}{2}\Big{)}^{\frac{k(k+1)}{2}}.\]
A precise statement of the conjecture above was also written down explicitly by Andrade and Keating in [1] for the family of quadratic Dirichlet \(L\)-functions over function fields. Now note that the leading order terms in Theorems 1.4 and 1.5 (\(\frac{1}{2}\) and \(\frac{1}{24}\), respectively) match the conjecture above, where
\[\mathcal{F}=\mathcal{AS}_{d}^{0,\mathrm{odd}},\,L_{f}=\mathcal{L}(u,f,\psi),\,\log_{q}|c(f)|=d.\]
This confirms the fact that the odd polynomial Artin-Schreier family has expected symplectic symmetry, as already observed in the work of Entin-Pirani [1].
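As a quick check of the constants appearing above:
\[g_{\text{Sp}}(1)=\frac{1}{(-1)!!\cdot 1!!}=1,\qquad g_{\text{Sp}}(2)=\frac{1}{(-1)!!\cdot 1!!\cdot 3!!}=\frac{1}{3},\]
so the conjectured first and second moments grow like \(g_{\text{Sp}}(1)\frac{d}{2}=\frac{d}{2}\) and \(g_{\text{Sp}}(2)\big{(}\frac{d}{2}\big{)}^{3}=\frac{d^{3}}{24}\), whose coefficients \(\frac{1}{2}\) and \(\frac{1}{24}\) are exactly the leading order terms of Theorems 1.4 and 1.5 quoted above.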
2303.08067
WHYPE: A Scale-Out Architecture with Wireless Over-the-Air Majority for Scalable In-memory Hyperdimensional Computing
Hyperdimensional computing (HDC) is an emerging computing paradigm that represents, manipulates, and communicates data using long random vectors known as hypervectors. Among different hardware platforms capable of executing HDC algorithms, in-memory computing (IMC) has shown promise as it is very efficient in performing matrix-vector multiplications, which are common in the HDC algebra. Although HDC architectures based on IMC already exist, how to scale them remains a key challenge due to collective communication patterns that these architectures required and that traditional chip-scale networks were not designed for. To cope with this difficulty, we propose a scale-out HDC architecture called WHYPE, which uses wireless in-package communication technology to interconnect a large number of physically distributed IMC cores that either encode hypervectors or perform multiple similarity searches in parallel. In this context, the key enabler of WHYPE is the opportunistic use of the wireless network as a medium for over-the-air computation. WHYPE implements an optimized source coding that allows receivers to calculate the bit-wise majority of multiple hypervectors (a useful operation in HDC) being transmitted concurrently over the wireless channel. By doing so, we achieve a joint broadcast distribution and computation with a performance and efficiency unattainable with wired interconnects, which in turn enables massive parallelization of the architecture. Through evaluations at the on-chip network and complete architecture levels, we demonstrate that WHYPE can bundle and distribute hypervectors faster and more efficiently than a hypothetical wired implementation, and that it scales well to tens of receivers. We show that the average error rate of the majority computation is low, such that it has negligible impact on the accuracy of HDC classification tasks.
Robert Guirado, Abbas Rahimi, Geethan Karunaratne, Eduard Alarcón, Abu Sebastian, Sergi Abadal
2023-02-04T22:41:27Z
http://arxiv.org/abs/2303.08067v1
# WHYPE: A Scale-Out Architecture with Wireless Over-the-Air Majority for Scalable In-memory Hyperdimensional Computing

###### Abstract

Hyperdimensional computing (HDC) is an emerging computing paradigm that represents, manipulates, and communicates data using long random vectors known as hypervectors. Among different hardware platforms capable of executing HDC algorithms, in-memory computing (IMC) has shown promise as it is very efficient in performing matrix-vector multiplications, which are common in the HDC algebra. Although HDC architectures based on IMC already exist, how to scale them remains a key challenge due to collective communication patterns that these architectures require and that traditional chip-scale networks were not designed for. To cope with this difficulty, we propose a scale-out HDC architecture called WHYPE, which uses wireless in-package communication technology to interconnect a large number of physically distributed IMC cores that either encode hypervectors or perform multiple similarity searches in parallel. In this context, the key enabler of WHYPE is the opportunistic use of the wireless network as a medium for over-the-air computation. WHYPE implements an optimized source coding that allows receivers to calculate the bit-wise majority of multiple hypervectors (a useful operation in HDC) being transmitted concurrently over the wireless channel. By doing so, we achieve a joint broadcast distribution and computation with a performance and efficiency unattainable with wired interconnects, which in turn enables massive parallelization of the architecture. Through evaluations at the on-chip network and complete architecture levels, we demonstrate that WHYPE can bundle and distribute hypervectors faster and more efficiently than a hypothetical wired implementation, and that it scales well to tens of receivers. We show that the average error rate of the majority computation is low, such that it has negligible impact on the accuracy of HDC classification tasks.

## I Introduction

Hyperdimensional computing (HDC) is an emerging computational framework and is based on the observation that key aspects of human memory, perception and cognition can be explained by the mathematical properties of hyperdimensional spaces comprising high-dimensional vectors known as hypervectors [1]. The \(d\)-dimensional hypervectors are generated using a (pseudo)random process such that their components are independent and identically distributed. When the dimensionality (\(d\)) is in the thousands, a large number of quasi-orthogonal hypervectors exist. This allows HDC to combine existing hypervectors into new hypervectors using well-defined vector operations, such that the resulting hypervector is unique and preserves the dimensionality. To review HDC and related computational models in detail, refer to [2]. HDC has been employed in a range of applications including cognitive computing [3, 4], robotics [5], distributed computing [6, 7, 8], communications [9, 10, 11, 12, 13, 14], and in various aspects of machine learning. See [15] for a comprehensive review. HDC has achieved particularly notable accuracy in machine learning applications that demand few-shot learning, where other alternative approaches have generally struggled [16, 17, 18, 19, 20, 21]. Among other advantages, HDC is extremely robust in the presence of failures, defects, variations, and noise, all of which are synonymous with ultra-low energy computation.
For instance, it has been shown that HDC degrades gracefully in the presence of various faults in comparison to other alternative classifiers: HDC tolerates intermittent errors [22], permanent hard errors in memory [23] or in logic [24], and spatio-temporal variations [25] in emerging memory technologies. In a similar vein, it also tolerates noise and interference in the communication channels [11, 14]. These results demonstrate robust operation of HDC under low signal-to-noise ratio and high-variability conditions thanks to the special brain-inspired properties of HDC: (pseudo)randomness with i.i.d. components, high-dimensionality, and holographic representations (see [22] for more details).

What different HDC algorithms have in common is that they operate on wide vectors. Therefore, HDC calls for architectures that handle operations on a large number of wide vectors efficiently. One of the key operations of HDC is similarity search. It compares an input hypervector with a typically large number of hypervectors that are stored in an associative memory. As a similarity metric, the dot-product is often used. This provides a natural fit to exploit in-memory computing (IMC) for HDC [16, 25]. An IMC core departs from von Neumann architectures, which move data from a processing unit to a memory unit and vice versa, by exploiting the possibility of performing operations (dot products, in our case) within the memory device itself [26]. This improves both the time complexity and the energy consumption of the architecture. IMC systems have been proposed recently to execute HDC tasks using hypervectors as wide as 10,000 bits [25]. As further elaborated in Section II, IMC cores are capable of performing similarity searches through dot-products with unprecedented energy efficiency, i.e. \(\sim\)100\(\times\) more efficiently than a digital accelerator [25]. This has sparked interest in HDC systems that can handle a large search space. For example, certain applications in the incremental learning regime require continually adding new hypervectors to represent novel classes [16, 17] and performing similarity search on them, over a set that can grow to thousands of hypervectors. Still, how to scale HDC architectures to perform searches across a large number of classes remains unclear due to the associated challenges.

HDC architectures can be scaled by either increasing the size of the IMC cores to accommodate many hypervectors (scale-up) or by deploying multiple moderately-sized IMC cores to execute the similarity searches in parallel (scale-out). On the one hand, scaling up requires large IMC cores for the architecture to be usable in incremental learning applications. This poses a fundamental problem in terms of array impedances and programming complexity for the IMC core [27]. On the other hand, scaling out implies distributing wide hypervectors across a potentially large number of modules, which puts a large pressure on the system interconnect. More specifically, such an architecture generates reduction and broadcast communication patterns for which conventional Networks-on-Chip (NoC) and Networks-in-Package (NiP) struggle to deliver competitive performance.

To address the scalability problem of IMC-based HDC architectures, we propose to use wireless communications within the computing package. Wireless Networks-on-Chip (WNoC) have shown promise in alleviating the bottlenecks that traditional NoC and NiP face, especially for the collective traffic patterns that appear when scaling out HDC architectures [28, 29, 30, 31, 32].
To that end, WNoCs provide native broadcast capabilities, which are put to use to implement a chip-scale network that, at the same time, is able to bundle multiple hypervectors and distribute the resulting bundled hypervector to a number of physically distributed similarity search engines.

In this paper, we present WHYPE, a scale-out architecture that employs wireless over-the-air (OTA) computing to enable scalable in-memory HDC. The architecture, summarized in Fig. 1, consists of a set of \(M\) encoders that generate query hypervectors. These hypervectors are transmitted simultaneously, bundled over the air, and received by a set of \(N\) similarity search engines. Bundling is possible thanks to the OTA computing of the bit-wise majority of the hypervectors, which eliminates the need for a central point of reduction in the architecture. In turn, the concept of OTA is feasible because we have full electromagnetic knowledge of the chip package and we can engineer the constellations to calculate the majority over the air with low error.

The WNoC proposed for WHYPE is uniquely suited to HDC for two main reasons. On the one hand, it delivers seamless support for the reduction and broadcast patterns required by scale-out HDC architectures. On the other hand, it does so while bypassing the main limitations of wireless interconnects. The generally low aggregate bandwidth is multiplied by \(M\) in WHYPE as we allow the concurrent transmission of \(M\) hypervectors. Also, the impact of the typically high error rates is minimized thanks to the inherent resilience of HDC algorithms to noise.

In summary, this paper makes the following three novel contributions. First, we present WHYPE, a wireless-enabled architecture for scale-out hyperdimensional computing. Second, we assess the capacity of WHYPE's wireless interconnect to deliver lightweight all-to-all concurrent communications at the chip scale. Third, we evaluate the impact of imperfect wireless communications on the accuracy of the similarity search. It is worth noting that this paper significantly expands on the work presented in [33] by:

* Presenting a complete design for both a wired baseline and the WHYPE architecture, including details on the wireless interfaces and their connection to the encoders and similarity search engines.
* Making a comparison of the throughput, area, and power of both the baseline and the WHYPE architecture, which motivates the need for the proposed architecture.
* Extending the performance analysis of WHYPE to the time domain to understand the achievable bandwidth for the OTA computation.
* Evaluating the accuracy of WHYPE's similarity search considering a distributed dataset.

## II Background

To facilitate the understanding of the principles behind WHYPE, here we provide background on the topics of HDC, IMC, and wireless communications at the chip scale.

### _Hyperdimensional Computing_

In this work, we focus on a variant of HDC models using 512-dimensional binary hypervectors. Under this setting, by employing a random process, it is easy to find a huge number of non-coincident quasi-orthogonal vectors that exhibit a normalized Hamming distance close to 0.5. We call these random hypervectors _atomic_ hypervectors.

Fig. 1: Overview of WHYPE, a many-core wireless-enabled IMC platform. Orange encoders map to our wireless transmitters, while green IMCs map to our wireless-augmented IMCs. Bit-wise majority operation required for hypervector bundling is performed via wireless over-the-air computation.
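As a minimal illustration of this quasi-orthogonality (a sketch we add for concreteness, not code from the WHYPE implementation; the sample count is an arbitrary example value), the following Python snippet draws random 512-dimensional binary hypervectors and checks that their pairwise normalized Hamming distances concentrate around 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512    # hypervector dimensionality used in this work
num = 100  # number of atomic hypervectors to draw

# Atomic hypervectors: i.i.d. random binary components.
hvs = rng.integers(0, 2, size=(num, d), dtype=np.uint8)

# Pairwise normalized Hamming distances.
dists = [
    np.count_nonzero(hvs[i] != hvs[j]) / d
    for i in range(num) for j in range(i + 1, num)
]
print(f"mean distance: {np.mean(dists):.3f}")          # close to 0.5
print(f"min/max: {min(dists):.3f} / {max(dists):.3f}") # concentrated around 0.5
```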
One can further create an _encoder_ to operate on these atomic hypervectors by using operations such as binding, bundling (i.e. superposition), and permutation to obtain a composite hypervector describing an object or event of interest. In a classification task, the composite hypervectors, generated from various examples of the same class, can be further bundled together to create a single prototype hypervector representing a class. In this work, the bundling operation is implemented as a logical element-wise majority operation. During training, the prototype hypervectors are stored in the associative memory. In the inference stage, the query hypervectors of unknown objects/events are generated by the same procedure as in the training stage. The query hypervector is compared to the prototype hypervectors in the associative memory. Then, the chosen label is the one corresponding to the prototype hypervector that has the highest similarity to the query vector. In HDC, the robustness to failure is given by the spreading of information across thousands of dimensions. See [20] for more details.

### _In-memory Computing_

With each new technology node, the gap between the speed and efficiency of computation and memory continues to grow. The effects of such a disparity, commonly known as the _memory wall_, have been addressed with novel concepts such as high-bandwidth memory [34] or 2.5D and 3D monolithic integration [35], among others. However, from an architectural point of view, these solutions do not solve the fundamental bottleneck arising from the need to move large quantities of data from memory and back. Instead, IMC appears as a promising candidate to overcome these challenges [26].

IMC is a non-von Neumann computing paradigm that leverages the memory unit to perform in-place computational tasks, reducing the amount of data movement and therefore cutting down the latency and energy consumption associated with in-package communication [26]. At the core of IMC is a crossbar array with a memory device lying at each cross point of the array. IMC cores in which these memory devices are resistance-based, and more specifically those based on phase-change memory (PCM) devices, have recently shown promising results [36]. In a resistance-based IMC core, we can program certain values as conductances of cross-point memory devices. Executing a matrix-vector multiplication (MVM), essential to any machine learning algorithm, is as simple as, first, tuning the conductances to match the matrix values and, second, exploiting Ohm's law and Kirchhoff's law by inputting the vector as voltages from one side and reading the output currents from a perpendicular side.

IMC architectures are capable of executing various HDC operations [25]. In this work, we are particularly interested in the similarity search in the associative memory. As shown in Fig. 2, since the prototype hypervectors \(P_{i}\) will be programmed in an IMC core, the similarity search through the dot product can be implemented as an MVM with the query hypervector \(Q\) as the input vector. This allows performing a dot-product in \(O(1)\) time complexity.

### _Wireless Network-on-Chip_

Manycore architectures currently rely on NoCs as their interconnect backbone. However, the performance of NoCs can quickly degrade when serving collective communication patterns such as reductions or broadcasts, especially when scaled.
In light of this, WNoCs have been recently proposed to complement wired interconnects due to their natural broadcast support, low system-wide latency, and adaptive network topology [29, 32, 37, 38]. Even though WNoC technology is not mature, proof-of-concept designs have been implemented and tested [39]. In WNoCs, a core or a cluster of cores is equipped with RF transceivers and antennas [40, 41, 42]. This allows them to modulate and transmit data, which propagates through the chip package until being picked up and demodulated by the receivers in the transmitter's range. By tuning all antennas to the same channel, WNoC allows performing low-latency broadcasts. However, this comes at the cost of a low aggregate bandwidth, as two simultaneous broadcast transmissions will interfere with each other. Moreover, for the same energy, wireless interconnects are generally much less reliable than wired interconnects due to their _radiative_ nature. Fortunately, as described in the next sections, the forgiving nature of HDC and the collective nature of its operations minimize the impact of the WNoC disadvantages.

Fig. 2: Similarity search example in an IMC core. Since the prototype hypervector of the third column is the most similar one to the query vector \(Q\), it will output more current than the others and its associated label will be chosen.

## III Motivation

The main aim of this work is to propose an architecture that employs IMC cores for HDC and that can be scaled to satisfy the insatiable appetite of the most demanding workloads. The top chart of Fig. 3(a) shows a logical diagram of a possible architecture template for an IMC-enabled HDC-based classifier. The encoding system on the left, possibly divided in a number \(M\) of parallel encoders, translates the input data into query hypervectors. These hypervectors are then bundled via a bit-wise majority operation, which virtually increases the computation throughput proportionally to the number of bundled vectors. The search engine on the right, possibly composed of \(N\) IMC cores storing \(K\) class templates each, compares the bundled query hypervectors with the \(N\times K\) prototype hypervectors. We note that both the encoders and the search engines can be implemented with IMC cores.

To scale such an architecture, two broad decisions need to be taken from an architectural standpoint. First, whether to scale by increasing the size of the IMC cores (scale-up, increasing \(K\)) or by placing more similarity search engines in the system (scale-out, increasing \(N\)). Second, whether the scaling is done in a fully integrated way, i.e. placing all encoders and search engines within a single chip, or using disintegrated alternatives such as the recent chiplet paradigm. Our proposed architecture, called WHYPE, is built upon three main observations:

**Observation 1: In-memory cores do not scale up well.** Increasing the size \(K\) of the IMC arrays allows accommodating a larger number of prototype hypervectors (classes). However, increasing the crossbar size brings with it a number of non-ideal properties. For instance, the IR drop across the array is increased due to the interconnect resistance of longer wires, and the RC latency is increased due to the increased parasitic capacitance. To counter these problems, one has to sacrifice one or a few important metrics related to performance, power and area. For example, the IR drop can be reduced by increasing the wire width.
The minimum wire width required at different array sizes to maintain the same effect of IR drop is given in Fig. 4 and in [43]. As shown in the figure, the wire width rises exponentially, limiting the scalability of array sizes. Moreover, the complexity of weight programming also increases with the array size [27]. There are already IMC prototypes supporting up to 256 prototype hypervectors [16]. This may be stretched up to 512 and 1024, but most likely not to bigger sizes due to the rise of non-ideal properties. _Therefore, scale-out architectures that employ multiple, but relatively small, IMC cores may be preferable in this scenario._

**Observation 2: There exist different application spaces in terms of required search throughput and the number of inputs and classes.** Following [20], a generic scalable HDC architecture for various workloads should include a set of encoders (shown on the left-hand side of Fig. 1) and a set of similarity search engines (shown on the right-hand side of Fig. 1). The number of encoders is typically determined by the demands of the application, ranging from a few encoders operating on data from different sensory modalities [44, 45] to a larger number of encoders working with independent streaming channels [14]. Similarly, the number of required similarity search engines is determined by the application. It is based on the number of classes that the application has to support, which ranges from a handful of classes [16, 25] to over one thousand classes [17], or even half a million classes [46]. Further, the number of _active_ classes could change over time; e.g., in a continual learning regime there are about 60 classes at the beginning, which can grow up to 1600 during the course of incremental learning [16, 17].

Fig. 4: Minimum wire width requirement of crossbar arrays as a function of the array size, in order to maintain the same effect from IR drop and other non-idealities on the MVM accuracy.

Fig. 3: Towards a wireless-enabled scale-out HDC architecture with over-the-air computing. (a, top) The architecture involves \(M\) encoders generating queries \(q_{1}\cdots q_{M}\), the computation of a composite query \(Q\) via bit-wise majority, and \(N\) IMC cores performing similarity search over multiple copies of \(Q\). (a, bottom) In the wireless implementation, the IMC cores receive different versions of \(Q\) over space (\(Q^{1}\cdots Q^{N}\)) that need to be decoded. (b) To enable the decoding of \(Q^{1}\cdots Q^{N}\) with low error, \(q_{1}\cdots q_{M}\) are modulated with BPSK and shifted so that the overlapped symbols can be decoded easily at the receiver.

In computational terms, this means that the required capacity of the encoders and similarity search engines is highly dependent on the application. We therefore aim for a generic architecture that is able to scale out as more inputs or classes come in. Assuming a fully integrated design, architects would need to carefully dimension the encoders and similarity search engines in order to cater to the needs of a specific application. This fundamentally limits the reuse of the system in other domains, which may either require faster/broader or smaller/more efficient searches and where the designed architecture would either underperform or waste area and power.
_Such an observation suggests that a disintegrated architecture, possibly chiplet-based, would be a viable route for scaling out IMC-based HDC architectures._

**Observation 3: Scaling out HDC architectures is costly.** In particular, scaling the architecture template described in Fig. 3(a) quickly leads to a communication bottleneck, especially in chiplet-based systems. First, even though bundling compresses \(M\) hypervectors of length \(d\) into a single composite query, this comes at the cost of a heavy reduction traffic flow. This \(M\)-to-1 traffic pattern is due to the need to bring the different hypervectors to a common circuit performing the bit-wise majority operation. The majority operation, besides requiring significantly large buffers to store the \(M\) inputs and the output hypervector, takes \(M\) inputs of size \(d\) with a known complexity of \(d\times M^{3.5}\) gates [47]. Only when the bundling is finalized can the output be distributed to the \(N\) IMC cores. However, broadcast patterns are generally expensive with growing \(N\) [29]. In summary, bundling and data movement quickly become a bottleneck when scaling, especially in disintegrated architectures. In chiplet-based systems, significant energy (\(\sim\)1 pJ) and latency (\(\sim\)20 ns) can be expected per hop due to connectivity and I/O pin limitations [48], whereas hop counts will grow at least proportionally to the number of encoders \(M\) and the number of search engines \(N\) due to the reduction and broadcast flows, respectively [42]. _Hence, alternative implementations of the reduction-majority-broadcast pattern are necessary to enable the effective scale-out of HDC architectures._

## IV WHYPE: A Wireless-Enabled Architecture for Scale-Out of Hyperdimensional Computing

As discussed in Section III, even though disintegrated scale-out is a desirable choice for scaling HDC systems, implementing such architectures with wired interconnects is challenging. This is because of three main reasons:

* The bundling operation generates a reduction pattern that can create a communication bottleneck in the vicinity of the bundling circuit.
* The query distribution requires broadcast communication, which is inherently costly for wired interconnects in general and in chiplet-based systems in particular.
* The bundling operation creates an implicit barrier that forces encoding and similarity search to be done sequentially. While all three operations can be pipelined, the end-to-end latency increases.

Our proposed architecture, called WHYPE, addresses the three problems of wired scale-out at once. To that end, we augment a many-core HDC system with a wireless chip-scale network specifically designed to eliminate the need to transfer all the hypervectors to a central point for bundling. Fig. 3(b) shows the proposed implementation of WHYPE, which is composed of \(M\) encoders augmented with wireless transmitters alongside \(N\) IMC cores augmented with wireless receivers. The transmitters and receivers are slightly modified versions of a simple BPSK modulator and a coherent receiver and decoder, respectively. The frequency of operation is high, e.g. 60 GHz, to minimize the area and power overhead of the RF circuits and the antenna. Finally, given the monolithic nature of the system, we assume that the clocks of all transmitters are synchronized. The mode of operation is as follows.
All the encoders broadcast, simultaneously and through the same wireless channel, the queries \(q_{1}\cdots q_{M}\) that must be bundled to form \(Q\). As a consequence of wave propagation within the package, the receivers will obtain \(N\) different versions of the superposition of the \(M\) symbols transmitted concurrently, \(Q^{1}\cdots Q^{N}\). Each receiver will then take its received signal and attempt to decode a single bit representing the majority operation of the \(M\) transmitted symbols. In other words, the decoders will be carefully designed to achieve \(Q^{1}=Q\), \(\cdots\), \(Q^{N}=Q\) with high probability so that, effectively, _the bit-wise majority of the transmitted hypervectors is computed over the air._

To further illustrate the idea behind over-the-air majority computation, Fig. 5 shows a sample constellation that could result from the superposition of three transmissions. Each point represents one of the \(2^{3}=8\) possible combinations. Since we are not interested in the values of the three transmitted bits, but rather in the result of the majority operation, the decoders only need to distinguish between the combinations that lead to \(maj(\cdot)=0\) and \(maj(\cdot)=1\), respectively. The key objective, then, is to modulate the information in each transmitter so that the received constellations form two easily separable clusters for \(maj(\cdot)=0\) and \(maj(\cdot)=1\). WHYPE achieves this with a very simple variant of source coding, i.e. through pre-assigned phase shifts. As detailed further in Section V-A, an exhaustive search is performed offline to find the transmitters' phase shifts leading to easy-to-decode majority at the receivers.

Fig. 5: Example of decision regions of over-the-air (OTA) majority computation for three transmitters \(\{q_{1},q_{2},q_{3}\}\) at two distinct receivers. Blue/green regions map to 0/1.

At its core, WHYPE exploits three key opportunities, which allow it to boost the value and minimize the disadvantages of wireless chip-scale communications:

**Key Opportunity 1: Over-the-air computing is possible because the channel is static and known beforehand.** OTA computing is certainly not new, but it has generally been hindered by the need for accurate and up-to-date channel state information, which is extremely hard to guarantee in conventional wireless networks [49]. In contrast, the in-package scenario is static and allows for a pre-characterization of the channel [50], hence allowing for an OTA calculation of the majority operations required by the bundling of hypervectors.

**Key Opportunity 2: The inherent broadcast nature of wireless communication allows for a single-hop distribution of bundled hypervectors.** By using omnidirectional antennas such as vertical monopoles [51], data is naturally broadcast with a latency and efficiency hard to achieve with wired on-chip networks [52]. This feature, together with the OTA bundling, eliminates the communication bottleneck of HDC scale-out architectures.

**Key Opportunity 3: The resilience of the HDC paradigm to errors makes it tolerant to unreliable communication.** A drawback of wireless communications in general (and of OTA computing in particular) is that it can suffer from relatively high error rates when compared to wired communications. This generally leads to low energy efficiency. However, as we illustrate in Fig. 6, HDC is inherently resistant to errors and opens the door to the use of unreliable wireless communications without compromising the energy efficiency and scalability of the architecture. A software sketch of the over-the-air majority principle is given below.
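The following minimal sketch, under simplified assumptions (ideal per-link complex gains, additive Gaussian noise, fixed per-transmitter phase shifts), illustrates how a receiver can decode the majority of three concurrently transmitted BPSK symbols directly from the superposed signal. The channel gains are invented for illustration; the phase pairs reuse, for flavor, the values later reported in Section VI-A, but they are not optimal for this made-up channel.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
M = 3                                   # concurrent transmitters
phases0 = np.deg2rad([0, 315, 225])     # illustrative phase for symbol '0'
phases1 = np.deg2rad([90, 135, 180])    # illustrative phase for symbol '1'
h = rng.normal(size=M) + 1j * rng.normal(size=M)  # static per-link channel gains

def tx_symbol(bit, i):
    """BPSK with a pre-assigned per-transmitter phase shift."""
    return np.exp(1j * (phases1[i] if bit else phases0[i]))

# Pre-characterize the channel: received point for each of the 2^M bit combos,
# then label each point with the majority of the transmitted bits.
combos = list(itertools.product([0, 1], repeat=M))
centroids = {c: sum(h[i] * tx_symbol(b, i) for i, b in enumerate(c)) for c in combos}
labels = {c: int(sum(c) > M / 2) for c in combos}

def decode_majority(r):
    """Nearest-centroid decision; returns the majority bit of that centroid."""
    nearest = min(centroids, key=lambda c: abs(r - centroids[c]))
    return labels[nearest]

# Simulate one noisy superposed reception and decode it.
bits = (1, 0, 1)
rx = centroids[bits] + 0.05 * (rng.normal() + 1j * rng.normal())
print(decode_majority(rx), "expected", int(sum(bits) > M / 2))
```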
## V Methodology

Fig. 7 summarizes the procedures followed to evaluate WHYPE from the perspectives of wireless communications and HDC architectures. First, the computing package has been modeled in CST [53] assuming a disintegrated architecture with \(M+N\) chiplets with their respective antennas and transceivers, implementing the scheme from Fig. 3(a). The output of the CST simulations is fed to MATLAB to assess the bit error rate (BER) of the resulting constellations. The BER is then used in a Python-based HDC framework to characterize the impact of imperfect communication on the HDC classification accuracy. Next, we describe the methods to obtain the source coding in Sec. V-A and to assess the speed and reliability of OTA computing in Sec. V-B; we then describe how we evaluate the classification accuracy in Sec. V-C and give the details of the classification benchmark in Sec. V-D.

### _Source Coding Optimization_

As shown in Fig. 3(b), transmitters encode the bits of their queries using a BPSK encoder plus two specific phase shifts (for the symbols '0' and '1', respectively). That is, all symbols will have the same amplitude, but a different phase. Since we let the \(M\) encoders transmit simultaneously, each of the \(N\) receivers will observe a slightly different superposition of all the transmitted symbols. This leads to \(N\) different constellations of \(2^{M}\) points each. In this context, the objective is to select the phase shifts at each transmitter so that all the received constellations are clustered in two separable decision regions, corresponding to the cases \(maj(\cdot)=0\) and \(maj(\cdot)=1\), respectively. This optimization process has two constraints. On the one hand, we have to make sure that each transmitter only uses two phases, for its symbols '0' and '1', respectively. On the other hand, the phases at one transmitter have an impact on the constellations of all receivers, which implies that _a joint optimization considering all RXs is needed_.

The optimization process starts by simulating the simultaneous transmissions in CST. We consider a set of 8 phases in each transmitter (i.e. in 45-degree steps) and evaluate the amplitude and phase obtained at each of the receivers. Then, the results are fed to MATLAB, where the decision regions are computed using the \(K\)-means clustering algorithm with \(K=2\) over the constellation. These regions are used to evaluate the average error across all receivers using the methods described in Sec. V-B. The combination of phases leading to the lowest average error rate is selected. An illustrative example with three transmitters and three receivers is shown in Fig. 8, with the detail of a specific receiver in Fig. 9. A simplified software version of this search loop is sketched below.
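As a rough illustration of the joint optimization, the sketch below sweeps per-transmitter phase pairs on the 45-degree grid, forms the received constellation at each receiver through assumed static channel gains, splits it into two clusters with \(K\)-means (\(K=2\)), and keeps the phase combination with the best worst-case separation across receivers. It is a toy stand-in for the CST+MATLAB flow: the channel values, the scoring metric, and the thinned grid (to keep the toy fast) are our own illustrative choices.

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
M, N = 3, 4                                    # transmitters, receivers (toy sizes)
H = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))  # assumed static channel
GRID = np.deg2rad(np.arange(0, 360, 45))       # 8 candidate phases (45-degree steps)
combos = list(itertools.product([0, 1], repeat=M))
maj = np.array([int(sum(c) > M / 2) for c in combos])

def constellation(rx, ph0, ph1):
    """The 2^M received points at receiver rx for one phase assignment."""
    return np.array([sum(H[rx, i] * np.exp(1j * (ph1[i] if b else ph0[i]))
                         for i, b in enumerate(c)) for c in combos])

def score(points):
    """K-means (K=2) decision regions; centroid distance if the two clusters
    coincide with the majority labeling, zero otherwise."""
    X = np.column_stack([points.real, points.imag])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    ok = (km.labels_ == maj).all() or (km.labels_ == 1 - maj).all()
    return np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1]) if ok else 0.0

best, best_score = None, -1.0
# Joint search over all receivers; the grid is thinned to keep the toy fast.
for ph0 in itertools.product(GRID[::4], repeat=M):
    for ph1 in itertools.product(GRID[::2], repeat=M):
        s = min(score(constellation(r, ph0, ph1)) for r in range(N))
        if s > best_score:
            best, best_score = (ph0, ph1), s
print(np.rad2deg(best[0]), np.rad2deg(best[1]), round(best_score, 3))
```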
### _Over-The-Air Computing Evaluation_

**Frequency-Domain Simulation.** As described in the previous section, electromagnetic simulations are needed to assess the performance of the over-the-air computation process. We model an interposer-based package with \(M+N\) chiplets, each with its own antenna, with the dimensions depicted in Fig. 7. We sweep the phases in each transmitter and obtain the amplitude and phase at the receivers in the frequency domain. The operating frequency is 60 GHz and the transmission power is 0 dBm per antenna, compatible with existing WNoCs [41, 39]. Further, we assume \(M=3\) transmitters and \(N=64\) receivers, unless otherwise noted. Nevertheless, the analysis can be extended to higher frequencies, different power levels, different package configurations, or different numbers of transmitters and receivers.

**Reliability Analysis.** Once the phases are swept and the candidate constellations are obtained, we compute the BER at each RX for all the different possible constellations, and choose the one that leads to the lowest average BER across RXs. In all cases, the BER is evaluated approximating the centroids of each cluster as received symbols, and using the analytical expression of the error rate of BPSK, which is the modulation used in WHYPE. This yields

\[BER^{BPSK}=0.5\cdot\text{erfc}\bigg{(}\frac{0.5\cdot d_{c}}{\sqrt{N_{0}}}\bigg{)}, \tag{1}\]

where \(\text{erfc}(\cdot)\) is the complementary error function, \(d_{c}\) is the distance among centroids, and \(N_{0}\) is the noise spectral density.

Fig. 6: Impact of the bit error rate (rate of erroneous bits in a bundled hypervector) on the accuracy of a classification task under the conditions described in Section V-C.

**Transmission Speed Analysis.** To assess the speed at which the information can be modulated reliably, time-domain simulations are required. In particular, we use CST to obtain the channel impulse response \(h_{i}(t)\) with simultaneous source excitation from the \(M\) transmitters to the receiver \(i\in[1,N]\). Then, the power-delay profile (PDP) of receiver \(i\), \(P_{i}(\tau)\), is evaluated as \(P_{i}(t)=|h_{i}(t)|^{2}\). The delay spread \(\tau_{rms}^{(i)}\) between the sources and receiver \(i\) is calculated as the second moment of the PDP as

\[\tau_{rms}^{(i)}=\sqrt{\frac{\int(\tau-\bar{\tau}_{i})^{2}P_{i}(\tau)d\tau}{\int P_{i}(\tau)d\tau}}, \tag{2}\]

where \(\bar{\tau}_{i}=\frac{\int\tau P_{i}(\tau)d\tau}{\int P_{i}(\tau)d\tau}\) is the mean delay of the PDP for receiver \(i\). Since all transmitters need to be synchronized, the modulation speed needs to be the same for all transmitters and below the coherence bandwidth \(B_{c}\) of the system. Hence, we calculate the coherence bandwidth via the worst-case delay spread among all receivers, \(\tau_{rms}\), so that

\[\tau_{rms}=\max_{i}\tau_{rms}^{(i)}\Longrightarrow B_{c}=\frac{1}{\tau_{rms}}. \tag{3}\]

A numerical illustration of these expressions is given below.
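A small numerical companion to Eqs. (1)-(3): given an assumed power-delay profile and a centroid distance, the sketch below evaluates the BPSK BER and the RMS delay spread / coherence bandwidth exactly as defined above. The sample PDP and noise level are made up for illustration.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(d_c, n0):
    """Eq. (1): BPSK error rate from centroid distance and noise density."""
    return 0.5 * erfc(0.5 * d_c / np.sqrt(n0))

def delay_spread(t, pdp):
    """Eq. (2): RMS delay spread as the second moment of the PDP."""
    mean = np.trapz(t * pdp, t) / np.trapz(pdp, t)
    return np.sqrt(np.trapz((t - mean) ** 2 * pdp, t) / np.trapz(pdp, t))

# Illustrative multipath PDP: a direct pulse plus a weaker, delayed echo.
t = np.linspace(0, 1e-9, 2000)  # seconds
pdp = (np.exp(-((t - 0.1e-9) / 0.02e-9) ** 2)
       + 0.3 * np.exp(-((t - 0.4e-9) / 0.05e-9) ** 2))

tau_rms = delay_spread(t, pdp)
print(f"tau_rms = {tau_rms*1e9:.3f} ns -> B_c = {1/tau_rms/1e9:.2f} GHz")  # Eq. (3)
print(f"BER at d_c=1, N0=0.05: {ber_bpsk(1.0, 0.05):.2e}")
```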
### _Similarity Search Evaluation_

Once the final transmitter phases leading to the lowest average BER are chosen, an in-house Python framework is used to evaluate the impact of imperfect OTA majority computation on the accuracy of the classification task. Every similarity search engine connected to a receiver stores 64 different prototype hypervectors, i.e., 64 different classes, each with 512 bits, which suffices for the scenario considered in this paper. This is compatible with current experimentally validated IMC cores [36, 54]. The area and energy evaluations in this paper assume a scaled version of the IMC from [36, 54]. Finally, errors coming from the OTA computations are modeled as uncorrelated bit flips over the query hypervectors.

**Bundling alternatives.** While the baseline bundling consists of simply computing the bit-wise logical majority result across the different TX bits, we also consider permuted bundling. This bundling consists of permuting the queries in the TXs prior to applying the majority operation on them. Permuting the hypervectors yields two benefits. First, it allows the identification of the transmitter of the detected class from the composite query. The second direct benefit of permuting the hypervectors is that it helps increase the quasi-orthogonality between them. This has a direct impact on accuracy, since the majority operation over multiple non-orthogonal (i.e. not permuted) hypervectors would yield a bundled hypervector that is hard to classify.

Fig. 8: Sweep of all possible phase combinations (top) and the one that minimizes the error rate of the majority computation (bottom). Blue/green symbols map to logical 0/1.

Fig. 7: Overview of the evaluation methodology and layout of a sample architecture with 3 TXs and 64 RXs. The package is enclosed in a metallic lid and empty spaces are filled with vacuum. \(h_{1}=0.1\) mm; \(h_{2}=0.01\) mm; \(l_{1}=7.5\) mm; \(s=3.75\) mm; \(L_{1}=33\) mm; \(L_{2}=30\) mm.

Fig. 9: Constellation and truth table with transmitted phases/bits for a specific RX. Blue/green symbols map to logical 0/1.

### _Classification Benchmarks_

We perform experiments using the Omniglot dataset [55]. The dataset provides handwritten images of characters from 50 different alphabets. The number of characters in each alphabet varies from 14 to 55. In total there are 1623 character classes and 20 example images of each class. The dataset is further divided into a training and a test set containing 964 and 659 character classes, respectively. The goal of this benchmark is to train an encoder on the image and label data given in the training set and evaluate classification accuracy on the test images. We perform two types of experiments; a toy rendition of the bundling mechanics is sketched after the experiment descriptions below.

**Experiment 1: Few-shot Learning.** First, we evaluate the few-shot learning capability of the system. We do this by first meta-training an encoder on the training set data and then evaluating the few-shot classification accuracy over a series of 1000 episodes. In each episode, we select 100 classes and 20 encoded hypervector examples (shots) from each class within the test set. These support vectors are distributed among the IMC modules on the RXs. From the remaining images of the same 100 classes, we select 1 query image to encode per TX. The encoded query hypervectors are bundled over the air (with or without permutation) and received at each RX to perform the similarity search. The final classification accuracy is the average accuracy across the 1000 episodes.

**Experiment 2: Continual Learning.** Secondly, we focus on the continual learning capability of the system. For this, we have a setting similar to that of Experiment 1, with a few changes. Here, instead of always choosing a fixed 100 classes per episode, we start with 64 classes and gradually incorporate new classes over a series of sessions. In each session, 64 additional classes are selected from the set of classes not selected so far, and 5 support examples from these novel classes are provided to the IMCs. During a session, queries can be selected from all classes (both novel and old) currently under selection, and one query each is assigned to the encoders at the transmitters.
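The sketch below mimics the bundling mechanics of one episode at toy scale: class prototypes stand in for the RX-side associative memories, \(M\) queries are bundled by bit-wise majority, either directly or after per-transmitter cyclic permutations (`np.roll`), and each transmitter's query is recovered by un-permuting the composite before the similarity search. The sizes, the stand-in "encoder" (prototype plus noise), and the permutation-by-rotation choice are illustrative assumptions, not the paper's trained pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
D, C, M = 512, 100, 3                      # hv length, classes, transmitters

protos = rng.choice([-1, 1], size=(C, D))  # stand-in for encoded class prototypes

def noisy_query(cls, flip=0.05):
    """A query of class `cls` with a few encoding-noise bit flips."""
    q = protos[cls].copy()
    q[rng.random(D) < flip] *= -1
    return q

def majority(vs):
    return np.where(np.sum(vs, axis=0) >= 0, 1, -1)

classes = rng.choice(C, size=M, replace=False)
queries = [noisy_query(c) for c in classes]

# Baseline bundling: the composite is similar to every bundled class, but
# which transmitter sent which class cannot be told apart.
bundled = majority(queries)
top = set(np.argsort(protos @ bundled)[-M:])
print("baseline recovers the class set:", top == set(classes.tolist()))

# Permuted bundling: TX i rotates its query by i positions before the
# majority; un-rotating the composite recovers each TX's own class.
perm_bundled = majority([np.roll(q, i) for i, q in enumerate(queries)])
per_tx = [int(np.argmax(protos @ np.roll(perm_bundled, -i))) for i in range(M)]
print("permuted recovers per-TX classes:", per_tx == classes.tolist())
```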
## VI Performance Evaluation

We next present the evaluation of WHYPE, first from the perspective of the OTA majority in Section VI-A, and then from the perspective of the classification task in Section VI-B. Additionally, we evaluate the area and power overhead of the architecture in Section VI-C.

### _Over-the-Air Computing_

Fig. 8 illustrates the exhaustive search performed in our system with \(M=3\) transmitters, shown only in three receivers chosen at random. The bottom chart of the same figure shows the resulting constellations after the optimization. The constellation in another random receiver is shown in more detail in Fig. 9, together with the phases chosen for the three transmitters: 0\({}^{\circ}\)/90\({}^{\circ}\), 315\({}^{\circ}\)/135\({}^{\circ}\), and 225\({}^{\circ}\)/180\({}^{\circ}\) for the symbols '0'/'1' of transmitters TX1, TX2 and TX3, respectively.

**Reliability Analysis.** Once the phases are set, we evaluate the error rate of all 64 receivers using Eq. (1). Fig. 10 shows the BER of each receiver. It can be observed that the BER values are highly dependent on the receiver position, with values as large as \(\sim\)0.1 but also several orders of magnitude lower in a significant number of cases. On average, the error rate is below 0.01. To understand how the error rate could scale with the number of receivers, we re-simulate the entire architecture with a varying number of RX cores and compute the average BER obtained in each case. As shown in Fig. 11, the average BER generally increases with the number of receivers for which we are optimizing the architecture. This is expected since, when accommodating more constellations in our search for optimal TX phases, we are imposing more conditions and hindering the joint optimization across all receivers.

Fig. 10: Error rate for each individual RX in the architecture. The dashed line indicates the average value.

Fig. 11: Error rate as a function of the number of receivers.

Fig. 12: Scaling of the delay spread and coherence bandwidth over an increasing number of receivers for the different transmitted symbols in the scenario with three transmitters.

**Time-Domain Analysis.** Still with three transmitters and a number of receivers growing from 1 to 64, we obtain the delay spread and coherence bandwidth of the received signal for each of the \(2^{3}=8\) possible transmission combinations. As described earlier, we take the worst case among all receivers in each simulation. As Fig. 12 shows, all symbol combinations (marked as different IDs in the figure) have a similar scaling behavior in terms of delay spread. Values lower than 0.1 ns (coherence bandwidth greater than 10 GHz) are obtained consistently for systems with fewer than 10 receivers. The performance degrades to values around 0.166 ns (\(\sim\)6 GHz) in larger architectures. Given that BPSK has a spectral efficiency of 1 b/s/Hz, the evaluated system would have a total throughput between \(3\times 6=18\) Gb/s and \(3\times 10=30\) Gb/s from the encoders to the similarity search engines.

### _Classification Experiments_

To start the assessment of the HDC-based classification tasks, we first perform the few-shot learning experiment while increasing the BER gradually. As Fig. 6 depicts, the classification accuracy remains above 99% even when we apply bit flips equivalent to a BER of 0.26. This means that the noise robustness provided by the HDC properties relaxes the link error requirements, ensuring correct behaviour under the worst-case wireless scenarios, as we show next.

**Few-shot Learning.** Fig. 13(a) and Fig. 13(b) show the similarity search results for the baseline bundling and permuted bundling cases, respectively, in the few-shot learning experiment. The figures show how a single 512-bit query can accommodate several queries via bundling (blue line), and that the wireless system (green line) is able to correctly classify the same queries despite having some bits flipped.
Table I shows the numerical results of the final classification accuracy for the executed task, comparing an ideal channel without errors to our wireless channel with a sizable BER. The effect of the imperfect bundling is negligible in terms of accuracy, as predicted by Fig. 6. Moreover, permuted bundling significantly improves on baseline bundling, confirming that the proposed approach supports the aggregation of a dozen hypervectors over the air and the parallelization of similarity search over tens of IMCs.

TABLE I: Accuracy in the few-shot learning experiment, as a function of the number of bundled hypervectors.

| Bundling | Channel | 1 | 3 | 5 | 7 | 9 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | Ideal | 1 | 0.966 | 0.902 | 0.803 | 0.704 | 0.543 |
| Baseline | Wireless | 1 | 0.966 | 0.900 | 0.801 | 0.699 | 0.537 |
| Permuted | Ideal | 1 | 1 | 1 | 1 | 0.995 | 0.978 |
| Permuted | Wireless | 1 | 1 | 1 | 1 | 0.994 | 0.965 |

**Continual Learning.** Figure 14 shows the evolution of the accuracy as the system learns new classes from the initial dictionary of 100 classes, until the entire test set is learnt. Each new set of classes degrades the classification accuracy because of the possible similarity between classes represented in 512-bit hypervectors. From the figure, it is again clear that (i) the bit flips associated with imperfect OTA bundling have a negligible effect on the classification accuracy, and (ii) permutation provides a consistent improvement over the baseline bundling. Table II shows the final accuracy for 600 classes and different bundling configurations. The results confirm the conclusions given above in terms of the impact of wireless OTA and permuted bundling.

TABLE II: Accuracy in the continual learning experiment, as a function of the number of bundled hypervectors.

| Bundling | Channel | 1 | 3 | 5 | 7 | 9 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | Ideal | 0.87 | 0.60 | 0.47 | 0.40 | 0.35 | 0.33 |
| Baseline | Wireless | 0.81 | 0.59 | 0.47 | 0.39 | 0.35 | 0.32 |
| Permuted | Ideal | 0.87 | 0.81 | 0.73 | 0.63 | 0.55 | 0.46 |
| Permuted | Wireless | 0.81 | 0.80 | 0.73 | 0.63 | 0.51 | 0.46 |

Fig. 13: Similarity results comparison for different forms of bundling of one, three, five, and seven hypervectors.

It is worth noting that the similarity search over the Omniglot dataset in continual learning does not perform well when bundles exceed a few hypervectors, even with permutations. This suggests that longer hypervectors or an application with a larger dataset would be required to fully benefit from bundling.

### _Comparison with Wired Alternative_

Here, we discuss the performance of WHYPE in terms of speed, and its overheads in terms of area and power. To that end, we calculate the area and energy of the entire architecture except for the encoders, which may vary widely depending on the application space, leading to various types of implementations and hardware realizations [25, 56, 57].
To evaluate the area and energy of the similarity search engines, we model them as IMC cores using the design from [36, 54] as baseline. In more detail, the unit cell at each cross-point of the crossbar consists of 8 transistors and 4 PCM devices (8T4R). The PCM crossbar is back-end-of-the-line integrated with CMOS peripherals, namely a Pulse-Width Modulation (PWM) circuit, which converts an 8-bit digital input vector to an array of time-encoded pulses, and an ADC that digitizes the output current of the crossbar to multi-bit values. The area and energy consumption of these components, which are summarized in Table III, have been scaled to consider the technology and dimensions of our architecture.

For the sake of comparison, we evaluate the area and power of both WHYPE's interconnect and a hypothetical wired interposer-based alternative. In the former case, we consider wireless transmitters and receivers compatible with the requirements of WHYPE [58, 59, 60]. In the latter case, the scheme from Fig. 3(a) is implemented using off-chip interconnects. The majority operation is performed in a central chiplet containing input/output buffers and a 512-wide \(M\)-input majority gate. The majority and similarity search chiplets are equipped with routers and circuitry to implement serial chip-to-chip links. We assume that the chiplets form a mesh network, similarly to [48]. Routers are modeled using DSENT [61] with a four-stage pipeline and minimal buffering. Off-chip links are modeled as single-lane links, with the energy calculated according to the UCIe standard [62] and the area adapted from [63]. We assume a 32 nm technology node with \(V_{DD}=1\) V and \(f_{clk}=1\) GHz. See Table IV for a summary.

**Overhead Analysis.** We evaluate the area overhead of the system and the total energy consumed in the hypervector collection, bundling, distribution, and similarity search process using data from Table III and Table IV. It is observed that the off-chip links and wireless transceivers are the most power-hungry components in the interconnect, while the majority gate circuitry and the routers are also area-consuming, mostly due to the wide datapath of 512 bits required to transport the hypervectors. The IMC cores are very efficient in terms of area compared with the interconnect components, while their energy efficiency is penalized by the PWM and ADC circuitry.

Figure 15 shows a breakdown of the area and energy of the HDC system assuming \(M=3\) encoders and \(N=8\) similarity search engines (512 classes). As can be seen, the interconnect is the most area-consuming sub-system, whereas the similarity search is the most energy-consuming due to the large MVM operations performed in the IMC tiles. WHYPE reduces the interconnect area by 3.2\(\times\), with a modest effect on the energy because of the small scale of the system. To evaluate scalability, Figure 16 shows the area and energy of the interconnect as a function of the number of similarity search engines \(N\), for \(M=3\). It is observed how, even conservatively assuming single-lane wired links, WHYPE is superior in both area and energy consumption, with a gap that widens as the system is scaled out. This is because WHYPE eliminates any wired connection between chiplets, while the wired alternative needs to traverse the entire system through a mesh topology. A simplified version of this first-order comparison is sketched below.
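To make the scale-out comparison tangible, the following first-order model tallies interconnect area and per-bundle energy for the wireless and wired organizations as \(N\) grows, using per-component figures in the spirit of Table IV. The aggregation formulas (what gets instantiated per chiplet, how hops are counted) are our own simplification, not the paper's exact cost model.

```python
# First-order interconnect cost model (illustrative aggregation, 512-bit HVs).
D = 512  # hypervector length in bits

# Per-instance figures inspired by Table IV: (area mm^2, energy nJ/bit).
WIRELESS_IF = {"serdes": (0.04, 0.54), "dac_adc": (0.03, 0.07),
               "tx": (0.12, 1.5), "rx": (0.12, 1.3), "antenna": (0.08, 0.0)}
ROUTER = (0.36, 0.03)
OFFCHIP_LINK = (0.25, 1.0)   # per pin / per bit
MAJORITY = (0.32, 0.17)

def wireless_cost(m, n):
    """Every chiplet gets a wireless interface; bundling happens over the air."""
    area = (m + n) * sum(a for a, _ in WIRELESS_IF.values())
    e_base = WIRELESS_IF["serdes"][1] + WIRELESS_IF["dac_adc"][1]
    # One broadcast of D bits per bundle: M transmissions, N receptions.
    energy = D * (m * (e_base + WIRELESS_IF["tx"][1])
                  + n * (e_base + WIRELESS_IF["rx"][1]))
    return area, energy

def wired_cost(m, n):
    """Mesh of m+n+1 chiplets around a central majority chiplet."""
    chiplets = m + n + 1
    avg_hops = 2 * (chiplets ** 0.5) / 3        # average mesh hop count
    area = chiplets * (ROUTER[0] + OFFCHIP_LINK[0]) + MAJORITY[0]
    per_hop = ROUTER[1] + OFFCHIP_LINK[1]
    # M reductions to the majority chiplet + N distributions of the result.
    energy = D * ((m + n) * avg_hops * per_hop + MAJORITY[1])
    return area, energy

for n in (8, 16, 32, 64):
    (wa, we), (ca, ce) = wireless_cost(3, n), wired_cost(3, n)
    print(f"N={n:2d}  wireless {wa:6.1f} mm^2 {we/1e3:6.2f} uJ | "
          f"wired {ca:6.1f} mm^2 {ce/1e3:6.2f} uJ")
```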
**Bottleneck Analysis.** We assess the performance of WHYPE through an analysis of the latency and throughput of the steps from the moment the encoders output the hypervectors until the bundled hypervectors reach the similarity search engines.

Fig. 14: Classification accuracy for the continual learning case with 3 bundled hypervectors.

TABLE IV: Area and energy breakdown of single instances of interconnect components in 32 nm technology.

| Group | Component | Area [mm\({}^{2}\)] | Energy [nJ/bit] | Source |
| --- | --- | --- | --- | --- |
| Majority chiplet | Majority gate# | 0.32 | 0.17 | [47] |
|  | Buffer‡ | 0.009 | 0.0053 | [64] |
| Wired network | Router | 0.36 | 0.03 | [61] |
|  | Link (on-chip) | 4.10* | 0.06 | [62] |
|  | Link (off-chip) | 0.25* | 1 | [62, 63] |
| Wireless interface† | SerDes | 0.04 | 0.54 | [58] |
|  | Data converter | 0.03 | 0.07 | [59] |
|  | Transmitter | 0.12 | 1.5 | [39] |
|  | Receiver | 0.12 | 1.3 | [60] |
|  | Antenna | 0.08 | N/A | [65] |

Unless noted, the width of the components is 512 bits. #Five-input, one-output gate. ‡Per operation (read, write). †Dimensioned to operate at 10 Gb/s. *Per pin, serial link operating at 16 Gb/s.

In the wireless case, the delay analysis is simple: the latency corresponds to the time needed to wirelessly transmit 512 bits. Assuming a 10 Gb/s transmitter, the latency of the majority operation in WHYPE is 51.2 ns, independently of the number of encoders or similarity search engines. In contrast, a wired alternative would be bottlenecked by the majority calculation and the communications happening before and after it. Latency-wise, the main delay in the wired case is the time required for the hypervectors to travel through the chip-to-chip network. Under the conditions assumed here, the delay scales with \(2\sqrt{M+N}/3\), which is the average number of hops needed to connect two arbitrary chiplets, with each hop taking 36 ns (4 ns to traverse a router and 32 ns to transmit 512 bits through a 16 Gb/s serial link). The delay quickly becomes much higher than in the wireless case. In comparison, the IMC cores considered here can take between 10 and 128 nanoseconds to perform the similarity calculations, depending on the required accuracy [36]. Although majority and similarity operations can be pipelined, a wired NiP would easily bottleneck the system.

In terms of throughput, WHYPE does not present a bottleneck due to the seamless all-to-all connection between encoders and similarity search engines. At a 10 Gb/s line rate, since all receivers obtain the bundled hypervector at the same time, the overall throughput is \(10\times M\times N\) Gb/s. In the hypothetical wired alternative, the bottleneck is the bisection bandwidth of the system. Since the majority chiplet, being in a mesh network, has only four chip-to-chip links, the bisection bandwidth between the encoders and the similarity search engines will be, at most, twice the capacity of the chip-to-chip links. In our assumed scenario, the throughput would be 32 Gb/s. Therefore, WHYPE will be faster even for relatively low values of \(M\) and \(N\). The comparison is illustrated with a short calculation below.
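A back-of-the-envelope check of the numbers above, assuming the stated 10 Gb/s wireless line rate, the \(2\sqrt{M+N}/3\) average hop count, and 36 ns per hop for the wired mesh; counting the reduction and distribution flows separately in the wired latency is our own assumption.

```python
import math

D = 512  # bits per hypervector

def wireless_latency_ns(rate_gbps=10):
    """One over-the-air broadcast carries the bundled hypervector."""
    return D / rate_gbps  # 51.2 ns, independent of M and N

def wired_latency_ns(m, n, hop_ns=36):
    """Average mesh distance paid twice: hypervectors in to the majority
    chiplet (reduction), bundled result out to an IMC core (distribution)."""
    avg_hops = 2 * math.sqrt(m + n) / 3
    return 2 * avg_hops * hop_ns

def throughputs_gbps(m, n, rate_gbps=10, link_gbps=16):
    wireless = rate_gbps * m * n   # all RXs receive every bundle at once
    wired = 2 * link_gbps          # bisection limit at the majority chiplet
    return wireless, wired

for m, n in [(3, 8), (3, 64)]:
    print(f"M={m}, N={n}: wireless {wireless_latency_ns():.1f} ns, "
          f"wired {wired_latency_ns(m, n):.0f} ns, "
          f"throughput {throughputs_gbps(m, n)} Gb/s (wireless, wired)")
```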
**Comparison with 3D Interconnects.** A few works have proposed to implement HDC-based 3D ICs [23, 24]. One of the reasons could be the reduced link length compared to the planar NoC or NiP alternatives. However, the wired nature of 3D ICs and the _AllGather_ nature of the communication suggest that the interconnect would continue to be a bottleneck. Moreover, 3D ICs can suffer from heat dissipation issues, which limit their scaling ability. Due to this, existing efforts only consider up to a couple dozen classes [23].

## VII Conclusion

In this work, we introduced an OTA on-chip computing concept capable of overcoming the scalability bottleneck present in wired NoC architectures when scaling out IMC-based HDC systems. By using a WNoC communication layer, a number of encoders are able to concurrently broadcast HDC queries towards all the IMC cores within the architecture. Then, a pre-characterization of the propagation environment allows mapping the received constellations, in each core, to the computed composite query based on a decision-region strategy. Through a proper correspondence between the TX phases, the received constellations, and the decision regions, we have shown that the opportunistic calculation of the bit-wise majority of the transmitted HDC queries is possible with low error. We demonstrated the concept and showed its scalability up to 11 TXs and 64 RXs, obtaining the BER of the OTA approach and later employing it to evaluate the impact of the WNoC errors on an HDC classification task. Overall, we conclude that the quality of the WNoC links is solid enough to have a negligible impact on the application accuracy, mostly thanks to the great error robustness of HDC.

## Acknowledgment

The authors gratefully acknowledge funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 863337 (WiPLASH), and the Horizon Europe research and innovation programme under grant agreement No 101042080 (WINC).
2305.05815
$Γ$-Convergence of the Ginzburg-Landau Functional with tangential boundary conditions
A classical result in the study of Ginzburg-Landau equations is that, for Dirichlet or Neumann boundary conditions, if a sequence of functions has energy uniformly bounded on a logarithmic scale then we can find a subsequence whose Jacobians are convergent in suitable dual spaces and whose renormalized energy is at least the sum of absolute degrees of vortices. However, the corresponding question for the case of tangential or normal boundary conditions has not been considered. In addition, the question of convergence up to the boundary is not very well understood. Here, we consider these questions for a bounded, connected, open set of $\mathbb{R}^{2}$ with $C^{2,1}$ boundary.
Stan Alama, Lia Bronsard, Andrew Colinet
2023-05-10T00:01:02Z
http://arxiv.org/abs/2305.05815v1
# \(\Gamma\)-Convergence of the Ginzburg-Landau Functional with Tangential Boundary Conditions

###### Abstract

A classical result in the study of Ginzburg-Landau equations is that, for Dirichlet or Neumann boundary conditions, if a sequence of functions has energy uniformly bounded on a logarithmic scale then we can find a subsequence whose Jacobians are convergent in suitable dual spaces and whose renormalized energy is at least the sum of absolute degrees of vortices. However, the corresponding question for the case of tangential or normal boundary conditions has not been considered. In addition, the question of convergence up to the boundary is not very well understood. Here, we consider these questions for a bounded, connected, open set of \(\mathbb{R}^{2}\) with \(C^{2,1}\) boundary.

In this paper we study the Ginzburg-Landau functional in a problem with tangential \(u\cdot\nu=0\) (or normal \(u\cdot\tau=0\)) boundary conditions. These are motivated by experimental results of Volovik and Lavrentovich [22] in which nematic drops are placed in an isotropic medium, allowing for the control of nematic boundary behaviour. For 3D samples, Volovik and Lavrentovich found a single interior hedgehog defect when molecules are asked to be normal to the boundary and a bipolar boojum pair when requiring tangential conditions. Inspired by the physical phenomena observed in [22], in [2] Alama, Bronsard, and van Brussel studied minimizers of the Ginzburg-Landau energy in a 2D setting with tangential and normal boundary conditions. In particular, they show that minimizers can exhibit half-degree vortices on the boundary. Here we study the full \(\Gamma\)-convergence in their setting, by extending the work of Jerrard and Soner from [18]. In their work, Jerrard and Soner relate, through the framework of \(\Gamma\)-convergence, convergence of the Jacobian in the interior of the domain \(\Omega\subseteq\mathbb{R}^{2}\) to lower bounds on energy, and hence to the formation of interior defects. We show that, under appropriate restrictions on the functions along the boundary, we may extend the convergence of the Jacobian to hold up to the boundary and hence recover boundary defects as well. To be more precise, we provide an explicit statement of our result. We begin by describing our variational problem. Note that an extended discussion of the notation used in the paper can be found in Section 1. We let \(\Omega\subseteq\mathbb{R}^{2}\) be a bounded connected open set with \(C^{2,1}\)-boundary. In addition, we suppose that \(\partial\Omega\) has \(b+1\) connected components, where \(b\geq 0\). We introduce the function spaces \[W^{1,2}_{T}(\Omega;\mathbb{R}^{2}) \coloneqq\Big{\{}u\in W^{1,2}(\Omega;\mathbb{R}^{2}):u_{T}:=(u\cdot\tau)\,\tau=0\Big{\}},\] \[W^{1,2}_{N}(\Omega;\mathbb{R}^{2}) \coloneqq\Big{\{}u\in W^{1,2}(\Omega;\mathbb{R}^{2}):u_{N}:=u|_{\partial\Omega}-u_{T}=0\Big{\}}.\] With this, we consider the Ginzburg-Landau energy defined on the above function spaces: for \(\varepsilon>0\), we let \[E_{\varepsilon}(u)\coloneqq\int_{\Omega}\frac{1}{2}\big{|}\nabla u\big{|}^{2}+\frac{1}{4\varepsilon^{2}}\big{(}|u|^{2}-1\big{)}^{2}.\] We are ready to state our theorem: **Theorem 0.1**.: 1. _Suppose_ \(\{u_{\varepsilon}\}_{\varepsilon\in(0,1]}\subseteq W^{1,2}_{T}(\Omega;\mathbb{R}^{2})\) _satisfies_ \(E_{\varepsilon}(u_{\varepsilon})\leq C|\mathrm{log}(\varepsilon)|\) _for all_ \(\varepsilon\in(0,1]\) _and some_ \(C>0\)_. 
Then, up to a subsequence that we do not relabel, we have that there is a signed Radon measure,_ \(J_{*}\)_, supported on_ \(\overline{\Omega}\) _such that_ \[\lim_{\varepsilon\to 0^{+}}\lVert\star J(u_{\varepsilon})-J_{*}\rVert_{(C^{0, \alpha}(\Omega))^{*}}=0\] (1) _for all_ \(0<\alpha\leq 1\)_. In particular, we can express the limit,_ \(J_{*}\)_, in terms of_ \(M_{1}\) _interior defects and of_ \(M_{2,j}\) _boundary defects around the_ \(j\)_-th connected component of_ \(\partial\Omega\)_:_ \[J_{*}=\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}+\frac{\pi}{2}\sum_{j=0}^{b}\sum _{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}},\] (2) _where_ \(d_{i}\) _and_ \(d_{jk}\) _are non-zero integers for_ \(i=1,2,\ldots,M_{1}\)_,_ \(j=0,1,\ldots,b\)_, and_ \(k=1,2,\ldots,M_{2,j}\)_,_ \(a_{i}\in\Omega\) _and_ \(c_{jk}\in(\partial\Omega)_{j}\) _for_ \(i=1,2,\ldots,M_{1}\)_,_ \(j=0,1,\ldots,b\)_, and_ \(k=1,2,\ldots,M_{2,j}\)_, and_ \[\sum_{i=1}^{M_{1}}d_{i}+\frac{1}{2}\sum_{j=0}^{b}\sum_{k=1}^{M_{2,j}}d_{jk}= \chi_{Euler}(\Omega).\] (3) _In addition, we have that_ \[\frac{1}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\in\mathbb{Z}\] (4) _for each_ \(j=0,1,\ldots,b\)_._ 2. _If_ \(\{u_{\varepsilon}\}_{\varepsilon\in(0,1]}\subseteq W^{1,2}_{T}(\Omega;\mathbb{R}^{2})\) _and_ \(J_{*}=\pi\sum\limits_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}+\frac{\pi}{2}\sum\limits_ {j=0}^{b}\sum\limits_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}\) _is a signed measure as described below_ (2) _satisfying_ (3) _and_ (4) _as well as_ \[\lim_{\varepsilon\to 0^{+}}\|\star J(u_{\varepsilon})-J_{*}\|_{(C^{0,\alpha}( \Omega))^{*}}=0,\] _for some_ \(0<\alpha\leq 1\)_, then_ \[\pi\sum\limits_{i=1}^{M_{1}}|d_{i}|+\frac{\pi}{2}\sum\limits_{j=0}^{b}\sum \limits_{k=1}^{M_{2,j}}|d_{jk}|\leq\liminf_{\varepsilon\to 0^{+}}\frac{E_{ \varepsilon}(u_{\varepsilon})}{|\mathrm{log}(\varepsilon)|}.\] 3. _For each signed measure_ \(J_{*}=\pi\sum\limits_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}+\frac{\pi}{2}\sum \limits_{j=0}^{b}\sum\limits_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}\) _as described below_ (2) _satisfying_ (3) _and_ (4) _we can find_ \(\{u_{\varepsilon}\}_{\varepsilon\in(0,1]}\subseteq W^{1,2}_{T}(\Omega;\mathbb{ R}^{2})\) _such that_ \[\lim_{\varepsilon\to 0^{+}}\|\star J(u_{\varepsilon})-J_{*}\|_{(C^{0, \alpha}(\Omega))^{*}}=0,\ \ \forall\,0<\alpha\leq 1,\qquad\limsup_{ \varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon})}{|\mathrm{log}( \varepsilon)|}=\|J_{*}\|\,.\] **Remark 0.2**.: We obtain a corresponding theorem for functions \(u\in W^{1,2}_{N}(\Omega;\mathbb{R}^{2})\) since the function, \(v\), defined by \[v(x)\coloneqq(-u_{2}(x),u_{1}(x))\in W^{1,2}_{T}(\Omega;\mathbb{R}^{2})\] satisfies \[E_{\varepsilon}(v)=E_{\varepsilon}(u),\quad Jv=Ju.\] The proof of Theorem 0.1 makes use of the reflection technique from [17] and the results of [18]. However, since we make no restrictions on our domain beyond connectedness and boundary regularity, and since we permit singularities to develop along the boundary, we encounter a few additional obstacles. 1. In the proof of compactness we needed to demonstrate that along each boundary component one half of the sum of the degrees of boundary vortices is an integer. This is a necessary restriction in order for the boundary vector field to be well defined. The natural integer to compare this quantity to is the degree near the boundary. However, since the Jacobian is only known to converge in a suitable weak norm, some care is needed in order to facilitate this comparison. 2. 
In order to relate the degrees of vortices to topological restrictions we needed to demonstrate, in a suitable weak sense, convergence of the tangential portion of the current, \(ju\), along the boundary to the signed curvature of the boundary components. In order to estimate error terms, which involved the modulus of our sequence of functions, we needed to use an adapted slicing argument similar to the one found in [18] to estimate the size of sets where the modulus was not close to \(1\). 3. In order to find a suitable sequence for the proof of the upper bound we need to construct, for our setting, the canonical harmonic map. Unfortunately, since we permit boundary singularities, the construction from [8] is not directly applicable. However, after some adaptation, their construction extends to our setting. Note that this obstacle has been encountered in other papers, for instance see [15], but the techniques used there are not suitable for our setting since we do not impose restrictions on our domain beyond connectedness and boundary regularity. We note that while there have been many generalizations of the interior convergence results for the Ginzburg-Landau energy \(E_{\varepsilon}(u)\), including [3] for domains in every dimension, [14] for two-dimensional compact manifolds, and [1] for convergence for a modified functional, there have not been many results in the Ginzburg-Landau literature addressing convergence up to the boundary. A couple of results that did consider convergence of the Ginzburg-Landau energy \(E_{\varepsilon}(u)\) up to the boundary are [17] and [4]. Through a counterexample presented in [17] one can see the importance of boundary conditions on the functions considered. In particular, in the absence of boundary conditions the convergence of the Jacobian from [18] cannot be extended up to the boundary. On the other hand, the paper [4] shows that under the assumption of _full Dirichlet_ boundary conditions one can obtain convergence up to the boundary. In particular, [4] goes on to consider first-order \(\Gamma\)-convergence of the Ginzburg-Landau functional. A related problem was studied in [15] in a different regime, where boundary vortices are energetically preferable to interior ones, and they introduce the notion of the global Jacobian to obtain a first-order \(\Gamma\)-convergence of their functional. We provide a brief overview of the organization of the paper and the content of each section. In Section 1 we outline notation to be used and some preliminary concepts. Here, we provide extended discussions about concepts and notation that will be used throughout the document. In particular, we provide exposition regarding a particular tangent-normal coordinate system that we make use of. In Section 2 we provide a number of lemmas needed to prove Theorem 0.1. In Section 3 we provide a proof of Theorem 0.1 in three subsections. Each subsection is dedicated to one of compactness, the lower bound, and the upper bound. In addition, we provide an Appendix for some basic topological results that we need. ## Acknowledgements We would like to thank Prof. Robert Jerrard for his very helpful and fruitful discussions. ## 1 Preliminaries ### 1.1 Notation For \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) we define \(x^{\perp}\coloneqq(-x_{2},x_{1})\). 
For \((x_{1},x_{2}),(y_{1},y_{2})\in\mathbb{R}^{2}\) we also define \[(x_{1},x_{2})\times(y_{1},y_{2})\coloneqq x_{1}y_{2}-x_{2}y_{1}.\] We will use \(\Omega\subseteq\mathbb{R}^{2}\) to denote an open bounded set whose boundary is of \(C^{2,1}\) regularity. In addition, we will assume that \(\partial\Omega\) has \(b+1\) connected components, for \(b\geq 0\), denoted \((\partial\Omega)_{i}\) for \(i=0,1,\ldots,b\). We assume that these components are indexed so that \((\partial\Omega)_{0}\) coincides with the outermost boundary component and \((\partial\Omega)_{i}\) coincides with an interior boundary component for \(i=1,2,\ldots,b\). For a function \(u\colon\Omega\to\mathbb{R}^{2}\), with well-defined trace, we let \(u_{T}\), \(u_{N}\) denote, respectively, the _tangential part_ of \(u\) along \(\partial\Omega\) and the _normal part_ of \(u\) along \(\partial\Omega\). These are defined, respectively, by \[u_{T}\coloneqq(u\cdot\tau)\,\tau,\ \ \ \ u_{N}\coloneqq u|_{\partial\Omega}-u_{T}\] where \(\tau\) is a locally defined unit tangent vector. For \(\Omega\) as above we introduce the function spaces \[W^{1,2}_{T}(\Omega;\mathbb{R}^{2}) \coloneqq\bigg{\{}u\in W^{1,2}(\Omega;\mathbb{R}^{2}):u_{T}=0\bigg{\}},\] \[W^{1,2}_{N}(\Omega;\mathbb{R}^{2}) \coloneqq\bigg{\{}u\in W^{1,2}(\Omega;\mathbb{R}^{2}):u_{N}=0\bigg{\}}.\] For appropriate functions \(u\colon\Omega\to\mathbb{R}^{2}\) we let \[\nabla\times u\coloneqq\frac{\partial u^{2}}{\partial x_{1}}-\frac{\partial u^{1}}{\partial x_{2}}.\] In addition, for appropriate functions \(\varphi\colon\Omega\to\mathbb{R}\) we let \[\nabla^{\perp}\varphi\coloneqq(\nabla\varphi)^{\perp}.\] For \(\varepsilon>0\) we let the symbol \(E_{\varepsilon}(u,\Omega)\) denote the following energy \[E_{\varepsilon}(u,\Omega)\coloneqq\int_{\Omega}\!e_{\varepsilon}(u),\ \ \ \ \ e_{\varepsilon}(u)\coloneqq\frac{1}{2}\big{|}\nabla u\big{|}^{2}+\frac{1}{4\varepsilon^{2}}\big{(}|u|^{2}-1\big{)}^{2}.\] We will also use this notation for more general measurable sets \(A\subseteq\mathbb{R}^{2}\). We will consider this energy over \(W^{1,2}_{T}(\Omega;\mathbb{R}^{2})\) and \(W^{1,2}_{N}(\Omega;\mathbb{R}^{2})\). We let \(C(\Omega)\) denote the space of continuous functions on \(\Omega\) into \(\mathbb{R}\) and we pair this with \(\|\cdot\|_{L^{\infty}(\Omega)}\). Next, we let, for \(0<\alpha\leq 1\) and \(\varphi\colon\Omega\to\mathbb{R}\), \([\varphi]_{\alpha}\) denote \[[\varphi]_{\alpha}\coloneqq\sup_{\begin{subarray}{c}x,y\in\Omega\\ x\neq y\end{subarray}}\biggl{\{}\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{\alpha}}\biggr{\}}\] and we let \(C^{0,\alpha}(\Omega)\) denote \[C^{0,\alpha}(\Omega)\coloneqq\{\varphi\in C(\Omega):[\varphi]_{\alpha}<\infty\}.\] We pair \(C^{0,\alpha}(\Omega)\), for \(0<\alpha\leq 1\), with the norm \[\|\cdot\|_{C^{0,\alpha}(\Omega)}\coloneqq\max\bigl{\{}\|\cdot\|_{L^{\infty}(\Omega)}\,,[\,\cdot\,]_{\alpha}\bigr{\}}.\] When we want to refer to functions in \(C^{0,\alpha}(\Omega)\) which have compact support, we use the notation \(C^{0,\alpha}_{c}(\Omega)\) for the space and \(\|\cdot\|_{C^{0,\alpha}_{c}(\Omega)}\) for the norm \[\|\cdot\|_{C^{0,\alpha}_{c}(\Omega)}\coloneqq[\,\cdot\,]_{\alpha}. 
\tag{5}\] Since we will be concerned with boundary behaviour we let, for \(r>0\), \(B_{r,+}(0)\) denote \[B_{r,+}(0)\coloneqq\{(x_{1},x_{2})\in\mathbb{R}^{2}:|(x_{1},x_{2})|<r,\ \ x_{2}>0\}.\] For \(y\in\mathbb{R}\) we also use the notation \[B_{r,+}(y,0)\coloneqq(y,0)+B_{r,+}(0).\] We also introduce, for \(0<\alpha\leq 1\) and \(r>0\), the space of functions \[\mathcal{A}_{\alpha,r}\coloneqq\bigl{\{}\varphi\in C^{0,\alpha}(B_{r,+}(0)):\varphi\equiv 0\text{ in a neighbourhood of }\partial B_{r,+}(0)\cap\{x_{2}>0\}\bigr{\}}\] which we equip with the norm \(\|\cdot\|_{\mathcal{A}_{\alpha,r}}\), defined to be the same as (5). For \(0<\alpha\leq 1\) we let \((C^{0,\alpha}(\Omega))^{*}\) be the dual space to \(C^{0,\alpha}(\Omega)\), paired with the norm \[\|\mu\|_{(C^{0,\alpha}(\Omega))^{*}}\coloneqq\sup_{\begin{subarray}{c}\varphi\in C^{0,\alpha}(\Omega)\\ \|\varphi\|_{C^{0,\alpha}(\Omega)}\leq 1\end{subarray}}\bigl{\{}\bigl{\langle}\mu,\varphi\bigr{\rangle}\bigr{\}}\] where \(\bigl{\langle}\cdot,\cdot\bigr{\rangle}\) denotes a duality pairing. For each of \(C^{0,\alpha}_{c}(\Omega)\), \(\mathcal{A}_{\alpha,r}\), and \(C(\Omega)\) we define respective dual spaces. Given a function \(u\in W^{1,2}(\Omega;\mathbb{R}^{2})\) we define the _Jacobian_, \(\star J(u)\), to be the measure in \(\bigl{(}C^{0,\alpha}(\Omega)\bigr{)}^{*}\), for \(0<\alpha\leq 1\), defined by \[\bigl{\langle}\star J(u),\varphi\bigr{\rangle}\coloneqq\int_{\Omega}\!\!\varphi Ju\] where \(\varphi\in C^{0,\alpha}(\Omega)\) and \(Ju\coloneqq\det(\nabla u)\). We use this definition of the Jacobian for \((C(\Omega))^{*}\) as well. For \(u\in W^{1,2}(\Omega;\mathbb{R}^{2})\) we also let \(ju\) denote \[ju\coloneqq\biggl{(}u\times\frac{\partial u}{\partial x_{1}},u\times\frac{\partial u}{\partial x_{2}}\biggr{)}.\] Since \(Ju=\frac{1}{2}\nabla\times ju\), integrating by parts gives, for \(\varphi\in C^{0,1}(\Omega)\): \[\int_{\Omega}\!\!\varphi Ju=-\frac{1}{2}\int_{\Omega}\!\!\nabla^{\perp}\varphi\cdot ju+\frac{1}{2}\int_{\partial\Omega}(ju\cdot\tau)\varphi\] where \(\tau(x)\) denotes, for each \(x\in\partial\Omega\), the unit tangent vector at \(x\) such that \(\{\mathbf{n}(x),\tau(x)\}\) is a positively oriented basis, \(\mathbf{n}(x)\) being the outward unit normal to \(\partial\Omega\) at \(x\). We provide a detailed discussion of notation for tangential and normal unit vectors in Section 1.2.
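The identity \(Ju=\frac{1}{2}\nabla\times ju\) invoked above can be checked directly for smooth \(u\) (the general case follows by density); we record the short computation, in which the mixed second-derivative terms \(u\times\frac{\partial^{2}u}{\partial x_{1}\partial x_{2}}\) cancel:

\[\nabla\times ju=\frac{\partial}{\partial x_{1}}\biggl{(}u\times\frac{\partial u}{\partial x_{2}}\biggr{)}-\frac{\partial}{\partial x_{2}}\biggl{(}u\times\frac{\partial u}{\partial x_{1}}\biggr{)}=\frac{\partial u}{\partial x_{1}}\times\frac{\partial u}{\partial x_{2}}-\frac{\partial u}{\partial x_{2}}\times\frac{\partial u}{\partial x_{1}}=2\,\frac{\partial u}{\partial x_{1}}\times\frac{\partial u}{\partial x_{2}}=2\det(\nabla u)=2Ju.\]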
### 1.2 Coordinates

Here we will construct a coordinate system for a neighbourhood of \(\partial\Omega\) which uses the unit tangent and unit inward normal. Our construction will follow the ideas found in Section 2 of [6]. See also [16] for similar ideas. To construct the desired coordinate system we will need to consider each of the connected components of \(\partial\Omega\) separately. We parametrize each \((\partial\Omega)_{i}\), for \(i=0,1,\ldots,b\), by its arclength, \(L_{i}\), using a \(C^{2,1}\) curve \(\gamma_{i}(y_{1})=(\gamma_{i,1}(y_{1}),\gamma_{i,2}(y_{1}))\) where \(\gamma_{i}\colon\mathbb{R}/L_{i}\mathbb{Z}\to(\partial\Omega)_{i}\). We define the unit tangent and unit normal vectors, \(\tau_{i}\) and \(\nu_{i}\), respectively, as \[\tau_{i}(y_{1}) \coloneqq(\gamma^{\prime}_{i,1}(y_{1}),\gamma^{\prime}_{i,2}(y_{1})),\] \[\nu_{i}(y_{1}) \coloneqq(-\gamma^{\prime}_{i,2}(y_{1}),\gamma^{\prime}_{i,1}(y_{1})).\] We also let, for \(i=0,1,\ldots,b\), \(\mathbf{n}_{i}\coloneqq-\nu_{i}\) denote the outward unit normal along \((\partial\Omega)_{i}\). By perhaps reversing the orientation on \(\gamma_{i}\) we may ensure that \(\nu_{i}\) coincides with the inward unit normal to \((\partial\Omega)_{i}\) for each \(i=0,1,\ldots,b\). Notice that this choice of orientation matches the induced orientation from \(\Omega\), which is clockwise on \((\partial\Omega)_{i}\) for \(i=1,2,\ldots,b\) and counterclockwise on \((\partial\Omega)_{0}\). Recall that Chapter 2.2 of [21] gives \[\tau^{\prime}_{i}(y_{1})=\kappa_{i}(y_{1})\nu_{i}(y_{1}),\ \ \ \ \nu^{\prime}_{i}(y_{1})=-\kappa_{i}(y_{1})\tau_{i}(y_{1}), \tag{6}\] where \(\kappa_{i}\colon\mathbb{R}/L_{i}\mathbb{Z}\to\mathbb{R}\), for \(i=0,1,\ldots,b\), is the signed curvature of \((\partial\Omega)_{i}\). Notice that \(\kappa_{i}\in C^{0,1}(\mathbb{R}/L_{i}\mathbb{Z})\) since \(\partial\Omega\) is \(C^{2,1}\). Figure 1: Depiction of tangent and normal vectors. Next we will define a coordinate chart on \(\Omega\) which extends to a larger domain. Suppose \(0<r\leq r_{0}<\frac{1}{2}\mathrm{inj}(\partial\Omega)\), where \(\mathrm{inj}(\partial\Omega)\) denotes the injectivity radius of \(\partial\Omega\), and set \[\Omega_{i,r} \coloneqq\{x\in\Omega:0<\mathrm{dist}(x,(\partial\Omega)_{i})<r\},\] \[\overline{\Omega}_{i,r} \coloneqq\{x\in\overline{\Omega}:0\leq\mathrm{dist}(x,(\partial\Omega)_{i})\leq r\}.\] We define, for \(i=0,1,\ldots,b\), \(C^{1,1}\) maps \(X_{i}\colon(\mathbb{R}/L_{i}\mathbb{Z})\times(0,r_{0})\to\Omega_{i,r_{0}}\) by \[X_{i}(y_{1},y_{2})\coloneqq\gamma_{i}(y_{1})+y_{2}\nu_{i}(y_{1}). \tag{7}\] We observe that by (6) we have \[\frac{\partial X_{i}}{\partial y_{1}}=\tau_{i}(y_{1})-y_{2}\kappa_{i}(y_{1})\tau_{i}(y_{1}),\ \ \ \ \ \frac{\partial X_{i}}{\partial y_{2}}=\nu_{i}(y_{1}). \tag{8}\] From (8) we have \[JX_{i}=1-y_{2}\kappa_{i}(y_{1}).\] By perhaps shrinking \(r_{0}\) we may ensure that \(JX_{i}\) is bounded away from zero on \((\mathbb{R}/L_{i}\mathbb{Z})\times(0,r_{0})\). Next we let \[r_{1}\coloneqq\frac{1}{2}\min\Bigl{\{}\min_{i=0}^{b}\{L_{i}\},r_{0}\Bigr{\}}\] and notice that for each \(y\in(\mathbb{R}/L_{i}\mathbb{Z})\) we can define a map \(i_{y}\colon B_{r_{1},+}(0)\to B_{r_{1},+}(y,0)\) as \[i_{y}(z)\coloneqq(y,0)+z.\] Since \((\mathbb{R}/L_{i}\mathbb{Z})\times[0,r_{1}/2]\) is compact for each \(i=0,1,2,\ldots,b\), we can find, for each \(i=0,1,\ldots,b\), finitely many points \(\{y_{i,j}\}_{j=1}^{N_{i}}\subseteq(\mathbb{R}/L_{i}\mathbb{Z})\) such that \[(\mathbb{R}/L_{i}\mathbb{Z})\times[0,r_{1}/2]\subseteq\bigcup_{j=1}^{N_{i}}B_{r_{1},+}(y_{i,j},0).\] Next we define, for each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\), the \(C^{1,1}\) map \(\psi_{i,j}\colon B_{r_{1},+}(0)\to\mathcal{U}_{i,j}\) by \[\psi_{i,j}\coloneqq X_{i}\circ i_{y_{i,j}}\] where \(\mathcal{U}_{i,j}\coloneqq X_{i}(B_{r_{1},+}(y_{i,j},0))\). Notice that, for each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\), \(J\psi_{i,j}\) is Lipschitz and bounded away from zero by our choice of \(r_{0}\), \(\psi_{i,j}\) is proper (i.e. preimages of compact sets are compact), and \(\mathcal{U}_{i,j}\) is simply connected. By Theorem 6.2.8 of [19] or Theorem \(B\) of [11] we conclude that \(\psi_{i,j}\) is invertible with differentiable inverse. It follows from \[D\psi_{i,j}^{-1}(x)=\bigl{[}D\psi_{i,j}(\psi_{i,j}^{-1}(x))\bigr{]}^{-1}\] and from the fact that \(J\psi_{i,j}^{-1}\) is bounded away from zero, that \(D\psi_{i,j}^{-1}\) has Lipschitz components. 
For each \(i=0,1,\ldots,b\) we now obtain local charts for \(\overline{\Omega}_{i,\frac{r_{1}}{2}}\setminus(\partial\Omega)_{i}\) from \(\{(\mathcal{U}_{i,j},\psi_{i,j})\}_{j=1}^{N_{i}}\). Collecting all of these local charts gives an atlas for \(\{x\in\Omega:\operatorname{dist}(x,\partial\Omega)\leq\frac{r_{1}}{2}\}\). Observe that we may adjoin \(\mathcal{U}_{0,0}\coloneqq\{x\in\Omega:\operatorname{dist}(x,\partial\Omega)>\frac{r_{1}}{4}\}\) paired with the identity map \(\psi_{0,0}\) to obtain an atlas for \(\Omega\). We also notice that, since the parametrizing curves \(\gamma_{i}\) were not reparametrized in a way that distorts length, \((\partial\Omega)_{i}\), for \(i=0,1,\ldots,b\), is still described by an arclength parametrized curve in each chart \(\mathcal{U}_{i,j}\). We observe that this atlas extends to an atlas for \(\overline{\Omega}\) with similar properties. Next, we let \(\widetilde{\Omega}_{i,r}\) denote, for \(i=0,1,2,\ldots,b\) and \(0<r\leq r_{0}\), \[\widetilde{\Omega}_{i,r}\coloneqq\{x\in\mathbb{R}^{2}:0\leq\operatorname{dist}(x,(\partial\Omega)_{i})<r\}. \tag{9}\] Notice that we can extend \(X_{i}\), for \(i=0,1,\ldots,b\), to a map \(\widetilde{X}_{i}\colon(\mathbb{R}/L_{i}\mathbb{Z})\times(-r_{0},r_{0})\to\widetilde{\Omega}_{i,r_{0}}\) by the same definition as in (7). This allows us to view each \(\psi_{i,j}\), for \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\), as the restriction to \(B_{r_{1},+}(0)\) of a map \(\widetilde{\psi}_{i,j}\colon B_{r_{1}}(0)\to\widetilde{\mathcal{U}}_{i,j}\), where \(\widetilde{\mathcal{U}}_{i,j}\coloneqq\widetilde{X}_{i}(B_{r_{1}}(y_{i,j},0))\). By definition of the extension \(\widetilde{X}_{i}\) and the chart \(\widetilde{\psi}_{i,j}\) we have, for \((y_{1},s)\in\psi_{i,j}^{-1}(\mathcal{U}_{i,j})\), that \[\operatorname{dist}(\widetilde{\psi}_{i,j}(y_{1},-s),(\partial\Omega)_{i})=\operatorname{dist}(\widetilde{\psi}_{i,j}(y_{1},s),(\partial\Omega)_{i}).\] Let \(\widetilde{\mathcal{U}}_{0,0}\coloneqq\{x\in\Omega:\operatorname{dist}(x,\partial\Omega)>\frac{r_{1}}{4}\}\). We may pair \(\widetilde{\mathcal{U}}_{0,0}\) with the identity map, \(\widetilde{\psi}_{0,0}(y)\coloneqq y\), in order to extend the atlas for \(\bigcup_{i=0}^{b}\widetilde{\Omega}_{i,\frac{r_{1}}{2}}\) to one for \(\Omega\cup\bigcup_{i=0}^{b}\widetilde{\Omega}_{i,\frac{r_{1}}{2}}\). We let \(\{\widetilde{\rho}_{i,j}\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{\widetilde{\rho}_{0,0}\}\) be a smooth partition of unity subordinate to the cover \(\{\widetilde{\mathcal{U}}_{i,j}\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{\widetilde{\mathcal{U}}_{0,0}\}\). We observe that since, for each \(i_{1}=0,1,\ldots,b\), \((\partial\Omega)_{i_{1}}\cap\widetilde{\mathcal{U}}_{i_{2},j_{2}}=\varnothing\) for \(i_{1}\neq i_{2}\), \(i_{2}=0,1,\ldots,b\) and \(j_{2}=1,2,\ldots,N_{i_{2}}\) and since \(\widetilde{\mathcal{U}}_{0,0}\subset\subset\Omega\) then \[\sum_{j_{1}=1}^{N_{i_{1}}}\widetilde{\rho}_{i_{1},j_{1}}=1\quad\text{ on }(\partial\Omega)_{i_{1}},\,i_{1}=0,1,\ldots,b. \tag{10}\] Next, we notice that, due to the choice of orientation about \((\partial\Omega)_{i}\) for \(i=0,1,\ldots,b\), we have, by Lemma A.4 and the Gauss-Bonnet theorem, that \[\int_{(\partial\Omega)_{0}}\widetilde{\kappa}_{0}=2\pi,\quad\ \int_{(\partial\Omega)_{i}}\widetilde{\kappa}_{i}=-2\pi,\text{ for }i=1,2,\ldots,b, \tag{11}\] where \(\widetilde{\kappa}_{i}\coloneqq\kappa_{i}\circ(\widetilde{\psi}_{i,j}^{-1})^{1}\).
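As a sanity check of the signs in (11), suppose that \((\partial\Omega)_{i}\), for some \(i\geq 1\), is a circle of radius \(\rho\) centred at \(p\). The induced (clockwise) arclength parametrization is \(\gamma_{i}(y_{1})=p+\rho\bigl(\cos(y_{1}/\rho),-\sin(y_{1}/\rho)\bigr)\), for which \[\tau_{i}(y_{1})=\bigl(-\sin(y_{1}/\rho),-\cos(y_{1}/\rho)\bigr),\qquad\nu_{i}(y_{1})=\bigl(\cos(y_{1}/\rho),-\sin(y_{1}/\rho)\bigr),\] so \(\nu_{i}\) points radially away from \(p\), that is, into \(\Omega\). Since \(\tau^{\prime}_{i}=-\frac{1}{\rho}\nu_{i}\), (6) gives \(\kappa_{i}\equiv-\frac{1}{\rho}\) and hence \(\int_{(\partial\Omega)_{i}}\widetilde{\kappa}_{i}=-\frac{1}{\rho}\cdot 2\pi\rho=-2\pi\), in agreement with (11); the analogous computation for a circular outer boundary of radius \(R\) gives \(\kappa_{0}\equiv\frac{1}{R}\) and \(\int_{(\partial\Omega)_{0}}\widetilde{\kappa}_{0}=2\pi\).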
By Corollary A.3 or the Gauss-Bonnet theorem we now have \[\int_{\partial\Omega}\widetilde{\kappa}=2\pi\chi_{Euler}(\Omega).\] We introduce some additional notation for expressing and extending functions \(u_{\varepsilon}\) using the previously described coordinates on \(\Omega\): \[z_{i,j,\varepsilon}\coloneqq u_{\varepsilon}\circ\psi_{i,j}, \tag{12}\] \[w_{i,j,\varepsilon}\coloneqq z_{i,j,\varepsilon,\tau}e_{1}+z_{i,j,\varepsilon,\nu}e_{2}, \tag{13}\] where \(z_{i,j,\varepsilon,\tau}\) and \(z_{i,j,\varepsilon,\nu}\) are, respectively, the components of \(z_{i,j,\varepsilon}\) in the \(\tau\) and \(\nu\) directions. Note that these functions are defined over \(B_{r_{1},+}(0)\).

## 2 Lemmas

### Extension for \(B_{r,+}(0)\)

**Lemma 2.1**.: _Suppose \(u\in W^{1,2}_{T}(B_{r,+}(0);\mathbb{R}^{2})\) and \(\varphi\in\mathcal{A}_{\alpha,r}\) where \(0<\alpha\leq 1\) and \(r>0\). Then there is an extension \(\tilde{u}\in W^{1,2}(B_{r}(0);\mathbb{R}^{2})\) of \(u\) and an extension \(\tilde{\varphi}\in C^{0,\alpha}_{c}(B_{r}(0))\) of \(\varphi\) such that_ \[\left\langle\star J(\tilde{u}),\tilde{\varphi}\right\rangle=2\big{\langle}\star J(u),\varphi\big{\rangle} \tag{14}\] _and_ \[\|\tilde{u}\|_{W^{1,2}(B_{r}(0);\mathbb{R}^{2})}=\sqrt{2}\,\|u\|_{W^{1,2}(B_{r,+}(0);\mathbb{R}^{2})}, \tag{15}\] \[\|\tilde{\varphi}\|_{C^{0,\alpha}_{c}(B_{r}(0))}=\|\varphi\|_{\mathcal{A}_{\alpha,r}},\] (16) \[E_{\varepsilon}(\tilde{u},B_{r}(0))=2E_{\varepsilon}(u,B_{r,+}(0)). \tag{17}\]

Proof.: Since \(u_{T}=0\), if we write \(u=u^{1}e_{1}+u^{2}e_{2}\) then \(u^{1}(x_{1},0)=0\) for \(|x_{1}|<r\) in the trace sense. We define \(\tilde{u}\colon B_{r}(0)\to\mathbb{R}^{2}\) by \[\tilde{u}(x_{1},x_{2})\coloneqq\begin{cases}u^{1}(x_{1},x_{2})e_{1}+u^{2}(x_{1},x_{2})e_{2},&\text{if $|x_{1}|<r$ and $x_{2}\geq 0$}\\ -u^{1}(x_{1},-x_{2})e_{1}+u^{2}(x_{1},-x_{2})e_{2},&\text{if $|x_{1}|<r$ and $x_{2}<0$}.\end{cases}\] Note that we are able to extend \(u^{1}\) as an odd function because it is zero along \(\left\{(x_{1},0):|x_{1}|\leq r\right\}\). We note that \(\tilde{u}\in W^{1,2}(B_{r}(0);\mathbb{R}^{2})\) and that both (15) and (17) are satisfied, since the reflection preserves \(|\tilde{u}|\) and \(|\nabla\tilde{u}|\) pointwise almost everywhere. Next we compute the Jacobian of \(\tilde{u}\). Doing this, we see that \[J\tilde{u}(x_{1},x_{2})=\begin{cases}Ju(x_{1},x_{2}),&\text{if $|x_{1}|<r$ and $x_{2}\geq 0$}\\ Ju(x_{1},-x_{2}),&\text{if $|x_{1}|<r$ and $x_{2}<0$}.\end{cases} \tag{18}\] Next, consider a function \(\varphi\in\mathcal{A}_{\alpha,r}\), where \(\alpha\in(0,1]\) and \(r>0\). We extend \(\varphi\) to \(B_{r}(0)\) in the following way: \[\tilde{\varphi}(x_{1},x_{2})\coloneqq\varphi(x_{1},|x_{2}|)=\begin{cases}\varphi(x_{1},x_{2}),&\text{if $|x_{1}|<r$ and $x_{2}\geq 0$}\\ \varphi(x_{1},-x_{2}),&\text{if $|x_{1}|<r$ and $x_{2}<0$}\end{cases} \tag{19}\] and note that (16) holds. We also note that \(\tilde{\varphi}\in C^{0,\alpha}_{c}(B_{r}(0))\) since \(\varphi\equiv 0\) in a neighbourhood of \(\partial B_{r,+}(0)\cap\{x_{2}>0\}\). Finally, (14) now follows from (18), (19), and a change of variables.
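For completeness, we record the change of variables behind (14). By (18) and (19), \[\big{\langle}\star J(\tilde{u}),\tilde{\varphi}\big{\rangle}=\int_{B_{r,+}(0)}\!\!\varphi Ju+\int_{B_{r}(0)\cap\{x_{2}<0\}}\!\!\varphi(x_{1},-x_{2})Ju(x_{1},-x_{2})\,\mathrm{d}x=2\int_{B_{r,+}(0)}\!\!\varphi Ju=2\big{\langle}\star J(u),\varphi\big{\rangle},\] where the middle equality follows by substituting \((x_{1},x_{2})\mapsto(x_{1},-x_{2})\), whose Jacobian has absolute value one, in the second integral.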
**Lemma 2.2**.: _Suppose \(\{u_{\varepsilon}\}_{\varepsilon\in(0,1]}\subseteq W^{1,2}_{T}(B_{r,+}(0);\mathbb{R}^{2})\) satisfies_ \[E_{\varepsilon}(u_{\varepsilon},B_{r,+}(0))\leq C|\mathrm{log}(\varepsilon)|.\] _Then there exists a subsequence \(\{\varepsilon_{k}\}_{k=1}^{\infty}\), non-zero integers \(d_{i}\) and \(d_{j}\) for \(i=1,2,\ldots,M_{1}\) and \(j=1,2,\ldots,M_{2}\) such that for all \(0<\alpha\leq 1\) we have_ \[\left\|\star J(u_{\varepsilon_{k}})-\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{x_{i}}-\frac{\pi}{2}\sum_{j=1}^{M_{2}}d_{j}\delta_{x_{j}}\right\|_{\mathcal{A}^{*}_{\alpha,r}}\longrightarrow 0^{+},\] _where \(x_{i}\in B_{r,+}(0)\) for \(i=1,2,\ldots,M_{1}\) and \(x_{j}\in\{(x,0):|x|<r\}\) for \(j=1,2,\ldots,M_{2}\)._

Proof.: It follows from Theorem 3.1 of [18] that there is a subsequence \(\{u_{\varepsilon_{k}}\}_{k=1}^{\infty}\subseteq W_{T}^{1,2}(B_{r,+}(0);\mathbb{R}^{2})\), points \(\{x_{i}\}_{i=1}^{M_{1}}\subseteq B_{r,+}(0)\), and non-zero integers \(\{d_{i}\}_{i=1}^{M_{1}}\) such that \[\left\|\star J(u_{\varepsilon_{k}})-\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{x_{i}}\right\|_{(C_{c}^{0,\alpha}(B_{r,+}(0)))^{*}}\longrightarrow 0^{+}, \tag{20}\] for each \(0<\alpha\leq 1\) as \(k\to\infty\). We now demonstrate that we can extend this convergence, after an appropriate modification to the limiting measure, to hold for \(\mathcal{A}_{\alpha,r}^{*}\), the dual of \(\mathcal{A}_{\alpha,r}\). By Lemma 2.1 we may extend each \(u_{\varepsilon_{k}}\) to a function \(\tilde{u}_{\varepsilon_{k}}\) defined over \(B_{r}(0)\). We also have by (17) of Lemma 2.1 that \[E_{\varepsilon_{k}}(\tilde{u}_{\varepsilon_{k}},B_{r}(0))\leq 2C|\text{log}(\varepsilon_{k})|\] for each \(k\in\mathbb{N}\). Applying Theorem 3.1 of [18] again we obtain a further subsequence \(\{\tilde{u}_{\varepsilon_{k}}\}_{k=1}^{\infty}\) (whose indices we do not relabel), points \(\{\tilde{x}_{i}\}_{i=1}^{\tilde{M}}\subseteq B_{r}(0)\), and non-zero integers \(\{\tilde{d}_{i}\}_{i=1}^{\tilde{M}}\) such that \[\left\|\star J(\tilde{u}_{\varepsilon_{k}})-\pi\sum_{i=1}^{\tilde{M}}\tilde{d}_{i}\delta_{\tilde{x}_{i}}\right\|_{(C_{c}^{0,\alpha}(B_{r}(0)))^{*}}\longrightarrow 0^{+} \tag{21}\] for each \(0<\alpha\leq 1\) as \(k\to\infty\). We let \(J_{0}\coloneqq\pi\sum\limits_{i=1}^{\tilde{M}}\tilde{d}_{i}\delta_{\tilde{x}_{i}}\). Observe that we may decompose \(J_{0}\) as \[J_{0}=\pi\sum_{i=1}^{\tilde{M}_{1}}\tilde{d}_{i}\delta_{\tilde{x}_{i}}+\pi\sum_{i=1}^{\tilde{M}_{2}}\tilde{d}_{i}\delta_{\tilde{x}_{i}}+\pi\sum_{i=1}^{\tilde{M}_{3}}\tilde{d}_{i}\delta_{\tilde{x}_{i}}\] where \[\{\tilde{x}_{i}\}_{i=1}^{\tilde{M}_{1}}\subseteq B_{r,+}(0),\qquad\{\tilde{x}_{i}\}_{i=1}^{\tilde{M}_{2}}\subseteq\{(x,0):|x|<r\},\qquad\{\tilde{x}_{i}\}_{i=1}^{\tilde{M}_{3}}\subseteq B_{r}(0)\setminus\overline{B_{r,+}(0)}\] and \(\tilde{M}=\tilde{M}_{1}+\tilde{M}_{2}+\tilde{M}_{3}\). Combining (20) and (21) we find that \(\tilde{M}_{1}=M_{1}\), as well as that \(\tilde{x}_{i}=x_{i}\) and \(\tilde{d}_{i}=d_{i}\) for \(i=1,2,\ldots,M_{1}\). In addition, by a symmetry argument combined with the identity (14) we conclude that \(\tilde{M}_{3}=M_{1}\), \(\tilde{d}_{i}=d_{j}\) for \(i=1,2,\ldots,\tilde{M}_{3}\) and some \(j=1,2,\ldots,M_{1}\), and each \(\tilde{x}_{i}\), for \(i=1,2,\ldots,\tilde{M}_{3}\), is the reflection of some point \(x_{j}\) for \(j=1,2,\ldots,M_{1}\).
Thus, after possibly reindexing some of the points, we have \[J_{0}=\pi\sum_{i=1}^{M_{1}}d_{i}\big{(}\delta_{x_{i}}+\delta_{\bar{x}_{i}}\big{)}+\pi\sum_{j=1}^{\tilde{M}_{2}}\tilde{d}_{j}\delta_{\tilde{x}_{j}},\] where \(\bar{x}\) denotes the reflection of \(x\) across the \(x_{1}\)-axis. With this in place we now prove the desired convergence. Let \(\varphi\in\mathcal{A}_{\alpha,r}\) be such that \(\left\|\varphi\right\|_{\mathcal{A}_{\alpha,r}}\leq 1\). Observe that \[\Big{\langle}\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{x_{i}}+\frac{\pi}{2}\sum_{j=1}^{\tilde{M}_{2}}\tilde{d}_{j}\delta_{\tilde{x}_{j}},\varphi\Big{\rangle}=\left\langle\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{x_{i}}+\frac{\pi}{2}\sum_{j=1}^{\tilde{M}_{2}}\tilde{d}_{j}\delta_{\tilde{x}_{j}},\tilde{\varphi}\right\rangle=\left\langle\frac{\pi}{2}\sum_{i=1}^{M_{1}}d_{i}\big{(}\delta_{x_{i}}+\delta_{\bar{x}_{i}}\big{)}+\frac{\pi}{2}\sum_{j=1}^{\tilde{M}_{2}}\tilde{d}_{j}\delta_{\tilde{x}_{j}},\tilde{\varphi}\right\rangle=\frac{1}{2}\big{\langle}J_{0},\tilde{\varphi}\big{\rangle}.\] Hence, by combining the above observation with (14) and (16) we have \[\Big{\langle}\star J(u_{\varepsilon_{k}})-\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{x_{i}}-\frac{\pi}{2}\sum_{j=1}^{\tilde{M}_{2}}\tilde{d}_{j}\delta_{\tilde{x}_{j}},\varphi\Big{\rangle}=\frac{1}{2}\big{\langle}\star J(\tilde{u}_{\varepsilon_{k}})-J_{0},\tilde{\varphi}\big{\rangle}\leq\frac{1}{2}\|\star J(\tilde{u}_{\varepsilon_{k}})-J_{0}\|_{(C_{c}^{0,\alpha}(B_{r}(0)))^{*}}.\] Taking the supremum over such \(\varphi\) and using (21) gives the desired result.

### Interpolation

Here we prove an analog of Lemma 3.3 of [18] for \(B_{r,+}(0)\). Specifically, we show that for each signed Radon measure \(\nu\) on \(B_{r,+}(0)\) we can estimate its norm in \(\mathcal{A}_{\alpha,r}^{*}\), for \(0<\alpha<1\), in terms of its norms in \(\mathcal{A}_{1,r}^{*}\) and \((C(B_{r,+}(0)))^{*}\) as well as the constant \(\alpha\). We introduce some notation in order to prove the desired result. We let \(\eta\colon\mathbb{R}^{2}\to\mathbb{R}\) be a non-negative, smooth, symmetric function with \(\operatorname{supp}(\eta)\subseteq B_{1}(0)\) satisfying \(\int_{B_{1}(0)}\eta=1\). We define, for \(\delta>0\), \(r>0\), \(0<\alpha\leq 1\), and \(\varphi\in\mathcal{A}_{\alpha,r}\), the mollified function \(\varphi_{\delta}\colon B_{r,+}(0)\to\mathbb{R}\) by \[\varphi_{\delta}(y)\coloneqq\int_{B_{\delta}(y)\cap B_{r,+}(0)}\frac{1}{\delta^{2}}\eta\bigg{(}\frac{y-z}{\delta}\bigg{)}\varphi(z)\,\mathrm{d}z. \tag{22}\] We use the mollified function to obtain an approximation to \(\varphi\) whose Lipschitz norm can be explicitly estimated. We begin with a preparatory lemma.

**Lemma 2.3**.: _Suppose \(r>0\), \(0<\alpha\leq 1\), \(\varphi\in\mathcal{A}_{\alpha,r}\), and \(\delta>0\). Suppose further that \(s\in(0,r)\) is such that_ \[\operatorname{supp}(\varphi)\subseteq B_{s,+}(0)\] _and \(M_{\delta}>0\) is chosen so that_ \[[\varphi_{\delta}]_{1}\leq M_{\delta}.
\tag{23}\] _Then the functions \(\sigma_{1,\delta},\sigma_{2,\delta}\colon B_{r,+}(0)\to\mathbb{R}\) defined by_ \[\sigma_{1,\delta}(y) \coloneqq\sup_{x\in B_{r,+}(0)\setminus B_{s,+}(0)}\max\bigl{\{} \varphi_{\delta}(x)-M_{\delta}|x-y|,0\bigr{\}},\] \[\sigma_{2,\delta}(y) \coloneqq\sup_{x\in B_{r,+}(0)\setminus B_{s,+}(0)}\min\bigl{\{} \varphi_{\delta}(x)+M_{\delta}|x-y|,0\bigr{\}},\] _satisfy_ \[\varphi_{\delta}=\sigma_{1,\delta}-\sigma_{2,\delta}\quad\text{ on }B_ {r,+}(0)\setminus B_{s,+}(0), \tag{24}\] \[\|\sigma_{1,\delta}\|_{L^{\infty}(B_{r,+}(0))}+\|\sigma_{2,\delta }\|_{L^{\infty}(B_{r,+}(0))}\leq 2\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+} (0))},\] (25) \[[\sigma_{1,\delta}]_{1}+[\sigma_{2,\delta}]_{1}\leq 2M_{ \delta}. \tag{26}\] Proof.: The proof is similar to the one found in the proof of Lemma 3.3 of [18]. **Lemma 2.4**.: _Suppose \(\nu\) is a signed Radon measure on \(B_{r,+}(0)\) and \(0<\alpha<1\) and \(r>0\). Then_ \[\|\nu\|_{\mathcal{A}^{*}_{\alpha,r}}\leq C\|\nu\|_{\mathcal{A}^{*}_{1,r}}^{ \frac{\alpha}{1+\alpha}}\|\nu\|_{(C(B_{r,+}(0)))^{*}}^{\frac{1}{1+\alpha}}.\] Proof.: Without loss of generality we may assume \(0<\|\nu\|_{(C(B_{r,+}(0)))^{*}}<\infty\) and \(0<\|\nu\|_{\mathcal{A}^{*}_{1,r}}<\infty\). Let \(\varphi\in\mathcal{A}_{\alpha,r}\) for \(0<\alpha<1\). We let \(\delta>0\) denote a scale of regularization to be chosen later. We prove a preliminary estimate by considering two cases. Case 1: \(0<\delta<r\) Let \(\zeta\colon[0,\infty)\to\mathbb{R}\) denote a smooth non-increasing function satisfying \(\zeta\geq 0\), \(\zeta\equiv 1\) on \([0,\delta]\), \(\zeta\equiv 0\) on \([2\delta,\infty)\), and \(\|\zeta^{\prime}\|_{L^{\infty}}\leq\frac{2}{\delta}\). Observe that \[\int_{B_{r,+}(0)}\varphi(y)\,\mathrm{d}\nu =\int_{B_{r,+}(0)}(1-\zeta(y_{2}))\varphi(y)\,\mathrm{d}\nu+\int_ {B_{r,+}(0)}\zeta(y_{2})[\varphi(y)-\varphi(y_{1},0)]\,\mathrm{d}\nu\] \[+\int_{B_{r,+}(0)}\zeta(y_{2})\varphi(y_{1},0)\,\mathrm{d}\nu\] \[=I_{1}+I_{2}+I_{3}.\] Then notice that we can estimate \(I_{2}\) as \[|I_{2}| \leq\|\zeta(y_{2})[\varphi(y)-\varphi(y_{1},0)]\|_{L^{\infty}(B_{ r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}\] \[\leq 2^{\alpha}\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0) )}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] We let \(\varphi_{\delta}\) be as defined in (22) and we observe that for \(y=(y_{1},0)\), using that \(\varphi=0\) in a neighbourhood of \(\partial B_{r,+}(0)\cap\{y_{2}>0\}\) as well as symmetry of \(\eta\), we have \[\left\|\frac{1}{2}\varphi(y_{1},0)-\varphi_{\delta}(y_{1},0)\right\|_{L^{ \infty}(-r,r)}\leq\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}. 
\tag{27}\] We now decompose \(I_{3}\) as \[I_{3}=2\int_{B_{r,+}(0)}\zeta(y_{2})\frac{\varphi(y_{1},0)}{2}\,\mathrm{d}\nu=2\int_{B_{r,+}(0)}\zeta(y_{2})\Big{[}\frac{1}{2}\varphi(y_{1},0)-\varphi_{\delta}(y_{1},0)\Big{]}\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}\zeta(y_{2})[\varphi_{\delta}(y_{1},0)-\varphi_{\delta}(y_{1},y_{2})]\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}\zeta(y_{2})\big{[}\varphi_{\delta}(y_{1},y_{2})-\sigma_{1,\delta}(y_{1},y_{2})+\sigma_{2,\delta}(y_{1},y_{2})\big{]}\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}\zeta(y_{2})\big{[}\sigma_{1,\delta}(y_{1},y_{2})-\sigma_{2,\delta}(y_{1},y_{2})\big{]}\,\mathrm{d}\nu=A+B+C+D.\] By (27) we have \[|A|\leq 2\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Since \(\mathrm{supp}(\zeta)\subseteq[0,2\delta]\) then \[\|\zeta(y_{2})[\varphi_{\delta}(y_{1},0)-\varphi_{\delta}(y_{1},y_{2})]\|_{L^{\infty}(B_{r,+}(0))}\leq 2^{\alpha}\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\] and hence \[|B|\leq 2^{\alpha+1}\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Next by (25) we have \[|D|\leq 4\delta^{\alpha}\left\|\varphi\right\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] We observe that, since \(\varphi=0\) in a neighbourhood of \(\partial B_{r,+}(0)\cap\{y_{2}>0\}\), for each \((y_{1},y_{2})\in B_{r,+}(0)\) we have \[\left|\frac{\partial\varphi_{\delta}}{\partial y_{1}}(y_{1},y_{2})\right|=\left|\int_{B_{\delta}(y_{1},y_{2})\cap B_{r,+}(0)}\!\!\frac{1}{\delta^{3}}\frac{\partial\eta}{\partial y_{1}}\Big{(}\frac{(y_{1},y_{2})-z}{\delta}\Big{)}\varphi(z)\,\mathrm{d}z\right|\leq\delta^{-1}r^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}\] using that \[\varphi\Big{(}z_{1},\sqrt{r^{2}-z_{1}^{2}}\Big{)}=0,\qquad\left|\sqrt{r^{2}-z_{1}^{2}}-z_{2}\right|\leq r.\] A similar estimate holds for the partial derivative in \(y_{2}\) and we conclude that \[[\varphi_{\delta}]_{1}\leq 2\delta^{-1}r^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}. \tag{28}\] Now we estimate \(C\). Setting \(M_{\delta}\coloneqq 2\delta^{-1}r^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}\) we see that by (23), (24), (25), and (26) we have \(\zeta[\varphi_{\delta}-\sigma_{1,\delta}+\sigma_{2,\delta}]\in\mathcal{A}_{1,r}\), and \[|C|\leq\Big{[}12\delta^{-1}r^{\alpha}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}+8\delta^{\alpha-1}+4r^{\alpha}\delta^{-1}\Big{]}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{\mathcal{A}_{1,r}^{*}}\leq C\delta^{-1}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{\mathcal{A}_{1,r}^{*}}.\] Now we estimate \(I_{1}\). \[I_{1}=\int_{B_{r,+}(0)}(1-\zeta(y_{2}))[\varphi(y)-\varphi_{\delta}(y)]\,\mathrm{d}\nu+\int_{B_{r,+}(0)}(1-\zeta(y_{2}))[\varphi_{\delta}(y)-\sigma_{1,\delta}(y)+\sigma_{2,\delta}(y)]\,\mathrm{d}\nu+\int_{B_{r,+}(0)}(1-\zeta(y_{2}))[\sigma_{1,\delta}(y)-\sigma_{2,\delta}(y)]\,\mathrm{d}\nu=(i)+(ii)+(iii)\] where \(\sigma_{1,\delta}\) and \(\sigma_{2,\delta}\) are the functions introduced in Lemma 2.3.
Since \(\varphi\equiv 0\) in a neighbourhood of \(\partial B_{r,+}(0)\cap\{y_{2}>0\}\) and since for \((y_{1},y_{2})\in B_{r,+}(0)\cap\{y_{2}\geq\delta\}\) we have \(B_{\delta}(y_{1},y_{2})\cap\{y_{2}=0\}=\varnothing\) then \[\varphi(y)-\varphi_{\delta}(y)=\int_{B_{\delta}(y)}\frac{1}{\delta^{2}}\eta\Big{(}\frac{y-z}{\delta}\Big{)}[\varphi(y)-\varphi(z)]\,\mathrm{d}z\leq\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\] for \(y\in B_{r,+}(0)\cap\{y_{2}\geq\delta\}\). Since \(\mathrm{supp}(1-\zeta(y_{2}))\subseteq B_{r,+}(0)\cap\{y_{2}\geq\delta\}\) then \[\|(1-\zeta(y_{2}))[\varphi-\varphi_{\delta}]\|_{L^{\infty}(B_{r,+}(0))}\leq\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}.\] Thus, \[|(i)|\leq\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Using (23), (24), (25), (26), and (28) we have \[|(ii)|\leq\Big{[}6\delta^{-1}r^{\alpha}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}+4\delta^{\alpha-1}+2r^{\alpha}\delta^{-1}\Big{]}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{\mathcal{A}^{*}_{1,r}}\leq C\delta^{-1}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{\mathcal{A}^{*}_{1,r}}.\] Finally, by (25) we have \[|(iii)|\leq 2\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] The above gives \[\int_{B_{r,+}(0)}\varphi\,\mathrm{d}\nu\leq C\Big{[}\delta^{\alpha}\|\nu\|_{(C(B_{r,+}(0)))^{*}}+\delta^{-1}\|\nu\|_{\mathcal{A}^{*}_{1,r}}\Big{]}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}.\] Case 2: \(\delta\geq r\). In this case we write \[\int_{B_{r,+}(0)}\varphi(y_{1},y_{2})\,\mathrm{d}\nu=\int_{B_{r,+}(0)}[\varphi(y_{1},y_{2})-\varphi(y_{1},0)]\,\mathrm{d}\nu+\int_{B_{r,+}(0)}\varphi(y_{1},0)\,\mathrm{d}\nu=E+F.\] Similar to the estimate for \(I_{2}\) we observe that, since \(|y_{2}|<r\) on \(B_{r,+}(0)\) and since \(\delta\geq r\), we have \[|E|\leq r^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}\leq\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Using the mollification, \(\varphi_{\delta}\), from (22) as well as the functions \(\sigma_{1,\delta}\) and \(\sigma_{2,\delta}\) from Lemma 2.3 we can rewrite \(F\) similar to the estimate for \(I_{3}\) to obtain \[F=2\int_{B_{r,+}(0)}\frac{1}{2}\varphi(y_{1},0)\,\mathrm{d}\nu=2\int_{B_{r,+}(0)}\Bigl{[}\frac{1}{2}\varphi(y_{1},0)-\varphi_{\delta}(y_{1},0)\Bigr{]}\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}[\varphi_{\delta}(y_{1},0)-\varphi_{\delta}(y_{1},y_{2})]\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}[\varphi_{\delta}(y)-\sigma_{1,\delta}(y)+\sigma_{2,\delta}(y)]\,\mathrm{d}\nu+2\int_{B_{r,+}(0)}[\sigma_{1,\delta}(y)-\sigma_{2,\delta}(y)]\,\mathrm{d}\nu=FA+FB+FC+FD.\] After possibly extending \(\varphi\) by zero across \(\partial B_{r,+}(0)\cap\{y_{2}>0\}\) we have, by symmetry of \(\eta\) and (27), that \[|FA|\leq 2\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Next, we estimate \(FB\) similar to \(B\) and use that \(r\leq\delta\) to obtain \[|FB|\leq 2\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] We estimate \(FD\) using (25) as \[|FD|\leq 4\delta^{\alpha}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{(C(B_{r,+}(0)))^{*}}.\] Finally, (24), (26), and (28) give that \(\varphi_{\delta}-\sigma_{1,\delta}+\sigma_{2,\delta}\in\mathcal{A}_{1,r}\) and hence \[|FC|\leq 12\delta^{-1}r^{\alpha}\|\eta\|_{W^{1,1}(\mathbb{R}^{2})}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}\|\nu\|_{\mathcal{A}_{1,r}^{*}}.\] Putting the estimates of this case together we have
\[\int_{B_{r,+}(0)}\varphi\,\mathrm{d}\nu\leq C\Bigl{[}\delta^{\alpha}\|\nu\|_{(C(B_{r,+}(0)))^{*}}+\delta^{-1}\|\nu\|_{\mathcal{A}_{1,r}^{*}}\Bigr{]}\|\varphi\|_{C^{0,\alpha}(B_{r,+}(0))}.\] Putting the two cases together and noting that \(\varphi\in\mathcal{A}_{\alpha,r}\) was arbitrary, we obtain \[\|\nu\|_{\mathcal{A}_{\alpha,r}^{*}}\leq C\bigl{[}\delta^{\alpha}\|\nu\|_{(C(B_{r,+}(0)))^{*}}+\delta^{-1}\|\nu\|_{\mathcal{A}_{1,r}^{*}}\bigr{]}\] for all \(\delta>0\). Taking \(\delta=\|\nu\|_{\mathcal{A}_{1,r}^{*}}^{\frac{1}{1+\alpha}}\|\nu\|_{(C(B_{r,+}(0)))^{*}}^{\frac{-1}{1+\alpha}}\) we obtain \[\|\nu\|_{\mathcal{A}_{\alpha,r}^{*}}\leq C\ \|\nu\|_{\mathcal{A}_{1,r}^{*}}^{\frac{\alpha}{1+\alpha}}\|\nu\|_{(C(B_{r,+}(0)))^{*}}^{\frac{1}{1+\alpha}}.\]

### Slicing

In this subsection we prove, through a slicing argument, that the portion of the vortex set near the boundary cannot meet many normal lines to the boundary. This will be used in order to estimate the size of error terms involving the modulus.

**Lemma 2.5**.: _Suppose \(u\in W^{1,2}(\Omega;\mathbb{R}^{2})\) satisfies \(E_{\varepsilon}(u)\leq C|\mathrm{log}(\varepsilon)|\). Then, for each \(i=0,1,2,\ldots,b\) and \(j=1,2,\ldots,N_{i}\), if we set_ \[\mathcal{B}_{i,j,\varepsilon}\coloneqq\left\{t\in(-r_{1},r_{1}):\exists s\in\left[0,\sqrt{r_{1}^{2}-t^{2}}\right)\,\mathrm{s.t.}\,\left|\left|u(\psi_{i,j}(t,s))\right|-1\right|>\varepsilon^{\frac{1}{8}}\right\}\] _then we have_ \[\mathcal{L}^{1}(\mathcal{B}_{i,j,\varepsilon})\leq C(\Omega)\varepsilon^{\frac{1}{4}}.\]

Proof.: Notice that by the construction in Section 1.2 there are \(C^{1,1}\)-coordinate charts \(\psi_{i,j}\colon B_{r_{1},+}(0)\to\mathcal{U}_{i,j}\), for \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\). Our first "bad" set is \[\mathcal{B}_{0,i,j,\varepsilon}\coloneqq\left\{t\in(-r_{1},r_{1}):t\,\,\mathrm{is\,\,a\,\,singular\,\,value\,\,for}\,\,(\psi_{i,j}^{-1})^{1}\right\}.\] Since \(\psi_{i,j}\) is a \(C^{1,1}\)-diffeomorphism then by Sard's theorem, see [7], we have that \[\mathcal{L}^{1}(\mathcal{B}_{0,i,j,\varepsilon})=0.\] We let \(\mathcal{G}_{0,i,j,\varepsilon}\coloneqq(-r_{1},r_{1})\setminus\mathcal{B}_{0,i,j,\varepsilon}\). Next, we consider the set \[\mathcal{B}_{1,i,j,\varepsilon}\coloneqq\left\{t\in(-r_{1},r_{1}):\int_{\Gamma_{i,j,t}}\bigl{|}\nabla|u|\bigr{|}^{2}>\frac{1}{\varepsilon^{\frac{1}{4}}}|\mathrm{log}(\varepsilon)|\right\}\!,\] where we have set \(\Gamma_{i,j,t}\coloneqq\big{\{}x\in\mathcal{U}_{i,j}:((\psi_{i,j}^{-1})^{1})(x)=t\big{\}}\).

Figure 2: Depiction of extended normal lines.

By Chebyshev's inequality and the coarea formula we have \[\mathcal{L}^{1}(\mathcal{B}_{1,i,j,\varepsilon})\leq\frac{\int_{\mathcal{B}_{1,i,j,\varepsilon}}\int_{\Gamma_{i,j,t}}\big{|}\nabla|u|\big{|}^{2}}{\frac{|\log(\varepsilon)|}{\varepsilon^{\frac{1}{4}}}}\leq\frac{\varepsilon^{\frac{1}{4}}}{|\log(\varepsilon)|}\cdot\int_{\mathcal{U}_{i,j}}\big{|}\nabla|u|\big{|}^{2}|\nabla(\psi_{i,j}^{-1})^{1}|\leq C(\Omega)\varepsilon^{\frac{1}{4}}.\] We let \(\mathcal{G}_{1,i,j,\varepsilon}\coloneqq(-r_{1},r_{1})\setminus\mathcal{B}_{1,i,j,\varepsilon}\).
The final "bad" set is \[\mathcal{B}_{2,i,j,\varepsilon}\coloneqq\bigg{\{}t\in\mathcal{G}_{0,i,j, \varepsilon}\cap\mathcal{G}_{1,i,j,\varepsilon}:\exists y\in\Big{[}0,\sqrt{r _{1}^{2}-t^{2}}\Big{)}\text{ such that }\big{|}|u(\psi_{i,j}(t,y))|-1\big{|}> \varepsilon^{\frac{1}{8}}\bigg{\}}.\] To estimate this we let \(t\in\mathcal{B}_{2,i,j,\varepsilon}\) and find \(y\) such that \[\big{|}|u(\psi_{i,j}(t,y))|-1\big{|}>\varepsilon^{\frac{1}{8}}.\] Suppose there is \(y^{\prime}\in[0,\sqrt{r_{1}^{2}-t^{2}})\) such that \(\big{|}|u(\psi_{i,j}(t,y^{\prime}))|-1\big{|}\leq\frac{\varepsilon^{\frac{1}{8 }}}{2}\). Then since \(t\in\mathcal{G}_{0,i,j,\varepsilon}\cap\mathcal{G}_{1,i,j,\varepsilon}\) we have \[\frac{\varepsilon^{\frac{1}{8}}}{2} \leq\Big{|}\big{|}|u(\psi_{i,j}(t,y))|-1\big{|}-\big{|}|u(\psi_{i, j}(t,y^{\prime}))|-1\big{|}\Big{|}\] \[\leq C(\Omega)\bigg{(}\int_{\Gamma_{i,j,t}}\big{|}\nabla|u|\big{|} ^{2}\bigg{)}^{\frac{1}{2}}|\psi_{i,j}(t,y)-\psi_{i,j}(t,y^{\prime})|^{\frac{1 }{2}}\] \[\leq\frac{C(\Omega)|\log(\varepsilon)|^{\frac{1}{2}}}{\varepsilon ^{\frac{1}{8}}}|\psi_{i,j}(t,y)-\psi_{i,j}(t,y^{\prime})|^{\frac{1}{2}}\] and hence \[|\psi_{i,j}(t,y)-\psi_{i,j}(t,y^{\prime})|\geq\frac{C(\Omega)\varepsilon^{ \frac{1}{2}}}{|\log(\varepsilon)|}.\] We conclude that the closest point along \(\Gamma_{i,j,t}\) satisfying \(\big{|}|u|-1\big{|}\leq\frac{\varepsilon^{\frac{1}{8}}}{2}\) is at least of distance \(\frac{C(\Omega)\varepsilon^{\frac{1}{2}}}{|\log(\varepsilon)|}\) away and hence \[\mathcal{H}^{1}\bigg{(}\Gamma_{i,j,t}\cap\Big{\{}\big{|}|u|-1\big{|}>\frac{ \varepsilon^{\frac{1}{8}}}{2}\Big{\}}\bigg{)}\geq\frac{C(\Omega)\varepsilon^{ \frac{1}{2}}}{|\log(\varepsilon)|}.\] From this we conclude that \[\int_{\mathcal{B}_{2,i,j,\varepsilon}}\int_{\Gamma_{i,j,t}}\frac{ (1-|u|^{2})^{2}}{4\varepsilon^{2}} \geq\int_{\mathcal{B}_{2,i,j,\varepsilon}}\int_{\Gamma_{i,j,t}} \frac{\big{|}1-|u|\big{|}^{2}}{4\varepsilon^{2}}\] \[\geq\frac{\varepsilon^{\frac{1}{4}}}{16\varepsilon^{2}}\int_{ \mathcal{B}_{2,i,j,\varepsilon}}\mathcal{H}^{1}\bigg{(}\Gamma_{i,j,t}\cap \Big{\{}\big{|}|u|-1\big{|}>\frac{\varepsilon^{\frac{1}{8}}}{2}\Big{\}}\bigg{)}\] \[\geq\frac{C(\Omega)\varepsilon^{\frac{3}{4}}}{\varepsilon^{2}| \log(\varepsilon)|}\cdot\mathcal{L}^{1}(\mathcal{B}_{2,i,j,\varepsilon}).\] By the coarea formula we have \[\int_{\mathcal{B}_{2,i,j,\varepsilon}}\int_{\Gamma_{i,j,\varepsilon}}\frac{(1-|u|^ {2})^{2}}{4\varepsilon^{2}}\leq\int_{\mathcal{U}_{i,j}}\frac{(1-|u|^{2})^{2}}{4 \varepsilon^{2}}|\nabla(\psi_{i,j}^{-1})^{1}|\leq C(\Omega)|\mathrm{log}( \varepsilon)|\] and hence \[\mathcal{L}^{1}(\mathcal{B}_{2,i,j,\varepsilon})\leq C(\Omega)\varepsilon^{ \frac{5}{4}}|\mathrm{log}(\varepsilon)|^{2}.\] We set \(\mathcal{G}_{2,i,j,\varepsilon}\coloneqq(-r_{1},r_{1})\setminus\mathcal{B}_{2,i,j,\varepsilon}\). Finally we set \(\mathcal{G}_{i,j,\varepsilon}\coloneqq\mathcal{G}_{0,i,j,\varepsilon}\cap \mathcal{G}_{1,i,j,\varepsilon}\cap\mathcal{G}_{2,i,j,\varepsilon}\) and \(\mathcal{B}_{i,j,\varepsilon}\coloneqq(-r_{1},r_{1})\setminus\mathcal{G}_{i,j,\varepsilon}\). 
We notice that \[\mathcal{B}_{i,j,\varepsilon}\subseteq\mathcal{B}_{0,i,j,\varepsilon}\cup\mathcal{B}_{1,i,j,\varepsilon}\cup\mathcal{B}_{2,i,j,\varepsilon},\] which has measure \[\mathcal{L}^{1}(\mathcal{B}_{i,j,\varepsilon})\leq\mathcal{L}^{1}(\mathcal{B}_{0,i,j,\varepsilon})+\mathcal{L}^{1}(\mathcal{B}_{1,i,j,\varepsilon})+\mathcal{L}^{1}(\mathcal{B}_{2,i,j,\varepsilon})\leq C(\Omega)\varepsilon^{\frac{1}{4}},\] and we have \(\big{|}|u|^{2}-1\big{|}\leq C(\Omega)\varepsilon^{\frac{1}{8}}\) along the normal segments \(\{\psi_{i,j}(t,s):0\leq s<\sqrt{r_{1}^{2}-t^{2}}\}\) for \(t\in\mathcal{G}_{i,j,\varepsilon}\).

**Lemma 2.6**.: _Suppose \(u\in W^{1,2}(\Omega;\mathbb{R}^{2})\) satisfies \(E_{\varepsilon}(u)\leq C|\mathrm{log}(\varepsilon)|\). Then_ \[\big{\|}|u|^{2}-1\big{\|}_{L^{2}(\partial\Omega)}\leq C(\Omega)\varepsilon^{\frac{1}{16}}|\mathrm{log}(\varepsilon)|.\]

Proof.: By appealing to a partition of unity and compactness of \(\partial\Omega\) it suffices to verify \[\big{\|}|u|^{2}-1\big{\|}_{L^{2}(\widetilde{\psi}_{i,j}((-r_{1},r_{1})\times\{0\}))}\leq C(\Omega)\varepsilon^{\frac{1}{16}}|\mathrm{log}(\varepsilon)|\] for an arbitrary \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\). Let \(\mathcal{G}_{i,j,\varepsilon}\coloneqq(-r_{1},r_{1})\setminus\mathcal{B}_{i,j,\varepsilon}\) where \(\mathcal{B}_{i,j,\varepsilon}\) is as in Lemma 2.5; here we identify \(t\in(-r_{1},r_{1})\) with the boundary point \(\widetilde{\psi}_{i,j}(t,0)\) and write \(\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})\) and \(\psi_{i,j}(\mathcal{G}_{i,j,\varepsilon})\) for the corresponding subsets of \(\partial\Omega\). It then suffices to estimate each of \[\int_{\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})}(|u|^{2}-1)^{2},\qquad\int_{\psi_{i,j}(\mathcal{G}_{i,j,\varepsilon})}(|u|^{2}-1)^{2}.\] Since \(\big{|}|u|-1\big{|}<\varepsilon^{\frac{1}{8}}\) on \(\psi_{i,j}(\mathcal{G}_{i,j,\varepsilon})\) then \[\left|\int_{\psi_{i,j}(\mathcal{G}_{i,j,\varepsilon})}(|u|^{2}-1)^{2}\right|\leq C(\Omega)\varepsilon^{\frac{1}{4}}. \tag{29}\] Since \((|u|^{2}-1)\in W^{1,p}(\Omega)\) for each \(1\leq p<2\) and since \(W^{1,p}(\Omega)\hookrightarrow W^{1-\frac{1}{p},p}(\partial\Omega)\hookrightarrow L^{\frac{p}{2-p}}(\partial\Omega)\) for all \(1\leq p<2\) then for \(p=\frac{8}{5}\) we have \[\left|\int_{\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})}(|u|^{2}-1)^{2}\right|\leq(\mathcal{H}^{1}(\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})))^{\frac{1}{2}}\Big{\|}|u|^{2}-1\Big{\|}_{L^{4}(\partial\Omega)}^{2}\leq C(\Omega)(\mathcal{H}^{1}(\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})))^{\frac{1}{2}}|\mathrm{log}(\varepsilon)|^{2}.\] By Lemma 2.5 and the fact that \(\psi_{i,j}\) is Lipschitz we now have \[\left|\int_{\psi_{i,j}(\mathcal{B}_{i,j,\varepsilon})}(|u|^{2}-1)^{2}\right|\leq C(\Omega)\varepsilon^{\frac{1}{8}}|\mathrm{log}(\varepsilon)|^{2}. \tag{30}\] Putting together (29) and (30) gives the result.

## 3 Proof of Theorem 0.1

### Proof of Compactness

We proceed using the idea of extension by reflection in order to make use of the compactness result from [18] on a slightly larger open set. This approach is a modification of the ideas found in Propositions 3.1 and 3.2 of [17] (see also Theorem 6.1 of [4]).

#### 3.1.1 Step 1:

We use the \(C^{1,1}\)-coordinates \(\{(\mathcal{U}_{i,j},\psi_{i,j})\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{(\mathcal{U}_{0,0},\psi_{0,0})\}\) introduced in Section 1.2 which extend to the boundary. We also recall that in Section 1.2 we introduced a partition of unity \(\{\widetilde{\rho}_{i,j}\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{\widetilde{\rho}_{0,0}\}\) subordinate to the extended open cover. Next, for \(0\leq i\leq b\) with \(1\leq j\leq N_{i}\) or \(i=0\) with \(j=0\), and \(k\geq 1\), we use the functions \(z_{i,j,\varepsilon_{k}}\) and \(w_{i,j,\varepsilon_{k}}\), defined in (12) and (13).
Since \(w_{i,j,\varepsilon_{k}}^{1}(y_{1},0)=z_{i,j,\varepsilon_{k},\tau}(y_{1},0)=0\) for all \(y_{1}\in(-r_{1},r_{1})\) (by the boundary condition \((u_{\varepsilon_{k}})_{T}=0\)) and since this is a function defined on \(B_{r_{1},+}(0)\), the extension results of Lemma 2.1 apply. After extending \(w_{i,j,\varepsilon_{k}}\) by reflection, perhaps passing to a subsequence, and applying Lemma 2.2, we may assume that \[\left\|\star J(w_{i,j,\varepsilon_{k}})-\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{y_{p}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{y_{\ell}}\right\|_{\mathcal{A}_{\alpha,r_{1}}^{*}}\longrightarrow 0^{+} \tag{31}\] for each \(i=0,1,2,\ldots,b\) with \(j=1,2,\ldots,N_{i}\) and each \(0<\alpha\leq 1\). By using Theorem 3.1 of [18] and perhaps passing to a further subsequence we may assume \[\left\|\star J(u_{\varepsilon_{k}})-\pi\sum_{l=1}^{M_{0,0,1}}d_{l}\delta_{x_{l}}\right\|_{(C_{c}^{0,\alpha}(\mathcal{U}_{0,0}))^{*}}\longrightarrow 0^{+} \tag{32}\] for all \(0<\alpha\leq 1\). We set \[V_{\Omega}\coloneqq\bigcup_{\begin{subarray}{c}i=0,1,\ldots,b\\ j=1,2,\ldots,N_{i}\end{subarray}}\bigcup_{p=1}^{M_{i,j,1}}\{\psi_{i,j}(y_{p})\}\cup\bigcup_{l=1}^{M_{0,0,1}}\{x_{l}\},\] \[V_{\partial\Omega}\coloneqq\bigcup_{\begin{subarray}{c}i=0,1,\ldots,b\\ j=1,2,\ldots,N_{i}\end{subarray}}\bigcup_{\ell=1}^{M_{i,j,2}}\{\psi_{i,j}(y_{\ell})\}\] which denote, respectively, the collections of interior and boundary vortices. Note that there may have been duplicates due to overlapping charts, but \(V_{\Omega}\) and \(V_{\partial\Omega}\) contain each such point only once. Next, we set \[J_{*}\coloneqq\pi\sum_{x\in V_{\Omega}}d_{x}\delta_{x}+\frac{\pi}{2}\sum_{x\in V_{\partial\Omega}}d_{x}\delta_{x}\] where \(d_{x}\) denotes the respective non-zero integer corresponding to \(x\in V_{\Omega}\sqcup V_{\partial\Omega}\). We now show that \(\star J(u_{\varepsilon_{k}})\) converges to \(J_{*}\) in \((C^{0,\alpha}(\Omega))^{*}\) for all \(0<\alpha\leq 1\). Let \(\varphi\in C^{0,\alpha}(\Omega)\) and notice that since \[\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\widetilde{\rho}_{i,j}+\widetilde{\rho}_{0,0}=1\quad\text{ on }\overline{\Omega}\] then \[\big{\langle}\star J(u_{\varepsilon_{k}})-J_{*},\varphi\big{\rangle}=\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\big{\langle}\star J(u_{\varepsilon_{k}})-J_{*},\widetilde{\rho}_{i,j}\varphi\big{\rangle}+\big{\langle}\star J(u_{\varepsilon_{k}})-J_{*},\widetilde{\rho}_{0,0}\varphi\big{\rangle}.\] Since \(\mathcal{U}_{0,0}\subset\subset\Omega\), we have \[\big{\langle}\star J(u_{\varepsilon_{k}})-J_{*},\widetilde{\rho}_{0,0}\varphi\big{\rangle}=\bigg{\langle}\star J(u_{\varepsilon_{k}})-\pi\sum_{l=1}^{M_{0,0,1}}d_{l}\delta_{x_{l}},\widetilde{\rho}_{0,0}\varphi\bigg{\rangle}\leq C(\Omega)\|\varphi\|_{C^{0,\alpha}(\Omega)}\bigg{\|}\star J(u_{\varepsilon_{k}})-\pi\sum_{l=1}^{M_{0,0,1}}d_{l}\delta_{x_{l}}\bigg{\|}_{(C^{0,\alpha}_{c}(\mathcal{U}_{0,0}))^{*}}\] where the last term tends to zero due to (32).
For \(0\leq i\leq b\) and \(1\leq j\leq N_{i}\) we notice that since \(\mathcal{U}_{i,j}\cap(V_{\Omega}\sqcup V_{\partial\Omega})=\bigg{(}\bigcup\limits_{p=1}^{M_{i,j,1}}\{\psi_{i,j}(y_{p})\}\bigg{)}\sqcup\bigg{(}\bigcup\limits_{\ell=1}^{M_{i,j,2}}\{\psi_{i,j}(y_{\ell})\}\bigg{)}\) then \[\big{\langle}\star J(u_{\varepsilon_{k}})-J_{*},\widetilde{\rho}_{i,j}\varphi\big{\rangle}=\bigg{\langle}\star J(u_{\varepsilon_{k}})-\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{\psi_{i,j}(y_{p})}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{\psi_{i,j}(y_{\ell})},\widetilde{\rho}_{i,j}\varphi\bigg{\rangle}.\] Notice that \[\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{\psi_{i,j}(y_{p})}+\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{\psi_{i,j}(y_{\ell})}=(\psi_{i,j})_{\#}\bigg{(}\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{y_{p}}+\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{y_{\ell}}\bigg{)},\qquad\star J(u_{\varepsilon_{k}})=(\psi_{i,j})_{\#}\bigg{(}\star J(z_{i,j,\varepsilon_{k}})\bigg{)}.\] Thus, \[\bigg{\langle}\star J(u_{\varepsilon_{k}})-\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{\psi_{i,j}(y_{p})}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{\psi_{i,j}(y_{\ell})},\widetilde{\rho}_{i,j}\varphi\bigg{\rangle}=\bigg{\langle}\star J(z_{i,j,\varepsilon_{k}})-\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{y_{p}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{y_{\ell}},(\widetilde{\rho}_{i,j}\varphi)\circ\psi_{i,j}\bigg{\rangle}.\] Next we show, for each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\), that \[\|\star J(z_{i,j,\varepsilon_{k}})-\star J(w_{i,j,\varepsilon_{k}})\|_{\mathcal{A}_{\alpha,r_{1}}^{*}}\longrightarrow 0^{+} \tag{33}\] as \(k\to\infty\) for each \(0<\alpha\leq 1\). Observe that we can express \(u_{\varepsilon_{k}}\) in \(\mathcal{U}_{i,j}\) as \[u_{\varepsilon_{k}}(x)=u_{\varepsilon_{k},\tau}(x)\tau_{i}\big{(}(\psi_{i,j}^{-1})^{1}(x)\big{)}+u_{\varepsilon_{k},\nu}(x)\nu_{i}\big{(}(\psi_{i,j}^{-1})^{1}(x)\big{)}\] where \(u_{\varepsilon_{k},\tau}(x)\) and \(u_{\varepsilon_{k},\nu}(x)\) are the projections of \(u_{\varepsilon_{k}}(x)\) onto the basis \(\{\tau_{i}\circ(\psi_{i,j}^{-1})^{1},\nu_{i}\circ(\psi_{i,j}^{-1})^{1}\}\) at the point \(x\). Next, writing \(x=\psi_{i,j}(y)\), we have \[z_{i,j,\varepsilon_{k}}=u_{\varepsilon_{k}}(\psi_{i,j}(y))=u_{\varepsilon_{k},\tau}(\psi_{i,j}(y))\tau_{i}(y_{1})+u_{\varepsilon_{k},\nu}(\psi_{i,j}(y))\nu_{i}(y_{1}). \tag{34}\] Taking partial derivatives of (34) and using (6) gives \[Jz_{i,j,\varepsilon_{k}}(y)=Jw_{i,j,\varepsilon_{k}}(y)-\kappa_{i}(y_{1})z_{i,j,\varepsilon_{k},\tau}(y)\frac{\partial z_{i,j,\varepsilon_{k},\tau}}{\partial y_{2}}(y)-\kappa_{i}(y_{1})z_{i,j,\varepsilon_{k},\nu}(y)\frac{\partial z_{i,j,\varepsilon_{k},\nu}}{\partial y_{2}}(y),\] which can be rewritten as \[Jz_{i,j,\varepsilon_{k}}(y)=Jw_{i,j,\varepsilon_{k}}(y)-\kappa_{i}(y_{1})\frac{\partial}{\partial y_{2}}\bigg{(}\frac{(z_{i,j,\varepsilon_{k},\tau})^{2}+(z_{i,j,\varepsilon_{k},\nu})^{2}}{2}\bigg{)}=Jw_{i,j,\varepsilon_{k}}(y)-\kappa_{i}(y_{1})\frac{\partial}{\partial y_{2}}\bigg{(}\frac{|z_{i,j,\varepsilon_{k}}|^{2}}{2}\bigg{)}.
\tag{35}\] First note that for \(\phi\in\mathcal{A}_{1,r_{1}}\) \[\int_{B_{r_{1},+}(0)}\!\!\phi(y)\big{[}Jz_{i,j,\varepsilon_{k}}(y)-Jw_{i,j,\varepsilon_{k}}(y)\big{]}=-\int_{B_{r_{1},+}(0)}\!\!\phi(y)\kappa_{i}(y_{1})\frac{\partial}{\partial y_{2}}\bigg{(}\frac{|z_{i,j,\varepsilon_{k}}|^{2}}{2}\bigg{)}=-\int_{B_{r_{1},+}(0)}\!\!\phi(y)\kappa_{i}(y_{1})\frac{\partial}{\partial y_{2}}\bigg{(}\frac{|z_{i,j,\varepsilon_{k}}|^{2}-1}{2}\bigg{)}=(A).\] Integrating by parts, and using that \(\phi\) vanishes in a neighbourhood of \(\partial B_{r_{1},+}(0)\cap\{y_{2}>0\}\), gives \[(A)=\int_{B_{r_{1},+}(0)}\!\!\kappa_{i}(y_{1})\frac{\partial\phi}{\partial y_{2}}(y)\cdot\frac{|z_{i,j,\varepsilon_{k}}|^{2}-1}{2}+\int_{\partial B_{r_{1},+}(0)\cap\{y_{2}=0\}}\!\!\phi(y)\kappa_{i}(y_{1})\cdot\frac{|z_{i,j,\varepsilon_{k}}|^{2}-1}{2}=(AA)+(AB).\] Next observe that, since \(|\partial\phi/\partial y_{2}|\leq\|\phi\|_{\mathcal{A}_{1,r_{1}}}\) almost everywhere, we may estimate \((AA)\), using the Cauchy-Schwarz inequality, as \[|(AA)|\leq C(\Omega)\|\phi\|_{\mathcal{A}_{1,r_{1}}}\left(\int_{B_{r_{1},+}(0)}\frac{(|z_{i,j,\varepsilon_{k}}|^{2}-1)^{2}}{4}\right)^{\frac{1}{2}}=C(\Omega)\|\phi\|_{\mathcal{A}_{1,r_{1}}}\left(\int_{\psi_{i,j}(B_{r_{1},+}(0))}\frac{(|u_{\varepsilon_{k}}|^{2}-1)^{2}}{4}|J\psi_{i,j}^{-1}|\right)^{\frac{1}{2}}\leq C(\Omega)\|\phi\|_{\mathcal{A}_{1,r_{1}}}\varepsilon_{k}E_{\varepsilon_{k}}(u_{\varepsilon_{k}},\Omega)^{\frac{1}{2}}.\] To estimate \((AB)\) first note \[|(AB)|\leq C(\Omega)\|\phi\|_{\mathcal{A}_{1,r_{1}}}\int_{\partial B_{r_{1},+}(0)\cap\{y_{2}=0\}}\frac{\left|\left|z_{i,j,\varepsilon_{k}}\right|^{2}-1\right|}{2}\] and then we apply Lemma 2.6 after a coordinate change. The above then shows that \[\|Jz_{i,j,\varepsilon_{k}}-Jw_{i,j,\varepsilon_{k}}\|_{\mathcal{A}^{*}_{1,r_{1}}}\leq C(\Omega)\varepsilon_{k}^{\frac{1}{3}}.\] Next observe that for \(\phi\in C(B_{r_{1},+}(0))\) we have \[|(A)|\leq C(\Omega)\|\phi\|_{L^{\infty}(B_{r_{1},+}(0))}\big{[}\varepsilon_{k}E_{\varepsilon_{k}}(u_{\varepsilon_{k}},\Omega)+E_{\varepsilon_{k}}(u_{\varepsilon_{k}},\Omega)\big{]}\] and hence \[\|Jz_{i,j,\varepsilon_{k}}-Jw_{i,j,\varepsilon_{k}}\|_{(C(B_{r_{1},+}(0)))^{*}}\leq C(\Omega)|\text{log}(\varepsilon_{k})|.\] By Lemma 2.4 we have for each \(0<\alpha<1\) that \[\|Jz_{i,j,\varepsilon_{k}}-Jw_{i,j,\varepsilon_{k}}\|_{\mathcal{A}^{*}_{\alpha,r_{1}}}\leq C(\Omega)\varepsilon_{k}^{\frac{\alpha}{8(1+\alpha)}}|\text{log}(\varepsilon_{k})|^{\frac{1}{1+\alpha}}.\] Since \((\widetilde{\rho}_{i,j}\varphi)\circ\psi_{i,j}\in\mathcal{A}_{\alpha,r_{1}}\) then from the above estimates we conclude (33) by letting \(k\to\infty\). Thus, combined with the above it suffices to estimate \[\bigg{\langle}\star J(w_{i,j,\varepsilon_{k}})-\pi\sum_{p=1}^{M_{i,j,1}}d_{p}\delta_{y_{p}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}\delta_{y_{\ell}},(\widetilde{\rho}_{i,j}\varphi)\circ\psi_{i,j}\bigg{\rangle}.\] But this tends to zero due to (31) since \((\widetilde{\rho}_{i,j}\varphi)\circ\psi_{i,j}\in\mathcal{A}_{\alpha,r_{1}}\).
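We also record, for completeness, a justification of the pushforward identity \(\star J(u_{\varepsilon_{k}})=(\psi_{i,j})_{\#}(\star J(z_{i,j,\varepsilon_{k}}))\) used above, assuming the standard convention \(\langle f_{\#}\mu,\varphi\rangle\coloneqq\langle\mu,\varphi\circ f\rangle\) for pushforwards. For \(\varphi\) supported in \(\mathcal{U}_{i,j}\), and for smooth \(u_{\varepsilon_{k}}\) (the general case follows by approximation), the chain rule gives \(Jz_{i,j,\varepsilon_{k}}=(Ju_{\varepsilon_{k}}\circ\psi_{i,j})J\psi_{i,j}\) with \(J\psi_{i,j}=1-y_{2}\kappa_{i}(y_{1})>0\), so that \[\big{\langle}(\psi_{i,j})_{\#}(\star J(z_{i,j,\varepsilon_{k}})),\varphi\big{\rangle}=\int_{B_{r_{1},+}(0)}\!\!(\varphi\circ\psi_{i,j})Jz_{i,j,\varepsilon_{k}}=\int_{\mathcal{U}_{i,j}}\!\!\varphi Ju_{\varepsilon_{k}}=\big{\langle}\star J(u_{\varepsilon_{k}}),\varphi\big{\rangle}\] by the change of variables \(x=\psi_{i,j}(y)\).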
#### 3.1.2 Step \(2\):

Now we show that if \(\{u_{\varepsilon_{k}}\}_{k=1}^{\infty}\) denotes a convergent subsequence from Step \(1\) and we have \[\left\|\star J(u_{\varepsilon_{k}})-\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}-\frac{\pi}{2}\sum_{j=0}^{b}\sum_{\ell=1}^{M_{2,j}}d_{j\ell}\delta_{c_{j\ell}}\right\|_{(C^{0,\alpha}(\Omega))^{*}}\longrightarrow 0^{+}, \tag{36}\] for each \(0<\alpha\leq 1\), where \(a_{i}\in\Omega\) for \(1\leq i\leq M_{1}\), \(c_{j\ell}\in\partial\Omega\) for \(0\leq j\leq b\), \(1\leq\ell\leq M_{2,j}\), and \(d_{i},d_{j\ell}\in\mathbb{Z}\setminus\{0\}\) then \[\sum_{i=1}^{M_{1}}d_{i}+\frac{1}{2}\sum_{j=0}^{b}\sum_{\ell=1}^{M_{2,j}}d_{j\ell}=\chi_{Euler}(\Omega).\]

**Substep 1** Before we are able to demonstrate our desired result we first show that \[\left|\int_{\partial\Omega}(ju_{\varepsilon_{k}}\cdot\tau)\varphi-\sum_{i=0}^{b}\int_{(\partial\Omega)_{i}}\widetilde{\kappa}_{i}\varphi\right|\longrightarrow 0^{+}\] as \(k\to\infty\) where \(\varphi\in C^{0,\alpha}(\Omega)\) for \(0<\alpha\leq 1\). We use the atlas \(\{(\widetilde{\mathcal{U}}_{i,j},\widetilde{\psi}_{i,j})\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\) from Section 1.2 to form a coordinate system for \(\partial\Omega\). We also use the partition of unity \(\{\widetilde{\rho}_{i,j}\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\) from Section 1.2. Specifically, we notice that \(\partial\Omega\cap\widetilde{\mathcal{U}}_{i,j}\) is the image of the map \[\gamma_{i,j}\colon(-r_{1},r_{1})\ni s\mapsto\widetilde{\psi}_{i,j}(s,0)=\gamma_{i}(y_{i,j}+s)\] where \(\gamma_{i}\), and hence \(\gamma_{i,j}\), is an arclength parametrized curve. We can then write \[\int_{\partial\Omega}(ju_{\varepsilon_{k}}\cdot\tau)\varphi=\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\Bigl{(}u_{\varepsilon_{k}}\times\frac{\partial u_{\varepsilon_{k}}}{\partial\tau}\Bigr{)}\widetilde{\rho}_{i,j}\varphi.\] Observe that since \((u_{\varepsilon_{k}})_{T}=0\) then \(u_{\varepsilon_{k}}=u_{\varepsilon_{k},\nu}\nu_{i}\circ(\widetilde{\psi}_{i,j}^{-1})^{1}\) along \((\partial\Omega)_{i}\) for each \(i=0,1,\ldots,b\). By (6), in local coordinates, we have \[\frac{\partial u_{\varepsilon_{k}}}{\partial\tau}=\frac{\partial z_{i,j,\varepsilon_{k},\nu}}{\partial y_{1}}(y_{1},0)\nu_{i}(y_{1})-z_{i,j,\varepsilon_{k},\nu}(y_{1},0)\kappa_{i}(y_{1})\tau_{i}(y_{1}).\] Thus, \[u_{\varepsilon_{k}}\times\frac{\partial u_{\varepsilon_{k}}}{\partial\tau}=|z_{i,j,\varepsilon_{k}}|^{2}(y_{1},0)\kappa_{i}(y_{1})\] using that \(z_{i,j,\varepsilon_{k},\tau}=0\) in the last equality. From this we conclude that \[\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\Bigl{(}u_{\varepsilon_{k}}\times\frac{\partial u_{\varepsilon_{k}}}{\partial\tau}\Bigr{)}\widetilde{\rho}_{i,j}\varphi=\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}|u_{\varepsilon_{k}}|^{2}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi=\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\bigl{(}|u_{\varepsilon_{k}}|^{2}-1\bigr{)}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi+\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi\] where \(\widetilde{\kappa}_{i}=\kappa_{i}\circ(\widetilde{\psi}_{i,j}^{-1})^{1}\).
Summing in \(j\) and using (10) we now have \[\int_{(\partial\Omega)_{i}}(ju_{\varepsilon_{k}}\cdot\tau)\varphi=\sum_{j=1}^{N_{i}}\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\bigl{(}|u_{\varepsilon_{k}}|^{2}-1\bigr{)}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi+\int_{(\partial\Omega)_{i}}\widetilde{\kappa}_{i}\varphi.\] Now summing in \(i\), with consideration to orientation, and using that \((\partial\Omega)_{i}\) for \(i=0,1,\ldots,b\) are disjoint we have \[\int_{\partial\Omega}(ju_{\varepsilon_{k}}\cdot\tau)\varphi=\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\bigl{(}|u_{\varepsilon_{k}}|^{2}-1\bigr{)}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi+\int_{(\partial\Omega)_{0}}\widetilde{\kappa}_{0}\varphi+\sum_{i=1}^{b}\int_{(\partial\Omega)_{i}}\widetilde{\kappa}_{i}\varphi.\] To prove the desired result we note that \(L^{2}(\partial\Omega)\subseteq(C^{0,\alpha}(\partial\Omega))^{*}\) for each \(0<\alpha\leq 1\) and apply Lemma 2.6 to \[\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\int_{(\partial\Omega)_{i}\cap\widetilde{\mathcal{U}}_{i,j}}\bigl{(}|u_{\varepsilon_{k}}|^{2}-1\bigr{)}\widetilde{\kappa}_{i}\widetilde{\rho}_{i,j}\varphi.\]

**Substep 2** Now we show the desired conclusion. Observe that integrating by parts gives \[0=-\frac{1}{2}\int_{\Omega}ju_{\varepsilon_{k}}\cdot\nabla^{\perp}(1)=-\frac{1}{2}\int_{\partial\Omega}(ju_{\varepsilon_{k}}\cdot\tau)+\int_{\Omega}Ju_{\varepsilon_{k}}.\] Since \(\phi\equiv 1\) is a member of \(C^{0,\alpha}(\Omega)\) for each \(0<\alpha\leq 1\) then the above work shows that as \(k\to\infty\) we have, by the Gauss-Bonnet theorem, see [10], that \[\pi\sum_{i=1}^{M_{1}}d_{i}+\frac{\pi}{2}\sum_{j=0}^{b}\sum_{\ell=1}^{M_{2,j}}d_{j\ell}=\frac{1}{2}\int_{\partial\Omega}\widetilde{\kappa}=\pi\chi_{Euler}(\overline{\Omega})=\pi\chi_{Euler}(\Omega)\] using that the Gaussian curvature of \(\Omega\) is zero. (Here, we apply the Gauss-Bonnet theorem to the regular region \(R\subseteq\mathbb{R}^{3}\) given by \(\iota(\Omega)\) where \(\iota\colon\Omega\to R\) is given by \(\iota(x)=(x,0)\).)

#### 3.1.3 Step \(3\):

Finally, we show that for each \(j=0,1,\ldots,b\) we have \[\frac{1}{2}\sum_{\ell=1}^{M_{2,j}}d_{j\ell}\in\mathbb{Z}\] where \(d_{j\ell}\) is the degree associated to \(c_{j\ell}\in(\partial\Omega)_{j}\). To do this, we use Lipschitz test functions and the convergence in Hölder dual spaces to isolate a level set of the distance to a boundary component of \(\Omega\) for which the degree is defined and relate it to one half of the sum of boundary degrees along this component. Specifically, we proceed as follows:

1. Following ideas from [9] we first make use of a carefully chosen test function which allows us to use the dual convergence of the Jacobian in order to argue that on most level sets, \((\partial\Omega)_{j,t}\), of the distance to a boundary component we have that \[\frac{1}{2}\int_{(\partial\Omega)_{j,t}}ju\cdot t_{j}=\frac{\pi}{2}\sum_{\ell=1}^{M_{2,j}}d_{j\ell}+\pi+o(1).\]
2. By making use of a separate test function we can now use the work of [18] to show that for most level sets \((\partial\Omega)_{j,t}\) we have \[\frac{1}{2}\int_{(\partial\Omega)_{j,t}}ju\cdot t_{j}=\pi\text{deg}(u,(\partial\Omega)_{j,t})+o(1).\] Combined with the previous statement we obtain the desired statement for \(\varepsilon>0\) sufficiently small.

We now prove the desired statement.
We first show that \[\int_{0}^{r_{1}}\!\!\left|\frac{1}{2}\int_{(\partial\Omega)_{j,t}}\!\!ju\cdot t_{j}-\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}-\pi\right|\!\mathrm{d}t\leq\left\|\star J(u)-\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}-\frac{\pi}{2}\sum_{j=0}^{b}\sum_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}\right\|_{(C^{0,\alpha}(\Omega))^{*}}+\frac{1}{2}\!\left|\!\int_{(\partial\Omega)_{j}}\!\!ju\cdot\tau_{j}+2\pi\right|\eqqcolon\sigma(\varepsilon) \tag{37}\] where \(t_{j}\) is the tangent vector to \((\partial\Omega)_{j,t}\) so that \(\{\nu_{j},t_{j}\}\) is positively oriented. As a result, we may conclude that for a subset of the level sets of the distance to a boundary component of size \(r_{1}-\sqrt{\sigma(\varepsilon)}\) we have \[\left|\frac{1}{2}\int_{(\partial\Omega)_{j,t}}\!\!ju\cdot t_{j}-\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}-\pi\right|\leq\sqrt{\sigma(\varepsilon)} \tag{38}\] and, hence, that "most" slices have \(\frac{1}{2}\int_{(\partial\Omega)_{j,t}}\!\!ju\cdot t_{j}\) close to \(\frac{\pi}{2}\sum\limits_{k=1}^{M_{2,j}}d_{jk}+\pi\). To do this we use a technique similar to the one found in Lemma 1 of [9]. We let \[\varphi(x)\coloneqq\int_{\mathrm{dist}(x,(\partial\Omega)_{j})}^{r_{1}}\!\!g(s)\mathrm{d}s,\hskip 14.226378ptg(s)\coloneqq\chi_{[0,r_{1}]}(s)\mathrm{sgn}\bigg{(}\frac{1}{2}\int_{(\partial\Omega)_{j,s}}\!\!ju\cdot\!t_{j}\!-\!\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\!-\!\pi\bigg{)}\] and notice that \(\mathrm{supp}(\varphi)\subseteq\overline{\Omega}_{j,r_{1}}\). By Lemma 2.5 and Lemma 2.6 we have, using the partition of unity \(\{\widetilde{\rho}_{j,\ell}\}_{\ell=1}^{N_{j}}\) subordinate to \(\{\widetilde{\mathcal{U}}_{j,\ell}\}_{\ell=1}^{N_{j}}\), that \[\int_{0}^{r_{1}}\!\left|\frac{1}{2}\int_{(\partial\Omega)_{j,t}}\!\!ju\cdot\!t_{j}\!-\!\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\!-\!\pi\right|\!\mathrm{d}t=\int_{0}^{r_{1}}\!\!g(t)\bigg{[}\frac{1}{2}\int_{(\partial\Omega)_{j,t}}\!\!ju\cdot\!t_{j}\!-\!\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\!-\!\pi\bigg{]}\!\mathrm{d}t.\] Next parametrizing \((\partial\Omega)_{j,t}\cap\mathcal{U}_{j,\ell}\) using the mapping \((-a_{j,\ell}^{t},a_{j,\ell}^{t})\ni y_{1}\mapsto\psi_{j,\ell}(y_{1},t)\), we have \[=\sum_{\ell=1}^{N_{j}}\int_{0}^{r_{1}}\!\frac{g(t)}{2}\int_{-a_{j,\ell}^{t}}^{a_{j,\ell}^{t}}(ju)(\psi_{j,\ell}(y_{1},t))\cdot t_{j}(\psi_{j,\ell}(y_{1},t))\widetilde{\rho}_{j,\ell}(\psi_{j,\ell}(y_{1},t))(1-t\kappa_{j}(y_{1}))\mathrm{d}y_{1}\mathrm{d}t-\bigg{(}\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\bigg{)}\int_{0}^{r_{1}}\!g(t)\mathrm{d}t-\pi\int_{0}^{r_{1}}\!g(t)\mathrm{d}t.\] Notice that since \(\varphi(c_{jk})=\int_{0}^{r_{1}}\!g(t)\mathrm{d}t\) for each \(k=1,2,\ldots,M_{2,j}\) \[-\bigg{(}\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\bigg{)}\int_{0}^{r_{1}}\!\!g(t)\mathrm{d}t-\pi\int_{0}^{r_{1}}\!\!g(t)\mathrm{d}t=-\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\varphi(c_{jk})-\pi\varphi\big{(}(\partial\Omega)_{j}\big{)}\] where \(\varphi\big{(}(\partial\Omega)_{j}\big{)}\) denotes the common value of \(\varphi\) at any point of \((\partial\Omega)_{j}\).
If \(x\in\mathcal{U}_{j,\ell}\) then, since \(\mathrm{dist}(x,(\partial\Omega)_{j})=(\psi_{j,\ell}^{-1})^{2}(x)\), we have \[\nabla\varphi(x)=-g\big{(}(\psi_{j,\ell}^{-1})^{2}(x)\big{)}\nabla(\psi_{j,\ell}^{-1})^{2}(x)=-g\big{(}(\psi_{j,\ell}^{-1})^{2}(x)\big{)}\nu_{j}\big{(}(\psi_{j,\ell}^{-1})^{1}(x)\big{)}.\] Hence, we have by the above and the Change of Variables Theorem that \[\sum_{\ell=1}^{N_{j}}\int_{0}^{r_{1}}\frac{g(t)}{2}\int_{-a_{j,\ell}^{t}}^{a_{j,\ell}^{t}}t_{j}(y_{1})\cdot ju(\psi_{j,\ell}(y_{1},t))\widetilde{\rho}_{j,\ell}(\psi_{j,\ell}(y_{1},t))(1-t\kappa_{j}(y_{1}))\mathrm{d}y_{1}\mathrm{d}t=\sum_{\ell=1}^{N_{j}}\frac{-1}{2}\int_{0}^{r_{1}}\int_{-a_{j,\ell}^{t}}^{a_{j,\ell}^{t}}(\nabla^{\perp}\varphi)(\psi_{j,\ell}(y_{1},t))\cdot ju(\psi_{j,\ell}(y_{1},t))\widetilde{\rho}_{j,\ell}(\psi_{j,\ell}(y_{1},t))(1-t\kappa_{j}(y_{1}))\mathrm{d}y_{1}\mathrm{d}t=\sum_{\ell=1}^{N_{j}}\frac{-1}{2}\int_{\Omega_{j,r_{1}}}\nabla^{\perp}\varphi(x)\cdot ju(x)\widetilde{\rho}_{j,\ell}(x)\mathrm{d}x=\frac{-1}{2}\int_{\Omega_{j,r_{1}}}\nabla^{\perp}\varphi(x)\cdot ju(x)\mathrm{d}x.\] Integrating by parts now gives that \[\frac{-1}{2}\int_{\Omega_{j,r_{1}}}\nabla^{\perp}\varphi(x)\cdot ju(x)\mathrm{d}x=\frac{-1}{2}\int_{(\partial\Omega)_{j}}\varphi(x)[ju\cdot\tau_{j}]+\int_{\Omega_{j,r_{1}}}\varphi(x)\,Ju(x)=\frac{-\varphi\big{(}(\partial\Omega)_{j}\big{)}}{2}\int_{(\partial\Omega)_{j}}ju\cdot\tau_{j}+\int_{\Omega_{j,r_{1}}}\varphi(x)\,Ju(x).\] Rearranging our previous work we now have that \[\int_{0}^{r_{1}}\!\!\left|\frac{1}{2}\int_{(\partial\Omega)_{j,t}}ju\cdot t_{j}-\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{j,k}-\pi\right|\!\!\mathrm{d}t=\left[\int_{\Omega_{j,r_{1}}}\!\!\varphi(x)\,Ju(x)-\frac{\pi}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\varphi(c_{jk})\right]-\frac{\varphi\big{(}(\partial\Omega)_{j}\big{)}}{2}\bigg{[}\int_{(\partial\Omega)_{j}}ju\cdot\tau_{j}+2\pi\bigg{]}.\] By the conclusion of Substep 1 of Step 2 as well as (36) we have (37). Next, we show that \(\frac{1}{2}\int_{(\partial\Omega)_{j,t}}ju\cdot t_{j}\) is close to an integer multiple of \(\pi\) on most slices \((\partial\Omega)_{j,t}\). To do this we apply the work of Jerrard and Soner from [18]. We consider the test function \[\varphi_{0}(x)\coloneqq\max\Bigl{\{}\frac{r_{1}}{2}-\mathrm{dist}(x,(\partial\Omega)_{j,\frac{r_{1}}{2}}),0\Bigr{\}}.\] From the proof of Theorem 2.1 and equation 2.15 of [18] there is a set \(A\subseteq\big{[}0,\frac{r_{1}}{2}\big{]}\), taking \(\lambda=\frac{3}{2}\), such that \(\mathcal{L}^{1}\big{(}\big{[}0,\frac{r_{1}}{2}\big{]}\setminus A\big{)}\leq C\varepsilon^{\frac{1}{6}}\) and \[\int_{A}\!\left|\frac{1}{2}\int_{(\partial\Omega)_{j,\frac{r_{1}}{2}\pm t}}ju\cdot t_{j}-\pi\mathrm{deg}(u,(\partial\Omega)_{j,\frac{r_{1}}{2}\pm t})\right|\!\mathrm{d}t\leq C\varepsilon|\!\log(\varepsilon)|.\] We conclude that except on a set of measure \(C\varepsilon^{\frac{1}{6}}\) we have that \(\frac{1}{2}\int_{(\partial\Omega)_{j,t}}ju\cdot t_{j}\) is close to an integer multiple of \(\pi\). Combined with (38) we conclude, for \(\varepsilon>0\) sufficiently small, that \[\frac{1}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\in\mathbb{Z}.\]

### Proof of Lower Bound

Since \((C^{0,\alpha}(\Omega))^{*}\subseteq(C^{0,1}(\Omega))^{*}\) it suffices to consider the case when the Jacobians converge in \((C^{0,1}(\Omega))^{*}\).
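The inclusion used here is elementary: for \(\varphi\in C^{0,1}(\Omega)\) we have \[[\varphi]_{C^{0,\alpha}(\Omega)}=\sup_{x\neq y}\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{\alpha}}\leq\operatorname{diam}(\Omega)^{1-\alpha}[\varphi]_{C^{0,1}(\Omega)},\] so \(\|\varphi\|_{C^{0,\alpha}(\Omega)}\leq C(\Omega,\alpha)\|\varphi\|_{C^{0,1}(\Omega)}\) and hence \(\|\mu\|_{(C^{0,1}(\Omega))^{*}}\leq C(\Omega,\alpha)\|\mu\|_{(C^{0,\alpha}(\Omega))^{*}}\) for any \(\mu\). In particular, convergence of the Jacobians in \((C^{0,\alpha}(\Omega))^{*}\) implies convergence in \((C^{0,1}(\Omega))^{*}\).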
We may also suppose that \[E_{\varepsilon}(u_{\varepsilon},\Omega)\leq\bigg{[}\pi\sum_{i=1}^{M_{1}}|d_{i}|+\frac{\pi}{2}\sum_{j=1}^{M_{2}}|d_{j}|\bigg{]}|\text{log}(\varepsilon)|+1 \tag{39}\] for all \(\varepsilon\in(0,1]\) since if (39) fails for a collection, \(\mathcal{C}\), of \(\varepsilon\in(0,1]\) then \[\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega)}{|\text{log}(\varepsilon)|}=\lim_{\varepsilon_{0}\to 0^{+}}\inf_{0<\varepsilon\leq\varepsilon_{0}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega)}{|\text{log}(\varepsilon)|}\geq\lim_{\varepsilon_{0}\to 0^{+}}\min\biggl{\{}\inf_{\varepsilon\in(0,\varepsilon_{0}]\cap((0,1]\setminus\mathcal{C})}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega)}{|\text{log}(\varepsilon)|},\pi\sum_{i=1}^{M_{1}}|d_{i}|+\frac{\pi}{2}\sum_{j=1}^{M_{2}}|d_{j}|\biggr{\}}=\min\biggl{\{}\liminf_{\varepsilon\in(0,1]\setminus\mathcal{C}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega)}{|\text{log}(\varepsilon)|},\pi\sum_{i=1}^{M_{1}}|d_{i}|+\frac{\pi}{2}\sum_{j=1}^{M_{2}}|d_{j}|\biggr{\}}\] and hence it would suffice to prove the desired result among the collection \((0,1]\setminus\mathcal{C}\) where (39) holds.

#### 3.2.1 Step \(1\):

First we show that we can extend convergence to be in \((C^{0,1}_{c}(\widetilde{\Omega}_{\frac{r_{1}}{2}}))^{*}\) where \(\widetilde{\Omega}_{\frac{r_{1}}{2}}\) is a slightly larger open set than \(\Omega\) in \(\mathbb{R}^{2}\). The strategy in extending \(u_{\varepsilon}\) will be by reflection as in [16]. More specifically, if we let \[\widetilde{\Omega}_{\frac{r_{1}}{2}}\coloneqq\Omega\cup\bigcup_{i=0}^{b}\widetilde{\Omega}_{i,\frac{r_{1}}{2}},\] where \(\widetilde{\Omega}_{i,r}\) is defined in (9), then we will show \[\left\|\star J(\widetilde{u}_{\varepsilon})-\widetilde{J}\right\|_{(C^{0,1}_{c}(\widetilde{\Omega}_{\frac{r_{1}}{2}}))^{*}}\longrightarrow 0^{+},\] where \(\widetilde{u}_{\varepsilon}\) is an extension of \(u_{\varepsilon}\) to \(\widetilde{\Omega}_{\frac{r_{1}}{2}}\) and \(\widetilde{J}\) is an extension of \(J\) to \(\widetilde{\Omega}_{\frac{r_{1}}{2}}\) as defined below. We use the atlas, \(\{(\widetilde{\mathcal{U}}_{i,j},\widetilde{\psi}_{i,j})\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{(\widetilde{\mathcal{U}}_{0,0},\widetilde{\psi}_{0,0})\}\), constructed in Section 1.2, which covers \(\widetilde{\Omega}_{\frac{r_{1}}{2}}\), as well as the smooth partition of unity \(\{\widetilde{\rho}_{i,j}\}_{i=0,1,\ldots,b}^{j=1,2,\ldots,N_{i}}\cup\{\widetilde{\rho}_{0,0}\}\) subordinate to this cover. We index the chart functions so that \(\operatorname{supp}(\widetilde{\rho}_{i,j})\subseteq\widetilde{\mathcal{U}}_{i,j}\). For each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\) we have \[\operatorname{supp}(J)\cap\widetilde{\mathcal{U}}_{i,j}=\bigg{(}\bigcup_{k=1}^{M_{i,j,1}}\{x_{k}^{i,j}\}\bigg{)}\cup\bigg{(}\bigcup_{\ell=1}^{M_{i,j,2}}\{x_{\ell}^{i,j}\}\bigg{)}\] where \(x_{k}^{i,j}\in\Omega\) for \(k=1,2,\ldots,M_{i,j,1}\) and \(x_{\ell}^{i,j}\in(\partial\Omega)_{i}\) for \(\ell=1,2,\ldots,M_{i,j,2}\). We also have \[\operatorname{supp}(J)\cap\widetilde{\mathcal{U}}_{0,0}=\bigcup_{k=1}^{M_{0,0,1}}\{x_{k}^{0,0}\}.\] For each \(x_{k}^{i,j}\), where \(k=1,2,\ldots,M_{i,j,1}\), we have \(x_{k}^{i,j}=\widetilde{\psi}_{i,j}(y_{k}^{i,j})\) for some \(y_{k}^{i,j}\coloneqq(y_{1,k}^{i,j},y_{2,k}^{i,j})\in B_{\frac{r_{1}}{2},+}(0)\).
In addition, for each \(x_{\ell}^{i,j}\), where \(\ell=1,2,\ldots,M_{i,j,2}\), we have \(x_{\ell}^{i,j}=\widetilde{\psi}_{i,j}(y_{\ell}^{i,j})\) where \(y_{\ell}^{i,j}\coloneqq(y_{1,\ell}^{i,j},0)\). For each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\) we let \(\bar{x}_{k}\in\widetilde{\Omega}_{\frac{r_{1}}{2}}\) denote \[\bar{x}_{k}\coloneqq\widetilde{\psi}_{i,j}(\bar{y}_{k}^{i,j})\] where \(\bar{y}_{k}^{i,j}\coloneqq(y_{1,k}^{i,j},-y_{2,k}^{i,j})\). Next we set \[V_{\Omega} \coloneqq\bigg{(}\bigcup_{i=0}^{b}\bigcup_{j=1}^{N_{i}}\bigcup_{ k=1}^{M_{i,j,1}}\{\widetilde{\psi}_{i,j}(y_{k}^{i,j})\}\bigg{)}\cup\bigg{(} \bigcup_{k=1}^{M_{0,0,1}}\{x_{k}\}\bigg{)}\] \[V_{\partial\Omega} \coloneqq\bigcup_{i=0}^{b}\bigcup_{j=1}^{N_{i}}\bigcup_{\ell=1}^{ M_{i,j,2}}\{\widetilde{\psi}_{i,j}(y_{1,\ell}^{i,j},0)\}\] \[V_{\widetilde{\Omega}_{\frac{r_{1}}{2}}\setminus\overline{ \Omega}} \coloneqq\bigcup_{i=0}^{b}\bigcup_{j=1}^{N_{i}}\bigcup_{k=1}^{M_{i,j,1}}\{ \widetilde{\psi}_{i,j}(\bar{y}_{k}^{i,j})\}.\] We then define \(\widetilde{J}\) to be the measure given by \[\widetilde{J}\coloneqq\pi\sum_{x\in V_{\Omega}\sqcup V_{\partial\Omega}}d_{x} \delta_{x}+\pi\sum_{\tilde{x}\in V_{\widetilde{\Omega}_{\frac{r}{2}}},\overline {\Omega}}d_{\tilde{x}}\delta_{\tilde{x}}\] where \(d_{x}\) denotes the corresponding non-zero integer associated to \(x\). Next we extend \(u_{\varepsilon}\) to a function on \(\widetilde{\Omega}_{\frac{r_{1}}{2}}\). For each \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\) we set \[\widetilde{z}_{i,j,\varepsilon} \coloneqq u_{\varepsilon}\circ\widetilde{\psi}_{i,j}\] \[\widetilde{w}_{i,j,\varepsilon} \coloneqq\widetilde{z}_{i,j,\varepsilon,\tau}e_{1}+\widetilde{z} _{i,j,\varepsilon,\nu}e_{2}\] \[\widetilde{Z}_{i,j,\varepsilon}(y_{1},y_{2}) \coloneqq\operatorname{sgn}(y_{2})\widetilde{z}_{i,j,\varepsilon, \tau}(y_{1},|y_{2}|)\widetilde{\tau}_{i,j}+\widetilde{z}_{i,j,\varepsilon,\nu} (y_{1},|y_{2}|)\widetilde{\nu}_{i,j}\] where \(\widetilde{z}_{i,j,\varepsilon,\tau}\) and \(\widetilde{z}_{i,j,\varepsilon,\nu}\) denote, respectively, the components of \(\widetilde{z}_{i,j,\varepsilon}\) in the \(\tau\) and \(\nu\) directions and where \(\widetilde{\tau}_{i,j}\) and \(\widetilde{\nu}_{i,j}\) are \[\widetilde{\tau}_{i,j}\coloneqq\tau_{i,j}((\widetilde{\psi}_{i,j}^{-1})^{1}(x )),\ \ \ \ \widetilde{\nu}_{i,j}\coloneqq\nu_{i,j}((\widetilde{\psi}_{i,j}^{-1})^{1}(x )).\] We define the extension \(\widetilde{u}\) by \[\widetilde{u}(x)=\begin{cases}u(x)&\text{if }x\in\overline{\Omega}\\ \widetilde{Z}_{i,j,\varepsilon}(\widetilde{\psi}_{i,j}^{-1}(x))&\text{if }( \widetilde{\psi}_{i,j}^{-1})^{2}(x)<0.\end{cases}\] This is well defined since if \(x\in\partial\Omega\) then there is a local chart \(\widetilde{\psi}_{i,j}\) such that \(x=\widetilde{\psi}_{i,j}(y_{1},0)\) and \[u_{\varepsilon}(x)=u_{\varepsilon}\circ\widetilde{\psi}_{i,j}(y_{1},0)=z_{i, j,\varepsilon}(y_{1},0)=z_{i,j,\varepsilon,\nu}(y_{1},0)\widetilde{\nu}_{i,j}=z_{i,j,\varepsilon,\nu}\big{(}\widetilde{\psi}_{i,j}^{-1}(x)\big{)}\widetilde{\nu }_{i,j}\] where we have used that \(z_{i,j,\varepsilon,\tau}=0\) on \(\partial\Omega\). 
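To make the reflection concrete, here is the flat model case (a worked illustration, not taken from the source): if the chart straightens the boundary so that locally \(\widetilde{\tau}_{i,j}=e_{1}\) and \(\widetilde{\nu}_{i,j}=e_{2}\), then the extension reads \[\widetilde{Z}_{i,j,\varepsilon}(y_{1},y_{2})=\operatorname{sgn}(y_{2})\widetilde{z}_{i,j,\varepsilon,\tau}(y_{1},|y_{2}|)e_{1}+\widetilde{z}_{i,j,\varepsilon,\nu}(y_{1},|y_{2}|)e_{2},\] i.e. the tangential component is extended oddly and the normal component evenly across \(\{y_{2}=0\}\). Continuity across the boundary then holds precisely because the tangential component vanishes there, which is the boundary condition used in the well-definedness computation above.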
We now claim that \[\Big{\|}\star J(\widetilde{u}_{\varepsilon})-\widetilde{J}\Big{\|}_{(C_{c}^ {0,1}(\widetilde{\Omega}_{\frac{r_{1}}{2}}))^{*}}\longrightarrow 0^{+}.\] Let \(\varphi\in C_{c}^{0,1}(\widetilde{\Omega}_{\frac{r_{1}}{2}})\) satisfy \(\|\varphi\|_{C_{c}^{0,1}(\widetilde{\Omega}_{\frac{r_{1}}{2}})}\leq 1\) and observe that \[\big{\langle}\star J(\widetilde{u}_{\varepsilon})-\widetilde{J},\varphi \big{\rangle}=\sum_{i=0}^{b}\sum_{j=1}^{N_{i}}\bigl{\langle}\star J(\widetilde{ u}_{\varepsilon})-\widetilde{J},\widetilde{\rho}_{i,j}\varphi\bigr{\rangle}+ \big{\langle}\star J(\widetilde{u}_{\varepsilon})-\widetilde{J},\widetilde{ \rho}_{0,0}\varphi\bigr{\rangle}.\] For \(i=0\) and \(j=0\) we have \[\big{\langle}\star J(\widetilde{u}_{\varepsilon})-\widetilde{J},\widetilde{\rho} _{0,0}\varphi\big{\rangle}=\big{\langle}\star J(u_{\varepsilon})-J,\widetilde{ \rho}_{0,0}\varphi\big{\rangle}\leq C(\Omega)\|\star J(u_{\varepsilon})-J\|_{ (C^{0,1}_{c}(\Omega))^{*}}.\] For \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\) we have \[\big{\langle}\widetilde{J},\rho_{i,j}\varphi\big{\rangle} =\bigg{\langle}\pi\sum_{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{x_{k} ^{i,j}}+\delta_{\bar{x}_{k}^{i,j}})+\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j} \delta_{x_{\ell}^{i,j}},\widetilde{\rho}_{i,j}\varphi\bigg{\rangle}\] \[=\bigg{\langle}(\widetilde{\psi}_{i,j})_{\#}\bigg{(}\pi\sum_{k=1 }^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k}^{i,j}})+\pi \sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}}\bigg{)}, \widetilde{\rho}_{i,j}\varphi\bigg{\rangle}.\] Next observe that \[\big{\langle}\star J(\widetilde{u}_{\varepsilon}),\widetilde{\rho}_{i,j} \varphi\big{\rangle}=\big{\langle}(\widetilde{\psi}_{i,j})_{\#}(\star J( \widetilde{u}\circ\widetilde{\psi}_{i,j})),\widetilde{\rho}_{i,j}\varphi \big{\rangle}\] and hence \[\big{\langle}\star J(\widetilde{u}_{\varepsilon})-\widetilde{J}, \widetilde{\rho}_{i,j}\varphi\big{\rangle} =\bigg{\langle}(\widetilde{\psi}_{i,j})_{\#}\bigg{(}\star J( \widetilde{u}\circ\widetilde{\psi}_{i,j})-\pi\sum_{k=1}^{M_{i,j,1}}d_{k}^{i,j} (\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k}^{i,j}})-\pi\sum_{\ell=1}^{M_{i,j,2} }d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}}\bigg{)},\widetilde{\rho}_{i,j}\varphi \bigg{\rangle}\] \[=\bigg{\langle}\star J(\widetilde{u}\circ\widetilde{\psi}_{i,j})- \pi\sum_{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k} ^{i,j}})-\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},( \widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\bigg{\rangle}.\] We notice that \[\widetilde{u}\circ\widetilde{\psi}_{i,j}(y_{1},y_{2})=\begin{cases}z_{i,j, \varepsilon,\tau}(y_{1},y_{2})\tau_{i,j}(y_{1})+z_{i,j,\varepsilon,\nu}(y_{1},y_{2})\nu_{i,j}(y_{1})&\text{if }y_{2}\geq 0\\ -z_{i,j,\varepsilon,\tau}(y_{1},-y_{2})\tau_{i,j}(y_{1})+z_{i,j,\varepsilon, \nu}(y_{1},-y_{2})\nu_{i,j}(y_{1})&\text{if }y_{2}<0.\end{cases}\] Repeating the calculation from (35) we have \[J(\widetilde{u}\circ\widetilde{\psi}_{i,j})(y_{1},y_{2})\] \[= \begin{cases}Jw_{i,j,\varepsilon}(y_{1},y_{2})-\kappa_{i}(y_{1}) \frac{\partial}{\partial y_{2}}\bigg{(}\frac{|z_{i,j,\varepsilon}(y_{1},y_{2} )|^{2}-1}{2}\bigg{)}&\text{if }y_{2}\geq 0\\ Jw_{i,j,\varepsilon}(y_{1},-y_{2})+\kappa_{i}(y_{1})\frac{\partial}{\partial y_{ 2}}\bigg{(}\frac{|z_{i,j,\varepsilon}(y_{1},-y_{2})|^{2}-1}{2}\bigg{)}&\text {if }y_{2}<0.\end{cases}\] Combined with our previous calculations we have that \[\bigg{\langle}\star J(\widetilde{u}\circ\widetilde{\psi}_{i,j})- 
\pi\sum_{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k} ^{i,j}})-\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},( \widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\bigg{\rangle}\] \[= \bigg{\langle}\star J(w_{i,j,\varepsilon})(y_{1},|y_{2}|)-\pi \sum_{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k}^{i,j }})-\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},( \widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\bigg{\rangle}\] \[- \bigg{\langle}\operatorname{sgn}(y_{2})\kappa_{i}(y_{1})\frac{ \partial}{\partial y_{2}}\bigg{(}\frac{|z_{i,j,\varepsilon}|^{2}-1}{2}\bigg{)},( \widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\bigg{\rangle}.\] By symmetry and the argument from Step 1 of the proof of (1) we find that \[\bigg{\|}\mathrm{sgn}(y_{2})\kappa_{i}(y_{1})\frac{\partial}{\partial y_{2}} \bigg{(}\frac{|z_{j,\varepsilon}|^{2}-1}{2}\bigg{)}\bigg{\|}_{(C^{0,1}_{c}(B_{ \frac{r_{1}}{2}}(0)))^{*}}\longrightarrow 0^{+}\] as \(\varepsilon\to 0^{+}\). Since \((\widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\in C^{0,1}_{c}(B_{ \frac{r_{1}}{2}}(0))\) then it suffices to estimate the remaining term. We notice that we can write \[(\widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}=[(\widetilde{\rho} _{i,j}\varphi)\circ\widetilde{\psi}_{i,j}]_{e}+[(\widetilde{\rho}_{i,j}\varphi )\circ\widetilde{\psi}_{i,j}]_{o}\] where for a function \(f\colon B_{\frac{r_{1}}{2}}(0)\to\mathbb{R}^{2}\) we set \[f_{e}(x_{1},x_{2})\coloneqq\frac{f(x_{1},x_{2})+f(x_{1},-x_{2})}{2},\ \ \ \ f_{o}(x_{1},x_{2})\coloneqq\frac{f(x_{1},x_{2})-f(x_{1},-x_{2})}{2}.\] That is, \(f_{e}\) and \(f_{o}\) denote, respectively, the even and odd part of \(f\) in the second component. We also observe that \[\|f_{e}\|_{\mathcal{A}_{1,\frac{r_{1}}{2}}}=\|f_{e}\|_{C^{0,1}_{c}(B_{\frac{r_ {1}}{2}}(0))}=\|f\|_{C^{0,1}_{c}(B_{\frac{r_{1}}{2}}(0))}\,.\] Since \(Jw_{i,j,\varepsilon}(y_{1},|y_{2}|)\) and the sum of delta masses are even in \(y_{2}\) and \(B_{\frac{r_{1}}{2}}(0)\) is symmetric with respect to reflection in the \(y_{2}\) variable then \[\bigg{\langle}\star J(w_{i,j,\varepsilon})(y_{1},|y_{2}|)-\pi\sum _{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k}^{i,j}}) -\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},(\widetilde {\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}\bigg{\rangle}\] \[= \bigg{\langle}\star J(w_{i,j,\varepsilon})(y_{1},|y_{2}|)-\pi\sum _{k=1}^{M_{i,j,1}}d_{k}^{i,j}(\delta_{y_{k}^{i,j}}+\delta_{\bar{y}_{k}^{i,j}}) -\pi\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},[(\widetilde {\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}]_{e}\bigg{\rangle}\] \[= 2\bigg{\langle}\star J(w_{i,j,\varepsilon})(y_{1},y_{2})-\pi\sum _{k=1}^{M_{i,j,1}}d_{k}^{i,j}\delta_{y_{k}^{i,j}}-\frac{\pi}{2}\sum_{\ell=1}^{ M_{i,j,2}}d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},[(\widetilde{\rho}_{i,j} \varphi)\circ\widetilde{\psi}_{i,j}]_{e}\bigg{\rangle}\] where the last equality uses symmetry. In particular, the last equality is considered over \(B_{\frac{r_{1}}{2},+}(0)\). 
But we then note that for any \(\phi\in\mathcal{A}_{1,\frac{r_{1}}{2}}\) with \(\|\phi\|_{\mathcal{A}_{1,\frac{r_{1}}{2}}}\leq 1\) we have \[\bigg{\langle}\star J(w_{i,j,\varepsilon})(y_{1},y_{2})-\pi\sum_{ k=1}^{M_{i,j,1}}d_{k}^{i,j}\delta_{y_{k}^{i,j}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}} d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},\phi\bigg{\rangle}\] \[\leq \bigg{\langle}\star J(z_{i,j,\varepsilon})(y_{1},y_{2})-\pi\sum_{ k=1}^{M_{i,j,1}}d_{k}^{i,j}\delta_{y_{k}^{i,j}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}} d_{\ell}^{i,j}\delta_{y_{\ell}^{i,j}},\phi\bigg{\rangle}+\|\star J(w_{i,j, \varepsilon})-\star J(z_{i,j,\varepsilon})\|_{\mathcal{A}_{\frac{1}{2},r_{1}} ^{*}}\] \[= \bigg{\langle}\star J(u_{\varepsilon})-\pi\sum_{k=1}^{M_{i,j,1}}d_ {k}^{i,j}\delta_{x_{k}^{i,j}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell}^ {i,j}\delta_{x_{\ell}^{i,j}},\phi\circ\widetilde{\psi}_{i,j}^{-1}\bigg{\rangle} +\|\star J(w_{i,j,\varepsilon})-\star J(z_{i,j,\varepsilon})\|_{\mathcal{A}_{ 1,\frac{r_{1}}{2}}^{*}}\] \[\leq C(\Omega)\bigg{\|}\star J(u_{\varepsilon})-\pi\sum_{k=1}^{M_{i,j,1} }d_{k}^{i,j}\delta_{x_{k}^{i,j}}-\frac{\pi}{2}\sum_{\ell=1}^{M_{i,j,2}}d_{\ell }^{i,j}\delta_{x_{\ell}^{i,j}}\bigg{\|}_{(C^{0,1}(\Omega))^{*}}+\|\star J(w_{j, \varepsilon})-\star J(z_{j,\varepsilon})\|_{\mathcal{A}_{1,\frac{r_{1}}{2}}^{*}}\] where to obtain the last inequality we used that \(\phi\circ\widetilde{\psi}_{i,j}^{-1}\) is zero in a neighbourhood of \(\partial\widetilde{\mathcal{U}}_{i,j}\cap\Omega\). The first term tends to zero by assumption while the second term tends to zero by a similar argument to Step 1 of (1). Since \(\widetilde{\rho}_{i,j}\circ\widetilde{\psi}_{i,j}=0\) in a neighbourhood of \(\partial B_{\frac{r_{1}}{2}}(0)\) then \([(\widetilde{\rho}_{i,j}\varphi)\circ\widetilde{\psi}_{i,j}]_{e}\in\mathcal{A }_{1,\frac{r_{1}}{2}}\) and we obtain the desired convergence since \(i=0,1,\ldots,b\) and \(j=1,2,\ldots,N_{i}\) was arbitrary. Thus, we now have \[\left\|\star J(\widetilde{u}_{\varepsilon})-\widetilde{J}\right\|_{(C_{ \varepsilon}^{0,1}(\widetilde{\Omega}_{\frac{r_{1}}{2}}))^{*}}\longrightarrow 0 ^{+}.\] #### 3.2.2 Step 2: Fix \(0<r<r_{1}\). By the previous step it follows that if \(\mathcal{O}_{\frac{r}{2}}\coloneqq\Omega\setminus\left[\Omega\cap\bigcup \limits_{i=0}^{b}\widetilde{\Omega}_{i,\frac{r}{2}}\right]\) then \[\left\|\star J(u_{\varepsilon})-J\right\|_{(C_{\varepsilon}^{0,1}(\mathcal{O} _{\frac{r}{2}}))^{*}}\longrightarrow 0^{+}\] using that \(\widetilde{u}=u\) and \(\widetilde{J}=J\) on \(\Omega\). By Theorem 4.1 of [18] or Theorem 1.1 of [3] we have \[\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\mathcal{O}_{ \frac{r}{2}})}{\left|\log(\varepsilon)\right|}\geq\pi\sum_{x\in\mathcal{O}_{ \frac{r}{2}}}|d_{x}|. 
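For completeness, here is a short verification of the norm comparison used above (only the inequality \(\|f_{e}\|\leq\|f\|\) is actually needed): since the reflection \(\bar{x}\coloneqq(x_{1},-x_{2})\) is an isometry of \(B_{\frac{r_{1}}{2}}(0)\), \[|f_{e}(x)-f_{e}(x')|\leq\frac{1}{2}|f(x)-f(x')|+\frac{1}{2}|f(\bar{x})-f(\bar{x}')|\leq\|f\|_{C^{0,1}_{c}(B_{\frac{r_{1}}{2}}(0))}|x-x'|,\] and similarly \(\|f_{e}\|_{\infty}\leq\|f\|_{\infty}\), so \(\|f_{e}\|_{C^{0,1}_{c}(B_{\frac{r_{1}}{2}}(0))}\leq\|f\|_{C^{0,1}_{c}(B_{\frac{r_{1}}{2}}(0))}\).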
\tag{40}\] Similarly, we also have that the previous step gives \[\left\|\star J(\widetilde{u}_{\varepsilon})-\widetilde{J}\right\|_{(C_{c}^{0,1}(\widetilde{\Omega}_{\frac{r}{2}}))^{*}}\longrightarrow 0^{+},\] where \(\widetilde{\Omega}_{\frac{r}{2}}\coloneqq\bigcup_{i=0}^{b}\widetilde{\Omega}_{i,\frac{r}{2}}\), and hence \[\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(\widetilde{u}_{\varepsilon},\widetilde{\Omega}_{\frac{r}{2}})}{\left|\log(\varepsilon)\right|}\geq 2\pi\sum_{x\in\Omega_{\frac{r}{2}}}|d_{x}|+\pi\sum_{x\in\partial\Omega}|d_{x}|.\] Computing in local coordinates shows that \[E_{\varepsilon}(\widetilde{u}_{\varepsilon},\widetilde{\Omega}_{\frac{r}{2}})\leq 2\Big{(}1+C(\Omega)r\Big{)}E_{\varepsilon}(u_{\varepsilon},\Omega_{\frac{r}{2}})\] and hence we obtain \[2(1+C(\Omega)r)\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega_{\frac{r}{2}})}{\left|\log(\varepsilon)\right|}\geq 2\pi\sum_{x\in\Omega_{\frac{r}{2}}}|d_{x}|+\pi\sum_{x\in\partial\Omega}|d_{x}|\] or equivalently \[(1+C(\Omega)r)\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega_{\frac{r}{2}})}{\left|\log(\varepsilon)\right|}\geq\pi\sum_{x\in\Omega_{\frac{r}{2}}}|d_{x}|+\frac{\pi}{2}\sum_{x\in\partial\Omega}|d_{x}|. \tag{41}\] Combining (40) and (41) we find that \[(1+C(\Omega)r)\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega)}{|\log(\varepsilon)|}\geq\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\mathcal{O}_{\frac{r}{2}})}{|\log(\varepsilon)|}+(1+C(\Omega)r)\liminf_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon},\Omega_{\frac{r}{2}})}{|\log(\varepsilon)|}\] \[\geq\left(\pi\sum_{x\in\mathcal{O}_{\frac{r}{2}}}|d_{x}|\right)+\left(\pi\sum_{x\in\Omega_{\frac{r}{2}}}|d_{x}|+\frac{\pi}{2}\sum_{x\in\partial\Omega}|d_{x}|\right)\] \[=\pi\sum_{i=1}^{M_{1}}|d_{i}|+\frac{\pi}{2}\sum_{j=1}^{M_{2}}|d_{j}|.\] Since \(r\) was arbitrary we may let \(r\to 0^{+}\) to obtain the desired inequality.

### Proof of Upper Bound

#### 3.3.1 Step \(1\):

In this subsection we construct the canonical harmonic map with prescribed singularities which has normal part zero. We will use this map in order to build the recovery sequence for the zeroth order \(\Gamma\)-convergence. This generalizes the construction from [15], see also [8], to consider interior vortices as well as a general connected open subset of \(\mathbb{R}^{2}\). Our approach to building the canonical harmonic map is very much inspired by the construction from [8]. Note that by rotating this map by \(\frac{\pi}{2}\) radians we obtain the canonical harmonic map with prescribed singularities which has tangential part zero. To construct the desired map we first solve the following boundary value problem: \[\left\{\begin{aligned} -\Delta\Psi&=2\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}&\text{ in }\Omega\\ \frac{\partial\Psi}{\partial\mathbf{n}}&=\widetilde{\kappa}_{j}-\pi\sum_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}&\text{ on }(\partial\Omega)_{j}\text{ for }j=1,2,\ldots,b\\ \frac{\partial\Psi}{\partial\mathbf{n}}&=\widetilde{\kappa}_{0}-\pi\sum_{k=1}^{M_{2,0}}d_{0k}\delta_{c_{0k}}&\text{ on }(\partial\Omega)_{0}\end{aligned}\right. \tag{42}\] where \(\mathbf{n}\coloneqq-\nu\) is the outward unit normal on \((\partial\Omega)_{j}\) for \(j=0,1,2,\ldots,b\).
We require that \(d_{i},d_{jk}\in\mathbb{Z}\setminus\{0\}\) for \(i=1,2,\ldots,M_{1}\) and \(k=1,2,\ldots,M_{2,j}\) for \(j=0,1,\ldots,b\), \[\sum_{i=1}^{M_{1}}d_{i}+\frac{1}{2}\sum_{j=0}^{b}\sum_{k=1}^{M_{2,j}}d_{jk}=\chi_{Euler}(\Omega), \tag{43}\] \[\frac{1}{2}\sum_{k=1}^{M_{2,j}}d_{jk}\in\mathbb{Z}\text{ for }j=0,1, \ldots,b, \tag{44}\] as well as that \(\{a_{i}\}_{i=1}^{M_{1}}\subseteq\Omega\) and \(\{c_{jk}\}_{k=1}^{M_{2,j}}\subseteq(\partial\Omega)_{j}\) for each \(j=0,1,\ldots,b\). We let \(\Psi_{1}\colon\Omega\to\mathbb{R}\) be defined by \[\Psi_{1}(x)=\sum_{i=1}^{M_{1}}d_{i}\log(|x-a_{i}|). \tag{45}\] Notice that \(\Psi_{1}\) solves \[-\Delta\Psi_{1}=2\pi\sum_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}\] and so to solve (42) it suffices to solve \[\begin{cases}-\Delta\Psi=0&\text{in }\Omega\\ \frac{\partial\Psi}{\partial\mathbf{n}}=\widetilde{\kappa}_{i}-\pi\sum\limits \limits_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}-\frac{\partial\Psi_{1}}{\partial \mathbf{n}}&\text{on }(\partial\Omega)_{j}\text{ for }j=1,2,\ldots,b\\ \frac{\partial\Psi}{\partial\mathbf{n}}=\widetilde{\kappa}_{0}-\pi\sum \limits_{k=1}^{M_{2,0}}d_{0k}\delta_{c_{0k}}-\frac{\partial\Psi_{1}}{\partial \mathbf{n}}&\text{on }(\partial\Omega)_{0}.\end{cases}\] Next, we let \(\Psi_{2}\colon\Omega\to\mathbb{R}\) be defined by \[\Psi_{2}(x)=\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\log(|x-c_{p\ell}|). \tag{46}\] We claim that \(\Psi_{2}\) satisfies \[\begin{cases}-\Delta\Psi_{2}=0&\text{on }\Omega\\ \frac{\partial\Psi_{2}}{\partial\mathbf{n}}=-\pi\sum\limits_{\ell=1}^{M_{2,j}} d_{j\ell}\delta_{c_{j\ell}}+\sum\limits_{p=0}^{b}\sum\limits_{\ell=1}^{M_{2,p}}d_{p \ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}&\text{on }(\partial\Omega)_{j} \text{ for }j=1,2,\ldots,b\\ \frac{\partial\Psi_{2}}{\partial\mathbf{n}}=-\pi\sum\limits_{\ell=1}^{M_{2,0}} d_{0\ell}\delta_{c_{0\ell}}+\sum\limits_{p=0}^{b}\sum\limits_{\ell=1}^{M_{2,p}}d_{p \ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}&\text{on }(\partial\Omega)_{0}.\end{cases} \tag{47}\] To see this consider \(\varphi\in W^{1,2}(\Omega)\) and observe that \[\int_{\Omega}\nabla\Psi_{2}\cdot\nabla\varphi =\lim_{\delta\to 0^{+}}\int_{\Omega\setminus\bigcup_{j=0}^{b} \bigcup_{m=1}^{M_{2,j}}B_{\delta}(c_{jm})}\nabla\Psi_{2}\cdot\nabla\varphi\] \[=\lim_{\delta\to 0^{+}}\biggl{[}\int_{(\partial\Omega)_{0} \setminus\bigcup_{m=1}^{M_{2,0}}B_{\delta}(c_{0m})}\varphi\sum_{p=0}^{b}\sum_{ \ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\] \[+\sum_{j=1}^{b}\int_{(\partial\Omega)_{j}\setminus\bigcup_{m=1}^ {M_{2,j}}B_{\delta}(c_{jm})}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p \ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\] \[+\sum_{m=1}^{M_{2,0}}\int_{\Omega\cap\partial B_{\delta}(c_{0m})} \varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_ {p\ell}|^{2}}\cdot\mathbf{n}\] \[+\sum_{j=1}^{b}\sum_{m=1}^{M_{2,j}}\int_{\Omega\cap\partial B_{ \delta}(c_{jm})}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x- c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\biggr{]}.\] Next, notice that \[\sum_{m=1}^{M_{2,0}}\int_{\Omega\cap\partial B_{\delta}(c_{0m})} \varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{ p\ell}|^{2}}\cdot\mathbf{n}+\sum_{j=1}^{b}\sum_{m=1}^{M_{2,j}}\int_{ \Omega\cap\partial B_{\delta}(c_{jm})}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_ {2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\] \[= -\sum_{m=1}^{M_{2,0}}\frac{d_{0m}}{\delta}\int_{\Omega\cap \partial 
B_{\delta}(c_{0m})}\varphi-\sum_{j=1}^{b}\sum_{m=1}^{M_{2,j}}\frac{d_ {jm}}{\delta}\int_{\Omega\cap\partial B_{\delta}(c_{jm})}\varphi+O(\delta)\] and hence \[\lim_{\delta\to 0^{+}}\biggl{[}\sum_{m=1}^{M_{2,0}}\int_{ \Omega\cap\partial B_{\delta}(c_{0m})}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}+\sum_{j=1}^{ b}\sum_{m=1}^{M_{2,j}}\int_{\Omega\cap\partial B_{\delta}(c_{jm})}\varphi\sum_{p=0}^{b} \sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot \mathbf{n}\biggr{]}\] \[= -\pi\sum_{m=1}^{M_{2,0}}d_{0m}\varphi(c_{0m})-\pi\sum_{j=1}^{b} \sum_{m=1}^{M_{2,j}}d_{jm}\varphi(c_{jm}).\] Now we show that each \(\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\) has a finite limit at \(c_{p\ell}\) along \((\partial\Omega)_{p}\) for \(p=0,1,\ldots,b\). It will also follow from this calculation that this function is Lipschitz on \((\partial\Omega)_{p}\). We only do the calculation when \(p=1,2,\ldots,b\) as the case of \(p=0\) is similar. We note that since \(c_{q\ell}\notin(\partial\Omega)_{p}\) then this function is smooth on \((\partial\Omega)_{q}\) and hence has a finite limit at \(c_{q\ell}\). Now we consider \(c_{p\ell}\in(\partial\Omega)_{p}\). Using a local parametrization of \((\partial\Omega)_{p}\cap B_{\delta}(c_{p\ell})\) similar to the one found in Section 1.2, except centred at \(c_{p\ell}\), we write \((x-c_{p\ell})\cdot\mathbf{n}=(\gamma_{p}(s)-\gamma_{p}(0))\cdot\tilde{\mathbf{ n}}(s)\) where \(\tilde{\mathbf{n}}(s)\) is the coordinate representation of \(\mathbf{n}\). We introduce, by mollifying, a smooth approximation \(\tilde{\mathbf{n}}_{\eta}\) to \(\tilde{\mathbf{n}}\) which converges in \(C^{1}\) and whose second derivative is uniformly bounded in terms of the Lipschitz norm of \(\tilde{\mathbf{n}}^{\prime}\). Next we note that by a Taylor expansion to second order we have \[(\gamma_{p}(s)-\gamma_{p}(0))\cdot\tilde{\mathbf{n}}_{\eta}(s)=s\gamma_{p}^{ \prime}(0)\cdot\tilde{\mathbf{n}}_{\eta}(0)+\frac{s^{2}}{2}\gamma_{p}^{\prime \prime}(0)\cdot\tilde{\mathbf{n}}_{\eta}(0)+\frac{s^{2}}{2}\gamma_{p}^{\prime}( 0)\cdot\tilde{\mathbf{n}}_{\eta}^{\prime}(0)+O(s^{3}).\] Using that \(\tilde{\mathbf{n}}_{\eta}\) approximates \(\tilde{\mathbf{n}}\) in \(C^{1}\) as \(\eta\to 0^{+}\) we see that \[(\gamma_{p}(s)-\gamma_{p}(0))\cdot\tilde{\mathbf{n}}(s)=s\gamma_{p}^{\prime}( 0)\cdot\tilde{\mathbf{n}}(0)+\frac{s^{2}}{2}\gamma_{p}^{\prime\prime}(0)\cdot \tilde{\mathbf{n}}(0)+\frac{s^{2}}{2}\gamma_{p}^{\prime}(0)\cdot\tilde{ \mathbf{n}}^{\prime}(0)+O(s^{3}).\] Noting that on \((\partial\Omega)_{p}\) we have \[\tilde{\mathbf{n}}(s)=-\gamma_{p}^{\prime}(s)^{\perp},\qquad\tilde{\mathbf{ n}}^{\prime}(s)=\kappa_{p}(s)\gamma_{p}^{\prime}(s)\] we see that \[(\gamma_{p}(s)-\gamma_{p}(0))\cdot\tilde{\mathbf{n}}(s) =-\frac{s^{2}}{2}\gamma_{p}^{\prime\prime}(0)\cdot\gamma_{p}^{ \prime}(0)^{\perp}+\frac{s^{2}}{2}\kappa_{p}(0)+O(s^{3})\] \[=-\frac{\kappa_{p}(0)s^{2}}{2}+\frac{\kappa_{p}(0)s^{2}}{2}+O(s^ {3})=O(s^{3}).\] We also observe that \[|x-c_{p\ell}|^{2}=s^{2}+O(s^{3}).\] We conclude that \[\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}=O(s). \tag{48}\] Hence, the limit is finite as \(x\) tends to \(c_{p\ell}\). One can note from the error terms that \(x\mapsto\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\) is a Lipschitz function on \((\partial\Omega)_{p}\) since \(\Omega\) has \(C^{2,1}\) boundary. 
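As a quick sanity check of the sign conventions entering this expansion (a worked example, not from the source, using the convention \((a,b)^{\perp}=(-b,a)\)): on the unit circle \(\gamma_{p}(s)=(\cos s,\sin s)\) we have \(\gamma_{p}'(s)=(-\sin s,\cos s)\), so \(\tilde{\mathbf{n}}(s)=-\gamma_{p}'(s)^{\perp}=(\cos s,\sin s)\) indeed points outward, and \[\tilde{\mathbf{n}}'(s)=(-\sin s,\cos s)=\gamma_{p}'(s),\] consistent with \(\tilde{\mathbf{n}}'=\kappa_{p}\gamma_{p}'\) and \(\kappa_{p}\equiv 1\) for the unit circle.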
This permits us to conclude that \[\lim_{\delta\to 0^{+}}\biggl{[}\int_{(\partial\Omega)_{0} \setminus\bigcup\limits_{m=1}^{M_{2,0}}B_{\delta}(c_{0m})}\varphi\sum_{p=0}^{b }\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot \mathbf{n}\] \[+\sum_{j=1}^{b}\int_{(\partial\Omega)_{j}\setminus\bigcup\limits _{p=1}^{M_{2,j}}B_{\delta}(c_{jp})}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p} }d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\biggr{]}\] \[=\int_{(\partial\Omega)_{0}}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M _{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}+\sum_{j=1 }^{b}\int_{(\partial\Omega)_{j}}\varphi\sum_{p=0}^{b}\sum_{\ell=1}^{M_{2,p}}d _{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}.\] Putting this together gives that \(\Psi_{2}\) solves (47). In addition, taking \(\varphi\equiv 1\) we obtain \[\int_{(\partial\Omega)_{0}}\sum_{p=1}^{b}\sum_{\ell=1}^{M_{2,p}}d _{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}+\sum_{j=1}^{b} \int_{(\partial\Omega)_{j}}\sum_{p=1}^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac {x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot\mathbf{n}\] \[= \pi\sum_{j=0}^{b}\sum_{m=1}^{M_{2,j}}d_{jm}. \tag{49}\] Letting \(\left(\frac{\partial\Psi_{2}}{\partial\mathbf{n}}\right)_{ac}\) denote \[\Big{(}\frac{\partial\Psi_{2}}{\partial\mathbf{n}}\Big{)}_{ac}\coloneqq\sum_{p=0 }^{b}\sum_{\ell=1}^{M_{2,p}}d_{p\ell}\frac{x-c_{p\ell}}{|x-c_{p\ell}|^{2}}\cdot \mathbf{n}\] we see that (49) gives the desired \[\int_{\partial\Omega}\Big{(}\frac{\partial\Psi_{2}}{\partial\mathbf{n}}\Big{)} _{ac}=2\pi\chi_{Euler}(\Omega)-2\pi\sum_{i=1}^{M_{1}}d_{i}.\] With \(\Psi_{2}\) it now suffices to solve \[\begin{cases}-\Delta\Psi=0&\text{in }\Omega\\ \frac{\partial\Psi}{\partial\mathbf{n}}=\widetilde{\kappa}_{i}-\Big{(}\frac{ \partial\Psi_{2}}{\partial\mathbf{n}}\Big{)}_{ac}-\frac{\partial\Psi_{1}}{ \partial\mathbf{n}}&\text{on }(\partial\Omega)_{i}\text{ for }i=1,2,\ldots,b\\ \frac{\partial\Psi}{\partial\mathbf{n}}=\widetilde{\kappa}_{0}-\Big{(}\frac{ \partial\Psi_{2}}{\partial\mathbf{n}}\Big{)}_{ac}-\frac{\partial\Psi_{1}}{ \partial\mathbf{n}}&\text{on }(\partial\Omega)_{0}\end{cases} \tag{50}\] where the Neumann data is in \(C^{0,1}(\partial\Omega)\). Observe that if \(g(x)\coloneqq e^{i\sum\limits_{j=1}^{M_{1}}d_{j}\theta(x-a_{j})}\), where \(\theta\) is the argument function as in [8], then \[\frac{\partial\Psi_{1}}{\partial\mathbf{n}}=g\times\partial_{\tau}g\] and hence \[\int_{\partial\Omega}\frac{\partial\Psi_{1}}{\partial\mathbf{n}}=2\pi\text{ deg}(g,(\partial\Omega)_{0})-2\pi\sum_{i=1}^{b}\text{deg}(g,(\partial\Omega)_{i})=2 \pi\sum_{i=1}^{M_{1}}d_{i}.\] Finally, observe that \[\int_{(\partial\Omega)_{0}}\biggl{[}\widetilde{\kappa}_{0}-\Big{(} \frac{\partial\Psi_{2}}{\partial\mathbf{n}}\Big{)}_{ac}-\frac{\partial\Psi_{ 1}}{\partial\mathbf{n}}\biggr{]}+\sum_{i=1}^{b}\int_{(\partial\Omega)_{i}} \biggl{[}\widetilde{\kappa}_{i}-\Big{(}\frac{\partial\Psi_{2}}{\partial \mathbf{n}}\Big{)}_{ac}-\frac{\partial\Psi_{1}}{\partial\mathbf{n}}\biggr{]}\] \[= 2\pi\chi_{Euler}(\Omega)-\int_{\partial\Omega}\Bigl{(}\frac{ \partial\Psi_{2}}{\partial\mathbf{n}}\Big{)}_{ac}-\int_{\partial\Omega}\frac {\partial\Psi_{1}}{\partial\mathbf{n}}\] \[= 2\pi\chi_{Euler}(\Omega)-\left[2\pi\chi_{Euler}(\Omega)-2\pi \sum_{i=1}^{M_{1}}d_{i}\right]-2\pi\sum_{i=1}^{M_{1}}d_{i}\] \[= 0\] and that the boundary data is continuous. Thus, (50) has a solution \(H_{\Omega}\). As in the proof of Proposition 20 of [15] we have \(H_{\Omega}\in W^{1,2}(\Omega)\). 
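As a minimal worked instance of this compatibility bookkeeping (an illustration, not from the source), suppose \(M_{1}=1\) with a single interior vortex at \(a_{1}\) of degree \(d_{1}\), so \(\Psi_{1}(x)=d_{1}\log(|x-a_{1}|)\). On \(\partial B_{\delta}(a_{1})\) we have \(\frac{\partial\Psi_{1}}{\partial\mathbf{n}}=\frac{d_{1}}{\delta}\), hence \[\int_{\partial B_{\delta}(a_{1})}\frac{\partial\Psi_{1}}{\partial\mathbf{n}}=\frac{d_{1}}{\delta}\cdot 2\pi\delta=2\pi d_{1},\] and since \(\Psi_{1}\) is harmonic on \(\Omega\setminus\{a_{1}\}\) the divergence theorem gives \(\int_{\partial\Omega}\frac{\partial\Psi_{1}}{\partial\mathbf{n}}=2\pi d_{1}\), matching the total \(2\pi\sum_{i=1}^{M_{1}}d_{i}\) used above in checking that the Neumann data of (50) integrates to zero.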
Letting \(\Psi\coloneqq\Psi_{1}+\Psi_{2}+H_{\Omega}\) we now have a solution to (42) in \(W^{1,p}(\Omega)\) for \(1\leq p<2\). Next, we let \(\bar{j}\colon\Omega\to\mathbb{R}^{2}\) denote the vector field \[\bar{j}\coloneqq-\nabla^{\perp}\Psi.\] We observe that \(\bar{j}\in L^{p}(\Omega;\mathbb{R}^{2})\) for \(1\leq p<2\) and satisfies \[\begin{cases}\operatorname{div}(\bar{j})=0&\text{on }\Omega\\ \operatorname{curl}(\bar{j})=2\pi\sum\limits_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}&\text{on }\Omega\\ \bar{j}\cdot\tau=\widetilde{\kappa}_{j}-\pi\sum\limits_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}&\text{on }(\partial\Omega)_{j}\text{ for }j=1,2,\ldots,b\\ \bar{j}\cdot\tau=\widetilde{\kappa}_{0}-\pi\sum\limits_{k=1}^{M_{2,0}}d_{0k}\delta_{c_{0k}}&\text{on }(\partial\Omega)_{0}.\end{cases} \tag{51}\] Following the argument of Lemma \(I.1\) on page 5 of [8] we conclude from (44) and the first, third, and fourth conditions of (51) that, by integrating over paths in \(\Omega\), there is a function \(\bar{u}\in W^{1,p}(\Omega;\mathbb{S}^{1})\), for \(1\leq p<2\), such that \[j\bar{u}=\bar{j}=-\nabla^{\perp}\Psi. \tag{52}\] As noted on page 8 of [8] the solution, \(\bar{u}\), is unique up to a constant phase. Next, we define \(g_{i}\in BV((\partial\Omega)_{i};\mathbb{S}^{1})\), for each \(i=0,1,\ldots,b\), so that \(g_{i}\times\partial_{\tau}g_{i}=\bar{j}\cdot\tau_{i}\). The definition is done by an algorithm. We start with the case of \(i=1,2,\ldots,b\). Let \(x_{0}^{i}\in(\partial\Omega)_{i}\setminus\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) and define \(g_{i}(x)=\tau_{i}(x)\) on the connected component of \((\partial\Omega)_{i}\setminus\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) containing \(x_{0}^{i}\). Following the boundary curve in the counterclockwise direction we will encounter \(c_{ij_{1}}\in\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) which has associated degree \(d_{ij_{1}}\). We define \(g_{i}\) to be \((-1)^{-d_{ij_{1}}}\tau_{i}\) on the adjacent connected component of \((\partial\Omega)_{i}\setminus\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) whose boundary contains \(c_{ij_{1}}\) (i.e. with local argument representation shifted by \(-d_{ij_{1}}\pi\)). We then repeat our algorithm after we encounter another member of \(\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\). Since \(\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) contains only finitely many points and \((\partial\Omega)_{i}\) is a closed curve then we eventually return to the connected component containing \(x_{0}^{i}\). Our function is well defined due to (44). Observe that on \((\partial\Omega)_{i}\setminus\bigcup\limits_{j=1}^{M_{2,i}}\{c_{ij}\}\) we have \[g_{i}\times\partial_{\tau}g_{i}=\widetilde{\kappa}_{i}.\] In coordinates about each \(c_{ij}\) for \(j=1,2,\ldots,M_{2,i}\) we also see that at \(c_{ij}\): \[g_{i}\times\partial_{\tau}g_{i}=\widetilde{\kappa}_{i}(c_{ij})-\pi d_{ij}.\] We conclude that \[g_{i}\times\partial_{\tau}g_{i}=\widetilde{\kappa}_{i}-\pi\sum_{j=1}^{M_{2,i}}d_{ij}\delta_{c_{ij}}.\] A similar construction works along \((\partial\Omega)_{0}\) except the singular part of \(g_{0}\times\partial_{\tau}g_{0}\) has jumps of \(-\pi d_{0j}\) at \(c_{0j}\in\bigcup\limits_{j=1}^{M_{2,0}}\{c_{0j}\}\) due to orientation.
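A concrete instance of this algorithm (a worked example, not from the source): suppose \((\partial\Omega)_{i}\) carries exactly two boundary vortices \(c_{i1},c_{i2}\) with \(d_{i1}=d_{i2}=1\), so that \(\frac{1}{2}(d_{i1}+d_{i2})=1\in\mathbb{Z}\) as required by (44). Starting from \(g_{i}=\tau_{i}\) on the arc containing \(x_{0}^{i}\), the algorithm sets \(g_{i}=(-1)^{-1}\tau_{i}=-\tau_{i}\) on the other arc after crossing \(c_{i1}\), and returns to \(+\tau_{i}\) after crossing \(c_{i2}\), so \(g_{i}\) is single-valued and \[g_{i}\times\partial_{\tau}g_{i}=\widetilde{\kappa}_{i}-\pi(\delta_{c_{i1}}+\delta_{c_{i2}}).\]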
Thus, \(\bar{u}\) solves \[\begin{cases}j\bar{u}=\bar{j}&\text{on }\Omega\\ j\bar{u}=g_{i}\times\partial_{\tau}g_{i}&\text{on }(\partial\Omega)_{i}\text{ for }i=0,1,\ldots,b.\end{cases}\] Arguing as in the proof of Theorem \(I.3\) on page 9 of [8] there is a phase \(\theta_{0}\in[0,2\pi]\) so that \(\tilde{u}\coloneqq e^{-i\theta_{0}}\bar{u}\) satisfies \[\begin{cases}j\tilde{u}=\bar{j}&\text{on }\Omega\\ \tilde{u}=g_{0}&\text{on }(\partial\Omega)_{0}\\ \tilde{u}=e^{i\theta_{i}}g_{i}&\text{on }(\partial\Omega)_{i}\end{cases} \tag{53}\] where \(\theta_{i}\in[0,2\pi]\) for each \(i=1,2,\ldots,b\). Finally, we let, for \(i=1,2,\ldots,b\), \(\phi_{i}\colon\Omega\to\mathbb{R}\) be the solution to \[\begin{cases}-\Delta\phi_{i}=0&\text{on }\Omega\\ \phi_{i}(x)=1&\text{on }(\partial\Omega)_{i}\\ \phi_{i}(x)=0&\text{on }(\partial\Omega)_{j}\text{ for }j\neq i.\end{cases}\] and we let \(\eta_{i}\), for \(i=1,2,\ldots,b\) denote \(\eta_{i}=\nabla\phi_{i}\). Observe that \(\eta_{i}\in H^{1}_{T}(\Omega)\), the space of harmonic 1-forms with zero tangential part, for each \(i=1,2,\ldots,b\) and by Lemmas 9 and 12 of [5] form a basis for \(H^{1}_{T}(\Omega)\). We consider \(u^{N}_{*}\colon\Omega\to\mathbb{S}^{1}\) defined by \[u^{N}_{*}\coloneqq e^{-i\sum\limits_{i=1}^{b}\theta_{i}\phi_{i}}\tilde{u}. \tag{54}\] Observe that \(u^{N}_{*}\in W^{1,p}(\Omega;\mathbb{S}^{1})\), for \(1\leq p<2\), satisfies \((u^{N}_{*})_{N}=0\), \(ju^{N}_{*}=\bar{j}-\sum\limits_{i=1}^{b}\theta_{i}\eta_{i}\), as well as \[\begin{cases}\operatorname{div}(ju^{N}_{*})=0&\text{on }\Omega\\ \operatorname{curl}(ju^{N}_{*})=2\pi\sum\limits_{i=1}^{M_{1}}d_{i}\delta_{a_{i }}&\text{on }\Omega\\ ju^{N}_{*}\cdot\tau=\widetilde{\kappa}_{i}-\pi\sum\limits_{j=1}^{M_{2,i}}d_{ij} \delta_{e_{ij}}&\text{on }(\partial\Omega)_{i}\text{ for }i=1,2,\ldots,b\\ ju^{N}_{*}\cdot\tau=\widetilde{\kappa}_{0}-\pi\sum\limits_{j=1}^{M_{2,0}}d_{0j} \delta_{c_{0j}}&\text{on }(\partial\Omega)_{0}.\end{cases}\] This is the canonical harmonic map with prescribed singularities and normal part zero. In order to simplify the renormalized energy we observe that we may apply Gram-Schmidt and normalization to \(\{\eta_{1},\eta_{2},\ldots,\eta_{b}\}\) to obtain an orthonormal basis \(\{\bar{\eta}_{1},\bar{\eta}_{2},\ldots,\bar{\eta}_{b}\}\) of \(H^{1}_{T}(\Omega)\). Notice that by (52), \(\bar{j}\in(H^{1}_{T}(\Omega))^{\perp}\) and so if we set \[\Phi_{j}\coloneqq\int_{\Omega}ju_{*}^{N}\cdot\bar{\eta}_{j}\] we have the \(L^{2}\) orthogonal decomposition \[ju_{*}^{N}=\bar{j}+\sum_{i=1}^{b}\Phi_{j}\bar{\eta}_{j}. \tag{55}\] Finally we note that to obtain the canonical harmonic map with prescribed singularities and tangential part zero we set \[u_{*}^{T}\coloneqq(u_{*}^{N})^{\perp}.\] **Remark 3.1**.: 1. We observe that \(ju_{*}^{N}-\bar{j}\in H^{1}_{T}(\Omega)\) is a similar lattice condition to what was identified in [14] as well as [8]. 2. Observe that in the case that \(\Omega\) is simply connected one could stop at (53) since \(H^{1}_{T}(\Omega)\cong 0\). As a result, there are no flux integrals, \(\Phi_{j}\), in this case. This provides an alternative to the construction found in [15]. Note in this case that (42) characterizes the canonical harmonic map while in the non-simply connected case (42) is not sufficient. #### 3.3.2 Step \(2\): Here we compute the renormalized energy. As the Ginzburg-Landau energy is invariant under rotation by \(\frac{\pi}{2}\) radians we only do this computation for \(u_{*}^{T}\). 
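Before carrying out this computation, it may help to see the flux terms in the simplest non-trivial geometry (a worked example, not from the source): on the annulus \(\Omega=\{1<|x|<2\}\) with \((\partial\Omega)_{1}=\{|x|=1\}\), the function \(\phi_{1}(x)=\frac{\log(2/|x|)}{\log 2}\) is harmonic, equals \(1\) on \(|x|=1\) and \(0\) on \(|x|=2\), so \[\eta_{1}=\nabla\phi_{1}=-\frac{1}{\log 2}\frac{x}{|x|^{2}}\] spans \(H^{1}_{T}(\Omega)\); the single flux \(\Phi_{1}\) then records the component of \(ju_{*}^{N}\) along the normalization \(\bar{\eta}_{1}\) in the decomposition (55).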
Consider \(\{a_{i}\}_{i=1}^{M_{1}}\subseteq\Omega\) and for each \(i=0,1,\ldots,b\) we consider \(\{c_{ij}\}_{j=1}^{M_{2,i}}\subseteq(\partial\Omega)_{i}\). Corresponding to these points we let \(d_{i}\in\mathbb{Z}\setminus\{0\}\) for \(i=1,2,\ldots,M_{1}\) and for each \(i=0,1,\ldots,b\) we consider \(d_{ij}\in\mathbb{Z}\setminus\{0\}\) corresponding to \(c_{ij}\in(\partial\Omega)_{i}\). We also require that \[\sum_{i=1}^{M_{1}}d_{i}+\frac{1}{2}\sum_{i=0}^{b}\sum_{j=1}^{M_{2,i}}d_{ij}=\chi_{Euler}(\Omega)\] \[\frac{1}{2}\sum_{j=1}^{M_{2,i}}d_{ij}\in\mathbb{Z}\text{ for }i=0,1, \ldots,b.\] If \(u_{*}^{T}\) is the associated canonical harmonic map with prescribed singularities satisfying the above then we let \[\mathbf{a} \coloneqq(a_{1},a_{2},\ldots,a_{M_{1}})\] \[\mathbf{c}_{i} \coloneqq(c_{i1},c_{i2},\ldots,c_{iM_{2,i}})\text{ for }i=0,1, \ldots,b\] \[\mathbf{c} \coloneqq(\mathbf{c}_{0},\mathbf{c}_{1},\ldots,\mathbf{c}_{b})\] \[\mathbf{d}_{1} \coloneqq(d_{1},d_{2},\ldots,d_{M_{1}})\] \[\mathbf{d}_{2,i} \coloneqq(d_{i1},d_{i2},\ldots,d_{iM_{2,i}})\] \[\mathbf{d}_{2} \coloneqq(\mathbf{d}_{2,0},\mathbf{d}_{2,1},\ldots,\mathbf{d}_{2,b})\] \[\Phi \coloneqq(\Phi_{1},\Phi_{2},\ldots,\Phi_{b})\] and we define \[\mathbb{W}(\mathbf{a},\mathbf{c},\mathbf{d}_{1},\mathbf{d}_{2},\Phi)\coloneqq \lim_{\sigma\to 0^{+}}\Biggl{\{}\frac{1}{2}\int_{\Omega_{\sigma}}\lvert \nabla u_{*}^{T}\rvert^{2}-\pi\Bigl{[}\lvert\mathbf{d}_{1}\rvert^{2}+\frac{1 }{2}\lvert\mathbf{d}_{2}\rvert^{2}\Bigr{]}\lvert\log(\sigma)\rvert\Biggr{\}} \tag{56}\] where \[\Omega_{\sigma}\coloneqq\Omega\setminus\Bigl{[}\bigsqcup_{i=1}^{M_{1}}B_{ \sigma}(a_{i})\sqcup\bigsqcup_{i=0}^{b}\bigsqcup_{j=1}^{M_{2,i}}B_{\sigma}(c_{ ij})\Bigr{]}\] and where \[0<\sigma<\frac{1}{2}\min\Bigl{\{}\min_{i\neq j}\{\lvert a_{i}-a_{j}\rvert\}, \min_{i_{1}\neq i_{2},\,j_{1}\neq j_{2}}\{\lvert c_{i_{1}j_{1}}-c_{i_{2}j_{2}} \rvert\},\min_{i,j,k}\{\lvert a_{i}-c_{jk}\rvert\}\Bigr{\}}.\] We now compute (56). 
By the \(L^{2}\) orthogonal decomposition (55) and (52) we have \[\frac{1}{2}\int_{\Omega_{\sigma}}\lvert\nabla u_{*}^{T}\rvert^{2} =\frac{1}{2}\int_{\Omega_{\sigma}}\lvert\bar{j}\rvert^{2}+\frac{1 }{2}\lvert\Phi\rvert^{2}+o(1)\] as \[\sigma\to 0^{+}\] \[=\frac{1}{2}\int_{\Omega_{\sigma}}\lvert\nabla\Psi\rvert^{2}+ \frac{1}{2}\lvert\Phi\rvert^{2}+o(1)\] as \[\sigma\to 0^{+}\] Integrating by parts and using (42) we obtain \[\frac{1}{2}\int_{\Omega_{\sigma}}\lvert\nabla\Psi\rvert^{2} =\frac{1}{2}\int_{(\partial\Omega)_{0}\setminus\bigcup_{j=1}^{M_{ 2,0}}B_{\sigma}(c_{0j})}\Psi\frac{\partial\Psi}{\partial\mathbf{n}}+\frac{1}{2 }\sum_{i=1}^{b}\int_{(\partial\Omega)_{i}\setminus\bigcup_{j=1}^{M_{2,i}}B_{ \sigma}(c_{ij})}\Psi\frac{\partial\Psi}{\partial\mathbf{n}}\] \[+\frac{1}{2}\sum_{j=1}^{M_{2,0}}\int_{\Omega\cap\partial B_{ \sigma}(c_{0j})}\Psi\frac{\partial\Psi}{\partial\mathbf{n}}+\frac{1}{2}\sum_ {i=1}^{b}\sum_{j=1}^{M_{2,i}}\int_{\Omega\cap\partial B_{\sigma}(c_{ij})}\Psi \frac{\partial\Psi}{\partial\mathbf{n}}\] \[+\frac{1}{2}\sum_{i=1}^{M_{1}}\int_{\partial B_{\sigma}(a_{i})} \Psi\frac{\partial\Psi}{\partial\mathbf{n}}\] \[=(A)+(B)+(C)+(D)+(E).\] Observe that by (42), the calculation that lead to (48), and the trace theorem we have \[(A) =\frac{1}{2}\int_{(\partial\Omega)_{0}}\widetilde{\kappa}_{0}\Psi+o(1) \text{ as }(\sigma\to 0^{+}),\] \[(B) =\frac{1}{2}\sum_{i=1}^{b}\int_{(\partial\Omega)_{i}}\widetilde{ \kappa}_{i}\Psi+o(1) \text{ as }(\sigma\to 0^{+}).\] Next we note that since \(\Psi=\Psi_{1}+\Psi_{2}+H_{\Omega}\), where \(\Psi_{1}\) is defined in (45), \(\Psi_{2}\) is defined in (46), and \(H_{\Omega}\) solves (50), we have, as \(\sigma\to 0^{+}\), that \[(C) =-\frac{\pi}{2}\sum_{j=1}^{M_{2,0}}d_{0j}^{2}\log(\sigma)-\frac{ \pi}{2}\sum_{j=1}^{M_{2,0}}\sum_{i=1}^{M_{1}}d_{i}d_{0j}\log(|c_{0j}-a_{i}|)\] \[\quad-\frac{\pi}{2}\sum_{j=1}^{M_{2,0}}\sum_{\begin{subarray}{c }i=0,1,\ldots,b\\ k=1,2,\ldots,M_{2,i}\\ (i,k)\neq(0,j)\end{subarray}}d_{ik}d_{0j}\log(|c_{0j}-c_{ik}|)-\frac{\pi}{2} \sum_{j=1}^{M_{2,0}}d_{0j}H_{\Omega}(c_{0j})+o(1)\] \[(D) =-\frac{\pi}{2}\sum_{i=1}^{b}\sum_{j=1}^{M_{2,i}}d_{ij}^{2}\log( \sigma)-\frac{\pi}{2}\sum_{i=1}^{b}\sum_{j=1}^{M_{2,i}}\sum_{k=1}^{M_{1}}d_{ij }d_{k}\log(|c_{ij}-a_{k}|)\] \[\quad-\frac{\pi}{2}\sum_{i=1}^{b}\sum_{j=1}^{M_{2,i}}\sum_{ \begin{subarray}{c}k=0,1,\ldots,b\\ l=1,2,\ldots,M_{2,k}\\ (k,l)\neq(i,j)\end{subarray}}d_{ij}d_{kl}\log(|c_{ij}-c_{kl}|)-\frac{\pi}{2} \sum_{i=1}^{b}\sum_{j=1}^{M_{2,i}}d_{ij}H_{\Omega}(c_{ij})+o(1)\] \[(E) =-\pi\sum_{i=1}^{M_{1}}d_{i}^{2}\log(\sigma)-\pi\sum_{i=1}^{M_{1} }\sum_{\begin{subarray}{c}j=1,2,\ldots,M_{1}\\ j\neq i\end{subarray}}d_{i}d_{j}\log(|a_{i}-a_{j}|)\] \[\quad-\pi\sum_{i=1}^{M_{1}}\sum_{j=0}^{b}\sum_{k=1}^{M_{2,j}}d_{ i}d_{jk}\log(|c_{jk}-a_{i}|)-\pi\sum_{i=1}^{M_{1}}d_{i}H_{\Omega}(a_{i})+o(1)\] Combining the above computations for \((A)\), \((B)\), \((C)\), \((D)\), and \((E)\) we obtain the following expression for the renormalized energy: \[\mathbb{W}(\mathbf{a},\mathbf{b},\mathbf{d}_{1},\mathbf{d}_{2},\Phi) =-\frac{\pi}{2}\sum_{i\neq j}d_{i}d_{j}\log(|a_{i}-a_{j}|)-\frac{ \pi}{4}\sum_{(i_{1},j_{1})\neq(i_{2},j_{2})}d_{i_{1}j_{1}}d_{i_{2}j_{2}}\log( |c_{i_{1}j_{1}}-c_{i_{2}j_{2}}|)\] \[-\frac{3\pi}{2}\sum_{i=1}^{M_{1}}\sum_{j=0}^{b}\sum_{k=1}^{M_{2, j}}d_{i}d_{jk}\log(|a_{i}-c_{jk}|)-\pi\sum_{i=1}^{M_{1}}d_{i}H_{\Omega}(a_{i})\] \[-\frac{\pi}{2}\sum_{j=0}^{b}\sum_{k=1}^{M_{2,j}}d_{jk}H_{\Omega}( c_{jk})+\frac{1}{2}\int_{\partial\Omega}\widetilde{\kappa}\Psi+\frac{1}{2}| \Phi|^{2}.\] 
**Remark 3.2**.: In the case that \(\Omega\) is simply connected and there are no interior vortices we observe that this matches the renormalized energy calculated in Proposition 20 of [15]. In addition, the flux contribution is similar to the one found in [14].

#### 3.3.3 Step \(3\):

We first observe that, by approximation, it suffices to demonstrate the upper bound in the case that \(d_{i}=\pm 1\) for all \(i=1,2,\ldots,M_{1}\) and \(d_{jk}=\pm 1\) for all \(k=1,2,\ldots,M_{2,j}\) and \(j=0,1,\ldots,b\). Note that the conditions on the degrees of vortices found in Theorem 0.1 part (1) can be preserved while splitting a vortex of absolute degree larger than \(1\). Next, we set \(\rho_{\varepsilon}(s)\coloneqq\min\{\frac{s}{\varepsilon},1\}\) and define a sequence \(\{u_{\varepsilon}\}_{\varepsilon\in(0,1]}\) by \[u_{\varepsilon}(x)=\biggl{(}\prod_{i=1}^{M_{1}}\rho_{\varepsilon}(|x-a_{i}|)\biggr{)}\biggl{(}\prod_{j=0}^{b}\prod_{k=1}^{M_{2,j}}\rho_{\varepsilon}(|x-c_{jk}|)\biggr{)}u_{*}^{T}\] where \(u_{*}^{T}\) is the map defined in (54). Observe that since \(u_{\varepsilon}=u_{*}^{T}\) on \(\Omega_{\varepsilon}\coloneqq\Omega\setminus\biggl{[}\biggl{(}\bigcup\limits_{i=1}^{M_{1}}B_{\varepsilon}(a_{i})\biggr{)}\cup\biggl{(}\bigcup\limits_{j=0}^{b}\bigcup\limits_{k=1}^{M_{2,j}}B_{\varepsilon,+}(c_{jk})\biggr{)}\biggr{]}\) then by (56) \[E_{\varepsilon}(u_{\varepsilon})=\int_{\Omega\setminus\Omega_{\varepsilon}}e_{\varepsilon}(u_{\varepsilon})+\pi\biggl{[}|\mathbf{d}_{1}|^{2}+\frac{1}{2}|\mathbf{d}_{2}|^{2}\biggr{]}|\log(\varepsilon)|+\mathbb{W}(\mathbf{a},\mathbf{c},\mathbf{d}_{1},\mathbf{d}_{2},\Phi)+o(1)\] \[=\int_{\Omega\setminus\Omega_{\varepsilon}}e_{\varepsilon}(u_{\varepsilon})+\pi\biggl{[}|\mathbf{d}_{1}|+\frac{1}{2}|\mathbf{d}_{2}|\biggr{]}|\log(\varepsilon)|+\mathbb{W}(\mathbf{a},\mathbf{c},\mathbf{d}_{1},\mathbf{d}_{2},\Phi)+o(1) \tag{57}\] where we have used that \(d_{i}=\pm 1\) for all \(i=1,2,\ldots,M_{1}\) and \(d_{jk}=\pm 1\) for all \(k=1,2,\ldots,M_{2,j}\) and \(j=0,1,\ldots,b\) to remove the squares. Since \(\Omega\setminus\Omega_{\varepsilon}=\Bigl{(}\bigcup\limits_{i=1}^{M_{1}}B_{\varepsilon}(a_{i})\Bigr{)}\cup\Bigl{(}\bigcup\limits_{j=0}^{b}\bigcup\limits_{k=1}^{M_{2,j}}B_{\varepsilon,+}(c_{jk})\Bigr{)}\) and \[\rho_{\varepsilon}(|x|)=\frac{|x|}{\varepsilon},\qquad|\nabla\rho_{\varepsilon}(|x|)|=\frac{1}{\varepsilon}\qquad\text{on }B_{\varepsilon}(0),\] then \[\int_{\Omega\setminus\Omega_{\varepsilon}}e_{\varepsilon}(u_{\varepsilon})=O(1). \tag{58}\] From (57) and (58) we conclude that \[\limsup_{\varepsilon\to 0^{+}}\frac{E_{\varepsilon}(u_{\varepsilon})}{|\log(\varepsilon)|}=\|J_{*}\|\] where \(J_{*}=\pi\sum\limits_{i=1}^{M_{1}}d_{i}\delta_{a_{i}}+\frac{\pi}{2}\sum\limits_{j=0}^{b}\sum\limits_{k=1}^{M_{2,j}}d_{jk}\delta_{c_{jk}}\). Next, observe that since \(|u_{\varepsilon}|=1\) on \(\Omega_{\varepsilon}\) and \(u_{\varepsilon}\) is smooth there then \(Ju_{\varepsilon}=0\) on \(\Omega_{\varepsilon}\).
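To see why the core contribution (58) is \(O(1)\), consider a single degree-one interior vortex (a worked computation, not from the source, using the standard Ginzburg-Landau density \(e_{\varepsilon}(u)=\frac{1}{2}|\nabla u|^{2}+\frac{1}{4\varepsilon^{2}}(1-|u|^{2})^{2}\)): for \(u(x)=\rho_{\varepsilon}(r)e^{i\theta}\) on \(B_{\varepsilon}(a_{i})\), where \(r=|x-a_{i}|\) and \(\rho_{\varepsilon}(r)=\frac{r}{\varepsilon}\), we have \(|\nabla u|^{2}=\rho_{\varepsilon}'(r)^{2}+\frac{\rho_{\varepsilon}(r)^{2}}{r^{2}}=\frac{2}{\varepsilon^{2}}\), so \[\int_{B_{\varepsilon}}\frac{|\nabla u|^{2}}{2}=\pi,\qquad\int_{B_{\varepsilon}}\frac{(1-\rho_{\varepsilon}^{2})^{2}}{4\varepsilon^{2}}=\frac{\pi}{4}\int_{0}^{1}(1-t)^{2}\,\mathrm{d}t=\frac{\pi}{12},\] both independent of \(\varepsilon\); the half-balls \(B_{\varepsilon,+}(c_{jk})\) at boundary vortices contribute comparably.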
Notice that for each \(i=1,2,\ldots,M_{1}\) we have, for \(\varphi\in C^{0,\alpha}(\Omega)\), that \[\int_{B_{\varepsilon}(a_{i})}\varphi(x)\cdot Ju_{\varepsilon}(x)\mathrm{d}x=\int_{B_{\varepsilon}(a_{i})}[\varphi(x)-\varphi(a_{i})]\cdot Ju_{\varepsilon}(x)\mathrm{d}x+\varphi(a_{i})\int_{B_{\varepsilon}(a_{i})}Ju_{\varepsilon}(x)\mathrm{d}x\] \[=\frac{\varphi(a_{i})}{2}\int_{B_{\varepsilon}(a_{i})}\nabla\times ju_{\varepsilon}(x)\mathrm{d}x+O(\varepsilon^{\alpha}|\log(\varepsilon)|)\] \[=\frac{\varphi(a_{i})}{2}\int_{\partial B_{\varepsilon}(a_{i})}ju_{\varepsilon}\cdot\tau+O(\varepsilon^{\alpha}|\log(\varepsilon)|).\] Since \(|u_{\varepsilon}|=1\) on \(\partial B_{\varepsilon}(a_{i})\) then from the construction of \(u_{*}^{T}\) we have that \[\frac{\varphi(a_{i})}{2}\int_{\partial B_{\varepsilon}(a_{i})}ju_{\varepsilon}\cdot\tau=\pi d_{i}\varphi(a_{i}).\] A similar calculation holds at \(c_{jk}\) for \(j=0,1,\ldots,b\) and \(k=1,2,\ldots,M_{2,j}\) except there are additional error terms due to the boundary.

## Appendix

In this Appendix we provide a proof for a relation between the Euler characteristic of \(\Omega\) and the number of boundary components provided \(\partial\Omega\) is \(C^{2,1}\). In addition, we show that a general \(C^{2,1}\) domain, with possibly non-trivial homology, is obtained by excising simply connected sets from a larger simply connected set (i.e. general domains of this regularity are obtained by making holes in the domain). These results are needed to relate the possible degree configurations and the canonical harmonic map to the topological properties of the domain. In the next lemma we follow the ideas from [5] to show that, for a bounded, connected open set \(\Omega\subseteq\mathbb{R}^{2}\) with \(C^{2,1}\)-boundary, the first Betti number of \(\overline{\Omega}\) is one less than the number of connected components of its boundary. In addition, we demonstrate that the second Betti number of \(\overline{\Omega}\) is zero.

**Lemma A.1**.: Suppose \(\Omega\subseteq\mathbb{R}^{2}\) is an open, bounded, connected set with \(C^{2,1}\)-boundary. Then if we let \(b+1\), where \(b\geq 0\), denote the number of connected components of \(\partial\Omega\) then we have \[\beta_{1}(\overline{\Omega})=b,\qquad\beta_{2}(\overline{\Omega})=0\] where \(\beta_{i}(\overline{\Omega})\) denotes the \(i^{\mathrm{th}}\) Betti number of \(\overline{\Omega}\).

Proof.: Since \(\overline{\Omega}\) is a compact orientable 2-manifold with boundary then by Lefschetz Duality, see Theorem 3.43 of [13] with \(A=\partial\Omega\) and \(B=\varnothing\), we have \[H_{1}(\overline{\Omega})\cong H^{1}(\overline{\Omega},\partial\Omega).\] It is shown in Lemma 13 of [5] that \[H^{1}(\overline{\Omega},\partial\Omega)\cong\mathbb{Z}^{b}.\] Finally, by definition, we have that \[\beta_{1}(\overline{\Omega})=\operatorname{rank}(H_{1}(\overline{\Omega}))=\operatorname{rank}(\mathbb{Z}^{b})=b.\] By another application of Lefschetz Duality with \(A=\partial\Omega\) and \(B=\varnothing\) we have \[H_{2}(\overline{\Omega})\cong H^{0}(\overline{\Omega},\partial\Omega)\cong\operatorname{Hom}(H_{0}(\overline{\Omega},\partial\Omega),\mathbb{Z}).\] Notice that \(H_{0}(\overline{\Omega},\partial\Omega)=0\): since \(\overline{\Omega}\) is a connected two-dimensional manifold with boundary, and hence path-connected, any \(0\)-chain in \(\overline{\Omega}\) differs from a \(0\)-chain in \(\partial\Omega\) by a boundary.
We conclude that \[\operatorname{Hom}(H_{0}(\overline{\Omega},\partial\Omega),\mathbb{Z})\cong \operatorname{Hom}(0,\mathbb{Z})\cong 0.\] **Lemma A.2**.: Suppose \(\Omega\subseteq\mathbb{R}^{2}\) is an open, bounded, connected subset with \(C^{2,1}\)-boundary. Then \[H_{i}(\Omega)=H_{i}(\overline{\Omega}),\quad\text{ for }i=0,1,2. \tag{59}\] Proof.: This result follows from the fact that the maps \(f\colon\Omega\to\overline{\Omega}\) and \(g\colon\overline{\Omega}\to\Omega\) defined by \[f=\iota_{\Omega},\quad g(x)=\begin{cases}x,&\operatorname{dist}(x,\partial \Omega)>r_{1}\\ x+r_{1}\nu_{i}((\psi_{i,j}^{-1})^{1}(x)),&\text{if }x\in\overline{\mathcal{U}}_{i,j} \end{cases}\] form a homotopy equivalence for \(\Omega\) and \(\overline{\Omega}\). **Corollary A.3**.: _Suppose \(\Omega\subseteq\mathbb{R}^{2}\) is an open, bounded, connected subset with \(C^{2,1}\)-boundary. Then_ \[\chi_{Euler}(\Omega)=\chi_{Euler}(\overline{\Omega})=1-b. \tag{60}\] Proof.: This follows from Lemmas A.1 and A.2. **Lemma A.4**.: Suppose \(\Omega\subseteq\mathbb{R}^{2}\) is an open, bounded, connected set with \(C^{2,1}\)-boundary. Suppose also that \(\partial\Omega\) has \(b+1\), where \(b\geq 0\), connected components. Then we have \[H_{0}(\mathbb{R}^{2}\setminus\Omega)\cong\mathbb{Z}^{b+1},\quad\ H_{1}( \mathbb{R}^{2}\setminus\Omega)\cong\mathbb{Z} \tag{61}\] and hence \(\mathbb{R}^{2}\setminus\Omega\) consists of \(b+1\) connected components and all, except one, are simply connected. In addition, if \(\mathbb{R}^{2}\setminus\overline{\Omega}=B_{0}\sqcup\bigsqcup_{i=1}^{b}B_{i}\) where \(B_{0}\) is the unbounded component of \(\mathbb{R}^{2}\setminus\overline{\Omega}\) and \(B_{i}\) for \(1\leq i\leq b\) denote the bounded components then \[H_{1}\bigg{(}\Omega\cup\bigcup_{i=1}^{b}\overline{B}_{i}\bigg{)}=0 \tag{62}\] and hence \(\Omega\) is obtained from a simply connected set by removing \(b\) simply connected open sets with \(C^{2}\)-boundary from it. Proof.: By the Classification of One-Manifolds, see page 64 of [12], each connected component of \(\partial\Omega\) is diffeomorphic to \(\mathbb{S}^{1}\). Connectedness of \(\Omega\) combined with the Jordan-Schonenflies theorem, see Theorem 4 on page 72 of [20], gives (61). (62) follows from Lemma A.1.
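A worked example of these lemmas (an illustration, not from the source): for the annulus \(\Omega=\{1<|x|<2\}\), \(\partial\Omega\) has two components, so \(b=1\), \(\beta_{1}(\overline{\Omega})=1\), and \(\chi_{Euler}(\Omega)=1-b=0\), consistent with Lemma A.1 and Corollary A.3. The complement \(\mathbb{R}^{2}\setminus\Omega=\overline{B}_{1}(0)\sqcup\{|x|\geq 2\}\) has \(b+1=2\) components, so \(H_{0}(\mathbb{R}^{2}\setminus\Omega)\cong\mathbb{Z}^{2}\); the bounded component is simply connected while the unbounded one deformation retracts onto \(\{|x|=2\}\), giving \(H_{1}(\mathbb{R}^{2}\setminus\Omega)\cong\mathbb{Z}\) as in (61). Finally, \(\Omega\cup\overline{B}_{1}(0)=B_{2}(0)\) is simply connected, illustrating (62).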
2303.07354
MetaTroll: Few-shot Detection of State-Sponsored Trolls with Transformer Adapters
State-sponsored trolls are the main actors of influence campaigns on social media and automatic troll detection is important to combat misinformation at scale. Existing troll detection models are developed based on training data for known campaigns (e.g.\ the influence campaign by Russia's Internet Research Agency on the 2016 US Election), and they fall short when dealing with {\em novel} campaigns with new targets. We propose MetaTroll, a text-based troll detection model based on the meta-learning framework that enables high portability and parameter-efficient adaptation to new campaigns using only a handful of labelled samples for few-shot transfer. We introduce \textit{campaign-specific} transformer adapters to MetaTroll to ``memorise'' campaign-specific knowledge so as to tackle catastrophic forgetting, where a model ``forgets'' how to detect trolls from older campaigns due to continual adaptation. Our experiments demonstrate that MetaTroll substantially outperforms baselines and state-of-the-art few-shot text classification models. Lastly, we explore simple approaches to extend MetaTroll to multilingual and multimodal detection. Source code for MetaTroll is available at: https://github.com/ltian678/metatroll-code.git.
Lin Tian, Xiuzhen Zhang, Jey Han Lau
2023-03-13T06:39:38Z
http://arxiv.org/abs/2303.07354v1
# MetaTroll: Few-shot Detection of State-Sponsored Trolls with Transformer Adapters

###### Abstract.

State-sponsored trolls are the main actors of influence campaigns on social media and automatic troll detection is important to combat misinformation at scale. Existing troll detection models are developed based on training data for known campaigns (e.g. the influence campaign by Russia's Internet Research Agency on the 2016 US Election), and they fall short when dealing with _novel_ campaigns with new targets. We propose MetaTroll, a text-based troll detection model based on the meta-learning framework that enables high portability and parameter-efficient adaptation to new campaigns using only a handful of labelled samples for few-shot transfer. We introduce _campaign-specific_ transformer adapters to MetaTroll to "memorise" campaign-specific knowledge so as to tackle catastrophic forgetting, where a model "forgets" how to detect trolls from older campaigns due to continual adaptation. Our experiments demonstrate that MetaTroll substantially outperforms baselines and state-of-the-art few-shot text classification models. Lastly, we explore simple approaches to extend MetaTroll to multilingual and multimodal detection. Source code for MetaTroll is available at: [https://github.com/ltian678/metatroll-code.git](https://github.com/ltian678/metatroll-code.git)

troll detection, few-shot learning, adapter, continual learning, multilingual, multimodal
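To make the adapter mechanism concrete, below is a minimal PyTorch sketch of a campaign-specific bottleneck adapter of the kind the abstract describes. The class name, hidden/bottleneck sizes, and campaign keys are illustrative assumptions, not taken from the paper or its released code.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection leaves the frozen backbone representation
        # intact; only these small per-campaign weights are trained.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# One adapter per campaign: storing these small modules (instead of full
# fine-tuned models) is what lets old campaigns be "memorised" without
# catastrophic forgetting during continual adaptation.
adapters = nn.ModuleDict({
    "ira_2016": BottleneckAdapter(),      # hypothetical campaign key
    "new_campaign": BottleneckAdapter(),  # hypothetical campaign key
})
h = torch.randn(2, 16, 768)               # (batch, seq_len, hidden)
out = adapters["new_campaign"](h)         # route through the campaign's adapter
```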
* We extend MetaTroll to handle the images posted by users, either encoding them using pretrained image classifiers or converting them into text via optical character recognition.
* Large-scale experiments on a real-world dataset of 14 Twitter campaigns showed the superior performance of MetaTroll.

## 2. Related Work

Our related work comes from three areas: troll detection, meta-learning and few-shot text classification.

### Troll detection

Early studies of troll detection focus on extracting hand-engineered features from the textual contents of user posts (Han et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2019). Signals such as writing style, sentiment as well as emotions have been explored (Li et al., 2019). User online activities have also been used to detect trolls (Han et al., 2016; Li et al., 2019). Cheng et al. (2019) present a study on anti-social behavior in online discussion communities, focusing on users that will eventually be banned. More recently, approaches that combine user posts and online activities for troll detection have emerged (Han et al., 2016; Li et al., 2019). Addawood et al. (2019) identify 49 linguistic markers of deception and measure their use by troll accounts; they show that such deceptive language cues can help to accurately identify trolls. Im et al. (2017) propose a detection approach that relies on users' metadata, activity (e.g. number of shared links, retweets, mentions, etc.), and linguistic features to identify active trolls on Twitter. Shafiel and Dadlani (2019) show that Russian trolls aim to hijack the political conversation to create distrust among different groups in the community. Stewart and Dawson (2019) similarly demonstrate how trolls act to accentuate disagreement and sow division along divergent frames, and this is further validated by Dutt et al. (2019) in relation to Russian ads on Facebook. Badawy et al. (2019) study the effects of manipulation campaigns by analysing the accounts that endorsed trolls' activity on Twitter; they find that conservative-leaning users re-shared troll content 30 times more than liberal ones. Zannettou et al. (2019) compare troll behavior with that of other (random) Twitter accounts by recognising the differences in the content they spread, the evolution of their accounts, and the strategies they adopted to increase their impact. Atanasov et al. (2019) leverage both social structure and textual contents to learn user representations via graph embedding. We focus on how to detect trolls in emergent social activities with limited labelled data.

### Meta-learning

Meta-learning aims to extract transferable knowledge from a set of tasks in order to quickly adapt to a new task. These approaches can be divided into three categories: metric-based, optimisation-based, and model-based.
Metric-based meta-learning approaches (Swarth and Wojcik, 2019; Li et al., 2019) aim to learn an embedding function to encode the input and a metric function to measure the distance (e.g. cosine or Euclidean distance) between the query data and the support data. Optimisation-based approaches (Li et al., 2019) are designed to learn a good parameter initialisation that can quickly adapt to new tasks within a few gradient descent steps. Model-based approaches (Li et al., 2019) use neural networks to embed task information and predict test examples conditioned on the task information. Our approach combines both optimisation- and model-based ideas, in that we adopt MAML (Finn et al., 2017) to update model parameters, together with a novel architecture that involves adapters and adaptive classification layers to learn task information.

### Few-shot text classification

Text classification has shifted from task-specific training to pretrained language models (such as BERT) followed by task-specific fine-tuning (Li et al., 2019). Recently, the language model GPT-3 (Brown et al., 2020) has shown strong few-shot performance on many natural language processing tasks. Few-shot text classification refers to a text classification setting where there are novel unseen tasks (domains) with only a few labelled examples for training a classification model, and meta-learning has been shown to produce strong performance in this setting (Li et al., 2019). Bao et al. (2019) introduce distributional signatures, such as word frequency and information entropy, into a meta-learning framework. Gao et al. (2019) combine an attention mechanism with a prototypical network to solve the noisy few-shot relation classification task. Geng et al. (2019) leverage the dynamic routing algorithm in meta-learning to solve sentiment and intent text classification tasks for English and Chinese datasets. Yu et al. (2019) propose an adaptive metric-based method that can determine the best weighted combination of metrics automatically. In our problem setting, the focus is on adapting the classifier to a new task (i.e. campaign) under the meta-learning framework.

## 3. Problem Statement

We first explain some preliminaries of meta-learning and contextualise them under our problem setting, before describing our model in Section 4.

Let \(\mathcal{E}=\{e_{0},e_{1},e_{2},...,e_{l}\}\) be a set of campaigns. These campaigns are split into two partitions: \(\mathcal{E}_{train}\) for meta-training and \(\mathcal{E}_{test}\) for meta-testing (in our case, 6 campaigns for \(\mathcal{E}_{train}\) and 4 for \(\mathcal{E}_{test}\); see Table 1). For each campaign \(e\), we have labeled users \(\mathcal{U}\), where each user \(\mathbf{u}\) consists of a list of text posts and images, defined as \(\mathbf{u}=\{(c_{0},m_{0},p_{0}),...,(c_{n},m_{n},p_{n})\}\), where \(c\) refers to the textual content of a post, \(m\) a set of images, and \(p\) the timestamp of the post. Each user \(\mathbf{u}\) is associated with a ground-truth label \(y\in Y\), where \(Y\) represents the label set (troll or non-troll; binary). Our goal is to leverage the knowledge learned from past (meta-train) campaigns (\(\mathcal{E}_{train}\)) to adapt quickly to new (meta-test) campaigns (\(\mathcal{E}_{test}\)) via few-shot learning (e.g. 5 or 10 users for the new campaign).
In our problem setting, the notion of a task (from meta-learning) is the _binary troll detection task for a particular campaign_. Each task \(\mathcal{T}_{i}=(\mathcal{D}^{s},\mathcal{D}^{q})\) includes a support set \(\mathcal{D}^{s}\) with \(S\) data points and a query set \(\mathcal{D}^{q}\) with \(Q\) data points, where \(\mathcal{D}^{s}=\left\{\mathbf{u}_{i}^{s},y_{i}^{s}\right\}_{i=1}^{S}\), \(\mathcal{D}^{q}=\left\{\mathbf{u}_{i}^{q},y_{i}^{q}\right\}_{i=1}^{Q}\), \(\mathbf{u}_{i}\) is a user with a list of posts and images, and \(y_{i}\in\{0,1\}\) is the troll/non-troll label (as defined above). During meta-training, we iterate through the different campaigns in \(\mathcal{E}_{train}\) and sample support and query instances to train the model. Once that's done, to adapt to a new campaign in \(\mathcal{E}_{test}\), we follow a similar training process, but we update the model parameters using _only_ the support instances from the new campaign and reserve the query set for evaluation (i.e. the query set is not used for model updates).

## 4. Approach

We present the overall architecture of MetaTroll in Figure 1. MetaTroll has two modules: a BERT-based (Devlin et al., 2019) feature extractor and an adaptive linear classifier. We first explain at a high level how we train MetaTroll in three stages for the feature extractor and linear classifier (Algorithm 1).

In the first stage, we fine-tune an off-the-shelf BERT and update all its parameters (\(\Phi\)) using _all_ training samples from the meta-train campaigns \(\mathcal{E}_{train}\) (line 1 in Algorithm 1). At this stage we do not introduce adapters (Houlsby et al., 2019) to BERT, and it is optimised to do binary classification of trolls and non-trolls. The idea of this step is to train a feature extractor for the troll detection task (in other words, this step is standard fine-tuning to adapt BERT for binary troll detection).

In the second stage, we introduce adapters (Houlsby et al., 2019) to BERT and train the adapter parameters (\(\Psi\)) using model-agnostic meta-learning (MAML; Finn et al. (2017); lines 3-16 in Algorithm 1). Note that the adapter is _shared_ across all campaigns (i.e. the adapter is _not_ campaign-specific), and the idea of this stage is to learn a good initialisation for the adapter to do general troll detection; in the next stage, the learned adapter parameters will be used to initialise _campaign-specific_ adapters. During this stage, only the adapter parameters (\(\Psi\)) are updated, while the BERT parameters (\(\Phi\)) are frozen.

In the third and final stage, we introduce campaign-specific adapters and adaptive linear classifiers to MetaTroll, creating the full model. The campaign-specific adapters are initialised using the campaign-general adapter from stage 2. The idea of using campaign-specific adapters is to address catastrophic forgetting when MetaTroll is continuously updated for new campaigns that emerge over time in an application setting: the campaign-specific adapters solve the 'forgetting' problem because the knowledge for detecting older campaigns is stored in their adapters, which will not be overwritten as MetaTroll is continually updated.
The adaptive linear classifiers are also campaign-specific, and are designed to encourage MetaTroll to learn campaign representations that are distinct for different campaigns.2 We update the campaign-specific adapter (\(\Psi_{e}\)) and classifier parameters (\(\Omega_{e}\)) via meta-learning, similar to stage 2 training (lines 17-32 in Algorithm 1), noting that the BERT parameters (\(\Phi\)) are frozen in this stage.

Footnote 2: The adaptive linear classifier parameters are initialised randomly for a new campaign.

After MetaTroll is trained (over the 3 stages), to adapt it to trolls of a new campaign at test time, we follow the third-stage training to learn campaign-specific adapter (\(\Psi_{e}\)) and classifier (\(\Omega_{e}\)) parameters for the new campaign. Once adapted, MetaTroll can be used to classify users from this new campaign. We next describe the training stages and test inference in detail.

### Stage One Training

In the first stage, MetaTroll is a standard BERT fine-tuned with the standard cross-entropy loss to do binary classification of trolls vs. non-trolls (ignoring the campaigns). Given a user \(\mathbf{u}\) with posts \(\{c_{0},...,c_{n}\}\):

\[v=\text{BERT}([\text{CLS}]\oplus c_{0}\oplus...\oplus c_{n})\tag{1}\]
\[\hat{y}=\text{softmax}(Wv+b)\tag{2}\]

where \(v\) is the contextual embedding of [CLS] and \(\oplus\) is the concatenation operation.

### Stage Two Training

In the second stage, we add adapters to BERT (see Figure 2). Following Houlsby et al. (2019), we insert two adapter modules containing bottleneck layers into each transformer layer of BERT. Note that the adapter is _shared_ across all campaigns here, as the goal in this stage is to learn a good set of initial adapter parameter values that can be used to initialise campaign-specific adapters in the next stage. Correspondingly, Equation 1 is now modified to:

\[v=\text{AdapterBERT}([\text{CLS}]\oplus c_{0}\oplus...\oplus c_{n})\tag{3}\]

We meta-train the adapter parameters using MAML (lines 3-16 in Algorithm 1). We first sample a task \(\mathcal{T}_{i}\) for campaign \(e\) from \(\mathcal{E}_{train}\) to create the support set \(\mathcal{D}^{s}\) and query set \(\mathcal{D}^{q}\).

Figure 1. Overall architecture of MetaTroll.

Denoting \(M\) as the model, \(\Phi\) the BERT parameters and \(\Psi\) the adapter parameters, we next compute the inner loop to update \(\Psi\) as follows (Kang et al., 2018):

\[\mathcal{L}_{\mathcal{T}_{i}}(M_{\Phi,\Psi},\mathcal{D}^{s})=\frac{1}{|\mathcal{D}^{s}|}\sum_{\mathbf{u}_{i}\in\mathcal{D}^{s}}-\log p\left(y_{i}|\mathbf{u}_{i};\Phi,\Psi\right)\]
\[\Psi^{\prime}=\Psi-\alpha\nabla_{\Psi}\mathcal{L}_{\mathcal{T}_{i}}\left(M_{\Phi,\Psi},\mathcal{D}^{s}\right)\]

where \(\alpha\) is the inner-loop learning rate (itself updated in the outer loop) and \(\mathcal{L}_{\mathcal{T}_{i}}(M_{\Phi,\Psi},\mathcal{D}^{s})\) is the cross-entropy loss over the support set. Next we compute the cross-entropy loss on the query set, using the updated parameters:

\[\mathcal{L}_{\mathcal{T}_{i}}(M_{\Phi,\Psi^{\prime}},\mathcal{D}^{q})=\frac{1}{|\mathcal{D}^{q}|}\sum_{\mathbf{u}_{i}\in\mathcal{D}^{q}}-\log p\left(y_{i}|\mathbf{u}_{i};\Phi,\Psi^{\prime}\right)\]

This inner loop is carried out for multiple steps of gradient descent (using several tasks from the same campaign). Note that in this stage \(\Phi\) (the BERT parameters) is frozen. Once that's done, we update the adapter parameters \(\Psi\) and the inner-loop learning rate \(\alpha\):3

Footnote 3: In our implementation, \(\alpha\) is layer-specific, i.e. we have a separate learning rate for each adapter in different layers.
\[\Psi\leftarrow\Psi-\beta\nabla_{\Psi}\,\mathbb{E}_{\mathcal{T}_{i}\sim\mathcal{T}}\left[\mathcal{L}_{\mathcal{T}_{i}}\left(M_{\Phi,\Psi^{\prime}},\mathcal{D}^{q}\right)\right]\]
\[\alpha\leftarrow\alpha-\beta\nabla_{\alpha}\,\mathbb{E}_{\mathcal{T}_{i}\sim\mathcal{T}}\left[\mathcal{L}_{\mathcal{T}_{i}}\left(M_{\Phi,\Psi^{\prime}},\mathcal{D}^{q}\right)\right]\]

where \(\beta\) is the learning rate for the outer loop (set to \(10^{-5}\) in our experiments).

### Stage Three Training

In this stage, we introduce a campaign-specific adapter for each campaign, all initialised using the campaign-general adapter learned in the previous stage. Formally, Equation 3 is now updated to:

\[v=\text{AdapterBERT}_{e}([\text{CLS}]\oplus c_{0}\oplus\dots\oplus c_{n})\]

where \(e\) is the campaign. We introduce campaign-specific adapters to our model for two reasons: (1) they are more efficient to train and less vulnerable to overfitting (which is important in a few-shot learning setting), since they contain only a small number of parameters compared to the alternative where we have one BERT model for every campaign; and (2) they alleviate catastrophic forgetting in a continual learning setting, as each campaign has its own adapter.

Inspired by Requeima et al. (2018), we next introduce an adaptive linear classifier network that replaces \(W\) and \(b\) in the linear layer used for classification (Equation 2). Intuitively, this adaptive classifier works by first computing the aggregate troll and non-troll representations for each campaign, and then learning campaign-specific projections to classify trolls vs. non-trolls. Let \(\mathcal{D}_{0}^{s}\) and \(\mathcal{D}_{1}^{s}\) denote the subsets of the support set where the labels are troll (\(y=0\)) and non-troll (\(y=1\)) respectively; we compute \(W_{e}\), \(b_{e}\) and \(\hat{y}\) for troll campaign \(e\) as follows:

\[W_{e}^{0}=\frac{1}{\left|\mathcal{D}_{0}^{s}\right|}\sum_{\mathbf{u}_{i}\in\mathcal{D}_{0}^{s}}v\qquad b_{e}^{0}=\frac{1}{\left|\mathcal{D}_{0}^{s}\right|}\sum_{\mathbf{u}_{i}\in\mathcal{D}_{0}^{s}}v\]
\[W_{e}^{1}=\frac{1}{\left|\mathcal{D}_{1}^{s}\right|}\sum_{\mathbf{u}_{i}\in\mathcal{D}_{1}^{s}}v\qquad b_{e}^{1}=\frac{1}{\left|\mathcal{D}_{1}^{s}\right|}\sum_{\mathbf{u}_{i}\in\mathcal{D}_{1}^{s}}v\]
\[\hat{y}=\operatorname{softmax}(W_{e}v+b_{e})\]

where \(W_{e}^{i}\) denotes the column of \(W_{e}\) corresponding to class \(i\). In other words, MetaTroll classifies a user based on whether its representation (\(v\)) is closer to the (average) troll or non-troll representation. The campaign-specific adaptive linear classifier parameters \(\Omega_{e}\) and the adapter parameters \(\Psi_{e}\) are trained using MAML, just like stage 2 training (lines 17-31 in Algorithm 1).

### Meta-testing

After MetaTroll is trained, to adapt it to a new campaign \(e\), we follow the process of the third-stage training. To simulate few-shot learning, we sample only a small number of instances for the support and query sets (e.g. 5 each), use _only_ the support set for updating the adapter (\(\Psi_{e}\)) and classifier parameters (\(\Omega_{e}\)) (lines 25-26 in Algorithm 1), and do not run the outer loop (lines 30-31). Here the query set is used only for computing performance (i.e. in line 27 we compute accuracy instead of loss over the query set).
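To make the adaptation machinery above concrete, the following is a minimal sketch (our illustration, not the authors' released code): it assumes a functional `model(x, params)` interface that runs the frozen BERT backbone with the given adapter tensors and returns logits, and the helper names are ours.

```python
import torch
import torch.nn.functional as F

def inner_adapt(model, psi, support_x, support_y, alpha, steps=5):
    """Inner loop: Psi' = Psi - alpha * grad_Psi L(support), repeated `steps` times."""
    fast = list(psi)
    for _ in range(steps):
        loss = F.cross_entropy(model(support_x, fast), support_y)
        grads = torch.autograd.grad(loss, fast, create_graph=True)
        fast = [p - alpha * g for p, g in zip(fast, grads)]
    return fast

def outer_step(model, psi, alpha, tasks, meta_opt):
    """Outer loop: update Psi (and alpha) from the query losses of sampled tasks.
    `meta_opt` is an optimizer over psi + [alpha] with learning rate beta."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        fast = inner_adapt(model, psi, support_x, support_y, alpha)
        meta_loss = meta_loss + F.cross_entropy(model(query_x, fast), query_y)
    (meta_loss / len(tasks)).backward()  # gradients flow back to psi and alpha
    meta_opt.step()

def adaptive_classifier(v_support, y_support, v_query):
    """Stage-3 classifier: each column of W_e is a support-set class mean."""
    w = torch.stack([v_support[y_support == c].mean(dim=0) for c in (0, 1)], dim=1)
    return (v_query @ w).softmax(dim=-1)  # bias b_e omitted for brevity; the paper
                                          # derives it from the same support means
```

At meta-test time, only `inner_adapt` would be run on the 5-10 support users of the new campaign; the outer step is skipped, mirroring the Meta-testing paragraph above.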
## 5. Experiments and Results

### Datasets and models

We use the information operations dataset published by Twitter for our experiments.4 This dataset contains different groups of users banned by Twitter since October 2018 for engaging in state-sponsored information operations, and each group represents a _campaign_ in our work. For example, the "Iran-2018-Palestine" campaign refers to trolls sponsored by Iran for an information campaign targeting Palestine in 2018.5 To clarify, these campaigns are defined by Twitter when they release the data, and each campaign is associated with a blogpost that explains the information operation.

Footnote 4: https://transparency.twitter.com/en/reports/information-operations.html

Footnote 5: https://blog.twitter.com/en_us/topics/company/2018/enabling-further-research-of-information-operations-on-twitter

For each campaign, we filter users and keep only those who have posted a tweet within the 6-month event period ("Event Time" in Table 1), to remove users who are inactive during the event.6 For each user, we also filter their tweets to keep only their most recent 20 posts that have a timestamp within the event period.

Footnote 6: The event period is determined as follows: (1) the end date is that of the last post in the campaign; and (2) the start date is 6 months before the end date.

To create the non-troll users, we combine two sources: (1) "Random": random accounts sampled by generating random numeric user IDs and validating their existence, following (Bach et al., 2018); and (2) "Hashtag": users whose posts contain popular hashtags used by a campaign, where popular hashtags are defined as hashtags that collectively cover 75% of the trolls' posts. The reason we have two types of non-troll users is that if we only sampled random users as non-trolls, the post content of non-trolls would be _topically_ very different from that of the trolls, and the detection task would degenerate into a topic detection task. The non-troll users sampled using the "Hashtag" approach are designed to circumvent this and make the detection task more challenging.

Table 1 presents some statistics for trolls and non-trolls in the different campaigns, where 6 are used for meta-training and 4 for meta-testing. Note that the non-trolls of a particular campaign are users sampled with the "Hashtag" approach, and the last row corresponds to non-troll users sampled using the "Random" approach. For these campaigns, at least 80% of the trolls' posts are in English, and so they are used for the monolingual (English) experiments in the paper.7

Footnote 7: For non-troll users, we only keep English tweets (based on the predicted language given in the metadata).

The trolls and non-trolls in Table 1 represent the pool of users from which we draw to construct the final training/testing data. In all of our experiments, we keep the ratio of trolls to non-trolls at 50/50 through sampling, and when sampling non-troll users, the ratio between the two sources ("Random" and "Hashtag") is also 50/50. As an example, "Uganda-2021-NRM" has 334 troll users; we therefore sample 334 non-troll users, where 167 are from "Random" and another 167 from "Hashtag".

Figure 2. Overall architecture of Adapter-BERT. The left figure illustrates how two adapter modules are added to a transformer layer; the right shows the components in the adapter.
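The "popular hashtags" criterion above admits a simple greedy implementation; the sketch below is one plausible reading (our illustration, not the authors' exact script) that selects the most frequent hashtags until 75% of the trolls' posts contain at least one of them.

```python
from collections import Counter

def popular_hashtags(posts, coverage=0.75):
    """posts: one set of hashtags per troll post; returns the selected hashtag set."""
    counts = Counter(tag for post in posts for tag in post)
    selected = set()
    for tag, _ in counts.most_common():
        selected.add(tag)
        covered = sum(1 for post in posts if post & selected)  # posts hit so far
        if covered >= coverage * len(posts):
            return selected
    return selected  # fewer hashtags available than needed for the target coverage
```

"Hashtag" non-troll candidates are then any users whose posts contain a hashtag from `selected`.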
We compare our MetaTroll model against the following baselines, which include state-of-the-art meta-learning methods and few-shot text classification models:

* **BERT** [(11)]: BERT fine-tuned using the support set of the meta-test data.8
* **KNN** [(12)]: a K-nearest-neighbour classifier with off-the-shelf BERT as the feature extractor.9
* **AdBERT** [(27)]: BERT that fine-tunes an adapter for each campaign.10
* **GPT3** [(7)]: a very large pretrained model adapted to our tasks using prompt-based learning (Zhao et al., 2019).11
* **MAML** [(14)]: BERT trained with the MAML algorithm for few-shot learning.12
* **CNP** [(16)]: a model-based meta-learning framework that consists of a shared encoder, aggregator and decoder.13
* **ProtoNet** [(33)]: a deep metric-based approach that uses sample averages as class prototypes, with distances calculated via Euclidean distance.14
* **Induct** [(17)]: a few-shot classification model that uses a dynamic routing algorithm to learn a class-wise representation.
* **HATT** [(15)]: a metric-based meta-learning classification model with an attention mechanism.
* **DS** [(5)]: a few-shot text classification model that uses distributional signatures, such as word frequency and information entropy, for training.

Footnote 8: https://huggingface.co/docs/transformers/model_doc/bert

Footnote 9: K is selected from [5, 10].

Footnote 10: https://adapterhub.ml/

Footnote 11: https://gpt3demo.com/apps/openai-gpt-3-playground. We use "text-davinci-002" in our experiments. The prompts are a small set of instances with their labels, added to the beginning of the input.

Footnote 12: https://github.com/tristandeleu/pytorch-meta/tree/master/examples/maml

Footnote 13: https://github.com/deepmind/neural-processes

Footnote 14: https://github.com/orobix/Prototypical-Networks-for-Few-shot-Learning-PyTorch

### Results

We first present English troll detection performance; in Sections 5.4 and 5.5 we extend MetaTroll to work with non-English data and images. As we focus on English here, non-English posts are discarded (this reduces the data size by only a negligible amount, as most of the data is in English, as explained in Section 5.1). All reported figures are the average accuracy over 5 runs with different random seeds.

Table 2 presents the 5-shot and 10-shot results for the 4 meta-test campaigns. MetaTroll is the best model, achieving an average of 76.35% accuracy over all campaigns. That said, some of the few-shot text classifiers (e.g. Induct: 73.57%) are not far behind. Most models benefit only marginally from seeing 5 more examples when going from 5- to 10-shot, with the exception of GPT3, where we see a substantial performance boost (an average gain of 14.64%).
### Continual learning performance

In continual learning (Zhao et al., 2019), new tasks appear over time, and the goal is to adapt the model to the new tasks without forgetting the previous ones. This is a more realistic setting for a troll detection system, as it should continually adapt to new campaigns that appear over time. But in this setting it will suffer from catastrophic forgetting (Srivastava et al., 2017; Wang et al., 2018), where, after adapting to newer campaigns, its performance at classifying older campaigns degrades. To simulate this continual learning setting, we evaluate the troll detection models on a _past campaign_ after they have been adapted to a number of campaigns in sequence. For example, a system is first adapted to GRU-2020-NATO (G), and then to IRA-2020-Russia (I), Uganda-2021-NRM (U) and China-2021-Xinjiang (C) in sequence (denoted as G→I→U→C). We then test the system using trolls from the past campaigns, i.e. G, I and U.

One challenge for MetaTroll under this continual learning evaluation is that at test time it needs to know which adapter to use -- information that most other systems do not require, as they do not have campaign-specific parameters (exceptions: AdBERT and GPT-3).15 We experiment with a simple approach to solve this: have MetaTroll classify a user using all of its adapters, and select the outcome that has the highest probability.16

Footnote 15: GPT3 technically does not have campaign-specific parameters, but it needs to be given campaign-specific prompts, and so requires the campaign label.

Footnote 16: We do the same for AdBERT and GPT3.

We present the (5-shot) troll classification results under this continual learning setting in Table 3. AdBERT and GPT-3 suffer little catastrophic forgetting, as they have campaign-specific parameters or prompts (they are unaffected by continual learning, as their base model is unchanged), although their performance is only marginally above random chance in the first place. MetaTroll is the clear winner here, with only \(<\)5% accuracy degradation over time.17 In contrast, the meta-learning methods and few-shot classifiers suffer from catastrophic forgetting, and their performance on older campaigns drops substantially (e.g. MAML's performance on GRU-2020-NATO drops from 68.32 to 65.15 and 57.62 after several updates).

Footnote 17: Note that the performance of these systems on older campaigns still degrades slightly over more adaptations, as there are more campaign-specific adapters or prompts to select from.

An unintended benefit of MetaTroll classifying trolls using multiple adapters is that it is effectively doing multi-class rather than binary classification (with over 85% accuracy for campaign classification; Table 4). This means that not only is MetaTroll able to retain its performance at classifying trolls vs. non-trolls from different campaigns over time, it can also predict which campaign an instance belongs to -- an arguably more difficult task.

### Multilingual performance

We extend MetaTroll to a multilingual setting. To encode multilingual input text, we replace BERT with XLM-R (Conneau et al., 2020). For comparison, we include the baselines BERT, KNN and AdBERT, and the meta-learning methods ProtoNet and MAML, as their base model can be replaced with XLM-R.
We exclude the few-shot classifiers, as they are designed for English and cannot be trivially adapted to other languages.

| Model | G (G) | G→I (G) | G→I→U (G) | G→I→U (I) | G→I→U→C (G) | G→I→U→C (I) | G→I→U→C (U) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | 50.41 | 45.16 | 48.02 | 44.80 | 42.85 | 43.25 | 50.00 |
| KNN | 46.10 | 45.25 | 43.12 | 49.99 | 43.01 | 45.50 | 45.33 |
| AdBERT | 65.05 | 62.41 | 54.59 | 52.56 | 50.54 | 53.74 | 49.95 |
| GPT3 | 54.25 | 51.92 | 50.33 | 58.88 | 48.93 | 56.87 | 50.06 |
| MAML | 70.13 | 68.32 | 65.15 | 56.55 | 57.62 | 50.04 | 54.87 |
| CNP | 69.88 | 71.55 | 68.72 | 56.75 | 60.57 | 56.95 | 63.21 |
| ProtoNet | 60.43 | 56.56 | 61.05 | 61.99 | 53.47 | 52.83 | 60.05 |
| Induct | 71.11 | 69.50 | 62.82 | 66.13 | 58.43 | 63.52 | 64.83 |
| HATT | 63.02 | 60.50 | 55.62 | 70.18 | 50.25 | 50.11 | 55.43 |
| DS | 62.69 | 60.05 | 58.85 | 61.50 | 50.65 | 60.12 | 61.85 |
| MetaTroll | 72.74 | **73.45** | **72.18** | **70.81** | **71.15** | **69.74** | **68.75** |

Table 3. Catastrophic forgetting results under continual learning. Column headers read "adaptation sequence (campaign tested)". "G" = GRU-2020-NATO, "I" = IRA-2020-Russia, "U" = Uganda-2021-NRM, "C" = China-2021-Xinjiang.

| Model | G→I (I) | G→I→U (I) | G→I→U (U) | G→I→U→C (I) | G→I→U→C (U) | G→I→U→C (C) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT3 | 74.86 | 79.25 | 82.86 | 81.17 | 79.38 | 74.62 |
| MetaTroll | 85.57 | 85.15 | 87.50 | 83.61 | 86.92 | 86.11 |

Table 4. Campaign classification results under continual learning.

In terms of data, we expand the meta-test campaigns by adding four new campaigns (Thailand-2020-RTA, Mexico-2021-Election, Venezuela-2021-Gov, and China-2021-Changyu) where the predominant language is not English, and present their statistics in Table 5. For the 10 English campaigns (Table 1), we restore the previously discarded non-English posts and include them for meta-training and meta-testing. Results are presented in Table 6. Generally, we see that all models' performance degrades somewhat when the feature extractor is replaced with a multilingual model, although MetaTroll manages to keep its accuracy around 60%. Interestingly, China-2021-Changyu appears to be the most difficult campaign; we suspect this may be due to its diverse set of languages (38% French, 12.3% English, 12.1% Simplified Chinese).

### Multimodal performance

Next, we consider incorporating the images posted by users, as images are an effective communication device (e.g. memes).
Note that we only present results for different variants of MetaTroll here, as we have demonstrated that it is the most competitive detection system. To process images, we use a pre-trained ResNet18 (He et al., 2017) as an off-the-shelf tool to extract image embeddings. We also explore using a multilingual OCR model to extract text information from images (which is useful for processing memes).18 As we have multiple images for each user, we aggregate the ResNet18 image embeddings via max-pooling and concatenate the max-pooled vector with the text representation (\(v\) in Equation 3). For the texts extracted by OCR, we concatenate them into one long string and process it with another AdapterBERT (Equation 3; the parameters are not shared), and similarly concatenate the final CLS representation with the text representation.

Footnote 18: https://github.com/JaidedAI/EasyOCR

Results are in Table 7. The first variant is a baseline where we concatenate a numeric feature denoting the number of images posted by the user to the text representation. Interestingly, even with this baseline approach we see a small improvement, indicating that trolls use more images (e.g. the average number of images used by trolls is 20 vs. 5 for non-trolls in GRU-2020-NATO). Incorporating either the ResNet or the OCR encodings boosts performance further (with OCR being marginally more beneficial), and adding both produces the best performance.

## 6. Conclusion

We propose MetaTroll, a few-shot troll detection model with campaign-specific adapters that tackles catastrophic forgetting in a continual learning setting. Experimental results show that MetaTroll outperforms existing state-of-the-art meta-learning and few-shot text classification models, and that it can be extended to handle multilingual and multimodal input.

## Acknowledgements

This research is supported in part by the Australian Research Council Discovery Project DP200101441. Lin Tian is supported by the RMIT University Vice-Chancellor PhD Scholarship (VCPS).
| Campaign | Event Time | Language | Type | #users | Top-3 Hashtags |
| --- | --- | --- | --- | --- | --- |
| Thailand-2020-RTA | Aug 2019 - Feb 2020 | th, en | Troll / Non-troll | 158 | #army #parade |
| Mexico-2021-Election | Sep 2020 - Mar 2021 | es, en | Troll / Non-troll | 190 | #Elections2021 |
| Venezuela-2021-Gov | Nov 2020 - May 2021 | es, en | Troll / Non-troll | 257 | #Venezuela |
| China-2021-Changyu | Feb 2019 - Aug 2019 | fr, en | Troll / Non-troll | 280 | #MBR #MorningBox |
| Random | Varies | es, fr, th, en | Non-troll | 5,000 | #news |

Table 5. Statistics of the multilingual meta-test data. Languages are given as ISO codes.
2306.03934
Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling
Purpose: Interpreting chest radiographs (CXR) remains challenging due to the ambiguity of overlapping structures such as the lungs, heart, and bones. To address this issue, we propose a novel method for extracting fine-grained anatomical structures in CXR using pseudo-labeling of three-dimensional computed tomography (CT) scans. Methods: We created a large-scale dataset of 10,021 thoracic CTs with 157 labels and applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels. These labels were projected onto a two-dimensional plane, similar to the CXR, allowing the training of detailed semantic segmentation models for CXR without any manual annotation effort. Results: Our resulting segmentation models demonstrated remarkable performance on CXR, with a high average model-annotator agreement between two radiologists with mIoU scores of 0.93 and 0.85 for frontal and lateral anatomy, while inter-annotator agreement remained at 0.95 and 0.83 mIoU. Our anatomical segmentations allowed for the accurate extraction of relevant explainable medical features such as the cardio-thoracic-ratio. Conclusion: Our method of volumetric pseudo-labeling paired with CT projection offers a promising approach for detailed anatomical segmentation of CXR with a high agreement with human annotators. This technique may have important clinical implications, particularly in the analysis of various thoracic pathologies.
Constantin Seibold, Alexander Jaus, Matthias A. Fink, Moon Kim, Simon Reiß, Ken Herrmann, Jens Kleesiek, Rainer Stiefelhagen
2023-06-06T18:01:08Z
http://arxiv.org/abs/2306.03934v1
# Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling

###### Abstract

**Purpose:** Interpreting chest radiographs (CXR) remains challenging due to the ambiguity of overlapping structures such as the lungs, heart, and bones. To address this issue, we propose a novel method for extracting fine-grained anatomical structures in CXR using pseudo-labeling of three-dimensional computed tomography (CT) scans. **Methods:** We created a large-scale dataset of 10,021 thoracic CTs with 157 labels and applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels. These labels were projected onto a two-dimensional plane, similar to the CXR, allowing the training of detailed semantic segmentation models for CXR without any manual annotation effort. **Results:** Our resulting segmentation models demonstrated remarkable performance on CXR, with a high average model-annotator agreement between two radiologists, with mIoU scores of 0.93 and 0.85 for frontal and lateral anatomy, while inter-annotator agreement remained at 0.95 and 0.83 mIoU. Our anatomical segmentations allowed for the accurate extraction of relevant explainable medical features such as the cardio-thoracic ratio. **Conclusion:** Our method of volumetric pseudo-labeling paired with CT projection offers a promising approach for detailed anatomical segmentation of CXR with a high agreement with human annotators. This technique may have important clinical implications, particularly in the analysis of various thoracic pathologies.

**Keywords:** Anatomy Segmentation, Chest Radiograph, Computed Tomography Projection, Pseudo-Labels

Footnote †: † denotes equal contribution.

We propose a method to collect large amounts of annotations for CXR without manual labeling and demonstrate how these can produce networks capable of identifying thoracic anatomy in a fine-grained manner. Since manual annotation is not feasible, we draw inspiration from three related facts: First, CT scans are aggregated multi-view 2D radiographs and thus share a very similar origin signal [11]. Second, CTs offer immense advantages over CXRs in identifying structures due to their volumetric nature [12]. Finally, the consistent body structure of a patient across modalities implies that semantics can be translated from CT to CXR. Looking at other imaging domains, we find most datasets focus on a specific subset relevant to their specialty, e.g. segmentations of thoracic organs at risk [13] or the spine [14] in CT volumes. These datasets, often containing fewer than 100 patients, allow the development of robust networks that can be applied to new data [15]. We hypothesize that by combining different volumetric annotations, we can obtain a holistic view of the human body that can be transferred onto a large CT corpus. To facilitate network training for CXR segmentation, the annotations and their associated images have to be projected onto a two-dimensional (2D) plane. A simple concept is ray summation [16], which allows one to quickly gain insight into a patient without requiring a thorough look through the CT. Matsubara _et al._ alter this approach via a separate bone window and a non-linearity [17]. Other concurrent work also approaches CT projection via generative models [18].
We adapt Matsubara _et al._'s method via histogram equalization to generate more realistic projections without falling into the potential pitfalls of deep unpaired generative models, such as mode collapse or the prediction of unfitting patterns [19, 20]. Following this process, we build the Projected Anatomy for X-Ray (PAX-Ray++) dataset [21], containing over two million annotated instances across 15 thousand images for 157 anatomical classes in both lateral and frontal thoracic views. PAX-Ray++ allows us to train data-intensive 2D chest X-Ray anatomy segmentation (CXAS) models that accurately delineate the fine-grained thoracic anatomy. We evaluate these models in several ways. We use a hand-selected set of projected X-Rays as a validation set to select hyperparameters such as the model architecture. The best model on the validation set is used for comparison with two radiologists. In a direct comparison with human annotations of 157 anatomical classes in 30 frontal and lateral images, we show a high model-annotator agreement (MAA) with the two radiologists, with average Intersection-over-Union (IoU) scores of 0.93 and 0.96 for the frontal view (anterior-posterior [AP] / posterior-anterior [PA]) at an inter-annotator agreement (IAA) of 0.93. For the lateral view, the MAA comes to 0.83 and 0.87, with an IAA of 0.83. We also show correlations of automatically extracted biomarkers with radiological findings such as scoliosis (\(N=2054/231007\)) or cardiomegaly (\(N=41633/231007\)) on the MIMIC-CXR dataset [22], with t-statistics of 35.30 (_p-value_ \(<0.0001\)) and 139.71 (_p-value_ \(<0.0001\)), respectively. We make our models available on this project page.

## Results

### Projected Chest X-Ray Anatomy Dataset Generation via Volumetric Pseudo-Labeling

Gathering vast amounts of pixel-wise annotated image data is tedious and difficult, especially in the medical domain, where we require expert annotators with a certain amount of fundamental knowledge. To combat this, we propose a general workflow for an automated large-scale dataset generation scheme, which we visualize in Fig. 1.

Figure 1: A flowchart describing the PAX-Ray++ dataset generation process. (a) We collect publicly available datasets containing different anatomical structures. (b) We train an ensemble of nnUNets for each expert domain and infer them on a shared dataset. (c) We merge the 3D predictions, apply anatomical priors and retrain. (d) We infer the final nnUNet on 10K chest CTs. (e) We apply a CT and label projection to generate a chest X-Ray dataset, to which we apply anatomical-prior-based postprocessing to collect the final dataset.

First (Fig. 1a), we collect several datasets [13, 14, 23, 24, 25, 26, 27, 28] which contain anatomical structures visible in a chest X-Ray; these come down to individual _bones_ such as ribs and vertebrae, _abdominal_ organs such as the liver, _cardiovascular_ structures such as the aorta or the heart, and _respiratory_ structures like lung lobes or the trachea. Subsequently (Fig. 1b), we train an ensemble of nnUNets [15] on each CT dataset. This architecture has proven itself as a reliable out-of-the-box segmentation model able to achieve strong performance even with small amounts of data, and the lack of manual hyperparameter setting allows us to easily apply it to different datasets. After training the nnUNet ensembles on the individual datasets, we infer these expert models on a subset of 560 whole-body volumes from AutoPET [29], a large CT dataset for tumor segmentation covering different body regions.
We chose whole-body CTs because they do not impair the predictive ability of the models for lower body regions such as the abdomen or the lumbar spine. We merge classes of overlapping semantics and discard unnecessary classes or those of insufficient quality by manual assessment, such as the humerus. We further apply postprocessing techniques in the form of morphological operations and persistent anatomical priors, such as keeping only the largest connected component for organs that should consist of a single one. We then retrain the nnUNet on the resulting dataset. The resulting model allows us to generate masks for the complete label set with a single prediction, which is significant, as inference and postprocessing for an expert model can span up to an hour per volume. Thus, we mitigate a bottleneck in our workflow by compressing this knowledge into a single model.

We collect several large-scale chest CT datasets [30, 31, 32, 33, 34] ranging in size from 176 to 7,302 volumes. The combined set for inference contains 10,021 CT scans. The final nnUNet ensemble trained on AutoPET pseudo-labels generates anatomical predictions for all provided scans. Subsequently, we automatically filter these volumetric pseudo-labels based on their plausibility, through checks such as the deviation of the mask center position or volume from the class mean, and we discard scans that do not follow these rules. We add classes like lung zones, the mediastinum, or the aortic arch based on general rules following expert knowledge, as they provide helpful insights for downstream tasks. Afterward, we project the images and masks to a frontal and a lateral view and resize them to a uniform size. For the CT projection, we adapt Matsubara _et al._ [17] by removing the imaging table and applying histogram equalization [35] to generate more visually pleasing results. The masks are then post-processed using standard morphological operations. Following this procedure, we generate the PAX-Ray++ dataset with a total of 14,754 images, 157 densely labeled anatomy classes, and more than two million instance mask annotations, the largest dataset for CXR segmentation to date. We discuss the details of our dataset in the following, and present details of the used datasets and the performance of the nnUNet on validation sets in Tab. 1.

### Evaluation of CXAS Performance

For all our experiments on chest X-Ray anatomy segmentation (CXAS), we use a UNet [36] with a pre-trained ResNet-50 backbone [37]. We train the model on the PAX-Ray++ dataset and give details of the training choices in the Methods.

### Comparison of CXAS with CT Projections

We randomly select a validation set of 5 CTs of different patients from each source CT dataset. We manually inspect the associated frontal and lateral images for visual correctness. The resulting validation set consists of a total of 60 images with 157 labels. We use the validation set for the development of the segmentation model and the setting of the hyperparameters.

Figure 2: (a) Qualitative results of our CXAS model on frontal projections, highlighting masks for the respiratory system, bones, vascular system and abdomen. (b) Qualitative results of our CXAS model on lateral projections. (c) Performance of our CXAS model in terms of mIoU, mDICE and mean Hausdorff distance for frontal (top row) and lateral (bottom row) images.
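As a concrete illustration of the projection step described above, the sketch below (ours, not the authors' implementation; the axis convention, intensity clipping and use of scikit-image are assumptions) ray-sums the CT intensities and histogram-equalizes the result, and projects the volumetric label masks with a max over the same axis:

```python
import numpy as np
from skimage import exposure

def project_ct(volume, axis=1):
    """volume: 3D array of Hounsfield units -> 2D pseudo-radiograph in [0, 1]."""
    proj = volume.clip(-1024, 3000).mean(axis=axis)     # ray summation
    proj = (proj - proj.min()) / (np.ptp(proj) + 1e-8)  # normalize to [0, 1]
    return exposure.equalize_hist(proj)                 # histogram equalization

def project_labels(masks, axis=1):
    """masks: (C, ...) stack of binary volumes -> stack of 2D masks.
    A pixel carries a label if any voxel along its ray carries it."""
    return masks.max(axis=axis + 1)
```

Morphological post-processing of the projected masks, as described above, would then follow this step.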
We evaluate our model via the mean Intersection-over-Union, DICE, and Hausdorff distance and show the performance of the final model. We show qualitative results for frontal projections in Fig. 2. We show a subset of classes belonging to the supercategories lungs, vascular systems, bones, and abdomen/digestive system. The predictions show minor deviations at the boundaries of the individual classes of the respiratory and vasculature system, while some inaccuracies become visible in the abdominal area. The qualitative results for the lateral projections are displayed in Fig. 2. Akin to the frontal view, the predictions show smoother borders but align with the ground truth. Apart from this, the segmentations provide matching insights on the thoracic anatomy with a slight deviation from the ground truth for both frontal and lateral views. Fig. 2. shows quantitative segmentation results for frontal (top row) and lateral (bottom row) views. We display the class performances in the form of IoU (left), DICE (center), and Hausdorff distance (right) for each sample as a scatter plot with the mean performance for the classes shown by a line plot. Generally, we see performances for standard spinal classes, such as the thoracic vertebrae with average IoU-scores above 80%, while the average performance of rare vertebrae of the dataset belonging to the cervical and lumbar spine can drop down to 40%. In the frontal view, there exists more variance in thoracic vertebrae segmentation performance compared to the lateral view. Bone structures such as the sternum, clavicles, and scapula achieve IoUs in the mean from 85% to 95%. For ribs, we can see a noticeable performance drop for the anterior parts of the lower ribs independent of the side. The lower anterior ribs typically do not contain a large area, making them difficult to segment. This behavior is mirrored in the lateral view across the metrics. Abdominal classes can vary in segmentation quality as they occur in a nearly homogenous region. For example, while the liver or stomach are typically well-segmented, the duodenum and kidneys are more complex. Heart and Lung related classes show near-perfect segmentations with scores above 90% IoU. Breast tissue segmentation in comparison only achieves a mean of 70% mIoU. It can be noted that classes in the lateral view tend to have slightly better scores than their frontal counterparts. ### Comparison of CXAS to Human Anatomy Annotations on Real World Chest X-Rays To test the real-world applicability of PAX-Ray++ for developing anatomy segmentation models, we prepare a test set of 30 frontal and lateral chest X-Rays each from the PadChest dataset [38] for a total fo 60 images. First, we use our best segmentation model based on the validation performance to generate preliminary anatomical segmentations. Then, we tasked two medical experts to independently examine and correct these densely annotated X-Rays using the annotation tool CVAT [39]. Even in this simplified annotation process, the expert denoted that a single image correction would take roughly 1.5 hours on average due to the sheer amount of considered classes. For the inter-annotator agreement (IAA), we calculate the three prior metrics between the masks from both annotators due to their symmetrical properties. For the model-annotator agreement (MAA), we compute the metrics between the individual annotator and the model predictions and average the results class-wise. We display qualitative results in Fig. 3a. 
The annotators tend to be content with most annotations. There are edits at the extensions of the esophagus, trachea, and aorta, and corrections of the lower ribs. There is little consensus for classes in the abdominal area, such as the stomach, as seen on the right of Fig. 3a. In contrast, the annotators often align for bone classes.

Figure 3: (a) Qualitative results of our CXAS model on frontal CXR alongside two expert manual annotations, highlighting masks for the respiratory system, bones, vascular system and abdomen. (b) Performance of our CXAS model in terms of mIoU, mDICE and mean Hausdorff distance for frontal (top row) and lateral (bottom row) images.

We show the quantitative segmentation performance against the medical expert annotations for all classes in Fig. 3b. The dashed lines represent the IAA, whereas the dotted lines display the average MAA. As the task for the human annotators was not to annotate from scratch but to correct wrong pixel-wise predictions, we see a high MAA for most classes. In the frontal view, the most significant disagreements exist for rare bone structures such as L3 and C4, the lower ribs, the mediastinal subdivisions, and the breast tissue. The IAA and MAA are highly similar, with mIoUs of 95% and 94%, respectively. The Hausdorff distance for the MAA is slightly lower than for the IAA, indicating slight differences in boundary annotations while maintaining a considerable overlap with the other annotator. In the lateral view, the concrete delineation of rib structures appears ambiguous, leading to lower MAA and IAA, with a greater IAA than MAA for all metrics in this supercategory. Overall, there is less agreement between the medical experts in the lateral view, leading to a better average MAA than IAA across all metrics (i.e. 85% vs. 83% mIoU). While the experts propose changes to the original predictions, these often do not overlap. In the lateral view, rib segmentations can become quite hard to interpret: while both annotators disagree with the rib segmentations, they do not always agree on how they should look. Similarly to the frontal view, tube-like structures like the esophagus are extended, as they can appear fragmented at times.

### Applications of Anatomy-based Feature Extraction

#### Automated Identification of the Cardio-Thoracic Ratio via CXAS

We use all PA CXRs from the MIMIC dataset together with the publicly available labels automatically pre-extracted from the medical reports. We extract 41,633 images presenting cardiomegaly and 189,374 images stating either no cardiomegaly or giving no such indication. We run our CXAS model on all images and extract the cardio-thoracic ratio (CTR) from the lung and heart segmentations. For the calculation of the CTR, we follow Caffey et al. [40]. To assess the difference in this value between pathological and non-pathological cases, we conduct a t-test. Furthermore, to identify the diagnostic ability of this value, we display ROC curves for different subsets of the dataset according to sex.

In Fig. 4a, we show qualitative examples of the calculation of the cardio-thoracic ratio for patients presenting and not presenting cardiomegaly. For the pathological case, we see that the border of the heart is expanded, leading to a CTR above 0.7, whereas for the absence case the CTR is 0.4255. When conducting a t-test, we get a value of 139.71 (p-value \(<0.0001\)), indicating a strong difference in CTR between patients positive and negative for cardiomegaly.
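The CTR itself can be read directly off the predicted masks. The sketch below is our assumed implementation of the classic definition (maximal horizontal cardiac width over maximal internal thoracic width, the latter approximated here by the combined lung mask):

```python
import numpy as np

def horizontal_width(mask):
    """Maximal horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardio_thoracic_ratio(heart_mask, lung_mask):
    """CTR = widest cardiac diameter / widest thoracic diameter."""
    thoracic = horizontal_width(lung_mask)
    return horizontal_width(heart_mask) / max(thoracic, 1)
```

A CTR above roughly 0.5 is conventionally read as cardiomegaly, consistent with the qualitative examples above.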
We show this difference in the violin plots on the left of Fig. 4b, highlighting the CTR distributions by sex, age group, and pathology. Typically, the CTR distributions for the presence and absence of cardiomegaly show noticeable shifts, with both respective means increasing with age. This average increase in CTR, however, becomes smaller in the age groups from \(70\) to \(\geq 90\), with the disparity in mean CTR between cardiomegaly and no cardiomegaly shrinking the older a patient gets. This, in turn, reduces the diagnostic value of the CTR with age. We observe this property for male and female patients. Figure 4: a) Qualitative examples of the calculation of the CTR for presence/absence of cardiomegaly. b) We show the CTR distributions of the PA CXR of the MIMIC dataset for sex, pathology and age-group. We further show the predictive value of the CTR for cardiomegaly. c) We show qualitative examples of the calculation of the SCD for presence/absence of scoliosis. d) We show the SCD distributions of the PA CXR of the MIMIC dataset for sex, pathology and age-group. We further show the predictive value of the SCD for scoliosis. On the right of Fig. 4b, we show the ROC curve for pathology classification based on the CTR score. We see a difference in the diagnostic performance of the CTR for female and male patients, with AuROCs of 0.73 and 0.75, respectively. #### Automated Identification of the Center-Spine Distance via CXAS We use all PA CXR from the MIMIC dataset and straightforwardly filter the medical reports for the term _scoliosis_. We extract 2,054 images presenting scoliosis and 228,953 images not containing this phrase. We apply our CXAS model to all images and propose a spine-center distance (SCD) to measure the shift of the individual vertebrae. We calculate this using the center points of the individual vertebra segmentations: we regress a centerline based on these vertebra centers and compute the sum over the minimal distances of all vertebra centers to the centerline (a minimal sketch of this computation is given below). To infer the difference in the SCD value between pathological and non-pathological cases, we conduct a t-test. Furthermore, to identify the diagnostic ability of this value, we display ROC curves for different subsets of the dataset according to sex. In Fig. 4c, we show qualitative examples of the calculation of the SCD for patients with and without scoliosis. We can see that the vertebra segmentations present a noticeable curvature for the scoliosis case, leading to an SCD above 19, while for the patient with a healthy spine, the SCD is 1.125. When conducting a t-test, we get a value of 35.30 (p-value \(<0.0001\)), indicating a statistically significant difference in SCD between patients positive and negative for scoliosis. On the left of Fig. 4d, we show the distributions of the SCD in the form of violin plots by sex, age group, and pathology. Similarly to the CTR, the SCD increases with age for both the presence and absence of scoliosis. The results show that the older the patient gets, the smaller the disparity in mean SCD between scoliosis and no scoliosis becomes, indicating that the SCD becomes slightly less insightful the older the patient is. However, different from the CTR, the standard deviation of the SCD is considerably higher, and there is a relatively larger difference in means, leading to a higher predictive value for the identification of scoliosis. On the right of Fig. 4d, we show the ROC curve for pathology classification based on the SCD score.
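The following is a minimal sketch of such an SCD computation, assuming a straight-line regression through the vertebra centroids and binary NumPy masks; names and the exact centerline model are illustrative assumptions.

```python
import numpy as np

def spine_center_distance(vertebra_masks: list) -> float:
    # Centroids (row, col) of each vertebra segmentation mask.
    centers = np.array([np.argwhere(m).mean(axis=0) for m in vertebra_masks])
    rows, cols = centers[:, 0], centers[:, 1]
    # Regress a straight centerline col = a * row + b through the centroids.
    a, b = np.polyfit(rows, cols, deg=1)
    # Perpendicular distance of each centroid to the centerline, summed.
    dists = np.abs(a * rows - cols + b) / np.sqrt(a**2 + 1.0)
    return float(dists.sum())
```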
We see a difference in the diagnostic performance of the SCD for female and male patients, with AuROCs of 0.77 and 0.80, respectively. ## Limitations Our method is not without its limitations. While accurate target masks can be generated, the dataset size is limited by the number and characteristics of the available CT scans. CTs are typically a much scarcer resource than the common CXR, making it difficult to scale this approach to the level of natural image datasets with millions of images due to the comparative lack of volumetric data. CXR also tends to use harder radiation than standard CTs. This fact, paired with contrast agents in CTs, might lead to a divergence in visual quality between the projected X-rays and real ones. Furthermore, the generated masks cannot reach the resolution of real CXRs without running into minor interpolation errors, as our projections enable segmentation at a size of \(512\times 512\), while common CXRs reach resolutions of \(2048\times 2048\). Due to the dataset properties of PAX-Ray++, which we expand on in Methods, the resulting models are not as robust to novel views or irregular orientations such as heavy rotations. We, however, argue that through the use of stronger models, heavier data augmentation, and further post-processing rules, these errors can be minimized. ## Discussion Anatomical segmentation in CXR is a challenging task, as manual annotation of large datasets is nearly impossible due to superimposed structures in the imaging domain. We take advantage of the fact that the CT and X-Ray imaging domains are reconstructed from similar sources of ionizing radiation, allowing us to collect anatomical annotations from various sources in the pixel-wise distinct CT domain, which we subsequently project to a 2D plane resembling an annotated X-Ray. Our proposed workflow aggregates label sources from ten different source datasets and utilizes strong 3D segmentation models to generate anatomically grounded pseudo-labels for 7377 thorax volumes. Using those, we create the PAX-Ray++ dataset, the first dataset with dense anatomical annotations in the CXR domain, spanning label categories from abdominal and mediastinal regions to specific bones and organs. This dataset contains more than 14 thousand images and is, with more than 2 million instances, comparable in size to natural image datasets such as MS COCO. This procedure allows us to develop the first fine-grained anatomy segmentation models for chest X-Rays. By training on the PAX-Ray++ dataset, we create CXAS models that perform well on projected CTs and show high agreement with medical personnel on CXR data. We validate our method on projected data, which shows good performance across all classes. It can be noted that classes in the lateral view tend to have slightly better scores than their frontal counterparts, which is also attributed to the fact that frontal classes tend to be more fine-grained (i.e., split into left/right). Our experiments also show good alignment with trained experts on regular chest X-rays. It is noted that even trained experts' annotations do not always agree in this domain, especially for abdominal classes. We demonstrate how to generate meaningful and interpretable patient features by extracting custom information from anatomical segmentations. We show how these can help in the large-scale analysis of patient cohorts for the exemplary pathologies 'cardiomegaly' and 'scoliosis'.
We can identify age-based trends for these cases and highlight how these anatomy-based features aid the interpretation of individual patients. We believe that anatomy segmentation can aid the development of medical reports by providing grounded information. These segmentations can similarly enhance pathology recognition models by identifying associated regions and enabling potential reasoning approaches stemming from the co-occurrence of abnormal visual patterns and anatomy.
2302.00712
Blume-Emery-Griffiths model on Random Graphs
The Blume-Emery-Griffiths model with a random crystal field is studied in a random graph architecture, in which the average connectivity is a controllable parameter. The disorder average over the graph realizations is treated within the replica symmetry formalism of order parameter functions. A self-consistent equation for the distribution of local fields is derived and numerically solved by a population dynamics algorithm. The results show that the average connectivity leads to changes in the topology of the phase diagrams. Phase diagrams for representative values of the model parameters are compared with those obtained from fully connected mean field and renormalization group approaches.
R. Erichsen Jr., Alexandre Silveira, S. G. Magalhaes
2023-02-01T19:08:16Z
http://arxiv.org/abs/2302.00712v1
# Blume-Emery-Griffiths model on Random Graphs ###### Abstract The Blume-Emery-Griffiths model with a random crystal field is studied in a random graph architecture, in which the average connectivity is a controllable parameter. The disorder average over the graph realizations is treated within the replica symmetry formalism of order parameter functions. A self-consistent equation for the distribution of local fields is derived and numerically solved by a population dynamics algorithm. The results show that the average connectivity leads to changes in the topology of the phase diagrams. Phase diagrams for representative values of the model parameters are compared with those obtained from fully connected mean field and renormalization group approaches. keywords: BEG model, Disordered systems, Finite connectivity Pacs: 64.60.De, 87.19.lj, 87.19.lg + Footnote †: journal: Physica A ## 1 Introduction The three-spin-state (\(\sigma=0,\pm 1\)) Blume-Emery-Griffiths (BEG) model [1] was introduced with the aim of qualitatively describing superfluidity in \({}^{3}\)He - \({}^{4}\)He mixtures and phase separation. It is composed of three terms: the spin exchange interaction, responsible for stabilizing a magnetic order; the local crystal field, favouring the non-active spin state \(\sigma=0\); and the non-local bi-quadratic interaction term, favouring active spin states \(\sigma=\pm 1\) on neighboring sites. The competition between these three mechanisms is responsible for giving rise to a complex phase diagram. For instance, the expected outcome is a phase diagram with first- and second-order phase transition lines and multicritical points. This has motivated investigations of the model using several methods, such as mean field theory [2], effective field theory [3], the cluster variation method [4; 5], Monte Carlo simulations [6; 7; 8], the Bethe lattice [9; 10; 11] and the renormalization group with hierarchical lattices [12]. This interest in the BEG model raises the question of what the effects of disorder on it might be. It is known that the presence of disorder might change the order of the phase transition boundary lines and, consequently, affect multicritical phase diagrams [13; 14]. In the case of the BEG model, disorder can be introduced in three ways: by choosing the exchange interaction, crystal field or bi-quadratic exchange strengths as a random variable, or even a combination of the previously mentioned possibilities. Each situation can describe different problems. For instance, the case of a random bi-quadratic exchange can be used in neural networks, where this term in the BEG model becomes a learning rule [15; 16]. On the other hand, a random crystal field can be applied to the modeling of \({}^{3}\)He-\({}^{4}\)He mixtures in porous media such as aerogel [17; 18]. The three possibilities of disorder in the BEG model and their combinations have also been treated with several techniques, such as mean field, renormalization group, Bethe lattice, transfer matrix, cluster variation, and effective field theory (see, for instance, Refs. [14; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]). In the case of the random crystal field displayed in Ref. [29], results coming from the mean field approximation (i.e., with infinite dimension or coordination number) and real space renormalization group can be compared. This last technique is quite suitable to describe low-dimensionality scenarios.
The most important difference between the two techniques is the suppression of the first-order phase transition lines, or their replacement by continuous ones, obtained in the renormalization group. The random crystal field BEG model with an anti-ferromagnetic (AF) bi-quadratic coupling constant on the Bethe lattice was investigated in Ref. [23]. Our purpose with this work is to study the random crystal field BEG (RCBEG) model on the ensemble of Poissonian random graphs. The random graph offers the average connectivity \(c\) as a continuous, controllable parameter, allowing us to investigate the RCBEG model in different regimes, i.e., from large connectivity, close to the fully connected limit corresponding to the mean field approximation, down to the small-connectivity situation. One can expect that important changes might occur, as compared with the mean field results, involving the replacement and/or disappearance of multicritical points in the phase diagrams as \(c\) decreases. Indeed, this kind of change for small \(c\) has been confirmed in the Blume-Capel model [32; 33] with an added disorder given by a random field. In that model, it was found that variations in \(c\) produced drastic changes in the multicritical phase diagrams as compared with the fully connected case [34]. Indeed, some multicritical points disappear when \(c\) decreases [35]. To sum over the realizations of the random graph, we use the replica symmetry theory of order parameter functions [36; 37; 38]. As frustration is absent in this model, we anticipate that the replica symmetry solution is exact for this purpose. The same equations for this problem can be derived by the cavity method [39] after taking the ensemble average [40]. We approach the problem considering that the lattice of spins is a random graph, where the connectivity is finite and the degree of a site is given by a Poisson distribution. Thus, we offer an alternative route to approach this problem. Also, we study simultaneously the presence of random crystal field disorder and the disorder of the lattice. As has been shown, the connectivity has a crucial role in the phase diagram topology [35; 41; 42], allowing one to change the nature of transitions and critical points through a fine-tuning of the control parameter. The paper is organized as follows. In Sec. 2 we describe our model and derive the fundamental equations using the replica symmetry theory for finite connectivity systems. In Sec. 3 we explain the method to numerically calculate the distribution of fields, show some examples of order parameters, and describe the behavior of the system by drawing phase diagrams for the thermodynamic phases. The conclusions can be found in Sec. 4. ## 2 Model and Replica Procedure The model's Hamiltonian is \[H(\mathbf{\sigma})=-\frac{J}{c}\sum_{i<j}c_{ij}\sigma_{i}\sigma_{j}-\frac{K}{c}\sum_{i<j}c_{ij}\sigma_{i}^{2}\sigma_{j}^{2}+\sum_{i}\Delta_{i}\sigma_{i}^{2}\,, \tag{1}\] where \(\mathbf{\sigma}\equiv\left\{\sigma_{i}\right\},i=1\ldots N\) denotes the state of the system and \(c_{ij}\) are independent, identically distributed random variables (i.i.d.r.v.) chosen from the distribution \[p\left(c_{ij}\right)=\frac{c}{N}\delta(c_{ij}-1)+\left(1-\frac{c}{N}\right)\delta(c_{ij})\,, \tag{2}\] indicating whether the pair of spins \(i\) and \(j\) is connected (\(c_{ij}=1\)) or not (\(c_{ij}=0\)), with the constant \(c\) representing the mean connectivity. The local, random crystal fields \(\Delta_{i}\) are i.i.d.r.v.
chosen from the distribution \[p\left(\Delta_{i}\right)=p\delta(\Delta_{i}-\Delta)+\left(1-p\right)\delta(\Delta_{i})\,. \tag{3}\] The constant \(K\) controls the strength of the bi-quadratic couplings. Using the replica method we can write the disorder-averaged free energy as \[f(\beta)=-\lim_{N\to\infty}\frac{1}{\beta N}\lim_{n\to 0}\frac{1}{n}\log\langle Z^{n}\rangle_{\mathbf{c},\mathbf{\Delta}}\,, \tag{4}\] where \[Z^{n}=\sum_{\boldsymbol{\sigma}_{1}\ldots\boldsymbol{\sigma}_{n}}\mathrm{e}^{-\beta\sum_{\alpha}H(\boldsymbol{\sigma}_{\alpha})} \tag{5}\] is the replicated partition function, \(\boldsymbol{\sigma}_{\alpha}\), \(\alpha=1\ldots n\), denotes the state of replica \(\alpha\), and \(\langle\cdot\rangle_{\mathbf{c},\mathbf{\Delta}}\), with \(\mathbf{c}\equiv\{c_{ij}\}\) and \(\mathbf{\Delta}\equiv\{\Delta_{i}\}\), denotes the disorder average. In the limit \(c/N\to 0\), the average over \(c_{ij}\) gives \[\langle Z^{n}\rangle=\sum_{\boldsymbol{\sigma}_{1}\ldots\boldsymbol{\sigma}_{n}}\langle\mathrm{e}^{-\beta\sum_{\alpha,i}\Delta_{i}\sigma_{i\alpha}^{2}}\rangle_{\mathbf{\Delta}}\exp\Big{[}\frac{c}{2N}\sum_{i\neq j}\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{i\alpha}\sigma_{j\alpha}+\frac{\beta K}{c}\sum_{\alpha}\sigma_{i\alpha}^{2}\sigma_{j\alpha}^{2}}-1\Big{)}\Big{]}\,. \tag{6}\] To transform this into a single-spin problem, order functions \[P(\boldsymbol{\sigma})=\frac{1}{N}\sum_{i}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\,, \tag{7}\] which represent the probability of a replicated spin variable \(\boldsymbol{\sigma}_{i}\) assuming the replica state \(\boldsymbol{\sigma}\), and their conjugate order functions \(\hat{P}(\boldsymbol{\sigma})\), are introduced. The partition function can be rewritten as (see the appendix) \[\langle Z^{n}\rangle= \int\prod_{\boldsymbol{\sigma}}d\hat{P}(\boldsymbol{\sigma})dP(\boldsymbol{\sigma})\exp N\Big{\{}\sum_{\boldsymbol{\sigma}}\hat{P}(\boldsymbol{\sigma})P(\boldsymbol{\sigma})+\frac{c}{2}\sum_{\boldsymbol{\sigma}\boldsymbol{\sigma}^{\prime}}P(\boldsymbol{\sigma})P(\boldsymbol{\sigma}^{\prime})\] \[\times\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big{)}+\log\sum_{\boldsymbol{\sigma}}\langle\mathrm{e}^{-\hat{P}(\boldsymbol{\sigma})-\beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}}\rangle_{\Delta}\Big{\}}\,. \tag{8}\] In the thermodynamic limit the integral can be evaluated through the saddle-point method.
We eliminate the \(\hat{P}(\boldsymbol{\sigma})\)'s through the saddle-point equations and rewrite the free-energy as \[f(\beta)=-\lim_{n\to 0}\frac{1}{\beta n}\mathrm{Extr}\Big{\{}-\frac{c}{2}\sum_{\boldsymbol{\sigma}\boldsymbol{\sigma}^{\prime}}P(\boldsymbol{\sigma})P(\boldsymbol{\sigma}^{\prime})\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big{)}+\log\sum_{\boldsymbol{\sigma}}\Big{\langle}\exp\Big{[}c\sum_{\boldsymbol{\sigma}^{\prime}}P(\boldsymbol{\sigma}^{\prime})\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big{)}-\beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}\Big{]}\Big{\rangle}_{\Delta}\Big{\}}\,, \tag{9}\] where Extr amounts to taking the extremum of the expression between braces with respect to \(P(\boldsymbol{\sigma})\), which gives the remaining saddle-point equation \[P(\mathbf{\sigma})=\frac{1}{\mathcal{N}}\Big{\langle}\exp\Big{[}c\sum_{\mathbf{\sigma}^{\prime}}P(\mathbf{\sigma}^{\prime})\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{K\beta}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big{)}-\beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}\Big{]}\Big{\rangle}_{\Delta}\,, \tag{10}\] where \(\mathcal{N}\) is a normalization factor. We search for solutions of Eq. (10) satisfying the RS Ansatz, where the order function is invariant under replica index permutations, and which are written in the form \[P(\boldsymbol{\sigma})=\int\mathcal{D}W(x,\!y)\frac{\mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha}+\beta y\sum_{\alpha}\sigma_{\alpha}^{2}}}{\Big{(}\sum_{\sigma}\mathrm{e}^{\beta x\sigma+\beta y\sigma^{2}}\Big{)}^{n}}\,, \tag{11}\] where \(\mathcal{D}W(x,\!y)\equiv dxdyW(x,\!y)\). Expanding the exponential in Eq. (10) and introducing Eq. (11), we obtain a self-consistent equation for the distribution of local fields (details in the Appendix) \[W(x,\!y)=\sum_{k=0}^{\infty}\frac{c^{k}\mathrm{e}^{-c}}{k!}\Big{\langle}\int\prod_{l=1}^{k}\mathcal{D}W(x_{l},\!y_{l})\delta\Big{[}x-\frac{1}{\beta}\sum_{l}\phi(x_{l},\!y_{l})\Big{]}\delta\Big{[}y+\Delta-\frac{1}{\beta}\sum_{l}\psi(x_{l},\!y_{l})\Big{]}\Big{\rangle}_{\Delta}\,, \tag{12}\] where \[\phi(x,\!y)=\frac{1}{2}\ln\frac{\chi_{+1}(x,\!y)}{\chi_{-1}(x,\!y)}\,, \tag{13}\] \[\psi(x,\!y)=\frac{1}{2}\ln\frac{\chi_{+1}(x,\!y)\chi_{-1}(x,\!y)}{\chi_{0}^{2}(x,\!y)}\,, \tag{14}\] and \[\chi_{\sigma}(x,\!y)=\sum_{\tau}\mathrm{e}^{\beta x\tau+\frac{\beta}{c}J\sigma\tau+\beta y\tau^{2}+\frac{\beta}{c}K\sigma^{2}\tau^{2}}\,. \tag{15}\] The relevant observables are the average magnetization \[m=\sum_{\boldsymbol{\sigma}}\sigma_{\alpha}P(\boldsymbol{\sigma})=\int\mathcal{D}W(x,y)\frac{2\sinh(\beta x)}{\mathrm{e}^{-\beta y}+2\cosh(\beta x)} \tag{16}\] and the occupation number \[Q=\sum_{\boldsymbol{\sigma}}\sigma_{\alpha}^{2}P(\boldsymbol{\sigma})=\int\mathcal{D}W(x,y)\frac{2\cosh(\beta x)}{\mathrm{e}^{-\beta y}+2\cosh(\beta x)}\,. \tag{17}\] To determine the RS free-energy we insert the Ansatz (11) in Eq.
(9) and take the limit \(n\to 0\), which results in \[f(\beta)=\frac{c}{2\beta}\int\mathcal{D}W(x,\!y)\mathcal{D}W(x^{\prime},\!y^{\prime})\frac{\sum_{\sigma\sigma^{\prime}}\mathrm{e}^{\beta x\sigma+\beta y\sigma^{2}+\beta x^{\prime}\sigma^{\prime}+\beta y^{\prime}\sigma^{\prime 2}+\frac{\beta}{c}J\sigma\sigma^{\prime}+\frac{\beta}{c}K\sigma^{2}\sigma^{\prime 2}}}{\chi_{0}(x,\!y)\chi_{0}(x^{\prime},\!y^{\prime})}\] \[\qquad\qquad-\frac{1}{\beta}\sum_{k=0}^{\infty}P_{k}\int\prod_{l=1}^{k}\mathcal{D}W(x_{l},\!y_{l})\Big{\langle}\ln\Big{(}\sum_{\sigma}\mathrm{e}^{-\beta\Delta\sigma^{2}}\prod_{l}\frac{\chi_{\sigma}(x_{l},\!y_{l})}{\chi_{0}(x_{l},\!y_{l})}\Big{)}\Big{\rangle}_{\Delta}\,, \tag{18}\] where \(P_{k}=c^{k}\mathrm{e}^{-c}/k!\) is a Poissonian weight. Figure 2: Magnetization \(m\), occupation number \(Q\) and free-energy as functions of \(\Delta\) for \(p=0.85\), \(c=8\), \(K=2\) and \(T=0.05\). Solid black lines on \(m\) and \(Q\) represent the stable order parameter values. Dashed black (dashed red) lines represent the metastable FM (PM) solution. Dashed black (red) lines on \(f\) are the FM (PM) free-energy raw data. Solid black (red) lines are polynomial fits of the FM (PM) data. The arrow signals the crossing of the FM and PM free-energies. Figure 1: Magnetization \(m\), occupation number \(Q\) and free-energy as functions of \(T\) for \(p=1\), \(c=8\), \(K=2\) and \(\Delta=1.55\). Solid black lines on \(m\) and \(Q\) represent the stable order parameter values. Dashed black (dashed red) lines represent the metastable FM (PM) solution. The dashed black line on \(f\) is the metastable FM free-energy raw data. The solid black line is a polynomial fit of the FM data. The solid red line is the PM free-energy data. The arrow signals the crossing of the FM and PM free-energies. ## 3 Results According to Eqs. (16) - (18), the relevant order parameters are obtained through the calculation of the local field distribution, given by the self-consistent equation (12). This is done numerically, via a population dynamics algorithm [39], as follows (a minimal sketch is given below): (i) a population of \(\mathcal{N}\) two-component fields (\(x\),\(y\)) is created; (ii) an integer \(k\) is randomly drawn from a Poisson distribution of mean \(c\), and \(k\) fields are randomly chosen from the population; (iii) with the drawn fields, the two summations appearing in the delta functions of Eq. (12) are evaluated; and (iv) the results are assigned to the components of a further randomly chosen field (\(x^{*}\),\(y^{*}\)). The algorithm is repeated until convergence to a stable population distribution \(W(x,y)\). Throughout this work we used populations of \(\mathcal{N}=100{,}000\) fields and a convergence time that amounts to 5,000,000 iterations. In addition, each point is averaged over 20 runs. As shown in Eq. (18), the first free-energy term contains a double integral and the second term contains a \(k\)-fold integral over the local field distribution. To evaluate these terms, we follow a Monte Carlo algorithm: a large number (1,000,000) of pairs and \(k\)-sets of local fields are randomly chosen and their contributions are summed. This results in a noisy curve, in contrast to the \(m\) and \(Q\) evaluations, which involve a single integral. To overcome the noise, the \(f\) curves are fitted by a polynomial. As an example of the outcome, order parameter and free-energy curves are shown in Fig. 1, as functions of \(T\), for \(c=8\), \(K=2\), \(p=1\) and \(\Delta=1.55\).
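The sketch below illustrates steps (i)-(iv) and the evaluation of \(m\) and \(Q\) from the converged population via Eqs. (16)-(17); it assumes \(J=1\), and the population size and iteration count shown are illustrative rather than the values quoted above.

```python
# Minimal population dynamics sketch for the self-consistent Eq. (12).
import numpy as np

rng = np.random.default_rng(0)
TAUS = np.array([-1.0, 0.0, 1.0])  # the three spin states

def chi(sigma, x, y, beta, J, K, c):
    # chi_sigma(x, y) of Eq. (15).
    return np.sum(np.exp(beta * x * TAUS + beta * J * sigma * TAUS / c
                         + beta * y * TAUS**2
                         + beta * K * sigma**2 * TAUS**2 / c))

def phi(x, y, *a):
    return 0.5 * np.log(chi(+1, x, y, *a) / chi(-1, x, y, *a))  # Eq. (13)

def psi(x, y, *a):
    return 0.5 * np.log(chi(+1, x, y, *a) * chi(-1, x, y, *a)
                        / chi(0, x, y, *a)**2)                   # Eq. (14)

def population_dynamics(beta, c, K, Delta, p, J=1.0, N=10_000, iters=200_000):
    pop = np.column_stack([np.ones(N), np.zeros(N)])  # biased start: FM branch
    for _ in range(iters):
        k = rng.poisson(c)                            # step (ii)
        idx = rng.integers(N, size=k)
        D = Delta if rng.random() < p else 0.0        # crystal field, Eq. (3)
        x_new = sum(phi(*pop[i], beta, J, K, c) for i in idx) / beta  # (iii)
        y_new = -D + sum(psi(*pop[i], beta, J, K, c) for i in idx) / beta
        pop[rng.integers(N)] = (x_new, y_new)         # step (iv)
    return pop

def observables(pop, beta):
    # Magnetization m and occupation number Q, Eqs. (16)-(17).
    x, y = pop[:, 0], pop[:, 1]
    z = np.exp(-beta * y) + 2.0 * np.cosh(beta * x)
    return (np.mean(2.0 * np.sinh(beta * x) / z),
            np.mean(2.0 * np.cosh(beta * x) / z))
```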
Here and in the sequel the energy scale is fixed by assuming the bi-linear coupling constant \(J=1\). For \(\Delta=1.55\), PM and FM phases coexist from \(T=0\) up to a continuous FM - PM transition at \(T\approx 0.693\). To overcome the noisy free-energy and find the discontinuous transition locus we resort to a polynomial fit, which indicates the crossing of the free-energy curves at \(T\approx 0.266\). PM is stable in the \(0\leq T\lesssim 0.266\) and \(0.693\lesssim T\) intervals. FM is stable in the \(0.266\lesssim T\lesssim 0.693\) interval. This characterizes a re-entrant behavior. The discontinuous transition at \(T\approx 0.266\) appears as a dashed red line in Fig. 5a and the continuous transition at \(T\approx 0.693\) appears as a solid red line in the same figure. Order parameter and free-energy curves as functions of the crystal field \(\Delta\), for \(c=8\), \(K=2\), \(p=0.85\) and \(T=0.05\), are shown in Fig. 2. The curves show a high-\(m\), high-\(Q\) FM\({}_{1}\) phase at small \(\Delta\), a low-\(m\), low-\(Q\) FM\({}_{2}\) phase at large \(\Delta\) and a co-existence region between them. As mentioned above, we resort to a linear fit to find a crossing of the free-energy curves at \(\Delta\approx 1.75\). Figure 3: Magnetization \(m\) and occupation number \(Q\) as functions of \(T\) for \(p=0.85\), \(c=4\), \(K=2\) and \(\Delta=2.07\). This reveals a discontinuous transition between the two ferromagnetic phases, represented by the dashed red line in Fig. 5b. The reason for the existence of two FM phases will be discussed below. The order parameters \(m\) and \(Q\) as functions of the temperature for \(c=4\), \(p=0.85\), \(K=2\) and \(\Delta=2.07\) are shown in Fig. 3. This figure shows, as the temperature increases, a FM\({}_{2}\) phase, then a re-entrant PM phase, a FM\({}_{2}\) phase and a PM phase at high \(T\). To give a complete overview of a model with so many parameters, while keeping a reasonable number of figures, is a difficult task, and the zero-temperature \(K\) versus \(\Delta\) phase diagram may guide us. This diagram is shown in Fig. 4 for the representative case \(c=4\) and \(p=0.85\), revealing a discontinuous FM\({}_{1}\) - FM\({}_{2}\) transition and a continuous FM\({}_{2}\) - PM transition. The two ferromagnetic phases are present, at low temperature, whenever \(p<1\), i.e., in the presence of disorder. This disorder acts by turning off the crystal field \(\Delta\) in a \(1-p\) fraction of sites, this way favouring the active states in these sites. The higher-magnetization FM\({}_{1}\) is found at low \(\Delta\) values, while the lower-magnetization FM\({}_{2}\) and PM are found for higher \(\Delta\)'s. Since the bi-quadratic coupling constant \(K\) favours the active states, higher-magnetization phases are found as \(K\) increases. It is unnecessary to add further zero-temperature diagrams, but it is worth mentioning that, as the connectivity \(c\) increases, or \(p\) decreases, FM\({}_{2}\) becomes stable at large \(\Delta\) and there is no longer a PM phase at \(T=0\). To describe the finite-temperature behavior, \(T\) versus \(\Delta\) phase diagrams for \(K=2\) and \(K=5\) are presented in Figs. 5 and 6, respectively. For each \(K\) value, results for representative disorder parameters \(p=1\), \(p=0.85\) and \(p=0.5\), as well as connectivity values \(c=4\) and \(c=8\), are shown.
Results for \(p=0.5\) with \(c=25\) and \(c=100\) were also included, allowing for a better comprehension of the convergence to the mean field approach, which is expected for large \(c\) (see Ref. [29]). Smaller \(c\) values, like \(0<c<1\), are below the percolation limit \(c=1\), thus preventing the appearance of ordered phases. In this case, the solutions would be \(m=0\), \(Q>0\). The most interesting feature is the appearance of two paramagnetic phases, PM\({}_{1}\) and PM\({}_{2}\) (to be defined below), depending on the parameters \(T\) and \(\Delta\). The ordered case, \(p=1\), is shown in Fig. 5(a) for \(K=2\) and Fig. 6(a) for \(K=5\). If \(K=2\), there is a FM phase at low \(T\), low \(\Delta\) and a single PM phase elsewhere, with a continuous transition at high temperature, a re-entrant discontinuous transition at high \(\Delta\) and a tricritical point (TCP) between them. TCPs, critical points (CPs) and critical end points (CEPs) are indicated as circles, squares and triangles in the figures. The re-entrant behavior is illustrated in Fig. 1, described above. If \(K=5\), in addition to the FM phase there are two paramagnetic phases, PM\({}_{1}\) and PM\({}_{2}\). The co-existence of PM\({}_{1}\) and PM\({}_{2}\) is typical of models with a non-magnetic state \(\sigma=0\), in which a sufficiently large crystal field suppresses the active \(\sigma=\pm 1\) states. The high-\(Q\) and low-\(Q\) PM phases are named PM\({}_{1}\) and PM\({}_{2}\), respectively. The transition from FM to PM\({}_{1}\) is continuous, while the transition from PM\({}_{2}\) to FM and to PM\({}_{1}\) is discontinuous and re-entrant, with a CEP where the two lines meet. The PM\({}_{1}\) - PM\({}_{2}\) discontinuous transition ends at a CP. The \(p=1\) diagrams are similar to those concerning the Bethe lattice approach reported in [9], although the re-entrant behavior in the discontinuous transition is more pronounced in the present paper. The re-entrant behavior in the ordered system with \(K=2\) was also reported in [29]. As a further remark, our results are qualitatively equivalent for both \(c=4\) and \(c=8\), although lowering \(c\) appears to favour ordered phases. Disorder, even in a moderate amount, i.e. \(p=0.85\), unfolds the ferromagnetic phase in two, namely FM\({}_{1}\) and FM\({}_{2}\). The first one is reminiscent of the ordered system's FM phase. The second one, located at low \(T\) and large \(\Delta\), arises as a consequence of the disorder that turns off the crystal field in a fraction \(1-p\) of sites, favouring the active states in these sites, as stated above. Connectivity effects become relevant. Figures 5(b) and 6(b) show that, for \(c=8\), FM\({}_{2}\) extends unbounded in \(\Delta\), in contrast to \(c=4\), where a zero-temperature PM phase appears. We argue that a moderate level of disorder is not a sufficient condition to stabilize a FM\({}_{2}\) phase at large \(\Delta\). Instead, it must be associated with a large cooperative FM neighborhood. This condition is met for \(c=8\), but not for \(c=4\). The random network with \(c=8\) and a moderate amount of disorder behaves similarly to a fully connected one, whose mean-field results are reported in [29]. In both models there is a part of the FM\({}_{1}\) - PM\({}_{2}\) transition that is discontinuous. In our case, despite the finite connectivity, the random graph architecture still preserves a high-dimensional nature.
Conversely, renormalization group results for bi-dimensional systems, also reported in [29], show that this transition is entirely continuous. To end this part, additional qualitative differences between \(K=2\) and \(K=5\) for \(p=0.85\) should be reported. For \(K=2\), \(c=4\), there is a discontinuous FM\({}_{1}\) - FM\({}_{2}\) transition that ends in a CP, shown in the inset of Fig. 5(b). This way, the transition between the two FM phases and the PM is always continuous and re-entrant, as illustrated in Fig. 3. Conversely, for \(K=5\) and \(c=4\) there is a CEP and a TCP in the FM - PM transition, as shown in Fig. 6(b). This figure also shows, detailed in the inset, for \(c=8\), a discontinuous PM\({}_{1}\) - PM\({}_{2}\) transition ending in a CP. The scenario for a larger disorder, e.g. \(p=0.5\), is shown in Figs. 5(c) and 6(c), corresponding to \(K=2\) and \(K=5\), respectively. There is little to remark in these figures beyond the \(\Delta\)-dependent continuous FM - PM transition. The expectation for lower \(p\) values is that the critical temperature approaches a constant \(T\sim 1\) for all \(\Delta\). This behavior is significantly distinct from the mean-field description for high disorder [29]. To investigate the behavior of the highly disordered random network as \(c\) increases, the phase diagrams for \(c=25\) and \(c=100\), \(K=2\) and \(K=5\), were drawn for \(p=0.5\). The results are shown in Figs. 5(d), for \(K=2\), and 6(d), for \(K=5\). The results show that the convergence to the fully connected scenario is faster for \(K=5\). For \(c=25\) the FM phase unfolds into FM\({}_{1}\) and FM\({}_{2}\) with a discontinuous transition between them ending in a CP. The fully connected scenario is observed for \(c=100\), with a CEP, a TCP and a discontinuous FM\({}_{1}\) - PM transition between them. Figure 4: \(K\) versus \(\Delta\) phase diagram for \(T=0\), \(c=4\) and \(p=0.85\). Solid (dashed) lines correspond to continuous (discontinuous) transitions. Figure 5: Thermodynamic phase diagrams \(T\) versus \(\Delta\) for \(K=2\); (a) \(p=1\), (b) \(p=0.85\) and (c) \(p=0.5\). The inset in (b) shows in detail the vicinity of the critical point. The connectivity values are \(c=4\) (black) and \(c=8\) (red). (d) Thermodynamic phase diagrams for \(p=0.5\), \(K=2\), \(c=25\) (black), \(c=100\) (red). Solid (dashed) lines correspond to continuous (discontinuous) transitions. Circles, squares and triangles represent tri-critical points, critical points and critical end points, respectively. Figure 6: (a) Thermodynamic phase diagrams \(T\) versus \(\Delta\) for \(p=1\), \(K=5\), \(c=4\) (black), \(c=8\) (red). (b) The same, but for \(p=0.85\); the inset shows in detail the vicinity of the critical points. (c) The same, but for \(p=0.5\). (d) Thermodynamic phase diagrams for \(p=0.5\), \(K=5\), \(c=25\) (black), \(c=100\) (red). Solid (dashed) lines correspond to continuous (discontinuous) transitions. Circles, squares and triangles represent tri-critical points, critical points and critical end points, respectively. ## 4 Conclusions The BEG model with a random crystal field was revisited, in a random graph topology, employing a finite connectivity technique. The disorder was introduced in the crystal field, as in [29], and through the random graph architecture. We argue that, instead of the crystal field, disorder could be introduced in the bi-quadratic coupling constant and it would play a similar role.
Our model for disorder 'turns off' the crystal field in a fraction of sites, allowing this fraction to assume the active states \(\sigma=\pm 1\) without energetic penalty, even for large crystal field values. Models with an inactive state \(\sigma=0\), like the ordered BEG model, unfold the PM phase into a high-temperature PM\({}_{1}\) and a low-temperature PM\({}_{2}\). The main role that the disorder plays is to unfold the FM phase into a high-magnetization FM\({}_{1}\) and a low-magnetization FM\({}_{2}\). The latter survives at high crystal field values because the crystal field is 'turned off' in a finite fraction of sites. We fixed \(K=2\) and \(K=5\). An anti-ferromagnetic coupling constant \(K<0\), as reported in [23] for the Bethe lattice with fixed coordination number, allows for a richer thermodynamic scenario with the appearance of a quadrupolar staggered phase. To do the same in a random network architecture would require the introduction of a sub-network or a random network of clusters, and this remains in our scope for future work. To end this work we summarize the most relevant results. i) We found that the moderate disorder regime, e.g. \(p=0.85\), is the most sensitive to changes in the average connectivity, because the stabilization of FM\({}_{2}\) relies on the cooperative effect of a large neighborhood. Otherwise, for small \(c\), a PM phase sets in at low \(T\) and large \(\Delta\). This is the regime where the finite connectivity network becomes the most distinct from the fully connected one. ii) For a large disorder, like \(p=0.5\), the FM\({}_{1}\) - FM\({}_{2}\) discontinuous transition and the associated CP disappear at low \(c\) values, like \(c=4\) and \(c=8\), appearing only for \(c\) as large as \(c=25\). iii) A phase diagram similar to the fully connected mean field one appears only for \(c=100\) and \(K=5\), but not for \(c=100\) and \(K=2\). This suggests, in general lines, that some of the features observed in mean field phase diagrams are artifacts that do not exist in most real, finite connectivity physical systems. ## Acknowledgements The authors thank Dr. Nilton Branco for fruitful discussions and for carefully reading the manuscript. This work was supported, in part, by CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, Brazil). ## Appendix: self-consistent equation for the field distribution The site spin variables appearing in the inner exponential of the replicated partition function, Eq. (6), are removed using the identity \[1=\prod_{\alpha=1}^{n}\sum_{\sigma_{\alpha}}\delta_{\sigma_{\alpha}\sigma_{\alpha i}}=\sum_{\boldsymbol{\sigma}}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\,, \tag{19}\] where \(\boldsymbol{\sigma}=\{\sigma_{1}\ldots\sigma_{n}\}\) is a vector of replicated spin variables and \(\boldsymbol{\sigma}_{i}\) is the replicated spin variable associated with spin \(i\). Introducing the order functions \(P(\boldsymbol{\sigma})\) through the identity \[1=\int\prod_{\boldsymbol{\sigma}}dP(\boldsymbol{\sigma})\delta\Big{[}P(\boldsymbol{\sigma})-\frac{1}{N}\sum_{i}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\Big{]}\,, \tag{20}\] Eq.
(6) becomes \[\langle Z^{n}\rangle=\sum_{\mathbf{\sigma}_{1}\ldots\mathbf{\sigma}_{n}}\int\prod_{\mathbf{\sigma}}dP(\mathbf{\sigma})d\hat{P}(\mathbf{\sigma})\exp\Big{\{}\sum_{\mathbf{\sigma}}\hat{P}(\mathbf{\sigma})P(\mathbf{\sigma}) \tag{21}\] \[+\frac{cN}{2}\sum_{\mathbf{\sigma}\mathbf{\sigma}^{\prime}}P(\mathbf{\sigma})P(\mathbf{\sigma}^{\prime})\Big{(}\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big{)}\] \[-\frac{1}{N}\sum_{\mathbf{\sigma}}\hat{P}(\mathbf{\sigma})\sum_{i}\delta_{\mathbf{\sigma}\mathbf{\sigma}_{i}}\Big{\}}\langle\mathrm{e}^{-\beta\sum_{\alpha i}\Delta_{i}\sigma_{i\alpha}^{2}}\rangle_{\mathbf{\Delta}}\,.\] Summing over the spin variables \(\mathbf{\sigma}_{i}\) and changing variables \(\hat{P}(\mathbf{\sigma})\to N\hat{P}(\mathbf{\sigma})\), Eq. (8) is obtained. Expanding the exponential in Eq. (10) and inserting the RS Ansatz, we obtain \[P(\mathbf{\sigma})= \sum_{k=0}^{\infty}P_{k}\Big{\langle}\mathrm{e}^{-\beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}}\Big{\rangle}_{\Delta}\int\prod_{l=1}^{k}\frac{\mathcal{D}W(x_{l},y_{l})}{\Big{(}\sum_{\sigma}\mathrm{e}^{\beta x_{l}\sigma_{l}+\beta y_{l}\sigma_{l}^{2}}\Big{)}^{n}}\exp\sum_{\alpha=1}^{n}\ln\chi_{\sigma_{\alpha}}(x_{l},y_{l})\,. \tag{22}\] Now we move the \(\sigma_{\alpha}\) variables outside of the log using the identity \(\sum_{\sigma}\delta_{\sigma\sigma_{\alpha}}=1\), \[\sum_{\alpha=1}^{n}\log\chi_{\sigma_{\alpha}}(x_{l},y_{l})=\sum_{\alpha=1}^{n}\sum_{\sigma}\delta_{\sigma\sigma_{\alpha}}\ln\chi_{\sigma}(x_{l},y_{l})\,. \tag{23}\] The Kronecker delta representation for the spin states \(\sigma=\{-1,0,1\}\) is given by \[\delta_{\sigma\sigma_{\alpha}}=1-\sigma^{2}-\sigma_{\alpha}^{2}+\frac{1}{2}\sigma\sigma_{\alpha}+\frac{3}{2}\sigma^{2}\sigma_{\alpha}^{2}\,. \tag{24}\] Summing over \(\sigma\) we get, after some algebra, \[P(\mathbf{\sigma})=\sum_{k=0}^{\infty}P_{k}\Big{\langle}\int\prod_{l=1}^{k}\frac{\mathcal{D}W(x_{l},y_{l})}{\Big{(}\sum_{\sigma}\mathrm{e}^{\beta x_{l}\sigma_{l}+\beta y_{l}\sigma_{l}^{2}}\Big{)}^{n}}\exp\Big{\{}\Big{(}\sum_{\alpha}\sigma_{\alpha}\Big{)}\sum_{l=1}^{k}\phi(x_{l},y_{l})+\Big{(}\sum_{\alpha}\sigma_{\alpha}^{2}\Big{)}\sum_{l=1}^{k}\psi(x_{l},y_{l})-\Big{(}\sum_{\alpha}\sigma_{\alpha}^{2}\Big{)}\beta\Delta\Big{\}}\Big{\rangle}_{\Delta}\,. \tag{25}\] Substituting the RS Ansatz in the LHS and taking the limit \(n\to 0\) gives \[\int\mathcal{D}W(x,y)\mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha}+\beta y\sum_{\alpha}\sigma_{\alpha}^{2}}=\int dxdy\Big{\{}\sum_{k=0}^{\infty}P_{k}\Big{\langle}\int\prod_{l=1}^{k}\mathcal{D}W(x_{l},y_{l})\times\delta\Big{[}x-\beta^{-1}\sum_{l}\phi(x_{l},y_{l})\Big{]}\delta\Big{[}y+\Delta-\beta^{-1}\sum_{l}\psi(x_{l},y_{l})\Big{]}\Big{\rangle}_{\Delta}\Big{\}}\mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha}+\beta y\sum_{\alpha}\sigma_{\alpha}^{2}}\,. \tag{26}\] Comparing both sides of this equation we obtain Eq. (12).
2310.19052
Exploring the Emotional Landscape of Music: An Analysis of Valence Trends and Genre Variations in Spotify Music Data
This paper conducts an intricate analysis of musical emotions and trends using Spotify music data, encompassing audio features and valence scores extracted through the Spotipy API. Employing regression modeling, temporal analysis, mood transitions, and genre investigation, the study uncovers patterns within music-emotion relationships. Regression models (linear, support vector, random forest, and ridge) are employed to predict valence scores. Temporal analysis reveals shifts in valence distribution over time, while mood transition exploration illuminates emotional dynamics within playlists. The research contributes nuanced insights into music's emotional fabric, enhancing comprehension of the interplay between music and emotions through the years.
Shruti Dutta, Shashwat Mookherjee
2023-10-29T15:57:31Z
http://arxiv.org/abs/2310.19052v1
Exploring the Emotional Landscape of Music: An Analysis of Valence Trends and Genre Variations in Spotify Music Data ###### Abstract This paper conducts an intricate analysis of musical emotions and trends using Spotify music data, encompassing audio features and valence scores extracted through the Spotipy API. Employing regression modeling, temporal analysis, mood transitions, and genre investigation, the study uncovers patterns within music-emotion relationships. Regression models (linear, support vector, random forest, and ridge) are employed to predict valence scores. Temporal analysis reveals shifts in valence distribution over time, while mood transition exploration illuminates emotional dynamics within playlists. The research contributes nuanced insights into music's emotional fabric, enhancing comprehension of the interplay between music and emotions through the years. ## 1 Introduction Music has long been recognized as a universal medium that possesses the unique ability to evoke a wide range of emotions in listeners. The intricate interplay between music and emotions has captivated scholars, artists, and enthusiasts alike, prompting exploration into the underlying mechanisms that govern this relationship. With the advent of digital music platforms, such as Spotify, an unprecedented wealth of data has become available, offering a gateway to unravel the emotional fabric encoded within musical compositions. This paper delves into the realm of musical emotions by conducting a comprehensive analysis of Spotify music data, shedding light on the nuanced dynamics of emotional expression in music. The emotional quality of music is often quantified using valence, a dimension that captures the positivity or negativity of emotions conveyed by a musical piece. Valence serves as a lens through which we can explore how musical attributes such as tempo, harmony, and rhythm converge to create a rich tapestry of emotional responses. Leveraging the capabilities of the Spotipy API, we harness audio features and valence scores to embark on a multidimensional journey through musical emotions. The objectives of this research are as follows. First, we employ a suite of regression models, including linear regression, support vector regression, random forest regression, and ridge regression, to predict valence scores based on the extracted audio attributes. By evaluating the performance of each model, we discern their effectiveness in capturing the intricate emotional nuances embedded within the audio data. Next, we delve into temporal trends to uncover shifts in valence distribution over different time periods. Our analysis aims to elucidate whether distinct eras are characterized by specific emotional qualities in music. We also investigate mood transitions within playlists to uncover how valence evolves as playlists unfold, offering insights into the dynamic nature of musical emotions within curated listening experiences. ## 2 Methodology ### Data Collection and Preprocessing **Data Source:** We obtained our music dataset from Spotify, including audio features such as "Acousticness," "Danceability," "Energy," and more, for a diverse range of tracks. For the purpose of our analysis, we used 100 different songs randomly selected from the years 1900-2020. A Python script was written which fetched 100 unique songs using the Spotify API (a sketch of such a retrieval script is shown below). The details of the retrieved tracks are shown in Fig[1].
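The following is a minimal sketch of such a retrieval script using the Spotipy library; the credentials, query strategy, and field names are illustrative assumptions, not the authors' exact code.

```python
import random
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

tracks = {}
while len(tracks) < 100:
    year = random.randint(1900, 2020)
    items = sp.search(q=f"year:{year}", type="track", limit=1,
                      offset=random.randint(0, 50))["tracks"]["items"]
    for t in items:
        tracks.setdefault(t["id"], {"name": t["name"], "year": year})

# Audio features (valence, energy, danceability, ...) for the collected ids.
features = sp.audio_features(list(tracks.keys()))
```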
**Data Cleaning:** We performed thorough data cleaning, handling missing values, duplicate entries, and outliers to ensure data quality and consistency. **Selection of Features:** Audio features were judiciously selected to encapsulate the multifaceted aspects of musical compositions. Features encompassed dimensions such as acousticness, danceability, energy, instrumentalness, key, liveness, loudness, mode, speechiness, tempo, and time signature. Of particular significance is the inclusion of valence, a pivotal dimension that quantifies the emotional sentiment of music on a continuum ranging from negative to positive. ### Exploratory Data Analysis (EDA) **Descriptive Statistics:** We conducted an initial examination of the dataset using descriptive statistics, summarizing the distribution, central tendencies, and variation of each feature. **Visualizations:** Employing various data visualization techniques, we explored relationships between features, analyzed their distributions, and visualized temporal trends as in Fig[2], Fig[3] and Fig[4]. _Fig[2]: Relationships between some features with valence_ _Fig[3]: Relationships between different features_ _Fig[4]: Values and distribution of valence_ _Fig[5]: Cross-feature correlation matrix heatmap_ ### Predictive Modeling **Model Selection:** We chose several regression models, including Linear Regression, Random Forest Regression, Support Vector Regression, and Ridge Regression, to predict valence based on the extracted features. **Model Training and Evaluation:** We split the dataset into training and testing sets, trained each model on the training data, and evaluated model performance using metrics such as Mean Absolute Error, Mean Squared Error, Root Mean Squared Error, and R-squared. **Feature Importance:** For models that allowed feature importance analysis, such as Random Forest and Ridge Regression, we quantified the impact of each feature on valence prediction. The results for each model are as follows: **Linear Regression:** [The per-feature importance table for this model was garbled during extraction; the legible entries indicate Energy as the most important feature, consistent with the conclusions in Section 3.] _Fig[6]: Valence trends over time_ **K-means Clustering:** We applied K-means clustering to group songs from each decade into clusters based on their feature profiles, aiming to identify distinct musical trends. ### Visualization and Analysis **Cluster Mean Analysis:** We analyzed the mean feature values of clusters within each decade, identifying trends and differences that characterized each cluster. **PCA Visualization:** To visualize cluster distributions, we applied Principal Component Analysis (PCA) to reduce feature dimensions and plotted the clusters in a reduced 2D space (a sketch of the modeling and clustering workflow is given below).
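As a concrete illustration, the following is a minimal scikit-learn sketch of this modeling and clustering workflow. It assumes a pandas DataFrame `df` holding the audio features above plus "Valence" and "Year" columns; column names, cluster count, and hyperparameters are illustrative defaults rather than the exact settings used in the paper.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

FEATURES = ["Acousticness", "Danceability", "Energy", "Instrumentalness",
            "Key", "Liveness", "Loudness", "Mode", "Speechiness", "Tempo",
            "Time Signature", "Year"]

# Train/test split and evaluation of the four regressors on valence.
X_tr, X_te, y_tr, y_te = train_test_split(
    df[FEATURES], df["Valence"], test_size=0.2, random_state=42)
models = {"Linear": LinearRegression(), "SVR": SVR(),
          "RandomForest": RandomForestRegressor(random_state=42),
          "Ridge": Ridge()}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, mean_absolute_error(y_te, pred),
          mean_squared_error(y_te, pred) ** 0.5, r2_score(y_te, pred))

# Decade-wise K-means clustering with a 2D PCA projection per decade.
df["Decade"] = (df["Year"] // 10) * 10
for decade, group in df.groupby("Decade"):
    Z = StandardScaler().fit_transform(group[FEATURES])
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)
    coords = PCA(n_components=2).fit_transform(Z)  # used for scatter plots
```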
_Fig[7]: Cluster Visualisation_ ## 3 Conclusions ### Regression Models and Feature Importance **Key Predictors:** "Energy" and "Danceability" consistently stood out as crucial predictors of valence, indicating their pivotal roles in shaping the emotional quality of songs. **Complex Relationships:** Linear regression's limited performance and negative R-squared value suggest that valence prediction involves intricate non-linear relationships among audio features. **Feature Impact:** Feature importance scores highlighted the strong influence of "Energy" and "Danceability," while attributes like "Year" and "Tempo" held comparatively lower importance in predicting valence. **Multifaceted Emotion:** The low R-squared of the linear model implies that valence cannot be solely explained by the provided features, suggesting that valence is influenced by additional complex factors not captured in the dataset. ### Decade-Wise Clustering and Analysis **Cluster Patterns:** Across most decades, there is a predominant cluster (Cluster 4) that contains songs with moderate to high energy, danceability, and valence, indicating a trend towards more energetic and uplifting music. This could reflect changing preferences in music consumption. **Temporal Trends:** The analysis reveals shifts in musical characteristics over time. For instance, during the 1940s and 1950s, Cluster 1 dominates, representing songs with low energy and danceability but higher acousticness. This could be attributed to the instrumentation and production techniques of that era. **Energy and Valence:** Over the decades, there seems to be a general trend towards songs with higher energy and valence, especially from the 1960s onwards. This may align with the desire for more positive and engaging musical experiences. **Instrumentalness and Speechiness:** Instrumentalness and speechiness exhibit intriguing patterns. While Cluster 3 in the 1950s shows high instrumentalness and speechiness, these features seem to diverge in the later decades, possibly reflecting advancements in musical production techniques. **Cluster Variability:** Some clusters remain consistent across decades (e.g., Cluster 0's high acousticness), while others change substantially. This variability suggests that musical tastes and trends evolve over time. **Distinct Eras:** K-means clustering across decades revealed distinct musical clusters, reflecting unique stylistic trends of different eras. **Energetic Shift:** Clusters from the 2000s and 2010s showed higher energy and danceability, indicating an evolution towards more energetic and dance-oriented music in recent decades. **Decade-Specific Features:** Clusters' mean features across decades revealed era-specific traits, such as higher acousticness and danceability in the 1920s cluster. **Cultural and Social Impact:** Musical clusters can serve as markers of cultural and societal changes, capturing the shifting tastes, influences, and technological advancements that have shaped each decade's musical landscape. ### Implications and Future Directions **Music Creation Guidance:** Composers and producers can leverage insights from our study to intentionally craft music with desired emotional qualities, enhancing the listener experience. **Advanced Modeling Potential:** Considering the likely non-linear nature of feature-emotion relationships, future studies could explore advanced modeling techniques like neural networks for improved accuracy. 
**Enhanced Dataset:** Incorporating a more diverse and expansive dataset could strengthen findings, offering deeper insights into emotional attributes across varying musical genres and cultural contexts. **Cross-Domain Applications:** Similar methodologies could be applied to other creative domains, such as visual arts or literature, to uncover patterns in emotion representation and evolution over time.
2307.02903
PUFFIN: A Path-Unifying Feed-Forward Interfaced Network for Vapor Pressure Prediction
Accurately predicting vapor pressure is vital for various industrial and environmental applications. However, obtaining accurate measurements for all compounds of interest is not possible due to the resource and labor intensity of experiments. The demand for resources and labor further multiplies when a temperature-dependent relationship for predicting vapor pressure is desired. In this paper, we propose PUFFIN (Path-Unifying Feed-Forward Interfaced Network), a machine learning framework that combines transfer learning with a new inductive bias node inspired by domain knowledge (the Antoine equation) to improve vapor pressure prediction. By leveraging inductive bias and transfer learning using graph embeddings, PUFFIN outperforms alternative strategies that do not use inductive bias or that use generic descriptors of compounds. The framework's incorporation of domain-specific knowledge to overcome the limitation of poor data availability shows its potential for broader applications in chemical compound analysis, including the prediction of other physicochemical properties. Importantly, our proposed machine learning framework is partially interpretable, because the inductive Antoine node yields network-derived Antoine equation coefficients. It would then be possible to directly incorporate the obtained analytical expression in process design software for better prediction and control of processes occurring in industry and the environment.
Vinicius Viena Santana, Carine Menezes Rebello, Luana P. Queiroz, Ana Mafalda Ribeiro, Nadia Shardt, Idelfonso B. R. Nogueira
2023-07-06T10:29:03Z
http://arxiv.org/abs/2307.02903v2
# PUFFIN: A Path-Unifying Feed-Forward Interfaced Network for Vapor Pressure Prediction ###### Abstract Accurately predicting vapor pressure is vital for various industrial and environmental applications. However, obtaining accurate measurements for all compounds of interest is not possible due to the resource- and labor-intensity of experiments. The demand for resources and labor further multiplies when a temperature-dependent relationship for predicting vapor pressure is desired. In this paper, we propose PUFFIN (Path-Unifying Feed-Forward Interfaced Network), a machine learning framework that combines transfer learning with a new inductive bias node inspired by domain knowledge (the Antoine equation) to improve vapor pressure prediction. By leveraging inductive bias and transfer learning using graph embeddings, PUFFIN outperforms alternative strategies that do not use inductive bias or that use generic descriptors of compounds. The framework's incorporation of domain-specific knowledge to overcome the limitation of poor data availability shows its potential for broader applications in chemical compound analysis, including the prediction of other physicochemical properties. Importantly, our proposed machine learning framework is partially interpretable, because the inductive Antoine node yields network-derived Antoine equation coefficients. It would then be possible to directly incorporate the obtained analytical expression in process design software for better prediction and control of processes occurring in industry and the environment.
## 1 Introduction

Accurately predicting sought-after properties, particularly in situations where data is limited, remains a significant hurdle across diverse scientific and engineering fields. In these contexts, transfer learning has emerged as an invaluable technique, given its capacity to repurpose a model engineered for one task as the foundation for another task (Sanchez-Lengeling et al., 2019; Queiroz et al., 2022).
This approach has gained increasing interest where pre-trained models are usually used as the starting point for computer vision and natural language processing tasks (Zhuang et al., 2021; Vermeire and Green, 2021). In our specific context, applying transfer learning can be particularly beneficial when we aim to predict a property with limited data availability. The premise of transfer learning is grounded in the assumption that the features learned from the task with abundant data are relevant and useful for the task with limited data. This assumption holds especially well when the properties are physically related. For instance, the boiling point and vapor pressure of compounds are both linked to the strength of the intermolecular forces. Consequently, they could be expected to have related feature representations. We can leverage the abundant data for one related property to train an initial model and transfer the learned features and patterns to predict the property of interest.

Recent advancements in artificial intelligence and machine learning have further corroborated the capabilities of machine learning models for learning the relationship between molecular structures and properties from high-dimensional data (Lansford et al., 2023; Fang et al., 2022), with graph neural networks (GNNs) emerging as a potent approach to tackle these challenges in QSPR modeling (Aouichaoui et al., 2023; Schweidtmann et al., 2020). Using graph theory to represent a compound as a graph, where each node is an atom and each edge is a bond between atoms, GNNs capture the complex structure of molecules, which is crucial for predicting their properties (Zhang et al., 2020; Ohe, 2019; Zhang et al., 2022; Queiroz et al., 2023).

To further exploit these advancements, this paper introduces PUFFIN (Path-Unifying Feed-Forward Interfaced Network), a novel machine learning framework. PUFFIN integrates transfer learning and inductive bias to improve prediction accuracy and reliability. Specifically, transfer learning leverages the graph embeddings from a pre-trained GNN model. These graph embeddings, which encapsulate meaningful representations of molecules and their structural relationships (Sanchez-Lengeling et al., 2019), serve as inputs to a modified feed-forward neural network (FFNN) within the PUFFIN framework. This approach allows the FFNN to benefit from the GNN's understanding of complex relationships and patterns within the molecular graph data, thereby enhancing the accuracy of vapor pressure predictions. Additionally, PUFFIN incorporates inductive bias, ensuring the network output conforms to the functional form of known physical laws relevant to vapor pressure. This integration reinforces the model's capability to capture the underlying physical phenomena governing vapor pressure.

The introduction of PUFFIN marks a significant stride towards an effective methodology for predicting temperature-dependent vapor pressure rooted in a strong theoretical foundation. Combining the strengths of GNNs, transfer learning, domain knowledge, and inductive bias, PUFFIN presents a comprehensive framework for accurate and interpretable predictions. By ensuring alignment with known physical laws through an internal hybridization of the neural network structure, PUFFIN not only enriches the existing array of computational models but also fortifies the bridge between machine learning and domain-specific knowledge of physical phenomena.
This holds promising implications for the future of scientific research and practical applications in industries reliant on accurate vapor pressure prediction.

## 2 Methodology

The methodology section of this paper focuses on implementing three distinct methodological paths, each of which plays a crucial role in the overall analysis. Figure 1 serves as a visual representation of these paths, with each path highlighted in the figure using its respective color. The first two paths are introduced as benchmark tests, serving as reference points for comparison with the novel approach proposed in the paper. The subsequent subsections of the methodology will delve into detailed descriptions of each path, providing a comprehensive understanding of their individual contributions and how they relate to the central focus of this research.

In Section 2.1, we present a methodology that focuses on creating molecular descriptors based on a specific target property. To achieve this, we leverage the power of graph neural networks, which allow us to find a mathematical representation of the molecules guided by the desired property. This approach enables us to capture the intricate relationships and structural patterns within the molecules. The hypothesis underlying this approach is that it can create more accurate and informative descriptors. This hypothesis will be tested against a generic descriptor.

Moving on to Section 2.2, we introduce an alternative approach that uses a generic descriptor of the molecules. This descriptor utilizes logical rules to obtain a numerical molecular representation in the form of a binary vector. This is in contrast to the embedding, which is a matrix representation between two neural networks. By comparing these two descriptor types, we can assess the potential benefits of utilizing graph descriptors that are obtained from a target property (such as boiling point).

Lastly, in Section 2.3, we present our main proposal, which centers around integrating graph descriptors and knowledge reinforcement in a single framework. This novel methodology aims to enhance the predictive capabilities of neural networks in relation to a specific property. By leveraging the unique strengths of graph descriptors and incorporating knowledge reinforcement techniques, we strive to develop a robust and accurate neural network model for property prediction.

### Graph Neural Networks and Transfer Learning

In this study, we have employed a two-tier approach to model and predict the vapor pressure of chemical compounds. The first stage utilizes a graph neural network to predict the boiling point of the compounds, a property for which data are typically more abundant than for vapor pressure. This GNN then serves as a benchmark model whose generated embeddings are transferred to a feed-forward neural network (FFNN) to predict the vapor pressure. The principal idea is to evaluate the accuracy of our proposed machine learning framework--the Path-Unifying Feed-Forward Interfaced Network--compared to the two-stage GNN and FFNN model. The transfer learning principle presented here will also be used within the PUFFIN framework. Figure 1 provides a concise visual depiction of the comprehensive framework underlying this approach, highlighted in blue.

A GNN is well-suited for the task of representing molecules as it excels at handling graph-structured data.
This hypothesis will be verified in this paper by comparing the GNN results with the outcome obtained by employing a generic descriptor (the Morgan fingerprint). In this context, the molecules of the compounds are modeled as graphs where atoms are nodes and chemical bonds are edges. These GNNs are capable of capturing the local and global relationships between atoms in a molecule, which is essential for the accurate prediction of physicochemical properties. Mathematically, a GNN works by iteratively updating the representations (or embeddings) of each node based on the representations of its neighbors, following a procedure often referred to as message passing or neighborhood aggregation:

\[h_{i}^{(l+1)}=f_{h}^{(l)}\left(h_{i}^{(l)},\ \text{Aggregate}^{(l)}\left(\left\{h_{j}^{(l)}:j\in N(i)\right\}\right)\right) \tag{1}\]

Here, \(h_{i}^{(l)}\) denotes the feature vector of node \(i\) at layer \(l\), \(N(i)\) is the set of its neighboring nodes, and \(f_{h}^{(l)}\) and \(\text{Aggregate}^{(l)}\) are parameterized functions implemented as neural networks. This process is repeated for a fixed number of iterations (or layers), after which the final node representations are used to make the prediction.

Figure 1: Schematic diagram of the methodology presented in this work.

Within the transfer learning framework, the generated embeddings from the GNN model serve as inputs to the FFNN model, allowing it to leverage the GNN's understanding of complex relationships and patterns in the molecular graph data (Zang et al., 2017). FFNNs are simple, yet powerful, neural networks that map inputs to outputs through a series of transformations across multiple layers, each followed by a non-linear activation function. The mathematical formulation of a layer in an FFNN is as follows:

\[h^{(l+1)}=f_{h}^{(l)}(W^{(l)}h^{(l)}+b^{(l)}) \tag{2}\]

where \(h^{(l)}\) denotes the activations of layer \(l\), \(W^{(l)}\) and \(b^{(l)}\) are the weight matrix and bias vector of layer \(l\), respectively, and \(f_{h}^{(l)}\) is the activation function of layer \(l\). As indicated by the preceding equation, the inputs of the FFNN undergo internal processing to orient toward a new target. In the current context, the GNN-derived embeddings, originally intended for boiling point prediction, are processed by the FFNN to predict vapor pressure. This methodology is backed by evidence in the scientific literature pointing to the boiling point as a critical and insightful variable in transfer learning applications aiming at vapor pressure prediction (Zang et al., 2017; Ohe, 2019). This established correlation between boiling point and vapor pressure reinforces the potential of transfer learning.
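To make the two-stage idea concrete, the sketch below is a minimal illustration (ours, not the authors' released code) of Eq. (1) with sum aggregation, showing how a boiling-point-trained encoder, treated as fixed, hands its pooled embedding to a small vapor-pressure FFNN; the layer sizes and the way temperature enters the head are illustrative assumptions.

```python
# Minimal sketch of Eq. (1) with sum aggregation and the embedding handoff.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """h_i^{l+1} = f_h(h_i^l, sum of neighbor features) -- Eq. (1)."""
    def __init__(self, dim):
        super().__init__()
        self.f_h = nn.Linear(2 * dim, dim)  # acts on [h_i, aggregated neighbors]

    def forward(self, h, adj):
        # adj: (n_atoms, n_atoms) 0/1 adjacency matrix; sum-aggregate neighbors
        agg = adj @ h
        return torch.relu(self.f_h(torch.cat([h, agg], dim=-1)))

class PretrainedEncoder(nn.Module):
    """Stacked message passing + global add pooling -> molecule embedding."""
    def __init__(self, dim=128, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(MessagePassingLayer(dim) for _ in range(n_layers))

    def forward(self, h, adj):
        for layer in self.layers:
            h = layer(h, adj)
        return h.sum(dim=0)  # global add pooling over atoms

# Transfer: freeze the boiling-point encoder, train only the vapor-pressure head.
encoder = PretrainedEncoder(dim=128)      # assume weights come from BP training
for p in encoder.parameters():
    p.requires_grad = False
head = nn.Sequential(nn.Linear(128 + 1, 64), nn.ReLU(), nn.Linear(64, 1))

h0 = torch.randn(5, 128)                  # toy: 5 atoms with 128-d features
adj = (torch.rand(5, 5) > 0.5).float()
emb = encoder(h0, adj)
T = torch.tensor([298.0])
log10_vp = head(torch.cat([emb, T]))      # predicts log10(vapor pressure)
```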
### Generic Descriptor: Morgan Fingerprints

In our methodology, we also consider another benchmark strategy for evaluating the performance of the PUFFIN framework. This alternative approach bypasses the use of inductive bias and transfer learning, instead directly predicting the vapor pressure of compounds. At the heart of this strategy is using Morgan fingerprints as generic molecular descriptors (Morgan, 1965). These fingerprints provide a numerical representation of molecules and offer an alternative to the GNN used in our benchmark model and within the PUFFIN framework. The Morgan algorithm starts by assigning each atom in the molecule an initial identifier (usually its atomic number). Then, for each iteration up to a specified radius, it updates the identifier for each atom to a hash of its own identifier and the identifiers of its immediate neighbors. Finally, the identifiers are hashed into the fingerprint bit vector. Figure 1 provides a concise visual depiction of the comprehensive framework underlying this approach, highlighted in green.

Mathematically, a molecule's Morgan fingerprint, \(\mathbf{F}\), is a binary vector where each element, \(f_{i}\), represents the presence (1) or absence (0) of a specific substructure in the molecule. For a molecule with \(n\) atoms, the fingerprint is computed as follows:

\[\mathbf{F}=[f_{1},f_{2},...,f_{n}] \tag{3}\]

where \(f_{i}\) is computed based on the connectivity and types of atoms in a circular neighborhood with a given radius around the \(i^{th}\) atom. In more detail, \(f_{i}\) is determined as:

\[f_{i}=\begin{cases}1&\text{if specific substructure exists around the $i^{th}$ atom,}\\ 0&\text{otherwise.}\end{cases} \tag{4}\]

In this strategy, each molecule is represented by its Morgan fingerprint, which serves as input to a feed-forward neural network (FFNN). The FFNN, as a universal function approximator, aims to learn a mapping from these fingerprints to the vapor pressure, similar to our two other approaches. This benchmark strategy not only serves as a comparative measure of performance but also highlights the efficacy of the PUFFIN framework and illuminates the enhancements gained through transfer learning over using a direct numerical molecular representation like Morgan fingerprints.
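As an illustration, the snippet below computes such a bit-vector fingerprint with RDKit; the library choice, the radius of 2, and the way temperature is appended are our assumptions (the paper only indicates a 1024-bit input combined with temperature, cf. Section 3.1).

```python
# Hypothetical sketch of the generic-descriptor path using RDKit.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)C")  # acetone, one of the test compounds
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)

arr = np.zeros((1024,), dtype=np.float32)
DataStructs.ConvertToNumpyArray(fp, arr)  # binary vector of Eqs. (3)-(4)

# Baseline FFNN input: fingerprint bits together with the temperature in K
# (how temperature is concatenated is our assumption).
features = np.concatenate([arr, np.array([298.0], dtype=np.float32)])
print(features.shape)  # (1025,)
```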
### PUFFIN: Path-Unifying Feed-Forward Interfaced Network

Our proposed PUFFIN framework enhances this two-stage model by integrating domain-knowledge reinforcement via inductive bias, ensuring alignment with known physical laws. The ultimate aim of this methodology is to develop a robust, efficient, and accurate tool for predicting vapor pressure that extends beyond the capabilities of conventional GNNs and FFNNs. Figure 1 provides a concise visual depiction of the comprehensive framework underlying this approach, highlighted in salmon.

The Path-Unifying Feed-Forward Interfaced Network incorporates an inductive bias node--an Antoine equation layer--into the traditional FFNN model, thereby reinforcing the phenomena within the model and offering a more accurate prediction of the vapor pressure. The Antoine equation is a well-established empirical model for describing the relationship between vapor pressure and temperature for a given substance, often expressed as \(\log P=A-\frac{B}{C+T}\) (Thomson, 1946), where \(P\) represents vapor pressure, \(T\) is the temperature, and \(A\), \(B\), and \(C\) are substance-specific parameters.

Similar to the GNN-based approach, PUFFIN begins by learning embeddings of molecules using a graph neural network. These embeddings encapsulate structural and chemical information about the molecules and serve as inputs to the FFNN. The modified FFNN operates as described in Figure 1, where \(\sigma_{T}\) is the output of the Antoine neuron, \(A\), \(B\) and \(C\) are the outputs of the intermediary layer, and \(T\) is the temperature. In this hybridized FFNN, the Antoine equation serves as the activation function of the output layer. This is an innovative approach that exploits both the neural networks and the domain-specific knowledge encapsulated in the Antoine equation. The bias of this output neuron is considered an exogenous variable, the temperature (\(T\)), while its inputs are the outputs of the previous layer. These inputs, therefore, are effectively equivalent to the Antoine coefficients. This design essentially bridges the gap between the high-dimensional latent space represented by the previous layer's outputs and the real-world physical property to be predicted, the vapor pressure. The output of the Antoine neuron is the predicted vapor pressure, directly incorporating the temperature dependency encapsulated in the Antoine equation. This predicted vapor pressure is then compared with the true value to compute the loss function, which guides the training of the neural network.

By assessing PUFFIN's performance against the two benchmark models described in the subsections above, we seek to substantiate the advantages of incorporating transfer learning and inductive bias into machine learning models for predicting physical properties. This comparison demonstrates the effectiveness of PUFFIN's unique design, thereby showcasing its potential in advancing the field of computational prediction of physicochemical properties.
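The sketch below shows one way to realize the Antoine node as an output layer whose three inputs play the role of \(A\), \(B\), and \(C\); it is a minimal sketch under our own assumptions (placeholder layer sizes, base-10 logarithm), not the authors' implementation.

```python
# Sketch of the Antoine output node: the previous layer emits three values
# interpreted as A, B, C, and the node returns log10 P = A - B / (C + T),
# with temperature T entering as an exogenous input.
import torch
import torch.nn as nn

class AntoineNode(nn.Module):
    """Output 'activation' enforcing the Antoine functional form."""
    def forward(self, abc, T):
        A, B, C = abc[..., 0], abc[..., 1], abc[..., 2]
        return A - B / (C + T)          # predicted log10 vapor pressure

class PuffinHead(nn.Module):
    """Hypothetical head: embedding -> hidden layers -> (A, B, C) -> Antoine."""
    def __init__(self, emb_dim=128, hidden=180):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),        # the intermediary A, B, C outputs
        )
        self.antoine = AntoineNode()

    def forward(self, emb, T):
        return self.antoine(self.mlp(emb), T)

head = PuffinHead()
emb = torch.randn(4, 128)               # a batch of 4 molecule embeddings
T = torch.tensor([273.0, 298.0, 325.0, 350.0])
log10_vp = head(emb, T)                 # shape (4,)
```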
### Vapor Pressure Data Set

The database used in this study consists of 1851 molecules of organic chemicals, encompassing a wide range of classes such as industrial chemicals, antimicrobials, dyes, fertilizers, flame retardants, fragrances, pharmaceutical products, herbicides, pesticides, inert ingredients, petrochemicals, and food additives. These molecules are represented in the database using the Simplified Molecular Input Line Entry System (SMILES) (Weininger, 1988). SMILES employs an alphanumeric character sequence to describe the connectivity of atoms in a molecule. Atoms are represented by their chemical symbols, and their bonds are indicated by special characters such as hyphens and numbers. Additionally, SMILES can incorporate information about functional groups, isomerism, stereochemistry, and electric charge.

For each molecule, vapor pressures were extracted at five different temperatures (298 K, 273 K, 325 K, 350 K, 375 K) to populate the database. The vapor pressures at 298 K were obtained experimentally by Zang et al. (2017), and the vapor pressures at the remaining temperatures were obtained through the "Thermo: Thermodynamics and Phase Equilibrium component of Chemical Engineering Design Library (ChEDL)" API (Bell and Contributors, 2023). The training, validation, and test sets were obtained by shuffling the vapor pressure (VP) data set and first splitting it into two data sets--training and validation (80%) and test (20%). The training and validation set was then split again, resulting in the training set (64%) and validation set (16%).

### Hyperparameter Identification

Hyperparameter optimization is an essential part of any machine learning project, and in the development of the PUFFIN model, as well as the benchmark models, we have employed a state-of-the-art hyperparameter optimization method called HYPERBAND (Li et al., 2018) to identify the optimal hyperparameters. Ensuring that the same method of optimization is used across all models allows for a fair and unbiased comparison of their performance.

The HYPERBAND algorithm, introduced by Li et al. (2018), is an efficient random search algorithm designed for hyperparameter optimization. It tackles the fundamental challenge of allocating limited computational resources to the evaluation of numerous hyperparameter configurations. Unlike conventional hyperparameter tuning methods that follow a sequential process, HYPERBAND employs a more dynamic approach, enabling it to explore the hyperparameter space more efficiently.

HYPERBAND operates on the principle of adaptive resource allocation and early stopping. It begins by sampling a large number of hyperparameter configurations and allocating a small amount of resources to each. After an initial evaluation, only a fraction of these configurations, specifically the ones showing the most promise, are retained and given additional resources for further evaluation. Mathematically, HYPERBAND employs a multi-armed bandit strategy known as "successive halving" to allocate resources. If \(n\) is the number of configurations and \(r\) is the initial amount of resources allocated to each configuration, the algorithm iteratively reduces \(n\) by a factor of \(\mu\) (usually set to 3) while increasing \(r\) by the same factor until only the most promising configuration is left. This is done over a series of \(s\) brackets, each representing a round of resource allocation. By doing this, HYPERBAND eliminates sub-optimal configurations early in the process, allowing it to dedicate more resources to evaluating promising configurations. As a result, it can effectively identify the optimal hyperparameters within a relatively short time and with limited computational resources.
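To illustrate the resource-allocation logic described above, here is a toy successive-halving loop (ours, not the authors' implementation; in practice an off-the-shelf HYPERBAND tuner was used, and `evaluate` below is a hypothetical stand-in for a training run):

```python
# Toy successive halving: start with n configurations and r resources each;
# keep the best 1/mu fraction and scale resources by mu each round.
import random

def successive_halving(configs, r0, mu=3, evaluate=None):
    """Return the surviving configuration after successive halving rounds."""
    n, r = len(configs), r0
    scores = {}
    while n >= 1:
        # Allocate r units of resource (e.g., epochs) to each live config.
        scores = {c: evaluate(c, r) for c in configs}
        if n == 1:
            break
        n = max(1, n // mu)              # keep the most promising 1/mu fraction
        configs = sorted(configs, key=scores.get)[:n]
        r *= mu                          # give survivors mu times more resource
    return configs[0], scores[configs[0]]

# Toy usage: "configs" are learning rates; lower validation loss is better.
random.seed(0)
lrs = [10 ** random.uniform(-5, -2) for _ in range(27)]
toy_eval = lambda lr, r: abs(lr - 1e-3) + 1.0 / r   # pretend validation loss
best_lr, best_loss = successive_halving(lrs, r0=1, mu=3, evaluate=toy_eval)
print(best_lr, best_loss)
```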
## 3 Results

### Morgan Fingerprints--Vapor Pressure Prediction

Our baseline model was constructed as a feed-forward neural network taking inputs from Morgan fingerprints combined with temperature data. The output from this model was the base-10 logarithm of vapor pressure. To determine the optimal neural network architecture, we used HYPERBAND for hyperparameter search. We allowed for a range in the number of layers (from 1 to 6) and varied the number of neurons per layer (from 20 to 440 with increments of 40 neurons). Additionally, the learning rate was set to vary between \(1\times 10^{-5}\) and \(1\times 10^{-2}\). To measure the effectiveness of these variations, we used a validation set to compute the mean absolute error, which served as our optimization metric. The training process was conducted with the Adaptive Moment Estimation (ADAM) algorithm (Kingma and Ba, 2017), batching data in fixed sizes of 64. HYPERBAND was configured to a maximum of 100 epochs and a factor of 3 for the training process. The best architecture, identified by achieving the smallest validation set error, is presented in Table 1.

| Layer Type | Units | Activation |
|---|---|---|
| Input | 1024 | - |
| Dense | 340 | ReLU |
| Dense | 60 | ReLU |
| Dense | 100 | ReLU |
| Dense | 1 | linear |

Table 1: Architecture summary of the identified feed-forward neural network for predicting vapor pressure from Morgan fingerprints.

Upon establishing the optimal architecture, the model underwent further training for 600 epochs, with an early stopping protocol in place to mitigate the risk of overfitting. The resultant mean absolute errors for the training and validation sets were 0.1663 and 0.3529, respectively. To evaluate the baseline model's efficacy, the error against a test set was computed. This test dataset had been excluded from both the initial training and the hyperparameter tuning stages. The mean absolute error determined for this test set was 0.3690. To demonstrate the model's predictive proficiency with the test set, Figure 3.a illustrates a parity plot and a kernel density estimation of the residuals' distribution, showing that the mean of the residuals is centered around zero with low variance.

### Molecular Embedding--Estimation From Boiling Point

Morgan fingerprints are a form of generic molecular descriptor that serves to encode molecular structures into a compact vector form. While these descriptors can provide a useful overview of molecular properties, they may not be the most suitable for predicting specific properties like vapor pressure due to their generic nature. An alternative approach could involve designing a more specialized descriptor for the task at hand. For instance, following the methodology presented in the previous section, we trained a separate neural network to predict the boiling point of a compound. The intermediate representation or "latent space" (also known as the embedding) from this boiling point prediction model is then used as a more specialized descriptor for predicting vapor pressure.

Unlike traditional neural networks, a GNN is specifically designed to work with graph-like data structures, such as molecular structures, by considering their topological information. The input to the GNN was the molecular graph. During the training process, the GNN learned to transform these molecular graphs into a continuous vector representation, known as an embedding, which effectively captured the essential structural features relevant to the boiling point. The architecture of our model consisted of three GNN layers. These were followed by a global addition pooling layer, two dense layers with rectified linear unit (ReLU) activation functions, and dropout layers interspersed for regularization. A final readout layer completed the architecture.

Before proceeding with model identification, data preprocessing steps were undertaken. The DeepChem library (Ramsundar et al., 2019) was used to convert the molecular SMILES strings into graph data. Subsequently, the target variable (boiling point) was standardized; this was achieved by subtracting the mean and dividing by the standard deviation of the boiling points in the training set. The boiling point (BP) dataset was then partitioned into training and validation subsets, which constituted 80% of the total dataset. These subsets were used for hyperparameter tuning and for scaling operations. The remaining 20% of the dataset was designated as the test set and was reserved exclusively for the final evaluation of model performance.

Hyperparameters such as the number of neurons per layer, batch size, and dropout probability were tuned using the ASHAScheduler (AsyncHYPERBANDScheduler). Specifically, the search space for the number of neurons per layer was defined as \(2^{i}\), where \(i\) could be any value from the set \(\{6, 7, 8, 9\}\). The dropout probability was allowed to vary among \(\{0.2, 0.3, 0.4\}\), providing a balance between model complexity and risk of overfitting. The learning rate was searched in a log-uniform space, ranging from \(5\times 10^{-4}\) to \(1\times 10^{-2}\). Finally, the batch size was varied among \(\{32, 64, 128\}\). The optimal model architecture determined from the hyperparameter search is detailed in Table 2.

| Layer Type | Units | Activation |
|---|---|---|
| GNN | 128 | ReLU |
| GNN | 64 | ReLU |
| GNN | 64 | ReLU |
| Global Add | - | - |
| Dense | 128 | ReLU |
| Dropout | 0.2 probability | - |
| Dense | 128 | ReLU |
| Dropout | 0.2 probability | - |
| Dense | 1 | linear |

Table 2: Architecture summary of the identified graph neural network (GNN) for predicting boiling point.

Upon establishing the optimal architecture, the model underwent further training for 300 epochs, with an early stopping protocol to mitigate the risk of overfitting. The final mean squared error for the test set with respect to the rescaled boiling point was 0.0471. Figure 2 displays a parity plot showing the performance of the identified model on BP test set data. It can be seen that the data points are randomly spread over the identity line, which indicates satisfactory predictive performance.

Figure 2: Parity plot of graph neural network (GNN) prediction of boiling point compared to the ground truth in the test set.

The learned embeddings serve as a more specialized form of molecular descriptor, learning a representation of the molecular structures in relation to their boiling point; they are intended to be used as input descriptors, instead of the more generic Morgan fingerprints, for predicting vapor pressure.
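For concreteness, a sketch of a Table 2-style network in PyTorch Geometric follows; the specific convolution operator (`GCNConv`) is our assumption, since the paper does not name the GNN layer type or library.

```python
# Sketch of the boiling-point GNN of Table 2 (layer sizes from the table;
# GCNConv stands in for the unspecified "GNN" layers).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_add_pool

class BoilingPointGNN(nn.Module):
    def __init__(self, in_dim, hidden=(128, 64, 64)):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden[0])
        self.conv2 = GCNConv(hidden[0], hidden[1])
        self.conv3 = GCNConv(hidden[1], hidden[2])
        self.lin1 = nn.Linear(hidden[2], 128)
        self.lin2 = nn.Linear(128, 128)
        self.out = nn.Linear(128, 1)
        self.drop = nn.Dropout(p=0.2)

    def forward(self, x, edge_index, batch, return_embedding=False):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))
        x = global_add_pool(x, batch)             # molecule-level pooling
        x = self.drop(F.relu(self.lin1(x)))
        emb = self.drop(F.relu(self.lin2(x)))     # 128-d embedding to transfer
        if return_embedding:
            return emb
        return self.out(emb)                      # standardized boiling point
```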
### Molecular Embedding Without Domain Knowledge Node--Vapor Pressure Prediction

As mentioned, the 128-dimensional learned embeddings are intended to serve as a more specialized form of molecular descriptor that captures the molecular structures in relation to their boiling point. As in the baseline model, to determine the optimal neural network architecture, we used HYPERBAND for hyperparameter search. We allowed for a range in the number of layers (from 1 to 6) and varied the number of neurons per layer (from 20 to 440 with increments of 40 neurons). Additionally, the learning rate was set to vary between \(1\times 10^{-5}\) and \(1\times 10^{-2}\). Table 3 presents the architecture of the identified feed-forward neural network used to predict vapor pressure at different temperatures from the learned embeddings.

| Layer Type | Units | Activation |
|---|---|---|
| Dense | 260 | ReLU |
| Dense | 220 | ReLU |
| Dense | 180 | ReLU |
| Dense | 180 | linear |
| Dense | 1 | linear |

Table 3: Architecture summary of the identified feed-forward neural network for predicting vapor pressure from embeddings.

Upon establishing the optimal architecture, the model underwent further training for 600 epochs, with an early stopping protocol to mitigate the risk of overfitting. The resultant mean absolute errors for the training and validation sets were measured at 0.2016 and 0.3646, respectively. Figure 3.b shows a visual illustration of the model's predictive performance on the test set. As can be seen from the test set errors and plots, using embeddings from the boiling point GNN predictive model did not produce a better predictive performance for the vapor pressure prediction task than the Morgan fingerprint generic descriptor.

### Molecular Embedding With Domain Knowledge Node (PUFFIN)--Vapor Pressure Prediction

Finally, we evaluated the performance of PUFFIN. To determine the optimal neural network architecture, we used HYPERBAND for hyperparameter search and allowed the same search space as for the other approaches. Table 4 presents the architecture of the identified feed-forward neural network used to predict vapor pressure at different temperatures from the learned embeddings.

| Layer Type | Units | Activation |
|---|---|---|
| Dense | 180 | ReLU |
| Dense | 380 | ReLU |
| Dense | 180 | ReLU |
| Dense | 420 | ReLU |
| Dense | 380 | ReLU |
| Dense | 380 | ReLU |
| Dense | 3 | linear |

Table 4: Architecture summary of the identified feed-forward neural network for predicting vapor pressure from embeddings with domain knowledge reinforcement (the Antoine node).

Upon establishing the optimal architecture, the model underwent further training for 600 epochs, with an early stopping protocol in place to mitigate the risk of overfitting. The resultant mean absolute errors for the training and validation sets were measured at 0.1227 and 0.1609, respectively. Figure 3.c illustrates the model's predictive performance on the test set. As can be seen both from the test set error and from the figures, PUFFIN reduced the error by about 50%.

Figure 3: Comparative analysis: parity plots and kernel density estimation of residuals, including mean and variance for each model in the test dataset.
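As an illustrative assembly of the full path (our sketch; the frozen-embedding assumption, the 128-dimensional input, and the optimizer settings are ours), a single PUFFIN training step could look as follows, with the Table 4 layer sizes feeding the Antoine node:

```python
# Hypothetical PUFFIN training step: frozen GNN embeddings -> Table 4 FFNN
# -> (A, B, C) -> Antoine node -> MSE regression on log10 P.
import torch
import torch.nn as nn

mlp = nn.Sequential(                      # Table 4 layer sizes
    nn.Linear(128, 180), nn.ReLU(),
    nn.Linear(180, 380), nn.ReLU(),
    nn.Linear(380, 180), nn.ReLU(),
    nn.Linear(180, 420), nn.ReLU(),
    nn.Linear(420, 380), nn.ReLU(),
    nn.Linear(380, 380), nn.ReLU(),
    nn.Linear(380, 3),                    # -> (A, B, C)
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

emb = torch.randn(64, 128)                # frozen GNN embeddings (stand-in)
T = torch.full((64,), 298.0)
target = torch.randn(64)                  # measured log10 vapor pressures

A, B, C = mlp(emb).unbind(dim=-1)
pred = A - B / (C + T)                    # Antoine node, cf. Section 2.3
loss = loss_fn(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
```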
The residuals are considerably more concentrated around zero and have smaller variances than in the other approaches. This improvement can be attributed to the inductive bias introduced by using the Antoine equation in an intermediary node of the neural network. Within the proposed framework, we encode molecular information and boiling point data into the domain knowledge-reinforced PUFFIN, allowing a considerably better model for predicting vapor pressure.

Table 5 offers a summary of the test set performance for three different approaches: using Morgan fingerprints, using boiling point specialized embeddings, and using boiling point specialized embeddings with thermodynamic reinforcement. The performance is measured in terms of mean squared error (MSE) on the test set.

| | Morgan fingerprint | Embeddings without Antoine | Embeddings with Antoine (PUFFIN) |
|---|---|---|---|
| Test set MSE | 0.3690 | 0.3646 | 0.1609 |

Table 5: Summary of test set performance for the three approaches evaluated, based on the mean squared error (MSE) metric.

From the given results, it is evident that the approach using Morgan fingerprints has the worst performance, with an MSE of 0.3690. The performance improves slightly when the model uses boiling point specialized embeddings as the input, resulting in an MSE of 0.3646. This provides evidence for the hypothesis underlying this approach: property-guided graph embeddings can create more accurate and informative descriptors. However, this hypothesis should be further explored in future work to understand its benefits and drawbacks. The most significant improvement is observed when boiling point specialized embeddings are combined with domain knowledge reinforcement--in this case, the Antoine equation. This approach results in an MSE of 0.1609, more than halving the error obtained using the other two methods. This result illustrates the utility of integrating domain-specific knowledge (in this case, boiling point) and physical laws (thermodynamic reinforcement) into the learning process, leading to enhanced prediction accuracy. These results underscore the potential for specialized embeddings and physics-informed machine learning to be used as predictive models for complex properties like vapor pressure.

### Interpretability remarks

The intermediary layer of the PUFFIN model, which feeds into the Antoine neuron, produces output parameters \(A\), \(B\), and \(C\). To test the partial interpretability of this structure, these parameters are evaluated against the known Antoine coefficients available at the Dortmund Data Bank (www.ddbst.com) for three different components from the test set. Table 6 presents the values of the coefficients for the three components alongside the values produced by PUFFIN.
| Substance | A (PUFFIN) | A (lit.) | B (PUFFIN) | B (lit.) | C (PUFFIN) | C (lit.) |
|---|---|---|---|---|---|---|
| Acetone | 8.09 | 12.312 | 1082.27 | 1315.67 | -46.31 | -32.671 |
| Butanol | 10.064 | 12.3013 | 1600.87 | 1285.027 | -59.25 | -99.903 |
| Octyl-benzene | 9.402 | 12.46418 | 2173.842 | 212.18576 | -63.901 | -71.162 |

Table 6: Comparison of coefficients predicted from PUFFIN and from the literature.

The close agreement between these sets of coefficients underscores the model's ability to extract meaningful, domain-specific features from high-dimensional molecular graph data. For all three components, the model's \(A\), \(B\), and \(C\) parameters closely align with the empirical Antoine coefficients. The accuracy of these outputs reinforces the model's capability to integrate the inductive bias of the Antoine equation into the predictive framework, ensuring not only high predictive accuracy but also thermodynamic consistency.

It is worth highlighting that the model was not fed any information about the Antoine coefficients during its training. Despite this, the PUFFIN model demonstrated a remarkable ability to generate embeddings consistent with the Antoine equation. The findings underscore the robustness of the PUFFIN model, given that it has been trained purely on high-dimensional molecular data and temperature-vapor pressure pairs, with no explicit information regarding the Antoine coefficients being provided. These findings suggest that the hybridized structure of PUFFIN provides a methodology for incorporating domain-specific knowledge into the machine learning architecture, which can be extended to other fields. This mechanism promotes a richer interpretation of the model's outputs and contributes to its performance in accurately predicting vapor pressure. In particular, the successful retrieval of Antoine-like coefficients from the intermediary layer validates the design choice to use the Antoine equation as an activation function for the output layer. This lends credence to the concept that incorporating domain knowledge into machine learning models can improve both the interpretability and the predictive performance of these models.

A final assessment is provided in Figure 4, which illustrates the extrapolation behavior of the three strategies in predicting vapor pressure, with particular focus on the proposed strategy, PUFFIN. Figure 4 demonstrates the performance of each strategy relative to Antoine predictions, particularly in regions extending beyond the confines of the training data. Notably, the PUFFIN strategy aligns accurately with Antoine predictions, even in regions significantly distant from the original training data where the other models drift significantly, a behavior clearly seen in Figure 4.a. This observation is important as it highlights the potential of our modified AI model to predict vapor pressure as a continuous function of temperature with remarkable accuracy.
In contrast, the other two strategies start to diverge noticeably from Antoine predictions when venturing into regions distant from the training area. This stark difference further emphasizes the precision and robustness of the PUFFIN strategy. Furthermore, it is essential to underline that Antoine coefficients are typically derived from experimental data, making it a significant challenge to obtain these values for all the molecules incorporated in our database. Despite this, the figure evaluates three molecules from the unseen test dataset. The results from these evaluations show that PUFFIN can accurately predict vapor pressure for molecules unseen during the training phase and for temperatures well beyond the range of the training set. This capability highlights the potential of PUFFIN in predicting vapor pressure for new molecules and for temperatures outside the known training data, making it a compelling area for future research.

## 4 Conclusions

In conclusion, this study presents a novel approach for predicting vapor pressure using a hybrid neural network model, the PUFFIN framework, demonstrating good accuracy and partial interpretability. We adopted a transfer learning strategy, leveraging widely available boiling point data. Furthermore, we incorporated domain knowledge (thermodynamic) concepts into the neural network architecture with an inductive bias. Our results indicate that this approach significantly enhances the model's prediction capabilities, reducing the mean squared error by about 50% compared to the other two neural network approaches (the Morgan fingerprint network and the non-reinforced molecular embedding network).

Moreover, the PUFFIN framework's ability to extract Antoine-like coefficients directly from high-dimensional molecular graph data offers a compelling illustration of the benefits of incorporating domain knowledge into machine learning models. This promising outcome offers a degree of model interpretability typically lacking in standard neural network approaches, contributing significantly to the model's utility. Remarkably, these advances were achieved without providing the model with explicit information about the Antoine coefficients during training. This demonstrates the power of machine learning models to learn and utilize domain-specific relationships effectively, even without explicit domain knowledge during training.

Furthermore, the introduction of an Antoine node in PUFFIN permitted a physically accurate extrapolation of vapor pressure data to temperatures beyond those covered in training. This is in contrast to the two other methods (the Morgan fingerprint and the non-reinforced molecular embedding), which were both less accurate in predicting vapor pressure at temperatures outside of the range used for training.

The improvements realized by the PUFFIN framework affirm the considerable potential of incorporating domain-specific knowledge into machine learning architectures. The demonstrated success of this approach suggests potential applicability beyond vapor pressure prediction, extending to other domains that could benefit from a similar integration of domain knowledge into machine learning models. In the face of ever-increasing complexity and data availability in various scientific and engineering domains, our study underscores the importance and potential of physics-informed and interpretable machine learning models for accurately predicting complex properties.
This further underscores the potential of the PUFFIN model as a reliable instrument for predicting complex thermophysical properties.

Figure 4: Extrapolation assessment of all models for three test set components. (a) Acetone, (b) Octyl-Benzene and (c) Butanol.

## 5 Acknowledgements

This research was supported by the doctoral Grant (reference PRT/BD/152850/2021) with funds from State Budgets under the MIT Portugal Program. This work was also financially supported by LA/P/0045/2020 (ALiCE), UIDB/50020/2020 and UIDP/50020/2020 (LSRE-LCM), funded by national funds through FCT/MCTES (PIDDAC).

## 6 Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used ChatGPT 4.0 in order to improve language and readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
2303.16259
Automorphic functions for nilpotent extensions of curves over finite fields
We define and study the subspace of cuspidal functions for $G$-bundles on a class of nilpotent extensions $C$ of curves over a finite field. We show that this subspace is preserved by the action of a certain noncommutative Hecke algebra $\mathcal{H}_{G,C}$. In the case $G=\rm{GL}_2$, we construct a commutative subalgebra in $\mathcal{H}_{G,C}$ of Hecke operators associated with simple divisors. In the case of length 2 extensions and of $G=\rm{GL}_2$, we prove that the space of cuspidal functions (for bundles with a fixed determinant) is finite-dimensional and provide bounds on its dimension. In this case we also construct some Hecke eigenfunctions using the relation to Higgs bundles over the corresponding reduced curve.
Alexander Braverman, David Kazhdan, Alexander Polishchuk
2023-03-28T19:08:42Z
http://arxiv.org/abs/2303.16259v1
# Automorphic functions for nilpotent extensions of curves over finite fields

###### Abstract.

We define and study the subspace of cuspidal functions for \(G\)-bundles on a class of nilpotent extensions of curves over a finite field. We show that this subspace is preserved by the action of a certain noncommutative Hecke algebra \(\mathcal{H}_{G,C}\). In the case \(G=\operatorname{GL}_{2}\), we construct a commutative subalgebra in \(\mathcal{H}_{G,C}\) of Hecke operators associated with _simple divisors_. In the case of length \(2\) extensions and of \(G=\operatorname{GL}_{2}\), we prove that the space of cuspidal functions (for bundles with a fixed determinant) is finite-dimensional and provide bounds on its dimension. In this case we also construct some Hecke eigenfunctions using the relation to Higgs bundles over the corresponding reduced curve.

D.K. is partially supported by the ERC grant No 669655. A.P. is partially supported by the NSF grant DMS-2001224, and within the framework of the HSE University Basic Research Program and by the Russian Academic Excellence Project '5-100'.

depending on the reduction modulo some power of the maximal ideal \(\mathfrak{m}^{n}\subset O\) (see [3, Sec. 5.8] on related constructions). This naturally leads to the problem of studying automorphic functions and Hecke operators for curves over finite rings such as \(O/\mathfrak{m}^{n}\), or more generally, over nilpotent extensions of curves over finite fields. This is the main subject of the present paper.

### Nilpotent extensions of curves: adeles and \(G\)-bundles

Let \(\overline{C}\) be a smooth proper curve over a finite field \(k\), and let \(C\) be a nilpotent extension of \(\overline{C}\), i.e., a scheme of finite type such that the corresponding reduced scheme is \(\overline{C}\). We denote by \(\mathcal{N}\subset\mathcal{O}_{C}\) the nilradical.

**Definition 1.1**.: We say that \(C\) is a _special nilpotent extension_ of \(\overline{C}\) if there exists \(n\geq 1\) such that \(\mathcal{N}^{n}=0\), and \(\mathcal{N}^{i}/\mathcal{N}^{i+1}\) is a line bundle on \(\overline{C}\) for \(i=1,\dots,n-1\). We refer to \(n\) as the _length_ of the nilpotent extension.

For example, if \(C\) is a smooth proper curve over a finite quotient of the ring of integers \(O\) of a non-archimedean field \(K\) then it is a special nilpotent extension of \(\overline{C}\).

We denote by \(F=F_{C}\) the stalk of the structure sheaf of \(C\) at the general point (which is the same as the total ring of fractions of the ring of functions on any nonempty affine open of \(C\)). We also set \(\overline{F}:=F_{\overline{C}}=k(\overline{C})\). For every closed point \(p\in\overline{C}\), we consider \(\mathcal{O}_{p}\), the completion of the local ring of \(C\) at \(p\) with respect to the maximal ideal. Note that the quotient of \(\mathcal{O}_{p}\) by its nilradical is \(\overline{\mathcal{O}}_{p}\), the completion of the local ring of \(\overline{C}\) at \(p\). We denote by \(F_{p}\) the total ring of fractions of \(\mathcal{O}_{p}\), and we define the ring of adeles \(\mathbb{A}=\mathbb{A}_{C}\) as the restricted product of \(F_{p}\) over all points \(p\), with respect to the subrings \(\mathcal{O}_{p}\subset F_{p}\). We denote by \(\mathcal{O}\subset\mathbb{A}\) the ring of integer adeles. We have a natural embedding \(F\subset\mathbb{A}\). Let \(\mathcal{N}^{i}\mathbb{A}\subset\mathbb{A}\) (resp., \(\mathcal{N}^{i}_{F}\subset F\)) be the \(i\)th power of the nilradical.
The quotient \(\overline{\mathbb{A}}:=\mathbb{A}/\mathcal{N}\mathbb{A}\) is the usual ring of adeles for \(\overline{C}\), and the quotient \(\overline{\mathcal{O}}\) of \(\mathcal{O}\) by its nilradical is the ring of integer adeles for \(\overline{C}\).

Let \(G\) be a connected group defined over \(\mathbb{Z}\). We denote by \(\mathrm{Bun}_{G}(C)\) the groupoid of \(G\)-bundles over \(C\) trivial at the general point of \(C\). 1 Let us recall the adelic interpretation of \(\mathrm{Bun}_{G}(C)\). Given a \(G\)-bundle \(P\) on \(C\) trivial at the general point (thought of as a right \(G\)-torsor), we fix a trivialization \(e_{\eta}\) of \(P\) at the general point and trivializations \(e_{p}\) of the restriction of \(P\) to the formal completion of \(C\) at each point \(p\). Writing \(e_{p}=e_{\eta}\cdot g_{p}\), we get \(g=(g_{p})\in G(\mathbb{A})\). Changing trivializations \(e_{p}\) corresponds to multiplying \(g\) on the right with \(G(\mathcal{O})\), while changing \(e_{\eta}\) corresponds to multiplying \(g\) on the left with \(G(F)\). The following result is well known in the case \(C=\overline{C}\) and is proved similarly in general.

Footnote 1: If \(G\) is connected split reductive, then any \(G\)-bundle is automatically trivial at the general point, as follows from [1, Prop. 4.5].

**Proposition 1.2**.: _The above construction gives an equivalence of groupoids_

\[\mathrm{Bun}_{G}(C)\simeq G(F)\backslash G(\mathbb{A})/G(\mathcal{O}), \tag{1.1}\]

_where the set of double cosets is naturally viewed as a groupoid, see Sec. A.2._

For every open compact subgroup \(K\subset G(\mathbb{A})\), we set

\[\mathrm{Bun}_{G}(C,K):=G(F)\backslash G(\mathbb{A})/K.\]

In the case of \(G=\mathrm{GL}_{n}\), we can consider vector bundles instead of \(G\)-bundles. In this case we associate with \(g\in\mathrm{GL}_{n}(\mathbb{A})\) the vector bundle \(V=V(g)\) where one has \(V_{\eta}=F^{n}\) and \(V_{p}=g_{p}\mathcal{O}_{p}^{n}\subset F_{p}^{n}\) (here \(V_{p}\) is the completion of the stalk of \(V\) at \(p\)). As in the case of curves over a finite field, we have the following interpretation of the cohomology \(H^{*}(C,V)\) for \(V=V(g)\) with \(g\in\mathrm{GL}_{n}(\mathbb{A})\):

\[H^{0}(C,V)\simeq F^{n}\cap g\mathcal{O}^{n},\ \ H^{1}(C,V)\simeq\mathbb{A}^{n}/(F^{n}+g\mathcal{O}^{n}). \tag{1.2}\]

When integrating functions over \(H(\mathbb{A})\), where \(H\) is an algebraic subgroup of \(G\) defined over \(\mathbb{Z}\), we always normalize the Haar measure so that \(\mathrm{vol}(H(\mathcal{O}))=1\). For a vector bundle \(V\) on \(C\) we set

\[\overline{V}:=V|_{\overline{C}}=V/\mathcal{N}V.\]
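For instance, specializing Proposition 1.2 and (1.2) to \(G=\mathrm{GL}_{1}\): since the generic stalk \(F\) is a local ring, every line bundle on \(C\) is trivial at the general point, so

\[\operatorname{Pic}(C)\simeq F^{\times}\backslash\mathbb{A}^{\times}/\mathcal{O}^{\times},\]

and for a line bundle \(L=V(g)\) with \(g\in\mathbb{A}^{\times}\) one has

\[H^{0}(C,L)\simeq F\cap g\mathcal{O},\qquad H^{1}(C,L)\simeq\mathbb{A}/(F+g\mathcal{O}).\]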
One could define cuspidal functions for an arbitrary such \(G\), but to simplify notation, we assume from now on that \(G\) is split, and we denote by \(T\subset G\) a split maximal torus (defined over \(\mathbb{Z}\)). In analogy with [8], which dealt with the local case, for each compact open subgroup \(K\subset G(\mathbb{A})\) we consider the _Hecke algebra_ \(\mathcal{H}_{G,C,K}\) of compactly supported, \(K\)-biinvariant measures on \(G(\mathbb{A})\). For \(K=G(\mathcal{O})\) we denote this algebra simply as \(\mathcal{H}_{G,C}\) (note that this algebra is not commutative when \(C\) is not a curve over a finite field). The algebra \(\mathcal{H}_{G,C,K}\) acts by convolution on the space \(\mathbb{C}(\mathrm{Bun}_{G}(C,K))\), preserving the subspace \(\mathcal{S}(\mathrm{Bun}_{G}(C,K))\). We denote this action as \(h\star f\), where \(h\in\mathcal{H}_{G,C,K}\), \(f\in\mathbb{C}(\mathrm{Bun}_{G}(C,K))\).

Let \(B=TU\) be a Borel subgroup. A parabolic subgroup is _standard_ if it contains \(B\).

**Definition 1.3**.: For a standard parabolic subgroup \(P\supset B\) (defined over \(\mathbb{Z}\)) with the unipotent radical \(U\) and a Levi subgroup \(L\subset P\), we set

\[\mathrm{QBun}_{L}(C,K):=(L(F)U(\mathbb{A}))\backslash G(\mathbb{A})/K,\]

\[\mathrm{QBun}_{L}(C):=\mathrm{QBun}_{L}(C,G(\mathcal{O})).\]

Note that the set \(\mathrm{QBun}_{L}(\overline{C})\) can be identified with the set of isomorphism classes of \(L\)-bundles on \(\overline{C}\).

**Definition 1.4**.:

1. For a standard parabolic subgroup \(P\), the constant term operator \[E_{P}:\mathbb{C}(\mathrm{Bun}_{G}(C,K))\to\mathbb{C}(\mathrm{QBun}_{L}(C,K))\] is given by the formula \(E_{P}f(g)=\int_{u\in U_{P}(F)\backslash U_{P}(\mathbb{A})}f(ug)du\). 2

Footnote 2: This integral can be rewritten as a finite sum over \(U_{P}(F)\backslash U_{P}(\mathbb{A})/(U_{P}(\mathbb{A})\cap gKg^{-1})\).

2. More generally, for each \(i\geq 0\), we define the subgroup \(U_{P}(\mathcal{N}^{i}\mathbb{A})\subset U_{P}(\mathbb{A})\) (resp., \(U_{P}(\mathcal{N}^{i}_{F})\subset U_{P}(F)\)) as the kernel of the projection \(U_{P}(\mathbb{A})\to U_{P}(\mathbb{A}/\mathcal{N}^{i}\mathbb{A})\) (resp., \(U_{P}(F)\to U_{P}(F/\mathcal{N}^{i}_{F})\)). We define the corresponding constant term operator by \[E^{i}_{P}f(g)=\int_{u\in U_{P}(\mathcal{N}^{i}_{F})\backslash U_{P}(\mathcal{N}^{i}\mathbb{A})}f(ug)du.\]

3. We say that a function \(f\) on \(\operatorname{Bun}_{G}(C,K)\) is _cuspidal_ if \(E_{P}f\equiv 0\) for every proper standard parabolic subgroup \(P\) containing \(B\).

4. \(\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{G}(C,K))\subset\mathbb{C}(\operatorname{Bun}_{G}(C,K))\) denotes the subspace of cuspidal functions.

5. For \(i\geq 0\), we say that \(f\) is \(i\)_-cuspidal_ if \(E^{i}_{P}f\equiv 0\) for every proper standard parabolic subgroup \(P\) containing \(B\). In the case \(n=2\), we also call \(1\)-cuspidal functions _strongly cuspidal_.

6. \(\mathcal{S}^{i}_{\operatorname{cusp}}(\operatorname{Bun}_{G}(C,K))\subset\mathbb{C}(\operatorname{Bun}_{G}(C,K))\) denotes the subspace of \(i\)-cuspidal functions.
For a closed embedding \(C^{\prime}\subset C\) (where \(C\) and \(C^{\prime}\) are nilpotent extensions of the reduced curve \(\overline{C}\)), we have a natural reduction map \(\operatorname{Bun}_{G}(C)\to\operatorname{Bun}_{G}(C^{\prime})\), which induces the embedding \(\mathbb{C}(\operatorname{Bun}_{G}(C^{\prime}))\hookrightarrow\mathbb{C}(\operatorname{Bun}_{G}(C))\) as the subspace of functions depending only on the reduction of a \(G\)-bundle to \(C^{\prime}\).

**Proposition A**.: _(1) The action of the Hecke algebra \(\mathcal{H}_{G,C,K}\) on \(\mathbb{C}(\operatorname{Bun}_{G}(C,K))\) preserves the subspace of cuspidal (resp., \(i\)-cuspidal) functions. (2) For a closed embedding \(C^{\prime}\subset C\), we have_

\[\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{G}(C^{\prime}))=\mathbb{C}(\operatorname{Bun}_{G}(C^{\prime}))\cap\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{G}(C)).\]

Let \(G^{\prime}\subset G\) be the commutator subgroup. Then we have a natural projection \(\operatorname{Bun}_{G}(C)\to\operatorname{Bun}_{G/G^{\prime}}(C)\). For each \(G/G^{\prime}\)-bundle \(L\) over \(C\), we denote by \(\operatorname{Bun}_{G}^{L}(C)\subset\operatorname{Bun}_{G}(C)\) the subset of isomorphism classes of \(G\)-bundles whose associated \(G/G^{\prime}\)-bundle is isomorphic to \(L\). For every open subgroup \(K\subset G(\mathcal{O})\), we denote by \(\operatorname{Bun}_{G}^{L}(C,K)\subset\operatorname{Bun}_{G}(C,K)\) the preimage of \(\operatorname{Bun}_{G}^{L}(C)\) under the natural projection \(\operatorname{Bun}_{G}(C,K)\to\operatorname{Bun}_{G}(C)\). For \(L\in\operatorname{Bun}_{G/G^{\prime}}(C)\), we always view functions on \(\operatorname{Bun}_{G}^{L}(C,K)\) as functions on \(\operatorname{Bun}_{G}(C,K)\) (extended by zero). On the other hand, let \(Z\subset G\) be the center of \(G\), and let \(Z_{0}\subset Z\) be the connected component of \(1\). Then \(Z_{0}\to G/G^{\prime}\) is an isogeny. The group \(\operatorname{Bun}_{Z_{0}}(C,K)\) acts naturally on \(\operatorname{Bun}_{G}(C,K)\). We denote this action as \(M+P\), where \(M\in\operatorname{Bun}_{Z_{0}}(C,K)\) and \(P\in\operatorname{Bun}_{G}(C,K)\).

**Definition 1.5**.: A function \(f\in\mathbb{C}(\operatorname{Bun}_{G}(C,K))\) is called _Hecke-bounded_ if there exists a finite set \(S\subset\operatorname{Bun}_{G}(C,K)\), such that the support of \(h*f\) is contained in \(\operatorname{Bun}_{Z_{0}}(C,K)+S\) for every \(h\in\mathcal{H}_{G,C,K}\).

_Conjecture 1.6_.: Assume that \(C\) is a special nilpotent extension of \(\overline{C}\) of length \(n\), and that the genus \(g\) of \(\overline{C}\) is \(\geq 2\). Let \(K\subset G(\mathcal{O})\) be an open subgroup.

1. The \(\mathbb{C}\)-vector space \[\mathcal{S}_{\mathrm{cusp}}(\mathrm{Bun}^{L}_{G}(C,K)):=\mathcal{S}_{\mathrm{cusp}}(\mathrm{Bun}_{G}(C,K))\cap\mathbb{C}(\mathrm{Bun}^{L}_{G}(C,K))\] is finite-dimensional for any \(L\in\mathrm{Bun}_{G/G^{\prime}}(C)\).

2. If \(\deg(\mathcal{N}/\mathcal{N}^{2})=0\) then there exists a function \(c(g)>0\) and a positive integer \(r\) such that \[\left|\frac{\dim\mathcal{S}_{\mathrm{cusp}}(\mathrm{Bun}^{L}_{G}(C))}{q^{n\dim(G^{\prime})(g-1)}}-r\right|\leq c(g)q^{-1/2},\] where \(q=|k|\).

3. A function \(f\) on \(\mathrm{Bun}^{L}_{G}(C,K)\) is cuspidal if and only if it is Hecke-bounded.

Note that part (3) implies that any cuspidal function has finite support. Conversely, part (3) follows from part (1) together with the statement that any cuspidal function has finite support.
In the case when \(C\) is a curve over a finite field and \(G=\mathrm{PGL}_{2}\) or \(G=\mathrm{GL}_{2}\), the finite dimensionality is well known, while the dimension estimate is computed by Schleich in [13, Sec. 3.3].

### The case of \(G=\mathrm{GL}_{2}\): unramified commuting Hecke operators

From now till the end of this section we assume that \(G=\mathrm{GL}_{2}\) and write \(\mathrm{Bun}(C,K)\) and \(\mathrm{Bun}^{L}(C,K)\) instead of \(\mathrm{Bun}_{G}(C,K)\) and \(\mathrm{Bun}^{L}_{G}(C,K)\). In this subsection we assume that \(K=G(\mathcal{O})\) and define a commuting family of Hecke operators associated with certain divisors on \(C\). We always assume that \(C\) is a special nilpotent extension of \(\overline{C}\).

**Definition 1.7**.: A _simple divisor_ \(c\subset C\) is an effective Cartier divisor \(c\subset C\) such that \(\overline{c}:=c\cap\overline{C}\) (the scheme-theoretic intersection in \(C\)) is a reduced point in \(\overline{C}\).

Every point \(\overline{c}\in\overline{C}\) can be lifted to a simple divisor (non-uniquely). For a simple divisor \(c\subset C\), we denote by \(f_{c}\) an idele given by a local generator of the ideal of \(c\) at the place \(\overline{c}\) and trivial at all other places.

**Definition 1.8**.: We denote by \(h_{c}\in\mathcal{H}_{G,C}\) the characteristic measure of the double class \[G(\mathcal{O})\begin{pmatrix}f_{c}^{-1}&0\\ 0&1\end{pmatrix}G(\mathcal{O}). \tag{1.3}\] We define the corresponding Hecke operator on \(\mathrm{Bun}(C)\) by \[T_{c}:\mathbb{C}(\mathrm{Bun}(C))\to\mathbb{C}(\mathrm{Bun}(C)):f\mapsto h_{c}\ast f.\]

**Theorem B**. _Assume that \(C\) is a special nilpotent extension of \(\overline{C}\). Then for any pair of simple divisors \(c\) and \(c^{\prime}\), one has \(h_{c}h_{c^{\prime}}=h_{c^{\prime}}h_{c}\) in \(\mathcal{H}_{G,C}\)._

For a line bundle \(L\) on \(C\) we denote by \(t_{L}^{\ast}\) the operator on \(\mathbb{C}(\mathrm{Bun}(C))\) given by \[t_{L}^{\ast}f(V)=f(V\otimes L).\] These operators come from central elements \(h_{L}\) of \(\mathcal{H}_{G,C}\). Thus, the elements \((h_{L})\) and \((h_{c})\) generate a commutative subalgebra of \(\mathcal{H}_{G,C}\). In a forthcoming work we will generalize the construction of this commutative subalgebra to other reductive groups in the case when \(C\) is a smooth curve over \(O/\mathfrak{m}^{n}\), where \(O\) is the ring of integers in a local nonarchimedean field. We also define a weaker notion of Hecke-boundedness, which uses only the commuting operators \(T_{c}\) corresponding to simple divisors, as opposed to the entire Hecke algebra \(\mathcal{H}_{G,C}\).

**Definition 1.9**.: A function \(f\in\mathbb{C}(\operatorname{Bun}(C))\) is called _weakly Hecke-bounded_ if there exists a finite set \(S\subset\operatorname{Bun}(C)\), such that for every collection of simple divisors \(c_{1},\dots,c_{n}\) on \(C\) (not necessarily distinct), the support of \(T_{c_{1}}\dots T_{c_{n}}f\) is contained in \(S\otimes\operatorname{Pic}(C)\).

_Conjecture 1.10_.: Fix a line bundle \(L\) on \(C\). Any weakly Hecke-bounded function on \(\operatorname{Bun}^{L}(C)\) is cuspidal.

Note that together with Conjecture 1.6(3) this implies that for a function on \(\operatorname{Bun}^{L}(C)\), being weakly Hecke-bounded is equivalent to being Hecke-bounded, and both are equivalent to cuspidality.
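As an illustration of Definition 1.7 (a minimal sketch, assuming the split case \(C=\overline{C}\times\operatorname{Spec}(k[\epsilon]/(\epsilon^{2}))\) of length \(2\)): if \(t\) is a local uniformizer at a point \(\overline{c}\in\overline{C}\), then the simple divisors lifting \(\overline{c}\) are cut out by equations of the form

\[f_{c}=t+\epsilon g,\qquad g\in\mathcal{O}_{\overline{C},\overline{c}},\]

and two such equations define the same divisor exactly when \(g\equiv g^{\prime}\bmod(t)\). Thus the lifts of \(\overline{c}\) are parametrized by the residue field \(k(\overline{c})\), which in particular shows the non-uniqueness of lifting noted above.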
### The case of \(G=\operatorname{GL}_{2}\) and an extension of length \(2\)

This paper provides the proof of most of the above conjectures in the case when \(G=\operatorname{GL}_{2}\) and the special nilpotent extension \(C\) has length \(2\). Until the end of this section we restrict our analysis to this case. We say that functions from the subspace \(\mathcal{S}^{1}_{\operatorname{cusp}}(\operatorname{Bun}^{L}(C))=\mathcal{S}^{1}_{\operatorname{cusp}}(\operatorname{Bun}(C))\cap\mathbb{C}(\operatorname{Bun}^{L}(C))\) are _strongly cuspidal_.

**Theorem C**.: _Assume that \(C\) is a special nilpotent extension of length \(2\) and the genus \(g\) of \(\overline{C}\) is \(\geq 2\). (1) For every line bundle \(L\) on \(C\) and every open subgroup \(K\subset G(\mathcal{O})\), the space \(\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}^{L}(C,K))\) is finite-dimensional. (2) A function \(f\) on \(\operatorname{Bun}^{L}(C)\) is cuspidal if and only if it is weakly Hecke-bounded._

The main technical result used in the proof of Theorem C is Proposition 5.4 showing that every cuspidal function on \(\operatorname{Bun}_{G}(C)\) (resp., \(\operatorname{Bun}_{G}(C,K)\)) vanishes on vector bundles \(V\) (resp., elements of \(\operatorname{Bun}_{G}(C,K)\) projecting to \(V\)) such that \(\overline{V}\simeq L_{0}\oplus M_{0}\), where \(L_{0}\) and \(M_{0}\) are line bundles on \(\overline{C}\) with \(\deg(M_{0})-\deg(L_{0})\geq 6g-1\) (resp., \(\deg(M_{0})-\deg(L_{0})\geq N(K)\) for some constant \(N(K)\)).

### The space of cuspidal functions for \(\operatorname{PGL}_{2}\)

To simplify notations we replace in this subsection the group \(G=\operatorname{GL}_{2}\) by \(\operatorname{PGL}_{2}\). By Proposition A, the Hecke algebra \(\mathcal{H}_{K}:=\mathcal{H}_{\operatorname{PGL}_{2},C,K}\) acts on the space of cuspidal functions \[V(C,K):=\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,K)).\] For each simple divisor \(c\subset C\) we denote by \(T_{c}\) the Hecke operator associated with the double coset of the image of (1.3).

**Corollary D**. 1. _For each open subgroup_ \(K\subset\operatorname{PGL}_{2}(\mathcal{O})\)_, the space_ \(V(C,K)\) _is finite-dimensional._ 2. _A function_ \(f\in\mathcal{S}\big(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C)\big)\) _is cuspidal if and only if the space spanned by the functions_ \(T_{c_{1}}\dots T_{c_{n}}f\) _(resp., the space_ \(\mathcal{H}_{\operatorname{PGL}_{2},C}\cdot f\)_) is finite-dimensional._

We also prove a similar characterization of cuspidal functions in \(\mathcal{S}\big(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,K)\big)\) for an arbitrary open subgroup \(K\subset\operatorname{PGL}_{2}(\mathcal{O})\) (see Corollary 6.8). Recall (see Definition 1.4(5)) that we also have the subspace of strongly cuspidal functions \[V^{1}(C,K):=\mathcal{S}^{1}_{\operatorname{cusp}}\big(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,K)\big)\subset V(C,K),\] preserved by \(\mathcal{H}_{K}\). Let \(\overline{K}\subset\operatorname{PGL}_{2}(\overline{\mathbb{A}})\) be the image of \(K\). Then we can view \(V(\overline{C},\overline{K})\) as a subspace in \(V(C,K)\). Using tools from representation theory of \(G(\mathbb{A})\), we get a direct sum decomposition \[V(C,K)=V^{1}(C,K)\oplus V(C,K)_{n}\oplus V(\overline{C},\overline{K}),\] where \(V(C,K)_{n}\) is a certain piece corresponding to the nilpotent adjoint orbit in \(\mathfrak{g}(\overline{F})\) (see Corollary 6.9).
We also relate \(V^{1}(C):=V^{1}(C,G(\mathcal{O}))\) to the moduli space of Higgs bundles over \(\overline{C}\) (see Proposition 3.20) and use this to study the dimension of \(V^{1}(C)\) and of \(V(C):=V(C,G(\mathcal{O}))\).

**Theorem E**. _Assume that \(C\) is a special nilpotent extension of length \(2\) with \(L=\mathcal{N}/\mathcal{N}^{2}\) of degree \(0\), the genus \(g\) of \(\overline{C}\) is \(\geq 2\), and the characteristic of \(k\) is \(\neq 2\). Then_ \[|\frac{\dim V^{1}(C)}{q^{6g-6}}-2|\leq a(g)q^{-1/2},\ \ \dim(V(C)/V^{1}(C))\leq b(g)q^{3g-3},\] _for some constants \(a(g),b(g)\) depending only on the genus \(g\)._

For simplicity of notation we will formulate our last result for \(C=\overline{C}\times\operatorname{Spec}(k[\epsilon]/(\epsilon^{2}))\), where \(\overline{C}\) is a curve over \(k\) (a generalization to arbitrary special nilpotent extensions of length \(2\) is given by Theorem 6.12). Set \(A:=H^{0}(\overline{C},\omega_{\overline{C}}^{2})\), and let \(A^{\prime}\subset A\) be the complement of the set of elements \(\eta^{2}\), where \(\eta\in H^{0}(\overline{C},\omega_{\overline{C}})\). There is a decomposition \[V^{1}(C)\simeq\bigoplus_{\alpha\in A^{\prime}}\mathcal{S}(HF_{\alpha}(k)),\] whose summands are preserved by the big Hecke algebra \(\mathcal{H}_{C}\) (see (6.3)). Here \(HF_{\alpha}\) are Hitchin fibers in the moduli stack of Higgs \(\operatorname{PGL}_{2}\)-bundles. In the case when \(\alpha\in A^{\prime}\) has only simple zeros, \(HF_{\alpha}(k)\) can be identified with the set of \(k\)-points of an abelian variety, hence, it makes sense to consider characters of \(HF_{\alpha}(k)\).

**Theorem F**. _Assume that \(C=\overline{C}\times\operatorname{Spec}(k[\epsilon]/(\epsilon^{2}))\), and the characteristic of \(k\) is \(\neq 2\). Then for every \(\alpha\in A^{\prime}\) having only simple zeros on \(\overline{C}\), the characters of the group \(HF_{\alpha}(k)\) form a basis in the space \(\mathcal{S}(HF_{\alpha}(k))\) consisting of \(\mathcal{H}_{\operatorname{PGL}_{2},C}\)-eigenvectors._

Thus, the big Hecke algebra \(\mathcal{H}_{\operatorname{PGL}_{2},C}\) acts on the summands in \(V^{1}(C)\) corresponding to smooth spectral curves through a commutative quotient. However, we expect to see noncommuting operators from this algebra when considering the summands corresponding to singular spectral curves. One can ask whether Hecke eigenfunctions corresponding to singular spectral curves come from some perverse sheaves on Hitchin fibers.

The paper is organized as follows. In Section 2 we prove some simple general results, which hold for nilpotent extensions \(C\) of arbitrary length. In Sec. 2.1, we prove Proposition A (for any \(G\)), and in Sec. 2.2 and 2.3 we study Hecke operators for \(\operatorname{GL}_{2}\), in particular, proving Theorem B. Starting from Section 3 we consider only special nilpotent extensions of length \(2\). In Section 3, we study the representation of the group \(G(\mathbb{A})\) on the space \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) of smooth functions with compact support on \(G(F)\backslash G(\mathbb{A})\) for an arbitrary split group \(G\). In Sec. 3.2, we decompose this space \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) into the direct sum of invariant subspaces parametrized by coadjoint orbits of the group \(G(\overline{F})\). We describe explicitly the pieces corresponding to elliptic orbits (i.e., regular semisimple with anisotropic stabilizer) in terms of certain induced representations (see Sec.
3.3 and 3.4), and study admissible subrepresentations in the pieces corresponding to regular semisimple orbits (see Sec. 3.5). In Sec. 3.6, we establish the relation of these pieces with the moduli stack of Higgs bundles on \(\overline{C}\). In Sec. 3.7 we specialize to the case \(G=\operatorname{PGL}_{2}\) and describe completely the subspace of _finitary functions_ in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\), i.e., those contained in an admissible \(G(\mathbb{A})\)-subrepresentation.3

Footnote 3: We choose to work with \(\operatorname{PGL}_{2}\) rather than \(\operatorname{GL}_{2}\) in the representation-theoretic part of the paper, since the results are somewhat easier to formulate and since the corresponding local statements are readily available in [8].

In Section 4 we study the constant term operator in the case of \(\operatorname{GL}_{2}\). In Sec. 4.1, we introduce an important technical tool, an analog of the Iwasawa decomposition for nilpotent extensions of length \(2\). In Sec. 4.3, we establish a useful compatibility of the constant term with the Hecke operators. In Sec. 5 we combine this result with an analog of the reduction theory to prove Theorem C. In Section 6 we combine the previous results to study cuspidal functions for \(\operatorname{PGL}_{2}\). In Sec. 6.1, we prove Corollary D on equivalence of cuspidality with Hecke finiteness for \(\operatorname{PGL}_{2}\). Then we combine this with the orbit decomposition from Sec. 3.2 to prove that the subspaces of cuspidal and finitary functions in \(\mathcal{S}(\operatorname{PGL}_{2}(F)\backslash\operatorname{PGL}_{2}(\mathbb{A}))\) coincide (see Theorem 6.7). Finally, in Sec. 6.4 we prove Theorems E and F. In Appendix A we describe some useful results on groupoids given by double cosets. In Appendix B we give a geometric interpretation of the constant term operator for \(G=\operatorname{GL}_{2}\).

## 2. Hecke operators for special nilpotent extensions of curves over finite fields

Let \(C\) be a nilpotent extension of a curve \(\overline{C}\). In Sec. 2.1 we prove Proposition A (which works for any reductive group \(G\) and any \(C\)). Then in Sec. 2.2, we restrict to the case when \(G=\operatorname{GL}_{2}\) and compare the geometric and adelic definitions of the Hecke operators associated with effective Cartier divisors. In Sec. 2.3 we assume that \(C\) is a special nilpotent extension and prove Theorem B.

### Compatibility of the constant term with the Hecke algebra

For each parabolic subgroup \(P=LU\) and a function \(f\) on \(\operatorname{Bun}_{G}(C,K)\), we define the constant term \(E_{P}(f)\) as a function on \(\operatorname{QBun}_{L}(C,K)\) (see Def. 1.3) given by \[E_{P}f(g)=\int_{u\in U(F)\backslash U(\mathbb{A})}f(ug)du,\] where \(g\in G(\mathbb{A})\) and \(du\) is the Haar measure on \(U(\mathbb{A})\) normalized so that \(\operatorname{vol}(U(\mathcal{O}))=1\) (we will check that the function on \(\operatorname{QBun}_{L}(C,K)\) is well defined in Lemma 2.1 below). Consider the compact open subgroup \(U_{g,K}:=U(\mathbb{A})\cap gKg^{-1}\). Since the function \(u\mapsto f(ug)\) is right \(U_{g,K}\)-invariant, and the set of double cosets \(U(F)\backslash U(\mathbb{A})/U_{g,K}\) is finite, the integral defining \(E_{P}f\) can be rewritten as a finite sum.

**Lemma 2.1**.: _The map \(g\mapsto E_{P}f(g)\) descends to a well defined function on \(\operatorname{QBun}_{L}(C,K)\)._

Proof.: It is clear that \(g\mapsto E_{P}f(g)\) is right \(K\)-invariant and left \(U(\mathbb{A})\)-invariant.
It remains to check left \(L(F)\)-invariance. For \(l\in L(F)\), we have \[E_{P}f(lg)=\int_{u\in U(F)\backslash U(\mathbb{A})}f(ulg)du=\int_{u\in U(F)\backslash U(\mathbb{A})}f(\operatorname{Ad}(l^{-1})(u)g)du=\int_{u\in U(F)\backslash U(\mathbb{A})}f(ug)du,\] since for \(l\in L(F)\), the action of \(\operatorname{Ad}(l)\) on \(U(\mathbb{A})\) preserves the Haar measure.

Proof of Proposition A.: (1) For each \(g_{0}\in G(\mathbb{A})\), let \(h_{g_{0}}\) denote the characteristic measure of the double coset \(Kg_{0}K\), so that the total integral of \(h_{g_{0}}\) is \(1\). We have to show that each operator \(T_{g_{0}}:f\mapsto h_{g_{0}}\ast f\) preserves the subspace of cuspidal functions. It is enough to construct an operator \(T_{g_{0}}^{P}\) on \(\mathbb{C}(\operatorname{QBun}_{L}(C,K))\), such that \[E_{P}\circ T_{g_{0}}=T_{g_{0}}^{P}\circ E_{P}.\] We have \[T_{g_{0}}f(g)=\int_{h\in K}f(ghg_{0})dh,\] where the Haar measure is normalized so that \(\operatorname{vol}(K)=1\). Let \[H_{g_{0}}:=K\cap g_{0}Kg_{0}^{-1}\subset K.\] Since \(hh_{0}g_{0}K=hg_{0}K\) for \(h_{0}\in H_{g_{0}}\), we can replace the integration over \(h\in K\) by a finite sum over \(K/H_{g_{0}}\): \[T_{g_{0}}f(g)=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in K/H_{g_{0}}}f(ghg_{0}). \tag{2.1}\] Let \(P\) be a parabolic subgroup, \(U\subset P\) its unipotent radical. By definition, \[E_{P}T_{g_{0}}f(g)=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in K/H_{g_{0}}}\int_{u\in U(F)\backslash U(\mathbb{A})}f(ughg_{0})dudh.\] This formula can be rewritten as \[E_{P}T_{g_{0}}f(g)=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in K/H_{g_{0}}}E_{P}f(ghg_{0}), \tag{2.2}\] so the right-hand side has the required form \(T_{g_{0}}^{P}E_{P}(f)\), where \[T_{g_{0}}^{P}\phi(g)=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in K/H_{g_{0}}}\phi(ghg_{0}).\] The proof that \(T_{g_{0}}\) preserves the kernel of \(E_{P}^{i}\) is analogous for each \(i\geq 0\).

(2) It is enough to consider the case when the ideal sheaf \(I_{C^{\prime}}\subset\mathcal{O}_{C}\) defining \(C^{\prime}\subset C\) satisfies \(I_{C^{\prime}}^{2}=0\). Let \[G(\mathbb{A}_{C})\to G(\mathbb{A}_{C^{\prime}}):g\mapsto\overline{g}\] denote the natural reduction map. Suppose we are given a function \(f_{0}\) on \(\operatorname{Bun}_{G}(C^{\prime})=G(F_{C^{\prime}})\backslash G(\mathbb{A}_{C^{\prime}})/G(\mathcal{O}_{C^{\prime}})\). Consider the corresponding function \(f\) on \(\operatorname{Bun}_{G}(C)\) given by \(f(g)=f_{0}(\overline{g})\). Then we have \[E_{P}f(g)=\int_{u\in U(F_{C})\backslash U(\mathbb{A}_{C})}f_{0}(\overline{u}\overline{g})du=\int_{u_{0}\in U(F_{C^{\prime}})\backslash U(\mathbb{A}_{C^{\prime}})}\operatorname{vol}(\pi^{-1}(u_{0}))f_{0}(u_{0}\overline{g})du_{0},\] where \(\pi:U(F_{C})\backslash U(\mathbb{A}_{C})\to U(F_{C^{\prime}})\backslash U(\mathbb{A}_{C^{\prime}})\) is the reduction map. It remains to show that \(\operatorname{vol}(\pi^{-1}(u_{0}))\) does not depend on \(u_{0}\). Since \(I^{2}=0\) (where \(I=I_{C^{\prime}}\)), we can identify the kernel of the map \(U(\mathbb{A}_{C})\to U(\mathbb{A}_{C^{\prime}})\) with \(1+\mathfrak{u}(I\otimes\mathbb{A}_{C^{\prime}})\) (where the tensor product is over \(A/I\)). This leads to the identification \[\pi^{-1}\pi(u)=(1+\mathfrak{u}(I\otimes\mathbb{A}_{C^{\prime}})/\mathfrak{u}(I\otimes F_{C^{\prime}}))\cdot u\subset U(F_{C})\backslash U(\mathbb{A}_{C}).\] Thus, \[\operatorname{vol}(\pi^{-1}\pi(u))=\operatorname{vol}(1+\mathfrak{u}(I\otimes\mathbb{A}_{C^{\prime}})/\mathfrak{u}(I\otimes F_{C^{\prime}})),\] which does not depend on \(\pi(u)\).
Thus, up to a constant factor, \(E_{P}f(g)\) is given by the constant term operator applied to \(f_{0}\).

### Hecke operators for \(\operatorname{GL}_{2}\): adelic and modular definitions

In this subsection we assume that \(G=\operatorname{GL}_{2}\) and write \(\operatorname{Bun}(C)=\operatorname{Bun}_{\operatorname{GL}_{2}}(C)\).

**Definition 2.2**.: (i) For any effective Cartier divisor \(c\subset C\) and a function \(f\) on \(\operatorname{Bun}(C)\), we set \[T_{c}f(V)=\sum_{V^{\prime}}\frac{i(V,V^{\prime};c)}{a(V^{\prime})}f(V^{\prime}),\] where \(i(V,V^{\prime};c)\) is the number of embeddings \(V\to V^{\prime}\) such that \(V^{\prime}/V\simeq\mathcal{O}_{c}\), and \(a(V^{\prime})\) is the number of automorphisms of \(V^{\prime}\).

(ii) We denote by \(\mathbb{D}:\mathbb{C}(\operatorname{Bun}(C))\to\mathbb{C}(\operatorname{Bun}(C))\) the operator induced by duality: \[\mathbb{D}f(V)=f(V^{\vee}).\] We also consider another Hecke operator \(T_{c}^{\prime}\) given by \[T_{c}^{\prime}=\mathbb{D}\circ T_{c}\circ\mathbb{D}.\] Equivalently, using the equality \(i(V,W;c)=i(W^{\vee},V^{\vee};c)\), we get \[T_{c}^{\prime}f(V^{\prime})=\sum_{V}\frac{i(V,V^{\prime};c)}{a(V)}f(V)=\sum_{V\subset V^{\prime}:V^{\prime}/V\simeq\mathcal{O}_{c}}f(V), \tag{2.3}\] where in the last formula we sum over subsheaves \(V\) of \(V^{\prime}\) which are vector bundles.

We have \[T_{c}(\mathbb{C}(\operatorname{Bun}^{L}(C)))\subset\mathbb{C}(\operatorname{Bun}^{L(-c)}(C)),\ \ T_{c}^{\prime}:\mathbb{C}(\operatorname{Bun}^{L(-c)}(C))\to\mathbb{C}(\operatorname{Bun}^{L}(C)).\] The operators \((T_{c})\) (resp., \(T_{c}^{\prime}\)) in general do not commute (see Remark 2.10). However, in the next section we will show that in the case when \(C\) is a special nilpotent extension of \(\overline{C}\), the operators associated with _simple_ divisors do commute. Now we connect the above definition to the adelic Hecke operators \(T_{g}\) given by (2.1). Let \(c\subset C\) be an effective Cartier divisor. We denote by \(f_{c}=(f_{p,c})\) an idele such that \(f_{p,c}\in\mathcal{O}_{p}\) is a local equation of \(c\) for each point \(p\).

**Lemma 2.3**.: _Set \(A(c)=H^{0}(\mathcal{O}_{C}/I_{c})\), where \(I_{c}\) is the ideal of \(c\subset C\). Consider the element_ \[g_{c}:=\begin{pmatrix}f_{c}^{-1}&0\\ 0&1\end{pmatrix}\in T(\mathbb{A}_{C}). \tag{2.4}\] _Then one has_ \[T_{g_{c}}=\frac{1}{|\mathbb{P}^{1}(A(c))|}\cdot T_{c}.\]

Proof.: The subgroup \(H_{g_{c}}\subset G(\mathcal{O})\) consists of matrices \((a_{ij})\in G(\mathcal{O})\) such that \(a_{21}\in I_{c}:=f_{c}\cdot\mathcal{O}\). Thus, the quotient \(G(\mathcal{O})/H_{g_{c}}\) can be identified with \(G(\mathcal{O}/I_{c})/B(\mathcal{O}/I_{c})\), i.e., with the projective line \(\mathbb{P}^{1}(A(c))\).
Hence, \(\operatorname{vol}(H_{g_{c}})=1/|\mathbb{P}^{1}(A(c))|\), and the general formula (2.1) can be rewritten as \[T_{g_{c}}f(g)=\frac{1}{|\mathbb{P}^{1}(A(c))|}\cdot\sum_{[a_{1}:a_{2}]\in\mathbb{P}^{1}(A(c))}f(gh_{[a_{1}:a_{2}]}g_{c}),\] where \(h_{[a_{1}:a_{2}]}\) is any element of \(G(\mathcal{O})\) such that \[h_{[a_{1}:a_{2}]}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}.\] Note that since \(g_{c}^{-1}\) has entries in \(\mathcal{O}_{C}\), for the bundles \(V(g)\) and \(V(gh_{[a_{1}:a_{2}]}g_{c})\) we have an inclusion of sheaves \[V:=V(g)\hookrightarrow V(gh_{[a_{1}:a_{2}]}g_{c})=:V_{[a_{1}:a_{2}]}\] which induces at every point the inclusion \[V_{p}=g\mathcal{O}_{p}^{2}\hookrightarrow gh_{[a_{1}:a_{2}]}g_{c}\mathcal{O}_{p}^{2}=(V_{[a_{1}:a_{2}]})_{p}.\] On the other hand, surjections \(V^{\vee}=V((g^{t})^{-1})\to\mathcal{O}_{c}\), up to automorphisms of \(\mathcal{O}_{c}\), correspond exactly to the points \([a_{1}:a_{2}]\) of the projective line \(\mathbb{P}^{1}(A(c))\), and their kernels are easily seen to be given by subsheaves \(V^{\vee}_{[a_{1}:a_{2}]}\). Taking into account the relation \(T_{c}=\mathbb{D}\circ T^{\prime}_{c}\circ\mathbb{D}\) and the formula (2.3) for \(T^{\prime}_{c}\) we get \[T_{c}f(g)=\sum_{[a_{1}:a_{2}]\in\mathbb{P}^{1}(A(c))}f(gh_{[a_{1}:a_{2}]}g_{c}).\]

**Lemma 2.4**.: _(i) For every line bundle \(L\) on \(C\) the operator \(t_{L}^{*}\) on \(\mathbb{C}(\operatorname{Bun}(C))\) given by \(t_{L}^{*}f(V)=f(V\otimes L)\) commutes with all the operators \(T_{c}\) and \(T_{c}^{\prime}\) associated with effective Cartier divisors._

_(ii) For any effective Cartier divisor_ \(c\subset C\)_, one has_ \[T_{c}^{\prime}=t_{\mathcal{O}(-c)}^{*}\circ T_{c},\] _i.e.,_ \[T_{c}^{\prime}f(V)=(T_{c}f)(V(-c)).\]

Proof.: (i) is straightforward. For (ii) we use that for every pair of vector bundles \(V\) and \(V^{\prime}\), there is a bijection between the set of embeddings \(i:V\to V^{\prime}\) with the cokernel isomorphic to \(\mathcal{O}_{c}\) and the set of embeddings \(V^{\prime}(-c)\to V\) with the cokernel isomorphic to \(\mathcal{O}_{c}\). Namely, starting with \(i:V\to V^{\prime}\) we observe that the natural embedding \(V^{\prime}(-c)\to V^{\prime}\) factors uniquely through an embedding \(j:V^{\prime}(-c)\to V\). Furthermore, \(\operatorname{coker}(j)\) is isomorphic to the kernel of the induced map \(V^{\prime}|_{c}\to\mathcal{O}_{c}\), hence, it is isomorphic to \(\mathcal{O}_{c}\).

Now we rewrite Definition 2.2 in terms of the natural groupoid structure on the Hecke correspondence where we take into account automorphisms of objects. For each effective Cartier divisor \(c\subset C\), let us consider the Hecke groupoid \[\operatorname{Hecke}(C,c):=\{V\hookrightarrow V^{\prime}\mid V^{\prime}/V\simeq\mathcal{O}_{c}\},\] where \(V\) and \(V^{\prime}\) are vector bundles over \(C\), and let \(p_{1},p_{2}\) denote the natural projections from \(\operatorname{Hecke}(C,c)\) to \(\operatorname{Bun}(C)\) sending \(V\hookrightarrow V^{\prime}\) to \(V\) and \(V^{\prime}\), respectively. We use the pull-backs and push-forwards with respect to these projections (see Appendix A).

**Lemma 2.5**.: _One has_ \[T_{c}=p_{1*}p_{2}^{*},\ \ T_{c}^{\prime}=p_{2*}(p_{1})^{*}.\]

Proof.: Let us consider the case of \(T_{c}^{\prime}\) (the case of \(T_{c}\) is similar, and also follows by duality).
Note that for a vector bundle \(V\), the fiber groupoid \(p_{2}^{-1}(V)\) is the groupoid of embeddings \(V^{\prime}\hookrightarrow V\) (with \(V\) fixed), such that \(V/V^{\prime}\simeq\mathcal{O}_{c}\). This groupoid has trivial automorphisms, so by Lemma A.1, we get \[p_{2*}(p_{1})^{*}f(V)=\sum_{V^{\prime}\subset V:V/V^{\prime}\simeq\mathcal{O}_{c}}f(V^{\prime})=T_{c}^{\prime}f(V).\]

### Proof of Theorem B

For every closed point \(p\) of \(C\) let us denote by \(\mathcal{H}_{p}=\mathcal{H}_{G(F_{p}),G(\mathcal{O}_{p})}\) the _local Hecke algebra_ of compactly supported, \(G(\mathcal{O}_{p})\)-biinvariant measures on \(G(F_{p})\). Note that for every double coset \(S=G(\mathcal{O}_{p})gG(\mathcal{O}_{p})\) there is a unique _characteristic measure_ \(\mu_{S}\) supported on \(S\), which is \(G(\mathcal{O}_{p})\)-biinvariant and satisfies \(\mu_{S}(1)=1\). It can be defined by \[\mu_{S}(f)=\int_{G(\mathcal{O}_{p})\times G(\mathcal{O}_{p})}f(k_{1}gk_{2})dk_{1}dk_{2},\] where \(dk\) is the Haar measure on \(G(\mathcal{O}_{p})\) (left and right invariant since \(G(\mathcal{O}_{p})\) is compact). For any open compact subgroup \(K\subset G(\mathbb{A})\) of the form \(K=G(\mathcal{O}_{p})\times K^{\prime}\), where \(K^{\prime}\subset G(\mathbb{A}^{\prime})\) and \(\mathbb{A}^{\prime}\) is the ring of adeles away from the place \(p\), we have a natural injective homomorphism \[\mathcal{H}_{p}\to\mathcal{H}_{G,C,K}.\] In particular, for \(K=G(\mathcal{O})\) we get homomorphisms \(\mathcal{H}_{p}\to\mathcal{H}_{G,C}\), and the images of these homomorphisms for different points commute. Recall that the element \(h_{c}\) associated with a simple divisor \(c\) is defined using the double coset of the element \(g_{c}\) in \(G(\mathbb{A})\) which is trivial away from the place \(p=\overline{c}\) (see (2.4)). Hence, \(h_{c}\) can be viewed as an element of the local Hecke algebra \(\mathcal{H}_{p}\) corresponding to the characteristic measure of \(G(\mathcal{O}_{p})g_{c}G(\mathcal{O}_{p})\). Now let \(G=\operatorname{GL}_{2}\). Theorem B follows from the following local result.

**Theorem 2.6**.: _Assume that \(C\) is a special nilpotent extension of \(\overline{C}\), and let \(c\) and \(d\) be simple divisors in \(C\) supported at a point \(p\in\overline{C}\). Then \(h_{c}h_{d}=h_{d}h_{c}\) in the local Hecke algebra \(\mathcal{H}_{p}\)._

For the rest of this subsection we will write \(\mathcal{O}\) instead of \(\mathcal{O}_{p}\). As in [8], we use Gelfand's trick. Let \(\theta:G(F_{p})\to G(F_{p})\) denote the anti-involution given by the transpose of a matrix. Then \(\theta\) induces an anti-involution of the Hecke algebra \(\mathcal{H}_{p}\).

**Lemma 2.7**.: _Assume that for \(g_{1},g_{2}\in\operatorname{GL}_{2}(F_{p})\), the double cosets \(G(\mathcal{O})g_{1}G(\mathcal{O})\) and \(G(\mathcal{O})g_{2}G(\mathcal{O})\) are \(\theta\)-invariant, and the subset \(G(\mathcal{O})g_{1}G(\mathcal{O})g_{2}G(\mathcal{O})\subset\operatorname{GL}_{2}(F_{p})\) is the union of \(\theta\)-invariant double \(G(\mathcal{O})\)-cosets. Then the characteristic measures of \(G(\mathcal{O})g_{1}G(\mathcal{O})\) and \(G(\mathcal{O})g_{2}G(\mathcal{O})\) commute in \(\mathcal{H}_{p}\)._

Proof.: Since the group \(G(\mathcal{O})\) is compact, the Haar measure on it is preserved by \(\theta\). Hence, \(\theta\) maps the characteristic measure of a double coset \(G(\mathcal{O})gG(\mathcal{O})\) to the characteristic measure of a double coset \(G(\mathcal{O})\theta(g)G(\mathcal{O})\).
Hence, any element of the Hecke algebra supported on the union of a finite number of \(\theta\)-invariant double \(G(\mathcal{O})\)-cosets is \(\theta\)-invariant. Let \(\chi_{1}\) and \(\chi_{2}\) be the characteristic measures of \(G(\mathcal{O})g_{1}G(\mathcal{O})\) and \(G(\mathcal{O})g_{2}G(\mathcal{O})\). Then \(\chi_{1}*\chi_{2}\) is supported on \(G(\mathcal{O})g_{1}G(\mathcal{O})g_{2}G(\mathcal{O})\). Hence, we have \[\theta(\chi_{1})=\chi_{1},\ \ \theta(\chi_{2})=\chi_{2},\ \ \theta(\chi_{1}*\chi_{2})=\chi_{1}*\chi_{2}.\] Since \(\theta\) is an anti-involution, we deduce \[\chi_{1}*\chi_{2}=\theta(\chi_{1}*\chi_{2})=\theta(\chi_{2})*\theta(\chi_{1})=\chi_{2}*\chi_{1}.\]

Note that if a matrix \(g\) is symmetric then \(G(\mathcal{O})gG(\mathcal{O})\) is invariant under \(\theta\). Hence, this condition holds for the elements \(g_{c}\) associated with generators \(f_{c}\) of the ideals of simple divisors \(c\) supported at \(p\).

**Lemma 2.8**.: _We have an inclusion_ \[\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}G(\mathcal{O})\begin{pmatrix}1&0\\ 0&f_{d}\end{pmatrix}\subset G(\mathcal{O})\begin{pmatrix}1&0\\ 0&f_{c}f_{d}\end{pmatrix}G(\mathcal{O})\cup\bigcup_{x\in\mathcal{O}}G(\mathcal{O})\begin{pmatrix}f_{c}&0\\ x&f_{d}\end{pmatrix}G(\mathcal{O}).\]

Proof.: Recall that we denote by \(H_{g_{c}}\subset G(\mathcal{O})\) the subgroup of matrices such that \(a_{21}\in f_{c}\cdot\mathcal{O}\). For any \(h\in H_{g_{c}}\) we have \[\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}h\in G(\mathcal{O})\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}.\] Hence, it is enough to study the double \(G(\mathcal{O})\)-cosets of \[\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}g\begin{pmatrix}1&0\\ 0&f_{d}\end{pmatrix},\] where \(g\) runs through some set of representatives of right \(H_{g_{c}}\)-cosets in \(G(\mathcal{O})\). It is easy to see that we can take as such representatives some matrices of the form \[\begin{pmatrix}0&1\\ 1&a\end{pmatrix},\ \ \begin{pmatrix}1&0\\ \epsilon b&1\end{pmatrix},\] with \(a,b\in\mathcal{O}\). Now we observe that \[\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ \epsilon b&1\end{pmatrix}\begin{pmatrix}1&0\\ 0&f_{d}\end{pmatrix}=\begin{pmatrix}f_{c}&0\\ x&f_{d}\end{pmatrix}\] for some \(x\in\epsilon\mathcal{O}\), and \[\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}\begin{pmatrix}0&1\\ 1&a\end{pmatrix}\begin{pmatrix}1&0\\ 0&f_{d}\end{pmatrix}=\begin{pmatrix}0&f_{c}f_{d}\\ 1&af_{d}\end{pmatrix}\in G(\mathcal{O})\begin{pmatrix}1&0\\ 0&f_{c}f_{d}\end{pmatrix}G(\mathcal{O}).\]

Proof of Theorem 2.6.: By Lemma 2.7, it is enough to show that the subset \(G(\mathcal{O})g_{c}G(\mathcal{O})g_{d}G(\mathcal{O})\subset G(F_{p})\) is the union of \(\theta\)-invariant double \(G(\mathcal{O})\)-cosets. In the case \(c=d\), the subset \(G(\mathcal{O})g_{c}G(\mathcal{O})g_{c}G(\mathcal{O})\) is clearly invariant under \(\theta\). Thus, we can assume \(c\neq d\). By Lemma 2.8, it is enough to check that for every \(x\in\epsilon\mathcal{O}\) the subset \[G(\mathcal{O})\begin{pmatrix}f_{c}&0\\ x&f_{d}\end{pmatrix}G(\mathcal{O})=G(\mathcal{O})\begin{pmatrix}f_{d}&x\\ 0&f_{c}\end{pmatrix}G(\mathcal{O})\] is invariant under \(\theta\). Note that multiplying by elementary matrices on the left and on the right we can add to \(x\) any element in \((f_{c},f_{d})\). Set \(f=f_{d}\). Since \(\overline{c}=\overline{d}\), we can write \(f_{c}\) as \(f_{c}=f+g\), where \(g\in\epsilon\mathcal{O}\).
Thus, we are interested in the double \(G(\mathcal{O})\)-coset of the matrix \[M=\begin{pmatrix}f&x\\ 0&f+g\end{pmatrix}.\] Note that \[\mathcal{O}/(f,\epsilon)=k(\overline{d})\] is the residue field of the corresponding reduced point. Let \(n\geq 1\) be minimal such that \(g=\epsilon^{n}g_{0}\). If \(g_{0}\equiv 0\mod(\epsilon,f)\), say \(g_{0}=fa+\epsilon b\), then \[f+g=f+\epsilon^{n}af+\epsilon^{n+1}b=(1+\epsilon^{n}a)(f+g^{\prime}),\] where \(g^{\prime}\in(\epsilon^{n+1})\). Hence, multiplying the second row by the unit \((1+\epsilon^{n}a)^{-1}\) we can replace \(n\) by \(n+1\). Continuing like this, we will either arrive at the case \(g=0\), or at the case where \(g_{0}\not\in(\epsilon,f)\). Let \(m\geq 1\) be minimal such that \(x\in(\epsilon^{m},f)\). Since we can add a multiple of \(f\) to \(x\), we can assume \(x=\epsilon^{m}e_{0}\), where \(e_{0}\not\in(\epsilon,f)\). If \(m\geq n\) then since \((\epsilon,f,g_{0})=(1)\), we obtain \[\epsilon^{n}\in(\epsilon^{n+1},f,\epsilon^{n}g_{0}).\] Hence, the ideals \(I=(f,\epsilon^{n})\) and \(J=(f,\epsilon^{n}g_{0})\subset I\) satisfy \(I\subset\epsilon I+J\), which implies that \(I=J\). Thus, \(x\in(\epsilon^{n})\subset J=(f,g)\), so this reduces to the case \(x=0\). It remains to consider the case \(m<n\). We claim that there exist \(a\) and \(b\) such that \[(f+\epsilon^{n}g_{0})(1+\epsilon^{n}a)=f+\epsilon^{n}e_{0}b.\] This can be checked by descending induction in \(n\), using the fact that \((\epsilon,f,e_{0})=(1)\). Indeed, we can find \(a\) and \(b\) such that \[(f+\epsilon^{n}g_{0})(1+\epsilon^{n}a)=f+\epsilon^{n}e_{0}b+\epsilon^{n+1}g_{0}^{\prime}.\] Then we can apply the induction assumption to \(n+1\) instead of \(n\), \(f^{\prime}=f+\epsilon^{n}e_{0}b\) instead of \(f\) (which still satisfies \((\epsilon,f^{\prime},e_{0})=(1)\)) and \(g_{0}^{\prime}\) instead of \(g_{0}\). Hence, we reduce to the case \(g=bx\) for some \(b\). Thus, \[M=\begin{pmatrix}f&x\\ 0&f+bx\end{pmatrix}.\] Now the assertion follows from the identity \[MB=BM^{t},\text{ where }B=\begin{pmatrix}0&1\\ 1&b\end{pmatrix}.\]

**Corollary 2.9**.: _Under the assumption of the theorem, all the Hecke operators \((T_{c}^{\prime})\) and \((T_{c})\) commute._

Proof.: The case of \((T_{c}^{\prime})\) follows from Theorem 2.6. The rest follows from Lemma 2.4.

_Remark 2.10_.: If the ground ring \(A\) is a finite field then the Hecke operators corresponding to arbitrary effective Cartier divisors still commute. However, already for \(A=k[\epsilon]/(\epsilon^{2})\) the Hecke operators corresponding to non-simple divisors do not necessarily commute. Namely, for any simple divisor \(c\), the characteristic measures of \(G(\mathcal{O})g_{c}G(\mathcal{O})\) and \(G(\mathcal{O})g_{c}^{2}G(\mathcal{O})\) do not commute. In fact, we have \[G(\mathcal{O})g_{c}G(\mathcal{O})g_{c}^{2}G(\mathcal{O})\neq G(\mathcal{O})g_{c}^{2}G(\mathcal{O})g_{c}G(\mathcal{O}).\] Indeed, we have \[\begin{pmatrix}f_{c}&0\\ \epsilon&f_{c}^{2}\end{pmatrix}=\begin{pmatrix}f_{c}&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ \epsilon&1\end{pmatrix}\begin{pmatrix}1&0\\ 0&f_{c}^{2}\end{pmatrix}\in G(\mathcal{O})g_{c}^{-1}G(\mathcal{O})g_{c}^{-2}G(\mathcal{O}),\] but it is easy to see that this matrix is not in a \(G(\mathcal{O})\)-double coset of any matrix of the form \(\begin{pmatrix}f_{c}&\epsilon b\\ 0&f_{c}^{2}\end{pmatrix}\), and so it is not in \(G(\mathcal{O})g_{c}^{-2}G(\mathcal{O})g_{c}^{-1}G(\mathcal{O})\).
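As a sanity check in the classical case \(A=k\) (so \(C=\overline{C}\)): for a degree-one point \(c=p\), the subsheaves \(V\subset V^{\prime}\) with \(V^{\prime}/V\simeq\mathcal{O}_{p}\) correspond to the lines \(\ell\subset V^{\prime}|_{p}\) (via \(V=\ker(V^{\prime}\to V^{\prime}|_{p}/\ell)\)), so formula (2.3) becomes the classical Hecke operator

\[T_{p}^{\prime}f(V^{\prime})=\sum_{\ell\in\mathbb{P}(V^{\prime}|_{p})(k)}f\big(\ker(V^{\prime}\to V^{\prime}|_{p}/\ell)\big),\]

a sum with \(q+1\) terms, matching the count \(|\mathbb{P}^{1}(A(c))|=q+1\) appearing in Lemma 2.3.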
In the case when \(C\) is a smooth curve over a finite local ring \(A\), we have the following characterization of simple divisors on \(C\).

**Lemma 2.11**.: _Assume \(C\) is a smooth curve over a finite local ring \(A\). A subscheme \(c\subset C\) is a simple divisor if and only if it is an irreducible affine closed subscheme, etale over \(\operatorname{Spec}(A)\)._

Proof.: Assume that \(c\subset C\) is a simple divisor. If \(f_{c}\in\mathcal{O}\) is a local equation of \(c\) then its reduction \(\overline{f}_{c}=f_{\overline{c}}\) is an equation of \(\overline{c}\). Note that \(c\) is flat over \(A\). Indeed, the resolution \([\mathcal{O}\xrightarrow{f_{c}}\mathcal{O}]\) for \(\mathcal{O}_{c}\) shows that \[\operatorname{Tor}_{1}^{A}(\mathcal{O}_{c},k)=\ker(\overline{\mathcal{O}}\xrightarrow{f_{\overline{c}}}\overline{\mathcal{O}})=0.\] Also, \(\mathcal{O}_{c}/\epsilon\mathcal{O}_{c}\) is \(k(\overline{c})\), so \(c\) is unramified over \(\operatorname{Spec}(A)\), hence it is etale. Conversely, if \(c=\operatorname{Spec}(A(c))\subset C\) is an irreducible affine closed subscheme, etale over \(\operatorname{Spec}(A)\), then its reduction \(\overline{c}\) is a reduced point, so \(A(c)/\mathfrak{m}A(c)=k(\overline{c})\). Furthermore, by flatness of \(A(c)\), we have \(I_{c}/\mathfrak{m}I_{c}=I_{\overline{c}}\), where \(I_{c}\subset\mathcal{O}\) (resp., \(I_{\overline{c}}\subset\overline{\mathcal{O}}\)) is the ideal of \(c\) (resp., \(\overline{c}\)). This implies that a local generator of \(I_{\overline{c}}\) can be lifted to a generator of \(I_{c}\), so \(c\subset C\) is an effective Cartier divisor.

## 3. Automorphic representations for an extension of length \(2\)

From now on we always assume that \(C\) is a special nilpotent extension of \(\overline{C}\) of length \(2\). In this section we consider a general split reductive group \(G\). Our goal is to describe a decomposition of the space \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) (see Sec. 3.2) into a direct sum of \(G(\mathbb{A})\)-representations numbered by coadjoint orbits in \(\mathfrak{g}^{\vee}(\overline{F})\) and to analyze the summands corresponding to elliptic regular orbits. We start with a local representation theory in Sec. 3.1. As in [8], where the case of \(\operatorname{PGL}_{2}\) was considered, the main idea is to use Mackey theory. In Sec. 3.2, we establish the orbit decomposition of \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\). In Sec. 3.3, 3.4 and 3.5 we give more details on the pieces of this decomposition corresponding to regular semisimple and elliptic orbits. In Sec. 3.6 we establish the relation between the spaces of \(G(\mathcal{O})\)-invariants and the moduli spaces of Higgs bundles. In Sec. 3.7 we determine the subspaces of _finitary functions_ in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\), i.e., those contained in admissible \(G(\mathbb{A})\)-subrepresentations, in the case \(G=\operatorname{PGL}_{2}\).

### Local theory

For a point \(p\in C\), we have the nilpotent extension \(E\coloneqq F_{p}\) of the corresponding local field \(\overline{E}\coloneqq\overline{F}_{p}\). We have an element \(\epsilon\in E\) such that \(\epsilon^{2}=0\), \(E/\epsilon E=\overline{E}\) and \(\epsilon E\simeq\overline{E}\). Let \(N\subset G(E)\) denote the kernel of the (surjective) reduction homomorphism \[r:G(E)\to G(\overline{E}).\] Then we have a natural isomorphism \(\mathfrak{g}(\overline{E})\simeq N\).
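Concretely, for \(G=\mathrm{GL}_{2}\) this isomorphism comes from the following computation: using \(\epsilon E\simeq\overline{E}\), we have \(N=\{1+\epsilon X\mid X\in\mathfrak{gl}_{2}(\overline{E})\}\), and since \(\epsilon^{2}=0\),

\[(1+\epsilon X)(1+\epsilon Y)=1+\epsilon(X+Y),\]

so \(X\mapsto 1+\epsilon X\) identifies the additive group \(\mathfrak{gl}_{2}(\overline{E})\) with \(N\).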
If there is a ring isomorphism \(E\simeq\overline{E}[\epsilon]/(\epsilon^{2})\) then \(G(E)\simeq G(\overline{E})\ltimes\mathfrak{g}(\overline{E})\). Let us fix a non-trivial additive character \(\psi:\overline{E}\to U(1)\).

**Definition 3.1**.: 1. \(\Xi\) is the group of characters \(\chi:N\to\mathbb{C}^{*}\).

2. We identify \(\Xi\) with \(\mathfrak{g}^{\vee}(\overline{E})\) through the isomorphism \(a:\mathfrak{g}^{\vee}(\overline{E})\to\Xi\) given by \(x\mapsto\chi_{x}\), where \(\chi_{x}(y)=\psi((x,y))\).

3. For \(\chi\in\Xi\) we denote by \(\Omega_{\chi}\) the \(G(\overline{E})\)-orbit of \(\chi\), by \(St(\chi)\subset G(E)\) the stabilizer of \(\chi\) in \(G(E)\), and by \(\overline{St}(\chi)=St(\chi)/N\subset G(\overline{E})\) the stabilizer of \(\chi\) in \(G(\overline{E})\).

4. For an open compact subgroup \(K\subset G(E)\) we denote by \(\Xi_{K}\subset\Xi\) the subgroup of characters \(\chi\) such that \(\chi|_{N\cap K}\equiv 1\).

5. For an orbit \(\Omega\subset\Xi\) and an open compact subgroup \(K\subset G(E)\) we define \(\Delta_{\Omega}(K):=(\Omega\cap\Xi_{K})/\overline{K}\), where \(\overline{K}:=r(K)\).

_Remark 3.2_.: It is easy to see that the set \(\Delta_{\Omega}(K)\) is finite for every regular semisimple \(\Omega\) (see the proof of Proposition 3.7(1) below).

**Definition 3.3**.: 1. \(\mathcal{R}\) denotes the category of smooth representations of \(G(E)\).

2. For \(\chi\in\Xi\), \(\mathcal{C}_{\chi}\) denotes the category of smooth representations \(\rho\) of \(St(\chi)\) such that \(\rho|_{N}=\chi Id\).

3. \(Ir_{\chi}\) denotes the set of equivalence classes of irreducible representations in \(\mathcal{C}_{\chi}\).

4. \(\operatorname{Ind}_{\chi}:\mathcal{C}_{\chi}\to\mathcal{R}:(\rho,W)\mapsto(\pi(\chi,\rho),V):=\operatorname{Ind}_{St(\chi)}^{G(E)}(\rho,W)\) is the compactly supported induction functor.

5. \(J_{\chi}:\mathcal{R}\to\mathcal{C}_{\chi}\) is the functor given by \(J_{\chi}(\pi,V)=V_{N,\chi}\), where \(V_{N,\chi}\) is the space of \(\chi\)-coinvariants of \(V\) with respect to \(N\).

6. For a \(G(\overline{E})\)-orbit \(\Omega\subset\Xi\) we denote by \(\mathcal{R}_{\Omega}\) the full subcategory of representations \(\pi\) such that \(J_{\chi}(\pi)=\{0\}\) for \(\chi\not\in\Omega\).

**Lemma 3.4**.: 1. _The functors_ \(J_{\chi}\) _and_ \(\operatorname{Ind}_{\chi}\) _are exact, and_ \(J_{\chi}\) _is left adjoint to_ \(\operatorname{Ind}_{\chi}\)_._

2. _For any_ \(\chi\in\Xi,\rho\in Ir_{\chi}\) _the representation_ \(\operatorname{Ind}_{\chi}(\rho)\) _of_ \(G\) _is irreducible. Furthermore,_ \(\operatorname{Ind}_{\chi}(\rho)\) _is isomorphic to_ \(\operatorname{Ind}_{\chi^{\prime}}(\rho^{\prime})\) _if and only if the pairs_ \((\chi,\rho)\) _and_ \((\chi^{\prime},\rho^{\prime})\) _are conjugate._

3. _Any smooth irreducible representation of_ \(G\) _is equivalent to_ \(\operatorname{Ind}_{\chi}(\rho)\) _for some pair_ \((\chi,\rho)\)_, with_ \(\chi\in\Xi\)_,_ \(\rho\in Ir_{\chi}\)_._

4. _Let_ \(\Omega_{\chi}\) _be the orbit of_ \(\chi\)_. If_ \(V\in\mathcal{R}_{\Omega_{\chi}}\) _satisfies_ \(J_{\chi}(V)=0\) _then_ \(V=0\)_._

Proof.: Part (1) is well known, and parts (2) and (3) follow from Mackey theory. For part (4), we use the fact that a nonzero representation \(V\) has an irreducible subquotient \(V_{0}\). Then, since \(J_{\chi}\) is exact, we would get \(J_{\chi}(V_{0})=0\). But we also have \(V_{0}\in\mathcal{R}_{\Omega_{\chi}}\), so \(J_{\chi^{\prime}}(V_{0})=0\) for any character \(\chi^{\prime}\).
Since \(V_{0}=\operatorname{Ind}_{\chi^{\prime}}(\rho)\) for some \(\chi^{\prime}\) and \(\rho\in Ir_{\chi^{\prime}}\), we get a contradiction.

**Proposition 3.5**.: 1. _For any_ \((\rho,W)\in\mathcal{C}_{\chi}\) _and an open compact subgroup_ \(K\subset G(E)\)_, for_ \((\pi(\chi,\rho),V)=\operatorname{Ind}_{\chi}(\rho,W)\)_, one has_ \[V^{K}=\bigoplus_{\delta=g_{\delta}\cdot\chi\in\Delta_{\Omega_{\chi}}(K)}W^{g_{\delta}^{-1}Kg_{\delta}\cap St_{\chi}}\]

2. \(J_{\chi}\circ\operatorname{Ind}_{\chi}\simeq\operatorname{Id}_{\mathcal{C}_{\chi}}\)_._

3. _The functors_ \(\operatorname{Ind}_{\chi}:\mathcal{C}_{\chi}\to\mathcal{R}_{\Omega_{\chi}}\) _and_ \(J_{\chi}:\mathcal{R}_{\Omega_{\chi}}\to\mathcal{C}_{\chi}\) _are equivalences of categories._

Proof.: (1) Recall that \(V\) is the space of smooth functions \(f:G\to W\) such that \(f(hg)=\rho(h)f(g)\) for \(h\in\operatorname{St}_{\chi}\), such that the support of \(f\) is compact modulo left shifts by \(\operatorname{St}_{\chi}\). In particular, \(f(ug)=\chi(u)f(g)\) for \(u\in N\). The space \(V^{K}\) consists of \(f\in V\) such that \(f(gk)=f(g)\) for \(k\in K\). Note that if \(gkg^{-1}\in N\) then we get \[f(g)=f(gk)=\chi(gkg^{-1})f(g),\] so \(f(g)=0\) unless \(\chi|_{gKg^{-1}\cap N}\equiv 1\), or equivalently, \(g^{-1}\chi\in\Xi_{K}\). Similarly, we see that for any \(g\) one has \(f(g)\in W^{gKg^{-1}\cap\operatorname{St}_{\chi}}\). Furthermore, since the support of \(f\) is compact modulo left shifts by \(\operatorname{St}_{\chi}\), there are only finitely many double cosets \(\operatorname{St}_{\chi}gK\in\operatorname{St}_{\chi}\backslash G/K\) such that \(f(g)\neq 0\). Thus, we get an embedding \[V^{K}\to\bigoplus_{\delta=g_{\delta}\cdot\chi\in\Delta_{\Omega_{\chi}}(K)}W^{g_{\delta}^{-1}Kg_{\delta}\cap St_{\chi}}:f\mapsto(f(g_{\delta}^{-1})).\] Conversely, for each \(g\) and each \(w\in W^{gKg^{-1}\cap\operatorname{St}_{\chi}}\), we have a well defined function \(f_{g,K,w}\in V^{K}\) given by \[f_{g,K,w}(g_{1})=\begin{cases}\rho(h)w,&g_{1}=hgk,h\in\operatorname{St}_{\chi},k\in K,\\ 0,&g_{1}\not\in\operatorname{St}_{\chi}gK.\end{cases}\] This proves the claimed decomposition.

(2) We use the same notation as in the proof of (1). We need to show that the map \[V\to W:f\mapsto f(1)\] induces an isomorphism \(J_{\chi}(V)\xrightarrow{\sim}W\). The surjectivity is clear (e.g., using functions \(f_{1,K,w}\) for sufficiently small \(K\)). Let \(V_{0}\subset V\) denote the subspace of \(f\) such that \(f(1)=0\). We need to show that functions of the form \(uf-\chi(u)f\), where \(u\in N\), span \(V_{0}\). Note that for any \(f\in V\) and \(u\in N\), we have \[(uf-\chi(u)f)(g)=f(gu)-\chi(u)f(g)=[\chi(gug^{-1})-\chi(u)]\cdot f(g).\] Now for any \(g\not\in\operatorname{St}_{\chi}\), let \(u\in N\) be such that \(\chi(gug^{-1})\neq\chi(u)\). Then for any sufficiently small \(K\) we have \(\chi((gk)u(gk^{-1}))=\chi(gug^{-1})\), hence for \(w\in W^{gKg^{-1}\cap\operatorname{St}_{\chi}}\), the function \(f_{g,K,w}\) is proportional to \(uf_{g,K,w}-\chi(u)f_{g,K,w}\). It remains to note that such functions span \(V_{0}\).

(3) We have to check that for any \(V\in\mathcal{R}_{\Omega_{\chi}}\) the adjunction map \[c_{V}:V\to\operatorname{Ind}_{\chi}J_{\chi}(V)\] is an isomorphism. Let \(K_{0}=\ker(c_{V})\), \(C_{0}=\operatorname{coker}(c_{V})\). By part (2), \(J_{\chi}(c_{V})\) is an isomorphism, hence \(J_{\chi}(K_{0})=J_{\chi}(C_{0})=0\). By Lemma 3.4(4), this implies that \(K_{0}=C_{0}=0\), so \(c_{V}\) is an isomorphism.
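As a concrete instance of the objects in Definition 3.1 (an illustrative sketch, assuming \(G=\mathrm{GL}_{2}\), \(E=\overline{E}[\epsilon]/(\epsilon^{2})\), and identifying \(\mathfrak{g}^{\vee}(\overline{E})\) with \(\mathfrak{gl}_{2}(\overline{E})\) via the trace pairing \((x,y)=\operatorname{tr}(xy)\), an identification we use only for illustration): the character attached to \(x\) is

\[\chi_{x}(1+\epsilon y)=\psi(\operatorname{tr}(xy)),\]

and for \(x=\operatorname{diag}(a,b)\) with \(a\neq b\) (a regular semisimple element), the stabilizer \(\overline{St}(\chi_{x})\) is the diagonal torus \(T(\overline{E})\), and \(St(\chi_{x})\) is its preimage in \(G(E)\).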
For \(m\in\mathfrak{g}^{\vee}(\overline{E})\), let \(\langle m\rangle^{\perp}\subset N\) denote the subgroup of \(u\) such that \(\langle m,u\rangle=0\). We denote by \(H_{m}\) the quotient of \(\operatorname{St}_{\chi}\), where \(\chi=\chi_{m}\), by the normal subgroup \(\langle m\rangle^{\perp}\), so that we have a commutative diagram, and \(H_{m}\) is a central extension of \(\overline{\mathrm{St}}_{\chi}\) by \(\overline{E}\).

**Lemma 3.6**.: _Assume that \(m\) is regular semisimple. Then \(H_{m}\) is commutative._

Proof.: Note that in this case we have the maximal torus \(T_{m}\subset G\) defined over \(\overline{E}\) such that \(\overline{\mathrm{St}}_{\chi}=T_{m}(\overline{E})\). If \(T_{m}\) is defined over \(\mathbb{Z}\) then we have the commutative subgroup \(T_{m}(E)\subset\mathrm{St}_{\chi}\) surjecting onto \(T_{m}(\overline{E})\). Hence, in this case \(H_{m}\) is commutative (since it is generated by \(T_{m}(E)\) and the center). Since the isomorphism class of the group \(H_{m}\) depends only on the conjugacy class of \(m\) in \(\mathfrak{g}^{\vee}(\overline{E})\), using the fact that any split maximal torus is conjugate to a torus defined over \(\mathbb{Z}\), we see that \(H_{m}\) is commutative whenever \(T_{m}\) is split. In the general case, there exists a finite separable field extension \(\overline{E}\subset\overline{E}^{\prime}\) such that \(T_{m}\) splits over \(\overline{E}^{\prime}\). Thus, it is enough to prove that for any such extension the nilpotent extension \(E\to\overline{E}\) extends to a nilpotent extension \(E^{\prime}\to\overline{E}^{\prime}\) (since then \(H_{m}\) is contained in a similar extension of \(T_{m}(\overline{E}^{\prime})\) defined using \(G(E^{\prime})\)). We have \(\overline{E}^{\prime}=\overline{E}[x]/(f(x))\) for some monic polynomial \(f(x)\in\overline{E}[x]\). But then we can take \(E^{\prime}=E[x]/(\widetilde{f}(x))\) as the required nilpotent extension of \(\overline{E}^{\prime}\), where \(\widetilde{f}(x)\) is any monic lift of \(f(x)\) to \(E[x]\).

Thus, in the case when \(m\) is regular semisimple, all irreducible representations in \(\mathcal{C}_{\chi}\), where \(\chi=\chi_{m}\), are \(1\)-dimensional.

**Proposition 3.7**.: _Assume that \(\chi=\chi_{m}\), where \(m\) is regular semisimple, and \(\rho\) is a smooth character of \(\mathrm{St}_{\chi}\), such that \(\rho|_{N}=\chi\). Let \(V=\mathrm{Ind}_{\chi}(\rho)\). (i) The space \(V^{K}\) is finite-dimensional, and \(V^{K}=0\) unless there exists \(g\in G(\overline{E})\) such that \(\rho\circ\mathrm{Ad}(g)^{-1}|_{K\cap\mathrm{St}_{g\chi}}\equiv 1\) (where we use the fact that \(\mathrm{Ad}(g)(\mathrm{St}_{\chi})=\mathrm{St}_{g\chi}\)). (ii) Assume \(K=G(\mathcal{O})\), \(U\cap K=\mathfrak{g}(\mathcal{O})\), \(\psi|_{\mathcal{O}}\equiv 1\), \(\psi|_{\mathfrak{m}^{-1}}\not\equiv 1\), \(m\in\mathfrak{g}(\mathcal{O})\) is such that \(\overline{m}\in\mathfrak{g}(\mathcal{O}/\mathfrak{m})\) is still regular, and \(\rho|_{G(\mathcal{O})\cap\mathrm{St}_{\chi}}\equiv 1\). Assume also that the torus \(T_{m}\) (defined over \(\overline{E}\)) splits over an unramified extension of \(\overline{E}\). Then \(V^{K}\) is \(1\)-dimensional._

Proof.: (i) By Proposition 3.5(1), it is enough to check that the set \(\Delta_{\Omega_{\chi}}(K)\) is finite. Since \(\Omega\cap\Xi_{K}\) is compact (as \(\Omega\) is closed), this follows from the fact that \(\overline{K}\)-orbits on \(\Omega\) are open.
(ii) It is enough to check that \(\Omega\cap\mathfrak{g}(\mathcal{O})\) is a single \(G(\mathcal{O})\)-orbit.

**Step 1. Split case**. Let \(B\) be the Borel subgroup containing the split torus \(T_{m}\), \(U\subset B\) its unipotent radical. Using the Iwasawa decomposition, we are reduced to checking that if \(u\in U(\overline{E})\) satisfies \(\mathrm{Ad}(u)m\in\mathfrak{g}(\mathcal{O})\) then \(u\in U(\mathcal{O})\). Let us consider the map \[\theta:U\to\mathfrak{u}=\mathfrak{b}/\mathfrak{t}:u\mapsto\mathrm{Ad}(u)m\mod\mathfrak{t}\in\mathfrak{b}/\mathfrak{t}.\] Let \(U=U_{1}\supset U_{2}\supset\ldots\) be the lower central series, so \(U_{n+1}=[U_{n},U]\), and let \(\mathfrak{u}_{1}\supset\mathfrak{u}_{2}\supset\ldots\) be the corresponding Lie subalgebras. Then we have an identification \[U_{n}/U_{n+1}\simeq\mathfrak{u}_{n}/\mathfrak{u}_{n+1}\simeq\bigoplus_{\alpha=\sum m_{i}\alpha_{i}:\sum m_{i}=n}\mathfrak{g}_{\alpha},\] where \((\alpha_{i})\) are the simple roots. Furthermore, the map \(U_{n}/U_{n+1}\to\mathfrak{u}_{n}/\mathfrak{u}_{n+1}\), induced by \(\theta\), is given by multiplication by \(\alpha(m)\) on the summand \(\mathfrak{g}_{\alpha}\). Note that our assumption on \(m\) means that \(\alpha(m)\in\mathcal{O}^{*}\) for all positive roots \(\alpha\). Now we can prove our statement by descending induction: assume we know it for elements of \(U_{n+1}\), and let \(u\in U_{n}(\overline{E})\) be such that \(\operatorname{Ad}(u)m\in\mathfrak{g}(\mathcal{O})\). Since \(\alpha(m)\in\mathcal{O}^{*}\) for all \(\alpha\), we obtain that the reduction of \(u\) modulo \(U_{n+1}\) belongs to \((U_{n}/U_{n+1})(\mathcal{O})\). Hence, we can find \(u_{0}\in U_{n}(\mathcal{O})\) and \(u_{1}\in U_{n+1}(\overline{E})\) such that \(u=u_{0}u_{1}\). But then \(\operatorname{Ad}(u_{1})m\in\mathfrak{g}(\mathcal{O})\), so by the induction assumption \(u_{1}\in U_{n+1}(\mathcal{O})\), and so, \(u\in U_{n}(\mathcal{O})\).

**Step 2. Reduction to finite rings**. We claim that it is enough to check the same statement with \(\mathcal{O}\) replaced by \(\mathcal{O}/\mathfrak{m}^{n}\) for any \(n\geq 1\). Indeed, suppose \(\Omega\cap\mathfrak{g}(\mathcal{O})\) contains two distinct \(G(\mathcal{O})\)-orbits, \(O_{1}\) and \(O_{2}\). Let \(x\in O_{1}\setminus O_{2}\). Since \(G(\mathcal{O})\)-orbits are closed in \(\Omega\cap\mathfrak{g}(\mathcal{O})\), this means that there exists \(n\geq 1\) such that \((x+\mathfrak{g}(\mathfrak{m}^{n}))\cap O_{2}=\emptyset\), i.e., \(x\not\in O_{2}+\mathfrak{g}(\mathfrak{m}^{n})\). But this contradicts transitivity over \(\mathcal{O}/\mathfrak{m}^{n}\).

**Step 3. General case**. By Step 2, it is enough to prove transitivity of the action of \(G(\mathcal{O}/\mathfrak{m}^{n})\) on \(\Omega(\mathcal{O}/\mathfrak{m}^{n})\). Using the Greenberg functor, we can consider this as an action of the group of \(k\)-points of an algebraic group over \(k\) on the set of \(k\)-points of a variety over \(k\). By Step 1, we know that this action becomes transitive over a finite extension of the finite field \(k\). Furthermore, the stabilizer subgroup is obtained by applying the Greenberg functor to \(T_{m}(\mathcal{O}/\mathfrak{m}^{n})\), so it is connected. Using the triviality of the Galois cohomology \(H^{1}\) of \(k\) with coefficients in a connected algebraic group (Lang's theorem), we deduce the required transitivity over \(k\).
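To illustrate Step 1 in the simplest case \(G=\mathrm{GL}_{2}\): for \(m=\operatorname{diag}(m_{1},m_{2})\) with \(m_{1}-m_{2}\in\mathcal{O}^{*}\) (the regularity assumption) and \(u=\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\), one computes

\[\operatorname{Ad}(u)m=\begin{pmatrix}m_{1}&x(m_{2}-m_{1})\\ 0&m_{2}\end{pmatrix},\]

so \(\operatorname{Ad}(u)m\in\mathfrak{g}(\mathcal{O})\) forces \(x\in\mathcal{O}\), i.e., \(u\in U(\mathcal{O})\).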
_Remark 3.8_.: For an open compact subgroup \(K\subset G(\overline{E})\) and \(\chi\in\Xi\), let \[G_{m,K}:=\{g\in G(\overline{E})\ |\ \chi|_{\operatorname{Ad}(g)(U\cap K)}\equiv 1\}.\] This set is invariant under right shifts by \(\overline{K}\) and by left shifts by \(\overline{\operatorname{St}}_{\chi}\). The quotient \(A_{\chi}:=G_{m,K}/\overline{K}\) is an analog of the affine Springer fiber. The space of \(K\)-invariants in \(\operatorname{Ind}_{\chi}(\rho)\), where \(\rho\) is a character of \(\operatorname{St}_{\chi}\) extending \(\chi\), can be interpreted as a space of \(\overline{\operatorname{St}}_{\chi}\)-invariant sections of a \(\overline{\operatorname{St}}_{\chi}\)-equivariant line bundle over \(A_{\chi}\). In the case of \(G=\mathrm{PGL}_{2}\), \(E=\overline{E}[\epsilon]/(\epsilon^{2})\), and \(K=G(\mathcal{O})\), the orbits of \(\operatorname{St}_{\chi}\) on \(A_{\chi}\) and the resulting spaces of \(K\)-invariants are described explicitly in [8].

### Adelic case: orbit decomposition

We use the notations of Sec. 1.2. Let us fix a nontrivial additive character \(\psi\) of the finite field \(k\). For each line bundle \(M\) over \(\overline{C}\), we have the corresponding module \(M(\overline{\mathbb{A}})\) over the ring of adeles \(\overline{\mathbb{A}}\), obtained as the restricted product of \(M_{p}\otimes\overline{F}_{p}\). The corresponding twisted principal adeles \(M(\overline{F})\) are identified with the space of rational sections of \(M\). For example, \(\omega_{\overline{C}}(\overline{F})\) is the \(1\)-dimensional \(\overline{F}\)-linear space of rational \(1\)-forms on \(\overline{C}\). The character \(\psi\) induces the character \[\psi_{\overline{C}}\colon\omega_{\overline{C}}(\overline{\mathbb{A}})/\omega_{\overline{C}}(\overline{F})\to U(1):\alpha\mapsto\psi(\sum_{p}\operatorname{Res}_{p}\alpha),\] which is well defined by the residue theorem. Recall that for our nilpotent extension \(C\) of \(\overline{C}\), the kernel \(L\coloneqq\mathcal{N}\subset\mathcal{O}_{C}\) of the projection to \(\mathcal{O}_{\overline{C}}\) is a line bundle on \(\overline{C}\). Thus, we have an exact sequence \[0\to L(\overline{\mathbb{A}})\to\mathbb{A}\to\overline{\mathbb{A}}\to 0.\] Let \(N_{\mathbb{A}}\subset G(\mathbb{A})\) denote the kernel of the reduction homomorphism \[r:G(\mathbb{A})\to G(\overline{\mathbb{A}}),\] and let \(N_{F}=N_{\mathbb{A}}\cap G(F)\). Then we have a natural isomorphism \(\mathfrak{g}\otimes L(\overline{\mathbb{A}})\xrightarrow{\sim}N_{\mathbb{A}}\). For \(\eta\in\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\), we denote by \(\psi_{\eta}\) the character of \(N_{\mathbb{A}}\), trivial on \(N_{F}\), given by \(\psi_{\eta}(u)=\psi_{\overline{C}}(\langle\eta,u\rangle)\), and by \(\Pi_{\eta}\) the corresponding projector, \[\Pi_{\eta}f(g)=\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(u)^{-1}f(ug)du.\] We set \(\Gamma:=G(F)\).

Let us consider the \(G(\overline{F})\)-action on \(\mathbb{C}(N_{F}\backslash G(\mathbb{A}))\) given by \(g_{0}f(g)=f(\widetilde{g}_{0}^{-1}g)\), where \(\widetilde{g}_{0}\in\Gamma\) is a lift of \(g_{0}\). Then for \(g_{0}\in G(\overline{F})\), we have \[g_{0}\Pi_{\eta}f=\Pi_{g_{0}(\eta)}g_{0}f,\] where we use the coadjoint action of \(G(\overline{F})\) on \(\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\). Now for a \(G(\overline{F})\)-orbit \(\Omega\subset\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\), we can consider the sum of projectors \[\Pi_{\Omega}(f):=\sum_{\eta\in\Omega}\Pi_{\eta}(f).\] By Lemma 3.9, for \(f\in\mathbb{C}_{lc}(N_{F}\backslash G(\mathbb{A}))\), \(\Pi_{\Omega}(f)\) is a well defined function on \(N_{F}\backslash G(\mathbb{A})\).
Furthermore, if \(f\) is \(G(\overline{F})\)-invariant then \(\Pi_{\Omega}(f)\) is also \(G(\overline{F})\)-invariant, so we get a well defined operator \[\Pi_{\Omega}:\mathcal{S}(G(F)\backslash G(\mathbb{A}))\to\mathbb{C}_{lc}(G(F)\backslash G(\mathbb{A})).\]

**Lemma 3.10**.: _For every \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\), one has_

1. \(\Pi_{\Omega}(f)\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\)_,_

2. \(\Pi_{\Omega}(f)=0\) _for all_ \(\Omega\) _except for a finite number, and_

3. \(\sum_{\Omega}\Pi_{\Omega}(f)=f\)_._

Proof.: Since \(f\) has compact support we can choose an \(N_{\mathbb{A}}/N_{F}\)-invariant compact subset \(K\subset N_{F}\backslash G(\mathbb{A})\) such that \(f(g)=0\) for \(g\not\in G(F)K\). Now applying Lemma 3.9, we see that there exists a finite set of characters \(S\) such that \(\Pi_{\eta}f(g)=0\) for all \(\eta\not\in G(\overline{F})S\) and all \(g\).

The above lemma shows that we have a direct sum decomposition of \(G(\mathbb{A})\)-representations (in the algebraic sense), \[\mathcal{S}(G(F)\backslash G(\mathbb{A}))=\bigoplus_{\Omega}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}. \tag{3.1}\] It is clear that the term corresponding to \(\Omega=0\) is the subspace \(\mathcal{S}(G(\overline{F})\backslash G(\overline{\mathbb{A}}))\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))\). Next, we will describe more explicitly the pieces \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}\). For \(\eta\in\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\), let \(\mathbf{St}_{\eta}\subset G_{\overline{F}}\) denote the stabilizer of \(\eta\), viewed as an algebraic subgroup defined over \(\overline{F}\). Then the stabilizer \(\mathrm{St}_{\eta}\) of \(\eta\) in \(G(\mathbb{A})\) is the preimage of \(\mathbf{St}_{\eta}(\overline{\mathbb{A}})\) under the projection \(G(\mathbb{A})\to G(\overline{\mathbb{A}})\). It is clear that \(\mathrm{St}_{\eta}\) contains \(N_{\mathbb{A}}\), \(\mathrm{St}_{\eta}\,/N_{\mathbb{A}}\simeq\mathbf{St}_{\eta}(\overline{\mathbb{A}})\), and \((\mathrm{St}_{\eta}\cap\Gamma)/N_{F}\simeq\mathbf{St}_{\eta}(\overline{F})\). Note that \((\Gamma\cap\mathrm{St}_{\eta})\backslash\Gamma\simeq\mathbf{St}_{\eta}(\overline{F})\backslash G(\overline{F})\). We denote by \(L_{\eta}\subset\mathrm{St}_{\eta}\) the preimage of \(\mathbf{St}_{\eta}(\overline{F})\) under the homomorphism \(\mathrm{St}_{\eta}\to\mathbf{St}_{\eta}(\overline{\mathbb{A}})\). We have \(L_{\eta}=N_{\mathbb{A}}\cdot(\mathrm{St}_{\eta}\cap\Gamma)\).

**Claim 3.11**.: _There exists a unique character \(\widetilde{\psi}_{\eta}\) of \(L_{\eta}\), trivial on \(\mathrm{St}_{\eta}\cap\Gamma\) and equal to \(\psi_{\eta}\) on \(N_{\mathbb{A}}\). The adjoint action of \(\mathrm{St}_{\eta}\) on \(L_{\eta}\) preserves \(\widetilde{\psi}_{\eta}\)._

Proof.: We have \(L_{\eta}=N_{\mathbb{A}}\cdot(\mathrm{St}_{\eta}\cap\Gamma)\), and \(N_{\mathbb{A}}\cap(\mathrm{St}_{\eta}\cap\Gamma)=N_{F}\). Hence, setting \(\widetilde{\psi}_{\eta}(u\gamma):=\psi_{\eta}(u)\) gives a well defined character, since \(\psi_{\eta}\) is trivial on \(N_{F}\).

**Definition 3.12**.: We denote by \(\widetilde{\mathcal{S}}_{\eta}\subset\mathbb{C}_{lc}(N_{F}\backslash G(\mathbb{A}))_{\eta}\) the space of functions \(f\) on \(G(\mathbb{A})\), such that \(f(lg)=\widetilde{\psi}_{\eta}(l)f(g)\) for \(l\in L_{\eta}\), with compact support modulo left translations by \(L_{\eta}\).

Let us denote by \(\Omega_{\eta}\) the orbit of \(\eta\).
**Lemma 3.13**.: _The map_ \[\kappa_{\eta}f(g)=\sum_{\gamma\in(\Gamma\cap\mathrm{St}_{\eta})\setminus\Gamma}f(\gamma g)\] _is well defined and gives an isomorphism of \(G(\mathbb{A})\)-representations_ \[\kappa_{\eta}:\widetilde{\mathcal{S}}_{\eta}\xrightarrow{\ \sim\ }\mathcal{S}(\Gamma\backslash G(\mathbb{A}))_{\Omega_{\eta}}.\]

Proof.: **Step 1.** We claim that the sum in the definition of \(\kappa_{\eta}\) is finite. It is enough to check that for any \(f\in\widetilde{\mathcal{S}}_{\eta}\) one has \(f(\gamma)\neq 0\) for finitely many \(\gamma\in(\Gamma\cap\mathrm{St}_{\eta})\backslash\Gamma\). Given \(f\in\widetilde{\mathcal{S}}_{\eta}\), let \(K\subset G(\mathbb{A})\) be an open compact subgroup such that \(f(gk)=f(g)\) for \(g\in G(\mathbb{A})\), \(k\in K\). We claim that \(f\) is supported on the set \(G_{\eta,K}\) of \(g\) such that \(\psi_{\mathrm{Ad}(g^{-1})\eta}|_{N_{\mathbb{A}}\cap K}\equiv 1\). Indeed, for \(u\in N_{\mathbb{A}}\cap K\) we have \[f(g)=f(gu)=f(gug^{-1}\cdot g)=\psi_{\eta}(gug^{-1})f(g)=\psi_{\mathrm{Ad}(g^{-1})(\eta)}(u)f(g).\] Thus, it is enough to check that the set \((\Gamma\cap\mathrm{St}_{\eta})\backslash(\Gamma\cap G_{\eta,K})\) is finite. The map \(g\mapsto\mathrm{Ad}(g^{-1})\eta\) identifies this set with the set \(G(\overline{F})\eta\cap K^{\prime}\), where \(K^{\prime}\subset N_{\mathbb{A}}\) is the orthogonal to \(N_{\mathbb{A}}\cap K\) (with respect to the pairing induced by \(\psi\)). But the latter set is contained in \(N_{F}\cap K^{\prime}\), which is finite.

**Step 2.** For \(f\in\mathcal{S}(\Gamma\backslash G(\mathbb{A}))\), we have \[\Pi_{\Omega_{\eta}}f(g)=\sum_{\gamma\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(\gamma^{-1}u\gamma)^{-1}f(ug)du=\] \[\sum_{\gamma\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(u)^{-1}f(u\gamma^{-1}g)du=\kappa_{\eta}(\Pi_{\eta}(f)).\] This shows that the image of \(\kappa_{\eta}\) contains \(\mathcal{S}(\Gamma\backslash G(\mathbb{A}))_{\Omega_{\eta}}\).

**Step 3.** For \(f\in\widetilde{\mathcal{S}}_{\eta}\), we have \[\Pi_{\eta}\kappa_{\eta}f(g)=\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(u)^{-1}\sum_{\gamma\in(\Gamma\cap\mathrm{St}_{\eta})\setminus\Gamma}f(\gamma ug)du=\] \[\int_{N_{\mathbb{A}}/N_{F}}\sum_{\gamma\in(\Gamma\cap\mathrm{St}_{\eta})\setminus\Gamma}\psi_{\eta}^{-1}(u)\psi_{\eta}(\gamma u\gamma^{-1})f(\gamma g)du.\] Integrating first over \(N_{\mathbb{A}}/N_{F}\) we get zero unless \(\gamma\) preserves \(\psi_{\eta}\). Hence, we deduce that \[\Pi_{\eta}\kappa_{\eta}f=f,\] which implies that \(\kappa_{\eta}\) is injective.

**Step 4.** Finally, for \(f\in\widetilde{\mathcal{S}}_{\eta}\), we have \[\Pi_{\Omega_{\eta}}\kappa_{\eta}(f)(g)=\sum_{\gamma\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(\gamma^{-1}u\gamma)^{-1}\sum_{\gamma_{1}\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}f(\gamma_{1}^{-1}ug)du=\] \[\sum_{\gamma,\gamma_{1}\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}\int_{N_{\mathbb{A}}/N_{F}}\psi_{\eta}(\gamma^{-1}u\gamma)^{-1}\psi_{\eta}(\gamma_{1}^{-1}u\gamma_{1})f(\gamma_{1}^{-1}g)du=\sum_{\gamma\in\Gamma/\Gamma\cap\mathrm{St}_{\eta}}f(\gamma^{-1}g)=\kappa_{\eta}(f)(g),\] so we conclude that the image of \(\kappa_{\eta}\) is exactly \(\mathcal{S}(\Gamma\backslash G(\mathbb{A}))_{\Omega_{\eta}}\).

### Regular semisimple case

We say that \(\eta\) is regular semisimple if the stabilizer subgroup \(T_{\eta}:=\operatorname{\mathbf{St}}_{\eta}\subset G_{\overline{F}}\) is a maximal torus in \(G_{\overline{F}}\).
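For instance, for \(G=\operatorname{GL}_{2}\): trivializing \(L^{-1}\omega_{\overline{C}}\) at the generic point identifies \(\eta\) with an element of \(\mathfrak{g}^{\vee}(\overline{F})\simeq\mathfrak{g}(\overline{F})\), and \(\eta\) is regular semisimple precisely when this matrix has distinct eigenvalues over an algebraic closure of \(\overline{F}\); the stabilizer \(T_{\eta}\) is then the centralizing maximal torus, split or not according to whether the eigenvalues lie in \(\overline{F}\).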
Let us fix a regular semisimple orbit \(\Omega=G(\overline{F})\cdot\eta\subset\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\). Let \(\langle\eta\rangle^{\perp}\subset N_{\mathbb{A}}\) denote the subgroup of \(u\in N_{\mathbb{A}}\) such that \(\langle\eta,u\rangle=0\), and let \(H_{\eta}\) denote the quotient of \(\operatorname{St}_{\eta}\) by the normal subgroup \(\langle\eta\rangle^{\perp}\), so that \(H_{\eta}\) is a central extension of \(T_{\eta}(\overline{\mathbb{A}})\) by \(N_{\mathbb{A}}/\langle\eta\rangle^{\perp}\simeq\omega_{\overline{C}}(\overline{\mathbb{A}})\), fitting into a commutative diagram with the extension \(\operatorname{St}_{\eta}\) of \(T_{\eta}(\overline{\mathbb{A}})\) by \(N_{\mathbb{A}}\). We have a canonical splitting of \(H_{\eta}\to T_{\eta}(\overline{\mathbb{A}})\) over \(T_{\eta}(\overline{F})\): \[\sigma:T_{\eta}(\overline{F})\simeq(\Gamma\cap\operatorname{St}_{\eta})/N_{F}\to H_{\eta}.\]

**Lemma 3.14**.: _The group \(H_{\eta}\) is commutative._

Proof.: This immediately follows from the corresponding local statement, Lemma 3.6.

The commutativity of \(H_{\eta}\) implies that there is a well defined action of \(H_{\eta}\) on \(\widetilde{\mathcal{S}}_{\eta}\) given by \(hf(g)=f(\widetilde{h}^{-1}g)\), where \(\widetilde{h}\in\operatorname{St}_{\eta}\) is any lifting of \(h\in H_{\eta}\). Under this action the subgroup \(\omega_{\overline{C}}(\overline{\mathbb{A}})\) acts through the character \(\psi_{\overline{C}}^{-1}\) and \(T_{\eta}(\overline{F})\) acts trivially.

**Definition 3.15**.: For a smooth character \(\chi\) of \(H_{\eta}/T_{\eta}(\overline{F})\), restricting to \(\psi_{\overline{C}}^{-1}\) on \(\omega_{\overline{C}}(\overline{\mathbb{A}})\), let \(\widetilde{\mathcal{S}}_{\eta,\chi}\) denote the induced representation from \(\chi\) viewed as a character of \(\operatorname{St}_{\eta}\), i.e., \(\widetilde{\mathcal{S}}_{\eta,\chi}\) is the space of smooth functions \(f\) on \(G(\mathbb{A})\) such that \(f(hg)=\chi(h)f(g)\) for \(h\in\operatorname{St}_{\eta}\), with support compact modulo left shifts by \(\operatorname{St}_{\eta}\).

**Lemma 3.16**.: _For any open compact subgroup \(K\subset G(\mathcal{O})\), the space of invariants \(\widetilde{\mathcal{S}}_{\eta,\chi}^{K}\) is finite-dimensional._

Proof.: This follows immediately from the local Proposition 3.7.

### Regular elliptic case

Now assume in addition that \(\eta\) is elliptic, i.e., the torus \(T_{\eta}\) is anisotropic. Then the group \(T_{\eta}(\overline{\mathbb{A}})/T_{\eta}(\overline{F})\) is compact. Hence, the group \[\overline{H}_{\eta}:=H_{\eta}/(\omega_{\overline{C}}(\overline{F})\cdot T_{\eta}(\overline{F}))\] is a commutative compact group, an extension of \(T_{\eta}(\overline{\mathbb{A}})/T_{\eta}(\overline{F})\) by \(\omega_{\overline{C}}(\overline{\mathbb{A}})/\omega_{\overline{C}}(\overline{F})\). Consider the set \(\Pi_{\eta}\) of characters of \(\overline{H}_{\eta}\), restricting to the character \(\psi_{\overline{C}}^{-1}\) of \(\omega_{\overline{C}}(\overline{\mathbb{A}})/\omega_{\overline{C}}(\overline{F})\). This is a torsor over the set of characters of \(T_{\eta}(\overline{\mathbb{A}})/T_{\eta}(\overline{F})\). The action of \(H_{\eta}\) on \(\widetilde{\mathcal{S}}_{\eta}\) factors through the action of \(\overline{H}_{\eta}\), so that \(\omega_{\overline{C}}(\overline{\mathbb{A}})/\omega_{\overline{C}}(\overline{F})\) acts by \(\psi_{\overline{C}}^{-1}\). Hence, \(\widetilde{\mathcal{S}}_{\eta}\) splits into a direct sum of \(\chi\)-isotypic components, where \(\chi\) runs through \(\Pi_{\eta}\).
Finally, we observe that the \(\chi\)-isotypic component in \(\widetilde{\mathcal{S}}_{\eta}\) coincides with the induced representation \(\widetilde{\mathcal{S}}_{\eta,\chi}\) (see Definition 3.15), so we get a decomposition \[\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{\eta}}\simeq\widetilde{\mathcal{S}}_{\eta}=\bigoplus_{\chi\in\Pi_{\eta}}\widetilde{\mathcal{S}}_{\eta,\chi}. \tag{3.2}\] Combining the decomposition (3.2) with Lemma 3.13, we arrive at the following result.

**Proposition 3.17**.: _Let \(\eta\) be regular elliptic. Then one has a decomposition of \(G(\mathbb{A})\)-representations_ \[\mathcal{S}(\Gamma\backslash G(\mathbb{A}))_{\Omega_{\eta}}\simeq\bigoplus_{\chi\in\Pi_{\eta}}\widetilde{\mathcal{S}}_{\eta,\chi}.\]

### Admissible representations and finitary functions

A smooth representation \(V\) of \(G(\mathbb{A})\) is called admissible if \(V^{K}\) is finite-dimensional for every compact open subgroup \(K\subset G(\mathbb{A})\). Since the functors \(V\mapsto V^{K}\) are exact, any subquotient of an admissible representation is admissible.

**Definition 3.18**.: We say that a function \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) is \(G(\mathbb{A})\)_-finitary_ (or simply, _finitary_) if it is contained in an admissible \(G(\mathbb{A})\)-subrepresentation of \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\). We denote by \(\mathcal{S}_{f}(G(F)\backslash G(\mathbb{A}))\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) the subspace of finitary functions.

For example, by Lemma 3.16 and Proposition 3.17, the \(G(\mathbb{A})\)-representation \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{\eta}}\) is admissible for \(\eta\) regular elliptic. Hence, the regular elliptic part of \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) consists of finitary functions. Now let us consider the case of other semisimple orbits.

**Lemma 3.19**.: _Let \(\Omega=\Omega_{\eta}\) be a regular semisimple orbit such that the corresponding maximal torus admits a nontrivial character \(T_{\eta}\to\mathbb{G}_{m}\) (defined over \(k\)). Then \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}\) does not contain admissible subrepresentations._

Proof.: We use the notation of Sec. 3.3. By Lemma 3.13, \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}\simeq\widetilde{\mathcal{S}}_{\eta}\). We have an isomorphism \[\widetilde{\mathcal{S}}_{\eta}\simeq\operatorname{Ind}_{\operatorname{St}_{\eta}}^{G(\mathbb{A})}\mathcal{S}_{\operatorname{St}_{\eta}},\] where \(\mathcal{S}_{\operatorname{St}_{\eta}}\) is the subspace of \(f\in\mathbb{C}_{lc}(N_{F}\backslash\operatorname{St}_{\eta})\) such that \(f(lh)=\widetilde{\psi}_{\eta}(l)f(h)\) for \(l\in L_{\eta}\), \(h\in\operatorname{St}_{\eta}\), with compact support modulo left translations by \(L_{\eta}\). Note that since \(T_{\eta}=\mathbf{St}_{\eta}\) is commutative, \(L_{\eta}\) is a normal subgroup in \(\operatorname{St}_{\eta}\). Note that as in Prop. 3.5, the induction functor gives an equivalence between the category of smooth representations \(\rho\) of \(\operatorname{St}_{\eta}\) with \(\rho|_{N_{\mathbb{A}}}=\psi_{\eta}\) and a subcategory of the category of representations of \(G(\mathbb{A})\), closed under subobjects. Hence, any subrepresentation of \(\widetilde{\mathcal{S}}_{\eta}\) is induced by a subrepresentation of \(\mathcal{S}_{\operatorname{St}_{\eta}}\). Thus, it is enough to prove that \(\mathcal{S}_{\operatorname{St}_{\eta}}\) does not have any admissible subrepresentations.
Note that \(\operatorname{St}_{\eta}\) acts on \(\mathcal{S}_{\operatorname{St}_{\eta}}\) through its commutative quotient \(H_{\eta}\). Thus, for any open compact subgroup \(K\), the subspace of \(K\)-invariants is an \(H_{\eta}\)-subrepresentation, so it is enough to check that \(\mathcal{S}_{\operatorname{St}_{\eta}}\) does not contain finite-dimensional subrepresentations. Using our homomorphism \(T_{\eta}\to\mathbb{G}_{m}\) we get a surjective homomorphism \[\deg:\operatorname{St}_{\eta}\big/L_{\eta}\to T_{\eta}(\overline{\mathbb{A}})/T_{\eta}(\overline{F})\to\operatorname{Pic}(\overline{C})\xrightarrow{\ \deg\ }\mathbb{Z}.\] Now for any nonzero \(f\in\mathcal{S}_{\mathrm{St}_{\eta}}\), let \(S\subset\mathrm{St}_{\eta}/L_{\eta}\) be the compact support of \(f\). Then \(\deg(S)\) is finite. Hence, for \(h\in\mathrm{St}_{\eta}\) such that \(\deg(h)\gg 0\), the functions \((h^{n}f)_{n\in\mathbb{Z}}\) have disjoint supports, hence span an infinite-dimensional subspace.

### Connection with the Hitchin fibers

Now let us assume that \(\Gamma=G(F)\) and consider the invariants in \(\widetilde{\mathcal{S}}_{\eta}\) with respect to \(G(\mathcal{O})\subset G(\mathbb{A})\). As in Lemma 3.13, one checks that every \(f\in\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}\) is supported on the subset \(G(\mathbb{A})_{\eta}\subset G(\mathbb{A})\) consisting of \(g\in G(\mathbb{A})\) such that \[\mathrm{Ad}(g^{-1})(\eta)\in(\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}})(\overline{\mathcal{O}})\subset(\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}})(\overline{\mathbb{A}}).\] Note that there is a similarly defined subset \(G(\overline{\mathbb{A}})_{\eta}\subset G(\overline{\mathbb{A}})\). Let \(\mathcal{M}^{Higgs,L^{-1}}(\overline{C})\) denote the stack of \(L^{-1}\)-twisted Higgs \(G\)-bundles \((P,\phi)\) on \(\overline{C}\). By definition, here \(P\) is a \(G\)-bundle, and \(\phi\in H^{0}(\overline{C},\mathfrak{g}^{\vee}_{P}\otimes L^{-1}\omega_{\overline{C}})\), where \(\mathfrak{g}_{P}\) is the vector bundle associated with \(P\) and the adjoint representation of \(G\). We have a natural \(\mathbb{G}_{a}\)-torsor over \(\mathcal{M}^{Higgs,L^{-1}}\) defined as follows. For simplicity, let us describe the corresponding \(k\)-torsor on \(k\)-points. For every \(L^{-1}\)-twisted Higgs \(G\)-bundle \((P,\phi)\) on \(\overline{C}\), we have a natural \(H^{1}(\overline{C},\mathfrak{g}_{P}\otimes L)\)-torsor of all liftings of \(P\) to a \(G\)-bundle over \(C\).
By Serre duality, we have a natural pairing between \(H^{1}(\overline{C},\mathfrak{g}_{P}\otimes L)\) and \(H^{0}(\overline{C},\mathfrak{g}^{\vee}_{P}\otimes L^{-1}\omega_{\overline{C}})\). Thus, we can take the push-forward of our \(H^{1}(\overline{C},\mathfrak{g}_{P}\otimes L)\)-torsor with respect to the functional \(H^{1}(\overline{C},\mathfrak{g}_{P}\otimes L)\to k\) given by the pairing with \(\phi\). Let us denote by \(\mathcal{L}_{\psi}\) the associated \(\mathbb{C}^{*}\)-torsor on the groupoid of \(k\)-points, where we use the additive character \(\psi:k\to\mathbb{C}^{*}\). For a \(\mathbb{C}^{*}\)-torsor \(\mathcal{L}\) over a set \(X\) we denote by \(\mathcal{S}(X,\mathcal{L})\) the space of finitely supported sections of \(\mathcal{L}\times_{\mathbb{C}^{*}}\mathbb{C}\).

**Proposition 3.20**.: (1) _There is a natural identification of groupoids_ \[\mathbf{St}_{\eta}(\overline{F})\backslash G(\overline{\mathbb{A}})_{\eta}/G(\overline{\mathcal{O}})\simeq\mathcal{M}^{Higgs,L^{-1}}_{\eta}(\overline{C})(k),\] _where_ \(\mathcal{M}^{Higgs,L^{-1}}_{\eta}(\overline{C})\) _is the subgroupoid of_ \(L^{-1}\)_-twisted Higgs_ \(G\)_-bundles_ \((P,\phi)\) _such that the orbit of_ \(\phi\) _at the generic point coincides with_ \(\Omega_{\eta}\)_._ (2) _We have a natural identification_ \[\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}\simeq\mathcal{S}(\mathcal{M}^{Higgs,L^{-1}}_{\eta}(\overline{C})(k),\mathcal{L}_{\psi}).\] (3.3) _In particular,_ \[\dim\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}=|\mathcal{M}^{Higgs,L^{-1}}_{\eta}(\overline{C})(k)|.\] (3) _Assume in addition that_ \(\eta\) _is regular semisimple. Then there is an action of_ \(H_{\eta}\) _on_ \(\mathcal{L}_{\psi}\) _compatible with the action of_ \(T_{\eta}(\overline{\mathbb{A}})\) _on_ \(\mathcal{M}^{Higgs,L^{-1}}_{\eta}(\overline{C})(k)\) _such that the subgroup_ \(\omega_{\overline{C}}(\overline{\mathbb{A}})\) _acts via_ \(\psi_{\overline{C}}^{-1}\)_, and such that the isomorphism (_3.3_) is compatible with the_ \(H_{\eta}\)_-action._

Proof.: (1) Let \(P(g)\) be the \(G\)-bundle on \(\overline{C}\) associated with \(g=(g_{p})\in G(\overline{\mathbb{A}})\) (recall that we assume \(G\) to be split connected reductive, so every \(G\)-bundle on \(\overline{C}\) has this form). By definition, \(P(g)\) is equipped with trivializations \(e_{\eta}\), \((e_{p})\), such that \(e_{p}=e_{\eta}g_{p}\). The associated vector bundle \(\mathfrak{g}_{P(g)}^{\vee}\) has \[\mathfrak{g}_{P(g),\eta}^{\vee}=\mathfrak{g}^{\vee}(\overline{F}),\;\;\;\mathfrak{g}_{P(g),p}^{\vee}=\operatorname{Ad}(g_{p})\mathfrak{g}^{\vee}(\overline{\mathcal{O}}_{p})\subset\mathfrak{g}^{\vee}(\overline{F}_{p}).\] Hence, assuming that \(g\in G(\overline{\mathbb{A}})_{\eta}\), we can view \(\eta\in\mathfrak{g}^{\vee}\otimes L^{-1}\omega_{\overline{C}}(\overline{F})\) as an \(L^{-1}\)-twisted Higgs field for \(P(g)\). If \(g^{\prime}=hgk\), with \(h\in\operatorname{\mathbf{St}}_{\eta}(\overline{F})\) and \(k\in G(\overline{\mathcal{O}})\), then we have the isomorphism between the Higgs bundles \((P(g),\eta)\) and \((P(g^{\prime}),\eta)\) sending \(e_{\eta}\) to \(e_{\eta}h\).

(2) The space \(\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}\) is the space of functions on \((\operatorname{St}_{\eta}\cap G(F))\backslash G(\mathbb{A})_{\eta}/G(\mathcal{O})\) transforming according to the character \(\psi_{\eta}\) with respect to the left shifts by \(N_{\mathbb{A}}\simeq\mathfrak{g}\otimes L(\overline{\mathbb{A}})\).
It remains to observe that these shifts act transitively on the fibers of the projection \[(\operatorname{St}_{\eta}\cap G(F))\backslash G(\mathbb{A})_{\eta}/G(\mathcal{O})\to\operatorname{\mathbf{St}}_{\eta}(\overline{F})\backslash G(\overline{\mathbb{A}})_{\eta}/G(\overline{\mathcal{O}})\simeq\mathcal{M}_{\eta}^{Higgs,L^{-1}}(\overline{C})(k),\] and that these fibers are identified with \(H^{1}(\overline{C},\mathfrak{g}_{P}\otimes L)\).

(3) The action of \(H_{\eta}\) on \(\mathcal{L}_{\psi}\) is induced by the action of \(\operatorname{St}_{\eta}\) on \((\operatorname{St}_{\eta}\cap G(F))\backslash G(\mathbb{A})_{\eta}/G(\mathcal{O})\) by left shifts, which is compatible with the action of \(\operatorname{\mathbf{St}}_{\eta}(\overline{\mathbb{A}})\) on \(\operatorname{\mathbf{St}}_{\eta}(\overline{F})\backslash G(\overline{\mathbb{A}})_{\eta}/G(\overline{\mathcal{O}})\).

### Finitary functions for \(\operatorname{PGL}_{2}\)

Now we specialize to the case \(G=\operatorname{PGL}_{2}\); this assumption is in force everywhere in this subsection. We also assume that the characteristic of \(k\) is \(\neq 2\), and use the identification \(\mathfrak{g}^{\vee}\simeq\mathfrak{g}\) given by the Killing form. Our goal is to determine explicitly the subspace of finitary functions in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) using the orbit decomposition obtained in Section 3 (see Theorem 3.22). In addition to our previous results concerning semisimple orbits, we need to study the space \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{\eta_{0}}}\), where \(\Omega_{\eta_{0}}\) is the orbit of the matrix \[\eta_{0}:=\begin{pmatrix}0&\alpha_{0}\\ 0&0\end{pmatrix}\in\mathfrak{g}\otimes L^{-1}\omega_{\overline{C}}(\overline{F}), \tag{3.4}\] where we take any nonzero \(\alpha_{0}\in L^{-1}\omega_{\overline{C}}(\overline{F})\). This case turns out to be similar to that of a regular semisimple orbit. The stabilizer of \(\eta_{0}\) in \(\operatorname{PGL}_{2}\) is exactly the group \(U\) of strictly upper triangular matrices. Thus, \(L_{\eta_{0}}=N_{\mathbb{A}}\cdot U(F)\). By Lemma 3.13, we have \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{\eta_{0}}}\simeq\widetilde{\mathcal{S}}_{\eta_{0}}\). As in Sec. 3.3, we define the central extension \(H_{\eta_{0}}\) of \(U(\overline{\mathbb{A}})\) by \(\omega_{\overline{C}}(\overline{\mathbb{A}})\) by setting \(H_{\eta_{0}}=\operatorname{St}_{\eta_{0}}\big/(\eta_{0})^{\perp}\), where \((\eta_{0})^{\perp}\subset N_{\mathbb{A}}\) is the kernel of \((\eta_{0},\cdot)\). Note that \((\eta_{0})^{\perp}=\mathfrak{b}(\overline{\mathbb{A}})\otimes L\). Thus, we have a splitting \[\sigma:U(\overline{\mathbb{A}})\simeq U(\mathbb{A})/(\mathfrak{u}(\overline{\mathbb{A}})\otimes L)\to H_{\eta_{0}},\] so we have a canonical decomposition \[H_{\eta_{0}}\simeq\omega_{\overline{C}}(\overline{\mathbb{A}})\times U(\overline{\mathbb{A}}).\] The action of \(\operatorname{St}_{\eta_{0}}\) on \(\widetilde{\mathcal{S}}_{\eta_{0}}\) factors through an action of \(H_{\eta_{0}}\), such that the subgroup \(\omega_{\overline{C}}(\overline{F})\times U(\overline{F})\) acts trivially. Since the group \(U(\overline{\mathbb{A}})/U(\overline{F})\simeq\overline{\mathbb{A}}/\overline{F}\) is compact, we can decompose \(\widetilde{\mathcal{S}}_{\eta_{0}}\) into isotypical components corresponding to characters of \(\overline{\mathbb{A}}/\overline{F}\).
These characters are identified with \(\omega_{\overline{C}}(\overline{F})\): with every \(\alpha\in\omega_{\overline{C}}(\overline{F})\), one associates the character \(\chi_{\alpha}(x)=\psi_{\overline{C}}(x\alpha)\). Thus, we get \[\widetilde{\mathcal{S}}_{\eta_{0}}\simeq\bigoplus_{\alpha\in\omega_{\overline{C}}(\overline{F})}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}, \tag{3.5}\] where \[\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}=\operatorname{Ind}_{\operatorname{St}_{\eta_{0}}}^{G(\mathbb{A})}(\psi_{\overline{C}},\chi_{\alpha}).\] Here we use the homomorphism \(\operatorname{St}_{\eta_{0}}\to H_{\eta_{0}}=\omega_{\overline{C}}(\overline{\mathbb{A}})\times U(\overline{\mathbb{A}})\) to view \((\psi_{\overline{C}},\chi_{\alpha})\) as a character of \(\operatorname{St}_{\eta_{0}}\). Note that the \(G(\mathbb{A})\)-representations in the right-hand side of (3.5) are irreducible and pairwise non-isomorphic.

**Lemma 3.21**.: _(i) For any nonzero \(\alpha\in\omega_{\overline{C}}(\overline{F})\), the representation \(\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\) is admissible. (ii) The representation \(\widetilde{\mathcal{S}}_{\eta_{0},\chi_{0}}\) of \(G(\mathbb{A})\), where \(\chi_{0}\) is the trivial character of \(\overline{\mathbb{A}}/\overline{F}\), is not admissible._

Proof.: This follows from the corresponding local results, similar to Prop. 3.7, established in [8, Sec. 4.4].

**Theorem 3.22**.: _One has a decomposition_ \[\mathcal{S}_{f}(G(F)\backslash G(\mathbb{A}))=\mathcal{S}_{f}(G(\overline{F})\backslash G(\overline{\mathbb{A}}))\oplus\bigoplus_{\alpha\neq 0}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\oplus\bigoplus_{d\in H^{\prime}}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}, \tag{3.6}\] _where_ \[\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}=\bigoplus_{\chi\in\Pi_{\eta_{d}}}\widetilde{\mathcal{S}}_{\eta_{d},\chi}. \tag{3.7}\] _Here \(H^{\prime}\) indexes the regular elliptic orbits, \(\eta_{d}\) is a representative of the orbit \(\Omega_{d}\), and \(\Pi_{\eta_{d}}\) is the set of characters of the commutative compact group \(\overline{H}_{\eta_{d}}\) extending \(\psi_{\overline{C}}\) (see Sec. 3.4)._

Proof.: We already know the decomposition (3.7) (see (3.2)). By Lemma 3.16, Proposition 3.17 and Lemma 3.21, the right-hand side of (3.6) is contained in \(\mathcal{S}_{f}(G(F)\backslash G(\mathbb{A}))\), so it remains to establish the opposite inclusion. Let \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\neq 0}:=\bigoplus_{\Omega\neq 0}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}\). We have to prove that every admissible subrepresentation \(V\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\neq 0}\) is contained in the right-hand side of (3.6). By Lemma 3.19, we obtain \(\Pi_{\Omega}V=0\) for every split semisimple orbit \(\Omega\). It remains to prove that \(\Pi_{\Omega_{\eta_{0}}}V\) is contained in \(\bigoplus_{\alpha\neq 0}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\). But this follows immediately from Lemma 3.21(ii).

## 4. Constant term for \(G=\operatorname{GL}_{2}\)

In this section we assume that \(G=\operatorname{GL}_{2}\), and \(C\) is a special nilpotent extension of length \(2\). We will give more precise information on the constant term operator in this case. We start by partitioning the set \(\operatorname{QBun}_{T}(C)\) into subsets indexed by effective divisors on \(\overline{C}\) (this is based on an analog of the Iwasawa decomposition developed in Sec. 4.1).
Then we establish a more explicit form of compatibility between the constant term operator and the Hecke operators (see Lemma 4.12). This analysis will be used in Section 5 to show that Hecke finiteness implies cuspidality.

### Analog of the Iwasawa decomposition

Let \(\overline{K}\) be a local non-archimedean field, \(\overline{\mathcal{O}}\subset\overline{K}\) the ring of integers, and \(t\in\overline{\mathcal{O}}\) a uniformizer. Let \(\mathcal{O}\) be a commutative ring which is a square-zero extension of \(\overline{\mathcal{O}}\) by \(\overline{\mathcal{O}}\): that is, \(\mathcal{O}\) is equipped with an element \(\epsilon\in\mathcal{O}\) such that \(\epsilon^{2}=0\), \(\mathcal{O}/\epsilon\mathcal{O}\simeq\overline{\mathcal{O}}\), and the map \(\overline{\mathcal{O}}\to\epsilon\mathcal{O}:x\mapsto\epsilon x\) is an isomorphism. Let \(K\) be the total ring of fractions of \(\mathcal{O}\). Then \(K/\epsilon K\simeq\overline{K}\), and \(\epsilon K\simeq\overline{K}\). Let \(G=\operatorname{GL}_{2}\), \(B\subset G\) the subgroup of upper-triangular matrices. Let \(\mathfrak{g}\) and \(\mathfrak{b}\) denote the Lie algebras of \(G\) and \(B\). For every \(n\geq 0\), consider the matrix \[\varphi_{n}:=\begin{pmatrix}0&0\\ t^{-n}&0\end{pmatrix}\in\mathfrak{g}(\overline{K}).\]

**Lemma 4.1**.: _We have a decomposition into \(\operatorname{Ad}(B(\overline{\mathcal{O}}))\)-orbits,_ \[\mathfrak{g}(\overline{K})/(\mathfrak{b}(\overline{K})+\mathfrak{g}(\overline{\mathcal{O}}))=\sqcup_{n\geq 0}\operatorname{Ad}(B(\overline{\mathcal{O}}))\cdot\varphi_{n}.\]

Proof.: The class of a matrix \(X=\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix}\in\mathfrak{g}(\overline{K})\) in \(\mathfrak{g}(\overline{K})/(\mathfrak{b}(\overline{K})+\mathfrak{g}(\overline{\mathcal{O}}))\) is determined by the class of \(a_{21}\) in \(\overline{K}/\overline{\mathcal{O}}\). The adjoint action of \(U(\overline{\mathcal{O}})\) doesn't change this class, while the action of \(T(\overline{\mathcal{O}})\) corresponds to rescalings by \(\overline{\mathcal{O}}^{*}\). This immediately gives the result.

**Proposition 4.2**.: _We have the following decomposition into open subsets_ \[G(K)=\sqcup_{n\geq 0}B(K)\cdot(1+\epsilon\cdot\varphi_{n})\cdot G(\mathcal{O}).\]

Proof.: The Iwasawa decomposition \(G(\overline{K})=B(\overline{K})\cdot G(\overline{\mathcal{O}})\) implies that every \(B(K)-G(\mathcal{O})\) double coset in \(G(K)\) has a representative of the form \(1+\epsilon X\), where \(X\in\mathfrak{g}(\overline{K})\). Furthermore, assume that \[1+\epsilon X^{\prime}=b\cdot(1+\epsilon X)\cdot g_{\mathcal{O}},\] with \(b\in B(K)\), \(g_{\mathcal{O}}\in G(\mathcal{O})\). Then the reductions \(\overline{b}\in B(\overline{K})\) and \(\overline{g}_{\mathcal{O}}\in G(\overline{\mathcal{O}})\) satisfy \(\overline{b}\overline{g}_{\mathcal{O}}=1\). Hence, we can write \[b=(1+\epsilon Y)b_{0},\ \ g_{\mathcal{O}}=b_{0}^{-1}(1+\epsilon Z),\] with \(b_{0}\in B(\overline{\mathcal{O}})\), \(Y\in\mathfrak{b}(\overline{K})\), \(Z\in\mathfrak{g}(\overline{\mathcal{O}})\). In other words, we get \[X^{\prime}=Y+\operatorname{Ad}(b_{0})X+Z.\] Now the result follows from Lemma 4.1.

Next, let us return to the picture with the special nilpotent extension \(C\) of length \(2\). Set \[N_{0}:=\mathcal{N}/\mathcal{N}^{2}\] (this is a line bundle on \(\overline{C}\)).
Then for every point \(p\) we can consider the corresponding square-zero extension \(F_{p}\) of the local field \(\overline{F}_{p}\) by \(N_{0,p}\) (the completion of the stalk of \(N_{0}\) at \(p\)). We pick a generator \(\epsilon_{p}\in N_{0,p}\) at every point and consider the corresponding generator of the nilradical, \[\epsilon:=(\epsilon_{p})\in\mathcal{N}(\mathbb{A}).\] Let us also pick a uniformizer \(t_{p}\in\overline{\mathcal{O}}_{p}\) at every point, and for every effective divisor \(D=\sum_{p}n_{p}p\) set \[f_{D}:=(t_{p}^{n_{p}})\in\overline{\mathbb{A}}^{*}, \tag{4.1}\] \[\varphi_{D}:=\begin{pmatrix}0&0\\ f_{D}^{-1}&0\end{pmatrix}\in\mathfrak{g}(\overline{\mathbb{A}}),\ \ g_{D}:=1+\epsilon\varphi_{D}\in G(\mathbb{A}).\] Let \(\operatorname{Div}(\overline{C})_{\geq 0}\) denote the set of (finite) linear combinations of points of \(\overline{C}\) with nonnegative integer coefficients.

**Corollary 4.3**.: _We have the following decomposition into (open) double cosets:_ \[G(\mathbb{A})=\sqcup_{D\in\operatorname{Div}(\overline{C})_{\geq 0}}B(\mathbb{A})\cdot g_{D}\cdot G(\mathcal{O}).\]

Recall that the target of the constant term map \(E=E_{B}\) associated with the Borel subgroup \(B\subset G=\operatorname{GL}_{2}\) is the space of functions on \[\operatorname{QBun}_{T}(C):=(T(F)U(\mathbb{A}))\backslash G(\mathbb{A})/G(\mathcal{O}),\] where \(U\subset B\) is the subgroup of strictly upper-triangular matrices.

**Definition 4.4**.: We set \(B(\mathcal{O})[D]:=B(\mathbb{A})\cap g_{D}G(\mathcal{O})g_{D}^{-1}\) and denote by \(T(\mathcal{O})[D]\subset T(\mathbb{A})\) the image of \(B(\mathcal{O})[D]\) under the natural projection \(B(\mathbb{A})\to T(\mathbb{A})\).

**Corollary 4.5**.: _We have a decomposition_ \[\operatorname{QBun}_{T}(C)=\sqcup_{D\in\operatorname{Div}(\overline{C})_{\geq 0}}\operatorname{QBun}_{T}(C,D),\] _where_ \[\operatorname{QBun}_{T}(C,D):=T(F)\backslash T(\mathbb{A})/T(\mathcal{O})[D]\cdot g_{D}.\]

Proof.: We need to check that the map \(t\mapsto t\cdot g_{D}\) induces an identification \[T(\mathbb{A})/(T(F)\cdot T(\mathcal{O})[D])\xrightarrow{\ \sim\ }(T(F)U(\mathbb{A}))\backslash(B(\mathbb{A})\cdot g_{D}\cdot G(\mathcal{O}))/G(\mathcal{O}).\] First, to see that the map is well defined, suppose we have \(t_{0}\in T(\mathcal{O})[D]\). We need to check that \(tt_{0}g_{D}\) is in the same double coset as \(tg_{D}\). By definition, we can write \(t_{0}=ub_{0}\), with \(u\in U(\mathbb{A})\) and \(b_{0}\in B(\mathbb{A})\cap g_{D}G(\mathcal{O})g_{D}^{-1}\). Hence, \[tt_{0}g_{D}=tub_{0}g_{D}=(tut^{-1})(tg_{D})(g_{D}^{-1}b_{0}g_{D})\in U(\mathbb{A})tg_{D}G(\mathcal{O}).\] Conversely, suppose \[t^{\prime}g_{D}=ut_{F}tg_{D}g_{\mathcal{O}},\] where \(t,t^{\prime}\in T(\mathbb{A})\), \(u\in U(\mathbb{A})\), \(g_{\mathcal{O}}\in G(\mathcal{O})\). Then \[b_{0}:=g_{D}g_{\mathcal{O}}g_{D}^{-1}\in B(\mathbb{A})\cap g_{D}G(\mathcal{O})g_{D}^{-1}.\] Projecting the relation \(t^{\prime}=ut_{F}tb_{0}\) to \(T(\mathbb{A})\), we deduce that \(t^{\prime}\in tT(F)T(\mathcal{O})[D]\).

_Remark 4.6_.: The double coset presentations of \(\operatorname{QBun}_{T}(C)\) and \(\operatorname{QBun}_{T}(C,D)\) mean that these sets can be identified with isomorphism classes of certain groupoids (see Appendix A). For now we do not need to consider the groupoid structure. We will return to this in Appendix B, where we will give a geometric interpretation of \(\operatorname{QBun}_{T}(C)\).

Later we will need the following explicit description of \(T(\mathcal{O})[D]\).
**Lemma 4.7**.: _One has \(t=\operatorname{diag}(a_{1},a_{2})\in T(\mathcal{O})[D]\) if and only if_ \[a_{1},a_{2}\in\mathcal{O}[D]:=\mathcal{O}+\epsilon\cdot\overline{\mathcal{O}}(D),\ \ a_{1}a_{2}\in\mathcal{O}^{*},\ \ (a_{1}-a_{2})\mod\mathcal{N}\in\overline{\mathcal{O}}(-D). \tag{4.2}\]

Proof.: One has \(\begin{pmatrix}a_{1}&b\\ 0&a_{2}\end{pmatrix}\in B(\mathbb{A})\cap g_{D}G(\mathcal{O})g_{D}^{-1}\) if and only if \[b\in\mathcal{O},\ \ a_{1}+\epsilon f_{D}^{-1}b\in\mathcal{O},\ \ a_{2}-\epsilon f_{D}^{-1}b\in\mathcal{O},\ \ \epsilon f_{D}^{-1}(a_{1}-a_{2})\in\mathcal{O},\ \ a_{1}a_{2}\in\mathcal{O}^{*}, \tag{4.3}\] where \(f_{D}\in\overline{\mathbb{A}}^{*}\) is given by (4.1) (the last condition is obtained by looking at the invertibility of the determinant). Thus, \(t=\operatorname{diag}(a_{1},a_{2})\in T(\mathcal{O})[D]\) if and only if there exists \(b\) such that the conditions (4.3) are satisfied. This implies that for such \((a_{1},a_{2})\) the conditions (4.2) are satisfied. Conversely, assume conditions (4.2) are satisfied. Then we can write \[a_{1}=c-\epsilon f_{D}^{-1}b,\ \ a_{1}-a_{2}=f_{D}x+\epsilon f_{D}^{-1}y,\] with \(b,c,x,y\in\mathcal{O}\). This implies that \((a_{1}-a_{2})^{2}\in\mathcal{O}\), and hence, \[(a_{1}+a_{2})^{2}=(a_{1}-a_{2})^{2}+4a_{1}a_{2}\in\mathcal{O},\] so \(a_{1}+a_{2}\in\mathcal{O}\). Therefore, we have \(a_{2}=c^{\prime}+\epsilon f_{D}^{-1}b\) for some \(c^{\prime}\in\mathcal{O}\), which shows the existence of \(b\) such that (4.3) is satisfied.

Thus, the constant term operator \(E:\mathbb{C}(G(F)\backslash G(\mathbb{A})/G(\mathcal{O}))\to\mathbb{C}(\operatorname{QBun}_{T}(C))\) can be viewed as a collection of operators \[E_{D}:\mathbb{C}(G(F)\backslash G(\mathbb{A})/G(\mathcal{O}))\to\mathbb{C}(T(\mathbb{A})/(T(F)\cdot T(\mathcal{O})[D])),\ \text{where}\] \[E_{D}f(t)=\int_{U(F)\backslash U(\mathbb{A})}f(utg_{D})du.\] Setting \(U_{t,D}:=U(\mathbb{A})\cap(tg_{D})G(\mathcal{O})(tg_{D})^{-1}\), and noticing that the fibers of the surjective map \[U(\mathbb{A})/U(F)\to U(\mathbb{A})/(U(F)\cdot U_{t,D})\] are \(U_{t,D}/(U(F)\cap U_{t,D})\)-cosets, we can rewrite the integral defining \(E_{D}\) as the following finite sum: \[E_{D}f(t)=\operatorname{vol}(U_{t,D}/(U(F)\cap U_{t,D}))\cdot\sum_{u\in U(\mathbb{A})/(U(F)\cdot U_{t,D})}f(utg_{D}). \tag{4.4}\]

**Lemma 4.8**.: _For \(t=\operatorname{diag}(a_{1},a_{2})\), one has_ \[U_{t,D}=\{\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\ |\ x\in a_{1}a_{2}^{-1}(\epsilon\mathcal{O}+f_{D}\mathcal{O})\}.\]

Proof.: Indeed, \(U_{t,D}=U(\mathbb{A})\cap(tg_{D})G(\mathcal{O})(tg_{D})^{-1}\), so the condition on \(x\) is that \[g_{D}^{-1}t^{-1}\begin{pmatrix}1&x\\ 0&1\end{pmatrix}tg_{D}=\begin{pmatrix}1+\epsilon f_{D}^{-1}a_{1}^{-1}a_{2}x&a_{1}^{-1}a_{2}x\\ 0&1-\epsilon f_{D}^{-1}a_{1}^{-1}a_{2}x\end{pmatrix}\] is in \(G(\mathcal{O})\). In other words, the condition is that \(a_{1}^{-1}a_{2}x\in\mathcal{O}\) and \(\epsilon f_{D}^{-1}a_{1}^{-1}a_{2}x\in\mathcal{O}\), which is equivalent to \(x\in a_{1}a_{2}^{-1}(\epsilon\mathcal{O}+f_{D}\mathcal{O})\).
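To illustrate the preceding formulas in the simplest case (a direct check from the definitions): for \(D=0\) we have \(f_{D}=1\) and \(g_{D}=1+\epsilon\varphi_{0}\in G(\mathcal{O})\), so Lemma 4.8 gives \[U_{t,0}=\{\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\ |\ x\in a_{1}a_{2}^{-1}\mathcal{O}\}\] (since \(\epsilon\mathcal{O}\subset\mathcal{O}\)), and by the right \(G(\mathcal{O})\)-invariance of \(f\), the finite sum (4.4) computes the naive constant term \(E_{0}f(t)=\int_{U(F)\backslash U(\mathbb{A})}f(ut)du\) on the stratum \(\operatorname{QBun}_{T}(C,0)\).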
### Auxiliary results involving reduction

The projection \(G(\mathbb{A})\to G(\overline{\mathbb{A}})\) induces a well defined map \[\operatorname{QBun}_{T}(C)=(T(F)U(\mathbb{A}))\backslash G(\mathbb{A})/G(\mathcal{O})\to(T(\overline{F})U(\overline{\mathbb{A}}))\backslash G(\overline{\mathbb{A}})/G(\overline{\mathcal{O}})\simeq\] \[T(\overline{\mathbb{A}})/(T(\overline{F})\cdot T(\overline{\mathcal{O}}))\simeq\operatorname{Pic}(\overline{C})\times\operatorname{Pic}(\overline{C}).\] Concretely, the double coset represented by \(t\cdot g_{D}\), where \(t=\begin{pmatrix}a_{1}&0\\ 0&a_{2}\end{pmatrix}\in T(\mathbb{A})\), is mapped to the pair of line bundles \((L_{0},M_{0})\), corresponding to the reductions of the ideles \(a_{1}\) and \(a_{2}\).

**Lemma 4.9**.: _If \(t\in\operatorname{QBun}_{T}(C,D)\) is in the support of \(E_{D}(f)\) and \((L_{0},M_{0})\) are the line bundles on \(\overline{C}\) associated with \(t\), then there exists a vector bundle \(F\) in the support of \(f\), and a surjection \(\overline{F}\to M_{0}\)._

Proof.: By definition of \(E_{D}(f)\), there exists an element \(g\in G(\mathbb{A})\) in the support of \(f\), such that \(g=u_{1}tg_{D}\) for some \(u_{1}\in U(\mathbb{A})\). This gives \(\overline{g}=\overline{u}_{1}\overline{t}\in B(\overline{\mathbb{A}})\). Hence the vector bundle \(\overline{F}\) is an extension of \(M_{0}\) by \(L_{0}\), and the projection \(\overline{F}\to M_{0}\) gives the required surjection.

Let us denote by \[\pi:\operatorname{Bun}(C)\to\operatorname{Bun}(\overline{C})\] the natural projection. The following description of the fibers of \(\pi\) is well known. Let us denote by \(\mathfrak{g}_{0}\subset\mathfrak{g}\) the subalgebra \(\mathfrak{sl}_{2}\). Recall that we denote by \(N_{0}\) the line bundle on \(\overline{C}\) corresponding to \(\mathcal{N}\), and \(\epsilon=(\epsilon_{p})\subset\mathbb{A}\), where \(\epsilon_{p}\) are local generators of \(N_{0}\). Let us also denote by \(\epsilon_{F}\in\mathbb{A}\) the principal adele corresponding to a generator of \((N_{0})_{F}\).

**Lemma 4.10**.: _For every \(V_{0}\in\operatorname{Bun}(\overline{C})\) there is a transitive action of \(H^{1}(\overline{C},\operatorname{End}(V_{0})\otimes N_{0})\) on the fiber \(\pi^{-1}(V_{0})\). If \(V\) is represented by the adele-valued element \(g\in G(\mathbb{A})\), then this action is given by \(g\mapsto(1+\epsilon_{F}X)g\), where \(X\in\mathfrak{g}(\overline{\mathbb{A}})\). Assume in addition that the map \(H^{0}(C,\mathcal{O}_{C})\to H^{0}(\overline{C},\mathcal{O}_{\overline{C}})\) is surjective. Then for \(V_{0}\in\operatorname{Bun}^{\overline{L}}(\overline{C})\), there is a similar transitive action of \(H^{1}(\overline{C},\operatorname{End}_{0}(V_{0})\otimes N_{0})\) on \(\pi^{-1}(V_{0})\cap\operatorname{Bun}^{L}(C)\), where \(\operatorname{End}_{0}(V_{0})\) is the sheaf of endomorphisms of \(V_{0}\) with zero trace._

Proof.: Let \(g\in G(\mathbb{A})\). Since the maps \(G(F)\to G(\overline{F})\) and \(G(\mathcal{O})\to G(\overline{\mathcal{O}})\) are surjective, if \(g^{\prime}\in G(\mathbb{A})\) and \(g\) project to the same element in \(G(\overline{F})\backslash G(\overline{\mathbb{A}})/G(\overline{\mathcal{O}})\) then we can change \(g^{\prime}\) to another representative in the double coset of \(g^{\prime}\), so that \(g^{\prime}=(1+\epsilon_{F}X)g\), where \(X\in\mathfrak{g}(\overline{\mathbb{A}})\). Let \(a_{0}\in\overline{\mathbb{A}}^{*}\) denote the idele such that \(\epsilon=\epsilon_{F}a_{0}\), so \(N_{0}\) is the line bundle associated with \(a_{0}\).
Then the double coset of \((1+\epsilon_{F}X)g\) in \(\operatorname{Bun}(C)\) depends only on the class of \(X\) in \[\mathfrak{g}(\overline{\mathbb{A}})/(\mathfrak{g}(\overline{F})+a_{0}\cdot g\mathfrak{g}(\overline{\mathcal{O}})g^{-1})\simeq H^{1}(\overline{C},\operatorname{End}(V_{0})\otimes N_{0}).\] Furthermore, if \(\det(g^{\prime})\) represents the same line bundle as \(\det(g)\), then \(\det(1+\epsilon X)=1+\epsilon\operatorname{tr}(X)\) is of the form \(f\cdot u\) with \(f\in F^{*}\), \(u\in\mathcal{O}^{*}\). This implies that \(\overline{f}\overline{u}=1\), so \(\overline{f}\in\overline{F}^{*}\cap\overline{\mathcal{O}}^{*}\) can be lifted to an element of \(F^{*}\cap\mathcal{O}^{*}\) (using our assumption). Hence, modifying \(f\) and \(u\) we can assume that \(\overline{f}=\overline{u}=1\). Therefore, we can modify \(g^{\prime}\) by appropriate diagonal elements, \(1\) modulo \(\epsilon\), so that \(\operatorname{tr}(X)=0\).

### Constant term and the Hecke operators

In this subsection we assume that \(K=G(\mathcal{O})\). Recall that for each element \(g_{0}\in G(\mathbb{A})\), we have the corresponding Hecke operator \(T_{g_{0}}\) given by formula (2.1). We want to rewrite the general formula (2.2) from the proof of Proposition A more concretely, in terms of the operators \(E_{D}\), where \(D\in\operatorname{Div}(\overline{C})_{\geq 0}\). For each \(g_{0}\in G(\mathbb{A})\), \(D\in\operatorname{Div}(\overline{C})_{\geq 0}\), and each \(h\in G(\mathcal{O})/H_{g_{0}}\), using the analog of the Iwasawa decomposition we can write \[g_{D}hg_{0}=u(h)t(h)g_{D(h)}g_{\mathcal{O}}, \tag{4.5}\] with \(g_{\mathcal{O}}\in G(\mathcal{O})\), \(u(h)\in U(\mathbb{A})\), \(t(h)\in T(\mathbb{A})\), and some \(D(h)\in\operatorname{Div}(\overline{C})_{\geq 0}\).

**Lemma 4.11**.: _For \(f\in\mathbb{C}(\operatorname{Bun}_{G}(C))\) one has_ \[T_{g_{0}}f(ut\cdot g_{D})=\operatorname{vol}(H_{g_{0}})\sum_{h\in G(\mathcal{O})/H_{g_{0}}}f(u\operatorname{Ad}(t)(u(h))tt(h)\cdot g_{D(h)}),\] \[E_{D}T_{g_{0}}f=\operatorname{vol}(H_{g_{0}})\sum_{h\in G(\mathcal{O})/H_{g_{0}}}E_{D(h)}f(tt(h)).\]

Proof.: Using (4.5), we get for \(f\in\mathbb{C}(\operatorname{Bun}(C))\), \[f(utg_{D}hg_{0})=f(utu(h)t(h)g_{D(h)})=f(u(tu(h)t^{-1})tt(h)g_{D(h)}).\] Hence, \[T_{g_{0}}f(utg_{D})=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in G(\mathcal{O})/H_{g_{0}}}f(utg_{D}hg_{0})=\operatorname{vol}(H_{g_{0}})\cdot\sum_{h\in G(\mathcal{O})/H_{g_{0}}}f(u\operatorname{Ad}(t)(u(h))tt(h)g_{D(h)}),\] which gives the first formula. Integrating over \(u\in U(F)\backslash U(\mathbb{A})\) we get the second formula.

Now we specialize further, and consider the Hecke operator \(T_{c}=|\mathbb{P}^{1}(A(c))|\cdot T_{g_{c}}\) associated with a simple divisor \(c\) of \(C\). Here \(g_{c}\) is given by (2.4), using some local equation of \(c\), \(f_{c}\in\mathcal{O}_{\overline{c}}\).

**Lemma 4.12**.: _Assume that \(c\subset C\) is a simple divisor, \(D\subset\overline{C}\) an effective divisor. Let \(n\) denote the multiplicity of \(\overline{c}\) in \(D\)._
_Then for \(n>0\), one has_ \[E_{D}T_{c}f(t)=\sum_{a\in A(c)}E_{D-\overline{c}}f(t\begin{pmatrix}1-\epsilon f_{c}^{-n}a&0\\ 0&(1+\epsilon f_{c}^{-n}a)f_{c}^{-1}\end{pmatrix})+\sum_{b\in k(\overline{c})}E_{D+\overline{c}}f(t\begin{pmatrix}f_{c}^{-1}&0\\ 0&1+bf_{c}^{n}\end{pmatrix}),\] _while for \(n=0\), one has_ \[E_{D}T_{c}f(t)=|A(c)|\cdot E_{D}f(t\begin{pmatrix}1&0\\ 0&f_{c}^{-1}\end{pmatrix})+\sum_{b\in k(\overline{c})^{*}}E_{D+\overline{c}}f(t\begin{pmatrix}f_{c}^{-1}&0\\ 0&b\end{pmatrix})+E_{D}f(t\begin{pmatrix}f_{c}^{-1}&0\\ 0&1\end{pmatrix}),\] _where \(t\in\operatorname{QBun}_{T}(C,D)\)._

Proof.: Recall (see the proof of Lemma 2.3) that the quotient \(G(\mathcal{O})/H_{g_{c}}\) can be identified with \(G(\mathcal{O}/I_{c})/B(\mathcal{O}/I_{c})\), which itself is identified with the projective line over \(A(c)=\mathcal{O}/I_{c}\), the etale extension of \(A\) associated with \(c\). From this we see that as representatives of \(G(\mathcal{O})/H_{g_{c}}\) we can take the matrices \[h_{a}:=\begin{pmatrix}a&1\\ 1&0\end{pmatrix},\ \ h_{b}:=\begin{pmatrix}1&0\\ \epsilon b&1\end{pmatrix},\] where \(a\) runs over representatives of \(\mathcal{O}/I_{c}\), \(b\) runs over representatives of \(\overline{\mathcal{O}}/\overline{I}_{c}\). Now, as in Lemma 4.11, we apply the analog of the Iwasawa decomposition to the elements \(g_{D}h_{b}g_{c}\), \(g_{D}h_{a}g_{c}\), i.e., we should find \(u_{b},u_{a}\in U(\mathbb{A})\), \(t_{b},t_{a}\in T(\mathbb{A})\) and divisors \(D_{b},D_{a}\), such that \[g_{D}h_{b}g_{c}\in u_{b}t_{b}g_{D_{b}}G(\mathcal{O}),\ \ g_{D}h_{a}g_{c}\in u_{a}t_{a}g_{D_{a}}G(\mathcal{O}).\] Then as we have seen in Lemma 4.11, we will have \[T_{c}f(utg_{D})=\sum_{a\in\mathcal{O}/I_{c}}f(u\operatorname{Ad}(t)(u_{a})tt_{a}g_{D_{a}})+\sum_{b\in\overline{\mathcal{O}}/\overline{I}_{c}}f(u\operatorname{Ad}(t)(u_{b})tt_{b}g_{D_{b}}),\] \[E_{D}T_{c}f(t)=\sum_{a\in\mathcal{O}/I_{c}}E_{D_{a}}f(tt_{a})+\sum_{b\in\overline{\mathcal{O}}/\overline{I}_{c}}E_{D_{b}}f(tt_{b}).\] It remains to find formulas for \(t_{b}\), \(t_{a}\), \(D_{b}\) and \(D_{a}\). Since \((g_{c})_{p}\) belongs to \(G(\mathcal{O}_{p})\) for \(p\neq\overline{c}\), this is a local computation at the point \(\overline{c}\) (so \(u_{b},u_{a},t_{b},t_{a}\) have trivial components away from \(\overline{c}\), and \(D_{b},D_{a}\) differ from \(D\) only at \(\overline{c}\)). Let \(n\geq 0\) denote the multiplicity of \(\overline{c}\) in \(D\). Let us take \(t_{\overline{c}}:=f_{c}\mod(\epsilon)\) as a uniformizer on \(\overline{C}\) at \(\overline{c}\), and use it when defining the matrix \(\varphi_{n}\) at \(\overline{c}\) (recall that \((g_{D})_{\overline{c}}=1+\epsilon\varphi_{n}\)).
We have \[(1+\epsilon\varphi_{n})h_{a}g_{c}=\begin{pmatrix}1&a(1-\epsilon af_{c}^{-n})\\ 0&1\end{pmatrix}\begin{pmatrix}1-\epsilon f_{c}^{-n}a&0\\ 0&(1+\epsilon f_{c}^{-n}a)f_{c}^{-1}\end{pmatrix}(1+\epsilon\varphi_{n-1})\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] Hence, we get \(D_{a}=D-\overline{c}\) if \(n\geq 1\), \(D_{a}=D\) if \(n=0\), and \[u_{a}=\begin{pmatrix}1&a(1-\epsilon af_{c}^{-n})\\ 0&1\end{pmatrix},\ \ t_{a}=\begin{pmatrix}1-\epsilon f_{c}^{-n}a&0\\ 0&(1+\epsilon f_{c}^{-n}a)f_{c}^{-1}\end{pmatrix}.\] Note that for \(n=0\), we have \[t_{a}\equiv\begin{pmatrix}1&0\\ 0&f_{c}^{-1}\end{pmatrix}\mod T(\mathcal{O})[D].\] On the other hand, for \(n>0\), \[(1+\epsilon\varphi_{n})h_{b}g_{c}=\begin{pmatrix}f_{c}^{-1}&0\\ 0&1+bf_{c}^{n}\end{pmatrix}\cdot(1+\epsilon\varphi_{n+1})\cdot\begin{pmatrix}1&0\\ 0&(1+bf_{c}^{n})^{-1}\end{pmatrix},\] so \(D_{b}=D+\overline{c}\), \(u_{b}=1\) and \(t_{b}=\operatorname{diag}(f_{c}^{-1},1+bf_{c}^{n})\) in this case. Finally, in the case \(n=0\), we consider two subcases. For \(b=-1\) we get \((1+\epsilon\varphi_{0})h_{-1}=1\), so \(D_{-1}=D\), \(u_{-1}=1\), \(t_{-1}=\operatorname{diag}(f_{c}^{-1},1)\). For \(b\neq-1\), we have \[(1+\epsilon\varphi_{0})h_{b}g_{c}=\begin{pmatrix}f_{c}^{-1}&0\\ 0&b+1\end{pmatrix}(1+\epsilon\varphi_{1})\begin{pmatrix}1&0\\ 0&(b+1)^{-1}\end{pmatrix},\] so \(D_{b}=D+\overline{c}\), \(u_{b}=1\), \(t_{b}=\operatorname{diag}(f_{c}^{-1},(b+1))\). This finishes the proof of the Lemma.

### Constant term and the Hecke operators over a finite field

Here we consider the (much simpler) classical case of the reduced curve \(\overline{C}\) over the finite field \(k\) (we continue to assume \(G=\operatorname{GL}_{2}\)). We can consider the constant term and the Hecke operators for functions on \(\operatorname{Bun}_{G}(\overline{C},K)\), where \(K\subset G(\overline{\mathcal{O}})\) is an open compact subgroup. The Hecke operators \(T_{p}=T_{g_{p}}\) associated with points \(p\in\overline{C}\) and the constant term operator \[E:\mathbb{C}(\operatorname{Bun}_{G}(\overline{C},K))\to\mathbb{C}((T(\overline{F})U(\overline{\mathbb{A}}))\backslash G(\overline{\mathbb{A}})/K)\] are defined as before. Assume now that for some point \(p\in\overline{C}\), \(K=G(\overline{\mathcal{O}}_{p})\times K^{\prime}\), where \(K^{\prime}\subset G(\overline{\mathbb{A}}^{\prime})\), with \(\overline{\mathbb{A}}^{\prime}\) the adeles of \(\overline{C}\setminus p\). Then using the Iwasawa decomposition \(G(\overline{F}_{p})=B(\overline{F}_{p})G(\overline{\mathcal{O}}_{p})\), we can write \[(T(\overline{F})U(\overline{\mathbb{A}}))\backslash G(\overline{\mathbb{A}})/K=T(\overline{F})U(\overline{\mathbb{A}}^{\prime})\backslash T(\overline{F}_{p})G(\overline{\mathbb{A}}^{\prime})/T(\overline{\mathcal{O}}_{p})K^{\prime},\] so we can use representatives of the form \(tg^{\prime}\), where \(t\in T(\overline{F}_{p})\), \(g^{\prime}\in G(\overline{\mathbb{A}}^{\prime})\), in the argument of the constant term operator \(Ef\). In this situation we have the following simple analog of Lemma 4.12.

**Lemma 4.13**.: _Let \(f_{p}\) be a generator of the maximal ideal of \(\overline{\mathcal{O}}_{p}\). One has_ \[ET_{p}f(tg^{\prime})=|k(p)|\cdot Ef(t\begin{pmatrix}1&0\\ 0&f_{p}^{-1}\end{pmatrix}g^{\prime})+Ef(t\begin{pmatrix}f_{p}^{-1}&0\\ 0&1\end{pmatrix}g^{\prime}).\]

We omit the proof since it is similar to (but simpler than) that of Lemma 4.12.

## 5. Cuspidal functions for \(\operatorname{GL}_{2}\): bounds on support
In this section we continue to assume that \(G=\operatorname{GL}_{2}\). The main result of this section is the proof of Theorem C. In the proof we use the adelic representatives for vector bundles \(V\) on \(C\) with split reduction \(\overline{V}\simeq L_{0}\oplus M_{0}\) such that \(\deg(M_{0})-\deg(L_{0})\geq 2g+2\) (see Lemma 5.2). The key technical result, Lemma 5.3, implies the vanishing of certain sums of values of a cuspidal function on bundles with split reduction. More precisely, we show the existence of a constant \(N(K)\) (equal to \(6g-1\) for \(K=G(\mathcal{O})\)) such that in the case when \(\deg(M_{0})-\deg(L_{0})\geq N(K)\), each of these sums consists of a single term. This result shows that cuspidal functions vanish on such bundles (see Proposition 5.4). This vanishing immediately implies that the space of cuspidal functions on \(\operatorname{Bun}_{G}^{L}(C,K)\) is finite-dimensional and that cuspidal functions are Hecke-bounded. To show the converse (see Proposition 5.8) we use precise information on the compatibility of the Hecke operators with the constant term operator given by Lemma 4.12. In Sec. 5.3 we prove Theorem C(2), that cuspidality is equivalent to (weak) Hecke-boundedness (see Proposition 5.8).

### Elements of \(G(\mathbb{A})\) with split reduction

For \(g\in G(\mathbb{A})\) we denote its reduction modulo \(\epsilon\) by \(\overline{g}\in G(\overline{\mathbb{A}})\). Here we will study \(g\in G(\mathbb{A})\) such that \(\overline{g}\) is diagonal. For an idele \(a\in\overline{\mathbb{A}}^{*}\), we define the degree \(\deg(a)\) as the degree of the corresponding line bundle on \(\overline{C}\). For example, for the idele \(f_{D}\) associated with an effective divisor \(D\subset\overline{C}\) (see (4.1)) we have \(\deg(f_{D})=-\deg(D)\).

**Lemma 5.1**.: _(i) Let \(N_{0}\) be a line bundle of degree \(\geq 2g+2\) on \(\overline{C}\). Then for any class \(e\in H^{1}(N_{0}^{-1})\) there exists an effective divisor \(D\subset\overline{C}\) of degree \(\leq\frac{\deg(N_{0})+1}{2}+g\), defined over \(k\), such that the class \(e\) goes to zero under the map \(H^{1}(N_{0}^{-1})\to H^{1}(N_{0}^{-1}(D))\). (ii) Let \(a\in\overline{\mathbb{A}}^{*}\) be an element with \(\deg(a)\geq 2g+2\). Then any class in \(\overline{\mathbb{A}}/(\overline{F}+a^{-1}\overline{\mathcal{O}})\) can be represented by an element of the form \(ua^{-1}f_{D}^{-1}\) for some \(u\in\overline{\mathcal{O}}^{*}\) and some effective divisor \(D\subset\overline{C}\) of degree \(\leq\frac{\deg(a)+1}{2}+g\)._

Proof.: (i) Let \[0\to\mathcal{O}\to E\to N_{0}\to 0\] be an extension on \(\overline{C}\) representing the class \(e\in H^{1}(N_{0}^{-1})\simeq\operatorname{Ext}^{1}(N_{0},\mathcal{O})\). Take any line bundle \(P_{0}\) of degree \(\lfloor\frac{\deg(N_{0})}{2}\rfloor-g\). Then \(\mu(P_{0}^{\vee}\otimes E)\geq g\). Hence, \(H^{0}(P_{0}^{\vee}\otimes E)\neq 0\), in other words, there exists a nonzero morphism \(P_{0}\to E\). Note that by assumption, \[\deg(P_{0})=\lfloor\frac{\deg(N_{0})}{2}\rfloor-g\geq 1,\] so \(\operatorname{Hom}(P_{0},\mathcal{O})=0\). Hence, the composition \(P_{0}\to E\to N_{0}\) is nonzero, so \(P_{0}=N_{0}(-D)\), where \[\deg(D)=\deg(N_{0})-\deg(P_{0})\leq\deg(N_{0})-\frac{\deg(N_{0})-1}{2}+g=\frac{\deg(N_{0})+1}{2}+g.\] Since the resulting map \(N_{0}(-D)\to E\) lifts the embedding \(N_{0}(-D)\to N_{0}\), the pullback of our extension to \(N_{0}(-D)\) splits, i.e., \(e\) goes to zero in \(H^{1}(N_{0}^{-1}(D))\). (ii) Let \(N_{0}\) be the line bundle on \(\overline{C}\) associated with \(a\).
Since we have an identification \(H^{1}(N_{0}^{-1})=\overline{\mathbb{A}}/(\overline{F}+a^{-1}\overline{\mathcal{O}})\), by part (i), for any class \(e\in\overline{\mathbb{A}}/(\overline{F}+a^{-1}\overline{\mathcal{O}})\) there exists an effective divisor \(D\subset\overline{C}\) of degree \(\leq\frac{\deg(a)+1}{2}+g\), such that \(e\) goes to zero under the map \[\overline{\mathbb{A}}/(\overline{F}+a^{-1}\overline{\mathcal{O}})\to\overline{\mathbb{A}}/(\overline{F}+a^{-1}f_{D}^{-1}\overline{\mathcal{O}}).\] In other words, \(e\) is in the image of the map \[f_{D}^{-1}\overline{\mathcal{O}}/\overline{\mathcal{O}}\xrightarrow{a^{-1}}\overline{\mathbb{A}}/(\overline{F}+a^{-1}\overline{\mathcal{O}}).\] But any element of \(f_{D}^{-1}\overline{\mathcal{O}}/\overline{\mathcal{O}}\) is represented by \(uf_{D^{\prime}}^{-1}\) for some \(u\in\overline{\mathcal{O}}^{*}\) and some subdivisor \(D^{\prime}\subset D\), and our assertion follows.

**Lemma 5.2**.: _Suppose the reduction modulo \(\epsilon\) of a double coset \(G(F)gG(\mathcal{O})\) is \(G(\overline{F})\operatorname{diag}(\overline{a}_{1},\overline{a}_{2})G(\overline{\mathcal{O}})\), where \(\deg(\overline{a}_{2})-\deg(\overline{a}_{1})\geq 2g+2\). Then there exists an effective divisor \(D\subset\overline{C}\) of degree \(\leq\frac{\deg(\overline{a}_{2})-\deg(\overline{a}_{1})+1}{2}+g\), such that_ \[G(F)gG(\mathcal{O})=G(F)\begin{pmatrix}a_{1}&\epsilon\cdot a_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix}G(\mathcal{O})\] _for some ideles \(a_{1},a_{2}\) reducing to \(\overline{a}_{1}\) and \(\overline{a}_{2}\)._

Proof.: First, it is clear that we can choose a representative \(g\) for our double coset of the form \(g=(1+\epsilon X)\cdot t_{a}\), with \(t_{a}=\operatorname{diag}(a_{1},a_{2})\), where \(a_{1},a_{2}\in\mathbb{A}^{*}\) are some liftings of \(\overline{a}_{1},\overline{a}_{2}\in\overline{\mathbb{A}}^{*}\). Furthermore, we claim that we can assume \(X\) to be upper-triangular. Indeed, setting \(\varphi(x):=\begin{pmatrix}1&0\\ x&1\end{pmatrix}\), we have \[\operatorname{Ad}(t_{a})\varphi(x)=\varphi(\overline{a}_{2}\overline{a}_{1}^{-1}x).\] Thus, multiplying \(g\) on the right by elements \(\varphi(\epsilon y)\), \(y\in\overline{\mathcal{O}}\), we can add to \(X\) elements of \(\mathfrak{u}_{-}(\overline{a}_{2}\overline{a}_{1}^{-1}\overline{\mathcal{O}})\). On the other hand, multiplying \(g\) on the left with elements from \((1+\epsilon\mathfrak{u}_{-}(\overline{F}))\), we can add to \(X\) elements of \(\mathfrak{u}_{-}(\overline{F})\). But \(\overline{F}+\overline{a}_{2}\overline{a}_{1}^{-1}\overline{\mathcal{O}}=\overline{\mathbb{A}}\) (since \(\deg(\overline{a}_{2}\overline{a}_{1}^{-1})\geq 2g+2\)), so we can make \(X\) upper-triangular. Next, modifying \(a_{1}\) and \(a_{2}\), we can assume that \(X\) is strictly upper triangular, so \[1+\epsilon X=\begin{pmatrix}1&\epsilon x\\ 0&1\end{pmatrix},\] where \(x\) can be modified to any element in its \(\overline{F}+\overline{a}_{1}\overline{a}_{2}^{-1}\overline{\mathcal{O}}\)-coset. Thus, applying Lemma 5.1(ii), we can assume that \[x=u\overline{a}_{1}\overline{a}_{2}^{-1}f_{D}^{-1}\] with \(u\in\overline{\mathcal{O}}^{*}\), for some effective divisor \(D\) of degree \(\leq\frac{\deg(\overline{a}_{2})-\deg(\overline{a}_{1})+1}{2}+g\).
Thus, we get a representative of the form

\[g=\begin{pmatrix}1&\epsilon\cdot ua_{1}a_{2}^{-1}f_{D}^{-1}\\ 0&1\end{pmatrix}\cdot t_{a}=\begin{pmatrix}a_{1}&\epsilon\cdot ua_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix}.\]

Finally, we observe that

\[\begin{pmatrix}a_{1}&\epsilon\cdot ua_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix}\cdot\begin{pmatrix}u&0\\ 0&1\end{pmatrix}=\begin{pmatrix}ua_{1}&\epsilon\cdot ua_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix}.\]

Hence, replacing \(a_{1}\) by \(\widetilde{a}_{1}=ua_{1}\), we get a representative of the claimed form.

### Constant term and bundles with split reduction

Let \(K\subset G(\mathcal{O})\) be a normal open compact subgroup, and let \((g_{i})_{1\leq i\leq r}\) be representatives of \(G(\mathcal{O})/K\). Then for any \(f\in\mathbb{C}(G(F)\backslash G(\mathbb{A})/K)\), the corresponding constant term is determined by the operators

\[E_{D,i}(f)(t)=\int_{U(\mathbb{A})/U(F)}f(utg_{D}g_{i})du=\operatorname{vol}(U_{t,K,D}/(U(F)\cap U_{t,K,D}))\cdot\sum_{u\in U(\mathbb{A})/(U(F)\cdot U_{t,K,D})}f(utg_{D}g_{i}),\]

where \(t\in T(\mathbb{A})\), \(U_{t,K,D}:=U(\mathbb{A})\cap(tg_{D})K(tg_{D})^{-1}\).

**Lemma 5.3**.: _(i) Let \(a_{1},a_{2}\) be a pair of ideles such that \(\mathbb{A}=F+a_{1}^{-1}a_{2}\mathcal{O}\). Set \(t=\begin{pmatrix}a_{2}&0\\ 0&a_{1}\end{pmatrix}\). Then for \(f\in\mathbb{C}(G(F)\backslash G(\mathbb{A})/G(\mathcal{O}))\), one has_

\[\begin{array}{l}E_{D}(f)(t)=\operatorname{vol}(U_{t,D}/(U(F)\cap U_{t,D}))\times\\ \sum_{x\in a_{1}^{-1}a_{2}\overline{\mathcal{O}}/(\overline{F}\cap a_{1}^{-1}a_{2}\overline{\mathcal{O}}+a_{1}^{-1}a_{2}f_{D}\overline{\mathcal{O}})}f(\begin{pmatrix}a_{1}(1-\epsilon f_{D}^{-1}a_{1}a_{2}^{-1}x)&\epsilon f_{D}^{-1}a_{1}\\ 0&a_{2}(1+\epsilon f_{D}^{-1}a_{1}a_{2}^{-1}x)\end{pmatrix}).\end{array} \tag{5.1}\]

_(ii) Let \(K\subset G(\mathcal{O})\) be a normal open compact subgroup. Then there exists an effective divisor \(D_{0}\) on \(C\) such that for any pair of ideles \(a_{1},a_{2}\) with \(\mathbb{A}=F+a_{1}^{-1}a_{2}f_{D_{0}}\mathcal{O}\), and any \(f\in\mathbb{C}(G(F)\backslash G(\mathbb{A})/K)\), one has_

\[\begin{array}{l}E_{D,i}(f)(t)=c(t,K,D)\times\\ \sum_{x\in a_{1}^{-1}a_{2}f_{D_{0}}\overline{\mathcal{O}}/(\overline{F}\cap a_{1}^{-1}a_{2}f_{D_{0}}\overline{\mathcal{O}}+a_{1}^{-1}a_{2}f_{D_{0}}f_{D}\overline{\mathcal{O}})}f(\begin{pmatrix}a_{1}(1-\epsilon f_{D}^{-1}a_{1}a_{2}^{-1}x)&\epsilon f_{D}^{-1}a_{1}\\ 0&a_{2}(1+\epsilon f_{D}^{-1}a_{1}a_{2}^{-1}x)\end{pmatrix}g_{i}),\end{array} \tag{5.2}\]

_where \(t=\begin{pmatrix}a_{2}&0\\ 0&a_{1}\end{pmatrix}\) and \(c(t,K,D)\in\mathbb{C}^{*}\) is a constant._

Proof.: (i) By Lemma 4.8, \(\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\in U_{t,D}\) if and only if \(x\in a_{1}^{-1}a_{2}(\epsilon\mathcal{O}+f_{D}\mathcal{O})\). Hence,

\[E_{D}(f)(t)=\operatorname{vol}(U_{t,D}/(U(F)\cap U_{t,D}))\cdot\sum_{x\in\mathbb{A}/(F+a_{1}^{-1}a_{2}(\epsilon\mathcal{O}+f_{D}\mathcal{O}))}f(\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\cdot t\cdot g_{D}).\]

Now

\[\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\cdot t\cdot g_{D}=\begin{pmatrix}a_{2}+\epsilon f_{D}^{-1}a_{1}x&xa_{1}\\ \epsilon f_{D}^{-1}a_{1}&a_{1}\end{pmatrix}.\]

Since \(\mathbb{A}=F+a_{1}^{-1}a_{2}\mathcal{O}\), we can assume that \(x\in a_{1}^{-1}a_{2}\mathcal{O}\).
Then we get

\[\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\cdot t\cdot g_{D}\cdot\begin{pmatrix}1&-a_{1}a_{2}^{-1}x\\ 0&1\end{pmatrix}=\begin{pmatrix}a_{2}\alpha^{-1}&\epsilon y\\ \epsilon f_{D}^{-1}a_{1}&a_{1}\alpha\end{pmatrix},\]

for some \(y\in\overline{\mathbb{A}}\), where \(\alpha=1-\epsilon f_{D}^{-1}a_{1}a_{2}^{-1}x\). Writing \(y=a_{1}f+a_{2}b\) with \(f\in\overline{F}\) and \(b\in\overline{\mathcal{O}}\), we get

\[\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\cdot\begin{pmatrix}1&\epsilon f\\ 0&1\end{pmatrix}\cdot\begin{pmatrix}a_{2}\alpha^{-1}&\epsilon y\\ \epsilon f_{D}^{-1}a_{1}&a_{1}\alpha\end{pmatrix}\cdot\begin{pmatrix}1&\epsilon b\\ 0&1\end{pmatrix}\cdot\begin{pmatrix}0&1\\ 1&0\end{pmatrix}=\begin{pmatrix}a_{1}\alpha&\epsilon f_{D}^{-1}a_{1}\\ 0&a_{2}\alpha^{-1}\end{pmatrix}.\]

Hence,

\[f(\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\cdot t\cdot g_{D})=f(\begin{pmatrix}a_{1}\alpha&\epsilon f_{D}^{-1}a_{1}\\ 0&a_{2}\alpha^{-1}\end{pmatrix}).\]

(ii) We make appropriate modifications in the proof of part (i). There exists an effective divisor \(D_{0}\) such that

\[\begin{array}{l}\bullet\ \begin{pmatrix}1+\epsilon a&b\\ 0&1+\epsilon c\end{pmatrix}\in K\text{ provided }a,b,c\in f_{D_{0}}\mathcal{O},\\ \bullet\ \begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}1&\epsilon b\\ 0&1\end{pmatrix}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}=\begin{pmatrix}1&0\\ \epsilon b&1\end{pmatrix}\in K\text{ provided }b\in f_{D_{0}}\overline{\mathcal{O}}.\end{array}\]

It follows that for \(x\in a_{1}^{-1}a_{2}f_{D_{0}}(f_{D}\mathcal{O}+\epsilon\mathcal{O})\) one has \(\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\in U_{t,K,D}\). Hence, there exists a constant \(c(t,K,D)\) such that

\[E_{D,i}(f)(t)=c(t,K,D)\sum_{x\in\mathbb{A}/(F+a_{1}^{-1}a_{2}f_{D_{0}}(f_{D}\mathcal{O}+\epsilon\mathcal{O}))}f(\begin{pmatrix}1&x\\ 0&1\end{pmatrix}tg_{D}g_{i}).\]

Assume that \(\mathbb{A}=F+a_{1}^{-1}a_{2}f_{D_{0}}\mathcal{O}\). Then

\[\mathbb{A}/(F+a_{1}^{-1}a_{2}f_{D_{0}}(f_{D}\mathcal{O}+\epsilon\mathcal{O}))\simeq a_{1}^{-1}a_{2}f_{D_{0}}\mathcal{O}/(a_{1}^{-1}a_{2}f_{D_{0}}\mathcal{O}\cap F+a_{1}^{-1}a_{2}f_{D_{0}}(f_{D}\mathcal{O}+\epsilon\mathcal{O})).\]

Now for \(x\in a_{1}^{-1}a_{2}f_{D_{0}}\mathcal{O}\), repeating the manipulations in part (i) and using our choice of \(D_{0}\), we get

\[\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\cdot t\cdot g_{D}\in G(F)\begin{pmatrix}a_{1}\alpha&\epsilon f_{D}^{-1}a_{1}\\ 0&a_{2}\alpha^{-1}\end{pmatrix}K,\]

which gives the claimed formula.

**Proposition 5.4**.: _(i) Let \(f\) be a cuspidal function on \(\operatorname{Bun}(C)\). Then \(f(V)=0\) for any vector bundle \(V\) such that \(\overline{V}\simeq L_{0}\oplus M_{0}\) with \(\deg(M_{0})-\deg(L_{0})\geq 6g-1\), where \(g\) is the genus of \(\overline{C}\). (ii) Let \(K\subset G(\mathcal{O})\) be an open compact subgroup, and let \(\operatorname{Bun}_{G}(C,K)\to\operatorname{Bun}_{G}(C):g\mapsto V(g)\) be the natural projection. Then there exists a constant \(N(K)>0\), such that \(f(g)=0\) for any \(g\) such that \(\overline{V(g)}\simeq L_{0}\oplus M_{0}\) where \(\deg(M_{0})-\deg(L_{0})\geq N(K)\)._

Proof.: (i) Set \(N=\deg(M_{0})-\deg(L_{0})\). Let \(V\) be a vector bundle such that \(\overline{V}=L_{0}\oplus M_{0}\) (note that \(\operatorname{Ext}^{1}(L_{0},M_{0})=H^{1}(L_{0}^{-1}M_{0})=0\) automatically, since \(\deg(L_{0}^{-1}M_{0})=N>2g-2\)). By Lemma 5.2, we can find a representative of the corresponding double coset of the form

\[g=\begin{pmatrix}a_{1}&\epsilon\cdot a_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix},\]

where \(a_{1}\) and \(a_{2}\) are ideles representing some line bundles reducing to \(L_{0}\) and \(M_{0}\), and \(D\subset\overline{C}\) is an effective divisor of degree \(\leq g+(N+1)/2\).
We have \(\deg(L_{0}^{-1}M_{0}(-D))=N-\deg(D)\geq(N-1)/2-g\geq 2g-1\), so \(H^{1}(L_{0}^{-1}M_{0}(-D))=0\). Hence, the restriction map

\[H^{0}(L_{0}^{-1}M_{0})\to H^{0}(L_{0}^{-1}M_{0}|_{D})\]

is surjective. Now we observe that the summation in formula (5.1) is precisely over the cokernel of this map, hence, the sum reduces to the single term with \(x=0\). Since \(f\) is cuspidal, this implies the vanishing

\[f(V)=f(\begin{pmatrix}a_{1}&\epsilon\cdot a_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix})=0.\]

(ii) We can assume that \(K\subset G(\mathcal{O})\) is normal. Let \((g_{i})_{1\leq i\leq r}\) denote representatives of \(G(\mathcal{O})/K\). Set \(N=\deg(M_{0})-\deg(L_{0})\). Assuming that \(N\geq 2g+2\), as in part (i), we can find a representative of the form

\[g=\begin{pmatrix}a_{1}&\epsilon\cdot a_{1}f_{D}^{-1}\\ 0&a_{2}\end{pmatrix}g_{i},\]

where \(\deg(D)\leq g+(N+1)/2\). Let us choose a divisor \(D_{0}\) as in Lemma 5.3(ii) (it depends only on the subgroup \(K\)). We set \(N(K)=2(3g-1+\deg(D_{0}))+1\). Then for \(N\geq N(K)\), we have

\[\deg(L_{0}^{-1}M_{0}(-D-D_{0}))=N-\deg(D)-\deg(D_{0})\geq(N-1)/2-g-\deg(D_{0})\geq 2g-1.\]

Hence, \(H^{1}\big{(}L_{0}^{-1}M_{0}(-D_{0})\big{)}=H^{1}(L_{0}^{-1}M_{0}(-D-D_{0}))=0\). Therefore, we can apply formula (5.2). Furthermore, the summation in this formula is over the cokernel of the map

\[H^{0}(L_{0}^{-1}M_{0}(-D_{0}))\to H^{0}(L_{0}^{-1}M_{0}(-D_{0})|_{D}).\]

The vanishing of \(H^{1}(L_{0}^{-1}M_{0}(-D-D_{0}))\) implies that this map is surjective, so the sum reduces to the single term, and we get the required vanishing.

**Corollary 5.5**.: _Every cuspidal function on \(\operatorname{Bun}^{L}(C,K)\) has finite support._

Now we will prove part (1) of Theorem C.

**Theorem 5.6**.: _For any open compact subgroup \(K\subset G(\mathbb{A})\), the space \(\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}^{L}(C,K))\) is finite-dimensional._

Proof.: First, we observe that the projection \(\operatorname{Bun}^{L}(C,K)\to\operatorname{Bun}^{L}(C)\) has finite fibers since \(G(\mathcal{O})/K\) is finite, while the projection \(\operatorname{Bun}^{L}(C)\to\operatorname{Bun}^{\overline{L}}(\overline{C})\) has finite fibers by Lemma 4.10. Now Proposition 5.4 implies that any cuspidal function on \(\operatorname{Bun}^{L}(C,K)\) is supported on the preimage of a finite subset in \(\operatorname{Bun}^{\overline{L}}(\overline{C})\), and the assertion follows.

### Cuspidality = weak Hecke-boundedness

For a rank 2 bundle \(\overline{V}\) on \(\overline{C}\) we denote by \(\mu(\overline{V})=\deg(\overline{V})/2\) its slope and by \(\phi_{-}(\overline{V})\) the smallest integer \(n\) such that there exists a surjection \(\overline{V}\to\overline{L}\), where \(\overline{L}\) is a line bundle of degree \(n\) on \(\overline{C}\).

**Lemma 5.7**.: _A function \(f\in\mathbb{C}(\operatorname{Bun}^{L}(C))\) is weakly Hecke-bounded if and only if there exists \(N>0\) such that for every collection of simple divisors \(c_{1},\dots,c_{n}\) on \(C\), for every \(V\) in the support of \(T_{c_{1}}\dots T_{c_{n}}f\) one has_

\[\phi_{-}(\overline{V})\geq\mu(\overline{V})-N.\]

Proof.: Assume first that \(f\) is weakly Hecke-bounded, and let \(S\subset\operatorname{Bun}(C)\) be a finite set, such that the support of \(T_{c_{1}}\dots T_{c_{n}}f\) is in \(\operatorname{Pic}(C)\otimes S\).
Set

\[\phi_{-}(S):=\min_{V\in S}\phi_{-}(\overline{V}),\qquad\mu_{+}(S):=\max_{V\in S}\mu(\overline{V}),\qquad N=\mu_{+}(S)-\phi_{-}(S).\]

Then for every \(V\in S\), we have \(\phi_{-}(\overline{V})\geq\mu(\overline{V})-N\). Note that for any bundle \(V_{0}\) over \(\overline{C}\) and any line bundle \(L_{0}\) on \(\overline{C}\) we have \(\phi_{-}(L_{0}\otimes V_{0})=\phi_{-}(V_{0})+\deg(L_{0})\), \(\mu(L_{0}\otimes V_{0})=\mu(V_{0})+\deg(L_{0})\). Hence, for every \(V\in\operatorname{Pic}(C)\otimes S\), one still has \(\phi_{-}(\overline{V})\geq\mu(\overline{V})-N\).

To show the "if" part, we just observe that for a fixed line bundle \(L\) on \(C\), the set \(S_{N}(L)\) of all \(V\in\operatorname{Bun}^{L}(C)\) with \(\phi_{-}(\overline{V})\geq\mu(\overline{V})-N\) is finite. Now let \(L_{1},\ldots,L_{m}\) be representatives in \(\operatorname{Pic}(C)/2\operatorname{Pic}(C)\). Then we can take

\[S\coloneqq S_{N}(L_{1})\cup\ldots\cup S_{N}(L_{m}).\]

Indeed, it suffices to show that any vector bundle \(V\) with \(\phi_{-}(\overline{V})\geq\mu(\overline{V})-N\) is contained in \(\operatorname{Pic}(C)\otimes S\). But we can find \(M\in\operatorname{Pic}(C)\) such that \(\det(V)\simeq M^{2}\otimes L_{i}\), so \(M^{-1}\otimes V\in S_{N}(L_{i})\), i.e., \(V\in M\otimes S_{N}(L_{i})\).

Now we can prove part (2) of Theorem C.

**Proposition 5.8**.: _One has_

\[\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}^{L}(C))=\mathcal{S}_{b}(\operatorname{Bun}^{L}(C)).\]

Proof.: Assume that \(f\) is cuspidal. Then, since Hecke operators \(T_{c}\) preserve \(\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}(C))\) (by Proposition A), we have \(T_{c_{1}}\ldots T_{c_{n}}(f)\in\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}^{L(-c_{1}-\ldots-c_{n})}(C))\), and the assertion follows (with \(N=3g\)) from Proposition 5.4.

Conversely, assume that for \(f\in\mathcal{S}(\operatorname{Bun}^{L}(C))\) there exists \(N\) as in Lemma 5.7. Consider the function \(\phi=E(f)=(E_{D}(f))_{D\geq 0}\) on \(\operatorname{QBun}_{T}(C)\). Recall that we have a natural projection \(\operatorname{QBun}_{T}(C)\to\operatorname{Pic}(\overline{C})\times\operatorname{Pic}(\overline{C})\) (see Sec. 4.2). Assume \(\phi\neq 0\). By assumption, for every \(V\) in the support of \(f\), we have \(\phi_{-}(\overline{V})\geq\deg(L)/2-N\). By Lemma 4.9, this implies that the set of numbers \(\deg(M_{0})\), where \((L_{0},M_{0})\) is a pair of line bundles associated with an element \(t\) in the support of \(\phi\), is bounded below by \(\deg(L)/2-N\).

Now let \(t\in\operatorname{QBun}_{T}(C)\) be a point in the support of \(\phi\) with minimal \((\deg(M_{0}),D)\) in the lexicographical order, where \((L_{0},M_{0})\) is a pair of line bundles on \(\overline{C}\) corresponding to \(t\), and \(t\in\operatorname{QBun}_{T}(C,D)\). Let us take a point \(\overline{c}\) not in the support of \(D\) and lift it to a simple divisor. Then the formula of Lemma 4.12 shows that the support of \(T_{c}^{n}\phi\) contains a point of \(\operatorname{QBun}_{T}(C,D)\) with the associated pair of line bundles \((L_{0},M_{0}(-nc))\). By Lemma 4.9, this implies that there exists some \(V\) in the support of \(T_{c}^{n}f\) such that \(\overline{V}\) surjects onto \(M_{0}(-nc)\), so \(\phi_{-}(\overline{V})\leq\deg(M_{0}(-nc))\). But \(T_{c}^{n}f\) is supported on \(\operatorname{Bun}^{L(-nc)}(C)\), so \(\mu(\overline{V})=(\deg(L)-n\deg(\overline{c}))/2\).
By assumption, we should have

\[\phi_{-}(\overline{V})\geq\frac{\deg(L)-n\deg(\overline{c})}{2}-N.\]

For large enough \(n\), we will have

\[\frac{\deg(L)-n\deg(\overline{c})}{2}-N>\deg(M_{0})-n\deg(\overline{c})=\deg(M_{0}(-nc)),\]

so we get a contradiction with the inequality \(\phi_{-}(\overline{V})\leq\deg(M_{0}(-nc))\). Hence, in fact, \(\phi=0\).

## 6. Cuspidal functions for \(\operatorname{PGL}_{2}\): dimension estimates and Hecke eigenfunctions

Throughout this section we assume that \(G=\operatorname{PGL}_{2}\), and we assume that the characteristic of \(k\) is \(\neq 2\). The main result of this section is an explicit description of the space \(\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))\) of cuspidal functions in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) in this case (see Theorem 6.7). Namely, we prove that \(\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))\) coincides with the subspace of finitary functions described in Section 3.7. First, in Sec. 6.1, we prove Corollary D on the equivalence of cuspidality with Hecke-finiteness for \(\operatorname{PGL}_{2}\) (using Theorem C proved in Section 5). This shows that cuspidal functions are finitary. To prove the converse we show directly that the finitary pieces of the orbit decomposition of \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) are cuspidal. In Sec. 6.4 we prove Theorems E and F. For both results the crucial role is played by the relation with the moduli space of Higgs bundles established in Prop. 3.20.

### Cuspidality and Hecke-finiteness

As a warm-up, let us consider the classical case of functions on \(\operatorname{Bun}_{G}(\overline{C},\overline{K})\), where \(\overline{K}\subset G(\overline{\mathbb{A}})\) is an open compact subgroup. We want to show that Hecke-finiteness implies cuspidality. We can assume that for some point \(p\in\overline{C}\), \(\overline{K}=G(\overline{\mathcal{O}}_{p})\times K^{\prime}\), where \(K^{\prime}\subset G(\overline{\mathcal{O}}^{\prime})\), with \(\overline{\mathcal{O}}^{\prime}\) the integral adeles for \(\overline{C}\setminus p\). Then we have the Hecke operator \(T_{p}\) acting on \(\mathbb{C}(\operatorname{Bun}_{G}(\overline{C},\overline{K}))\). In this situation we have the following result.

**Proposition 6.1**.: _Assume that a function \(f\) on \(\operatorname{Bun}^{\overline{L}}(\overline{C},\overline{K})\) is \(T_{p}\)-bounded, i.e., there exists a finite set \(S\subset\operatorname{Bun}(\overline{C},\overline{K})\) such that for any \(n\geq 0\), the support of \(T_{p}^{n}f\) is contained in \(S\otimes\operatorname{Pic}(\overline{C},\overline{K})\), where \(\operatorname{Pic}(\overline{C},\overline{K})=\operatorname{Bun}_{\mathbb{G}_{m}}(\overline{C},\overline{K})\). Then \(f\) is cuspidal._

Proof.: Since \(\overline{K}\subset G(\overline{\mathcal{O}})\), we have the natural projections

\[\operatorname{Bun}_{G}(\overline{C},\overline{K})\to\operatorname{Bun}_{G}(\overline{C}),\ \ (\deg_{1},\deg_{2}):T(\overline{F})U(\overline{\mathbb{A}})\backslash G(\overline{\mathbb{A}})/\overline{K}\to T(\overline{\mathbb{A}})/T(\overline{F})T(\overline{\mathcal{O}})=\operatorname{Pic}(\overline{C})^{2}.\]

We will use the compatibility of \(T_{p}\) with the constant term operator \(E\) (see Sec. 4.4). The condition that \(f\) is \(T_{p}\)-bounded implies that all the subsets \((\deg_{1}-\deg_{2})(\operatorname{supp}E(T_{p}^{n}f))\) are contained in a fixed finite subset of \(\mathbb{Z}\).
But Lemma 4.13 shows that if \((\deg_{1}-\deg_{2})(x)=m\) is maximal for some \(x\) in the support of \(E(T_{p}^{n}f)\) then there exists \(x^{\prime}\) in the support of \(E(T_{p}^{n+1}f)\) with \((\deg_{1}-\deg_{2})(x^{\prime})=m+1\). This shows that in fact \(Ef=0\), so \(f\) is cuspidal.

Now let us go back to the case of a nilpotent extension \(C\) of \(\overline{C}\) of length \(2\).

Proof of Corollary D.: Let \(K\subset\operatorname{GL}_{2}(\mathcal{O})\) be an open compact subgroup, and let \(\overline{K}\subset\operatorname{PGL}_{2}(\mathbb{A})\) be its image. It is easy to see that the embedding \(\pi^{*}:\mathbb{C}(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,\overline{K}))\hookrightarrow\mathbb{C}(\operatorname{Bun}_{\operatorname{GL}_{2}}(C,K))\), associated with the natural projection \(\pi:\operatorname{Bun}_{\operatorname{GL}_{2}}(C,K)\to\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,\overline{K})\), has the property that \(f\in\mathbb{C}(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,\overline{K}))\) is cuspidal if and only if \(\pi^{*}f\) is cuspidal. Furthermore, its image is contained in functions invariant under tensoring with line bundles.

Let \(Z\subset\operatorname{GL}_{2}\) denote the center (isomorphic to \(\mathbb{G}_{m}\)). Then the image of the composed map

\[Z(\mathbb{A})/(Z(F)\cdot Z(\mathbb{A})\cap K)\to\operatorname{Bun}_{\operatorname{GL}_{2}}(C,K)\to\operatorname{Pic}(C)\]

is exactly \(2\operatorname{Pic}(C)\), which has finite index in \(\operatorname{Pic}(C)\). Hence, there exist finitely many line bundles \(L_{1},\dots,L_{s}\), such that the pull-back of a function from \(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,\overline{K})\) is determined by its restrictions to \(\operatorname{Bun}_{\operatorname{GL}_{2}}^{L_{i}}(C,K)\), \(i=1,\dots,s\). Note that a function on \(\operatorname{Bun}_{\operatorname{GL}_{2}}(C,K)\) is cuspidal if and only if its restriction to each \(\operatorname{Bun}_{\operatorname{GL}_{2}}^{L}(C,K)\) is cuspidal. Indeed, this is clear from the form of the constant term operator: \(Ef(t)\) depends only on values \(f(g)\) with \(\det(g)=\det(t)\). Thus, we get an embedding

\[V(C,\overline{K})=\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,\overline{K}))\hookrightarrow\bigoplus_{i=1}^{s}\mathcal{S}_{\operatorname{cusp}}(\operatorname{Bun}_{\operatorname{GL}_{2}}^{L_{i}}(C,K)),\]

which implies that \(V(C,\overline{K})\) is finite-dimensional. Thus, for any cuspidal function \(f\) on \(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C)\), the space \(\mathcal{H}_{\operatorname{PGL}_{2},C}\cdot f\subset V(C)\) is finite-dimensional.

Conversely, assume that for a finitely supported function \(f\) on \(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C)\), the space spanned by \(T_{c_{1}}\dots T_{c_{n}}f\) is finite-dimensional. Then the pull-back \(\pi^{*}f\) to \(\operatorname{Bun}_{\operatorname{GL}_{2}}(C)\) has the property that all functions \(T_{c_{1}}\dots T_{c_{n}}(\pi^{*}f)\) are supported on \(\pi^{-1}(S)\), where \(S\subset\operatorname{Bun}_{\operatorname{PGL}_{2}}(C)\) is a fixed finite set. Hence, setting

\[\widetilde{S}=\cup_{i=1}^{s}\pi^{-1}(S)\cap\operatorname{Bun}_{\operatorname{GL}_{2}}^{L_{i}}(C),\]

we obtain a finite set \(\widetilde{S}\) such that all \(T_{c_{1}}\dots T_{c_{n}}(\pi^{*}f)\) are supported on \(\operatorname{Pic}(C)\otimes\widetilde{S}\). Hence, \(\pi^{*}f\) is weakly Hecke-bounded, so by Proposition 5.8, it is cuspidal. Therefore, \(f\) is cuspidal.
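For the reader's convenience, the converse direction just proved can be summarized by the following chain of implications (this is only a restatement of the argument above, not a new one):

\[f\ \text{Hecke-finite on }\operatorname{Bun}_{\operatorname{PGL}_{2}}(C)\ \Longrightarrow\ \pi^{*}f\ \text{weakly Hecke-bounded}\ \overset{\text{Prop. 5.8}}{\Longrightarrow}\ \pi^{*}f\ \text{cuspidal}\ \Longrightarrow\ f\ \text{cuspidal}.\]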
**Definition 6.2**.: We define the subspace of _cuspidal_ functions \(\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) by the condition

\[\int_{u\in U(\mathbb{A})/U(F)}f(ug)du=0.\]

Recall that we call a function \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\)_finitary_ if it is contained in an admissible \(G(\mathbb{A})\)-subrepresentation. Note that the subspace \(\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))\) is an admissible \(G(\mathbb{A})\)-representation, by Corollary D. Hence, we deduce the following result.

**Corollary 6.3**.: _Let \(f\) be in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\). If \(f\) is cuspidal then it is finitary._

### Orbit decomposition and strongly cuspidal functions

Since the characteristic of \(k\) is \(\neq 2\), every element of \(\mathfrak{g}(\overline{F})\) with nonzero determinant is regular semisimple. Thus, in addition to the zero orbit \(\Omega_{0}\) and the orbit \(\Omega_{n}\) of a nilpotent element, we have orbits parametrized by nonzero values of the determinant in the \(1\)-dimensional \(\overline{F}\)-vector space

\[H:=L^{-2}\omega_{\overline{C}}^{\otimes 2}(\overline{F}).\]

For \(d\in H\setminus\{0\}\), we denote by \(\Omega_{d}\) the orbit of elements \(x\) such that \(\det(x)=-d\). Note that \(d\neq 0\) is a square if and only if \(\Omega_{d}\) has a diagonal representative. Let us denote by \(H^{\prime}\subset H\) the elements which are not squares.

**Definition 6.4**.: Let us define the subspace of _strongly cuspidal_ functions \(\mathcal{S}_{\operatorname{str.cusp}}(G(F)\backslash G(\mathbb{A}))\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) by the condition

\[\int_{u\in U(\mathcal{N}_{\mathbb{A}})/U(\mathcal{N}_{F})}f(ug)du=0.\]

Note that the subspace \(\mathcal{S}_{\mathrm{str.cusp}}(G(F)\backslash G(\mathbb{A}))\) is preserved by \(G(\mathbb{A})\) and is contained in \(\mathcal{S}_{\mathrm{cusp}}(G(F)\backslash G(\mathbb{A}))\).

**Proposition 6.5**.: _One has_

\[\mathcal{S}_{\mathrm{str.cusp}}(G(F)\backslash G(\mathbb{A}))=\bigoplus_{d\in H^{\prime}}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}.\]

Proof.: Note that since \((1+\epsilon\mathfrak{u}(\mathcal{N}_{\mathbb{A}}))\) commutes with \(N_{\mathbb{A}}=(1+\epsilon\mathfrak{g}(\mathcal{N}_{\mathbb{A}}))\), the projectors \(\Pi_{\Omega}\) preserve the subspace of strongly cuspidal functions. Thus, we have a decomposition

\[\mathcal{S}_{\mathrm{str.cusp}}(G(F)\backslash G(\mathbb{A}))=\bigoplus_{\Omega\neq\Omega_{0}}(\mathcal{S}_{\mathrm{str.cusp}}(G(F)\backslash G(\mathbb{A}))\cap\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}).\]

Assume first that \(\Omega=\Omega_{d}\), where \(d\in H\) is not a square. Then we claim that any function \(f\) in \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega}\) is strongly cuspidal.
It is enough to prove that one has

\[\int_{a\in\mathfrak{u}(\mathcal{N}_{\mathbb{A}})/\mathfrak{u}(\mathcal{N}_{F})}\Pi_{\Omega}f((1+a)g)da=0.\]

We can rewrite this expression as

\[\sum_{\eta\in\Omega}\int_{X\in\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/\mathfrak{g}(\mathcal{N}_{F})}\int_{a\in\mathfrak{u}(\mathcal{N}_{\mathbb{A}})/\mathfrak{u}(\mathcal{N}_{F})}\psi_{\eta}(-X)f((1+X+a)g)dadX=\]

\[\sum_{\eta\in\Omega}\int_{X\in\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/\mathfrak{g}(\mathcal{N}_{F})}\int_{a\in\mathfrak{u}(\mathcal{N}_{\mathbb{A}})/\mathfrak{u}(\mathcal{N}_{F})}\psi_{\eta}(a)\psi_{\eta}(-X)f((1+X)g)dadX.\]

Now we observe that \(\int_{a\in\mathfrak{u}(\mathcal{N}_{\mathbb{A}})/\mathfrak{u}(\mathcal{N}_{F})}\psi_{\eta}(a)da=0\) unless the restriction \(\psi_{\eta}|_{\mathfrak{u}(\mathcal{N}_{\mathbb{A}})}\) is trivial, i.e., \(\eta\) is upper-triangular. Since \(d\) is not a square, this never happens, and our claim follows.

Next, assume that either \(\Omega=\Omega_{0}\) or \(\Omega=\Omega_{n}\) or \(\Omega=\Omega_{d}\), where \(d\) is a square. Then we claim that for any strongly cuspidal function \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\), one has \(\Pi_{\Omega}(f)=0\). Indeed, let \(\eta\in\Omega\) be an upper-triangular representative. Then for any \(g_{0}\in G(F)\), we have

\[\Pi_{g_{0}\eta}f(g)=\int_{\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/\mathfrak{g}(\mathcal{N}_{F})}\psi_{g_{0}\eta}(-X)f((1+X)g)dX=\]

\[\int_{\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/\mathfrak{g}(\mathcal{N}_{F})}\psi_{\eta}(-\mathrm{Ad}(g_{0}^{-1})(X))f((1+X)g)dX=\int_{\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/\mathfrak{g}(\mathcal{N}_{F})}\psi_{\eta}(-X)f(g_{0}(1+X)g_{0}^{-1}g)dX=\]

\[\int_{\mathfrak{g}(\mathcal{N}_{\mathbb{A}})/(\mathfrak{g}(\mathcal{N}_{F})+\mathfrak{u}(\mathcal{N}_{\mathbb{A}}))}\int_{a\in\mathfrak{u}(\mathcal{N}_{\mathbb{A}})/\mathfrak{u}(\mathcal{N}_{F})}\psi_{\eta}(-X)f((1+X+a)g_{0}^{-1}g)dadX,\]

where we used the fact that \(\psi_{\eta}\) is trivial on \(\mathfrak{u}(\mathcal{N}_{\mathbb{A}})\). Now integrating over \(a\) gives zero, since \(f\) is strongly cuspidal. Hence, \(\Pi_{g_{0}\eta}f(g)=0\), and so \(\Pi_{\Omega}f=0\).

### Orbit/character decomposition and cuspidal functions

**Lemma 6.6**.: _For any nonzero \(\alpha\in\omega_{\overline{C}}(\overline{F})\), the subrepresentation \(\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{n}}\) is cuspidal, i.e., contained in \(\mathcal{S}_{\mathrm{cusp}}(G(F)\backslash G(\mathbb{A}))\)._

Proof.: Recall that an embedding of \(\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\) into \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) is given by the map

\[\kappa=\kappa_{\eta_{0}}:f\mapsto\kappa f(g)=\sum_{\gamma\in U(F)\backslash G(F)}f(\gamma g).\]

For any \(f\in\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\) one has

\[\int_{U(\mathbb{A})/U(F)}(\kappa f)(ug)du=\sum_{\gamma\in U(F)\backslash G(F)}\int_{U(\mathbb{A})/U(F)}f(\gamma ug)du.\]

Note that if \(\operatorname{Ad}(\gamma^{-1})\eta_{0}\) is not upper-triangular, then the restriction \(\psi_{\operatorname{Ad}(\gamma^{-1})\eta_{0}}|_{N_{\mathbb{A}}\cap U(\mathbb{A})}\) is nontrivial. Hence in this case we get

\[\int_{N_{\mathbb{A}}\cap U(\mathbb{A})/(N_{F}\cap U(F))}f(\gamma ug)du=\int\psi_{\eta_{0}}(\gamma u\gamma^{-1})f(\gamma g)du=0,\]

and therefore the integral over \(U(\mathbb{A})/U(F)\) of such a term is also zero.
Thus, in the above sum we can restrict to \(\gamma\) such that \(\operatorname{Ad}(\gamma^{-1})(\eta_{0})\) is upper-triangular, i.e., \(\gamma\) itself is upper-triangular. But then \(\gamma\) normalizes \(U(\mathbb{A})\), so we have

\[\int_{U(\mathbb{A})/U(F)}f(\gamma ug)du=\int_{U(\mathbb{A})/U(F)}\chi_{\alpha}(\gamma u\gamma^{-1})f(\gamma g)du=0,\]

since the character \(\chi_{\alpha}\) is nontrivial.

We can now combine our results to give a description of the subspace of cuspidal functions on \(G(F)\backslash G(\mathbb{A})\).

**Theorem 6.7**.: _A function \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))\) is cuspidal if and only if it is finitary. Hence, one has a decomposition_

\[\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))=\mathcal{S}_{f}(G(F)\backslash G(\mathbb{A}))=\mathcal{S}_{\operatorname{cusp}}(G(\overline{F})\backslash G(\overline{\mathbb{A}}))\oplus\bigoplus_{\alpha\neq 0}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\oplus\mathcal{S}_{\operatorname{str.cusp}}(G(F)\backslash G(\mathbb{A})), \tag{6.1}\]

_where_

\[\mathcal{S}_{\operatorname{str.cusp}}(G(F)\backslash G(\mathbb{A}))=\bigoplus_{d\in H^{\prime}}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}. \tag{6.2}\]

Proof.: Recall that the decomposition (6.2) was established in Prop. 6.5. We already know that every cuspidal function is finitary (see Corollary 6.3), so we need to prove the converse. By Theorem 3.22, we have a decomposition

\[\mathcal{S}_{f}(G(F)\backslash G(\mathbb{A}))=\mathcal{S}_{f}(G(\overline{F})\backslash G(\overline{\mathbb{A}}))\oplus\bigoplus_{\alpha\neq 0}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\oplus\bigoplus_{d\in H^{\prime}}\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}.\]

We need to check that each summand in the right-hand side is contained in \(\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A}))\). For \(\mathcal{S}_{f}(G(\overline{F})\backslash G(\overline{\mathbb{A}}))\) this follows from Prop. 6.1, for \(\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}\) from Lemma 6.6, and for \(\mathcal{S}(G(F)\backslash G(\mathbb{A}))_{\Omega_{d}}\) from (6.2).

We get the following strengthening of the second part of Corollary D.

**Corollary 6.8**.: _For any open compact subgroup \(K\subset G(\mathbb{A})\), if \(f\in\mathcal{S}(G(F)\backslash G(\mathbb{A}))^{K}\) is such that \(\mathcal{H}_{K}\cdot f\) is finite-dimensional then \(f\) is cuspidal._

Proof.: Set \(V=V_{f,K}:=\cup_{K^{\prime}\subset K}\mathcal{H}_{K^{\prime}}\cdot f\subset\mathcal{S}(G(F)\backslash G(\mathbb{A}))\), where the union is over all open compact subgroups \(K^{\prime}\subset K\). Since each such \(K^{\prime}\) has finite index in \(K\), the spaces \(\mathcal{H}_{K^{\prime}}\cdot f\) are finite-dimensional. It is easy to see that \(V\) is a \(G(\mathbb{A})\)-subrepresentation. Indeed, for any \(f^{\prime}\in\mathcal{H}_{K^{\prime}}\cdot f\) we have \(V_{f^{\prime},K^{\prime}}\subset V_{f,K}\), so it is enough to prove that for any \(g\in G(\mathbb{A})\), one has \(gf\in V\). We can find \(K^{\prime}\) such that \(gf\) is invariant under \(K^{\prime}\). Then \(h_{K^{\prime}gK^{\prime}}(f)\in\mathcal{H}_{K^{\prime}}\cdot f\) is proportional to \(gf\). We claim that \(V^{K^{\prime}}\subset\mathcal{H}_{K^{\prime}}\cdot f\). Indeed, it is enough to prove that \((\mathcal{H}_{K^{\prime}}\cdot f)^{K}\subset\mathcal{H}_{K}\cdot f\).
Indeed, any element of \(\mathcal{H}_{K^{\prime}}\cdot f\) is a linear combination of elements of the form \(A_{K^{\prime}}gf\), where \(g\in G(\mathbb{A})\), and \(A_{K^{\prime}}\) denotes the averaging operator over \(K^{\prime}\). When we apply \(A_{K}\) to such elements we get elements of the form \(A_{K}gf\), which lie in \(\mathcal{H}_{K}\cdot f\). Thus, \(V\) is an admissible \(G(\mathbb{A})\)-subrepresentation containing \(f\). Hence, by Theorem 6.7, \(f\) is cuspidal.

By passing to \(K\)-invariants in (6.1), where \(K\subset G(\mathbb{A})\) is an open compact subgroup, we obtain a decomposition of the space \(V(C,K)\) of cuspidal functions on \(\operatorname{Bun}_{\operatorname{PGL}_{2}}(C,K)\).

**Corollary 6.9**.: _One has a direct sum decomposition preserved by the Hecke algebra \(\mathcal{H}_{K}\),_

\[V(C,K)=V(\overline{C},\overline{K})\oplus V(C,K)_{n}\oplus V^{1}(C,K),\]

_where \(V^{1}(C,K)\subset V(C,K)\) is the space of strongly cuspidal functions and \(V(C,K)_{n}=\bigoplus_{\alpha\neq 0}\widetilde{\mathcal{S}}_{\eta_{0},\chi_{\alpha}}^{K}\)._

### Proofs of Theorems E and F

Recall that we are studying the spaces

\[V(C):=\mathcal{S}_{\operatorname{cusp}}(G(F)\backslash G(\mathbb{A})/G(\mathcal{O}))\text{ and }V^{1}(C):=\mathcal{S}_{\operatorname{str.cusp}}(G(F)\backslash G(\mathbb{A})/G(\mathcal{O})),\]

where \(G=\operatorname{PGL}_{2}\). We start by determining the space \(V^{1}(C)\) of strongly cuspidal functions. Note that for \(\operatorname{PGL}_{2}\) semisimple adjoint orbits are classified by the value of the determinant. Thus, using (6.2) and Lemma 3.13, we can write

\[V^{1}(C)=\bigoplus_{d\in H^{\prime}}\widetilde{\mathcal{S}}_{\Omega_{d}}^{G(\mathcal{O})},\]

where \(H^{\prime}\) is the set of non-squares in \(H=\omega_{\overline{C}}^{2}L^{-2}(\overline{F})\), and \(\Omega_{d}\) is the orbit corresponding to \(d\in H^{\prime}\). Furthermore, by Proposition 3.20, we have an isomorphism

\[\widetilde{\mathcal{S}}_{\Omega_{d}}\simeq\mathcal{S}(\mathcal{M}_{d}^{Higgs,L^{-1}}(\overline{C})(k),\mathcal{L}_{\psi}),\]

where \(\mathcal{M}_{d}^{Higgs,L^{-1}}(\overline{C})\) is the moduli stack of \(L^{-1}\)-twisted Higgs \(\operatorname{PGL}_{2}\)-bundles \((V,\phi)\) with \(-\det(\phi)=d\) (and \(\mathcal{L}_{\psi}\) is a certain \(\mathbb{C}^{*}\)-torsor). Thus, \(\widetilde{\mathcal{S}}_{\Omega_{d}}\neq 0\) only when \(d\) comes from a global section \(\alpha\) of

\[A:=H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-2}),\]

and the condition \(d\in H^{\prime}\) is equivalent to the condition \(\alpha\in A^{\prime}\subset A\), the complement to the image of the map

\[H^{0}(\overline{C},\omega_{\overline{C}}L^{-1})\to A:\alpha\mapsto\alpha^{2}.\]

In other words, we have a decomposition

\[V^{1}(C)=\bigoplus_{\alpha\in A^{\prime}}\mathcal{S}(HF_{\alpha}(k),\mathcal{L}_{\psi}), \tag{6.3}\]

where \(HF_{\alpha}\) is the Hitchin fiber over \(\alpha\), i.e., the fiber of the Hitchin map

\[\mathcal{M}^{Higgs,L^{-1}}(\overline{C})\to A=H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-2}):(V,\phi)\mapsto-\det(\phi).\]

In particular, to estimate the dimension of \(V^{1}(C)\), we need to count (\(L^{-1}\)-twisted) Higgs \(\operatorname{PGL}_{2}\)-bundles over \(A^{\prime}\subset A\).
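Note that, since \(\mathcal{S}(HF_{\alpha}(k),\mathcal{L}_{\psi})\) is a space of sections of a \(\mathbb{C}^{*}\)-torsor over a finite set of isomorphism classes, its dimension equals the number of isomorphism classes of \(k\)-points of \(HF_{\alpha}\). Thus (6.3) can be read as the count

\[\dim V^{1}(C)=\sum_{\alpha\in A^{\prime}}|HF_{\alpha}(k)|,\]

where \(|HF_{\alpha}(k)|\) denotes the number of isomorphism classes. This is only a heuristic reformulation of (6.3); it is made precise, with the stable and strictly semistable loci treated separately, in the proof of Theorem E below.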
In what follows we will use the well-known fact that every \(\operatorname{PGL}_{2}\)-bundle over \(\overline{C}\) comes from a \(\operatorname{GL}_{2}\)-bundle, and that the \(\operatorname{PGL}_{2}\)-bundles associated with \(V_{1}\) and \(V_{2}\) are isomorphic if and only if there exists a line bundle \(M\) over \(\overline{C}\) such that \(V_{1}\simeq V_{2}\otimes M\). All of this follows, e.g., from the adelic descriptions and the fact that any \(\operatorname{PGL}_{2}\)-bundle over \(\overline{C}\) is trivial at the general point (see [1, Prop. 4.5]).

**Lemma 6.10**.: _Let \((V,\phi)\) be an \(L^{-1}\)-twisted Higgs \(\operatorname{PGL}_{2}\)-bundle. Then \(-\det(\phi)_{\eta}\in H^{\prime}\) if and only if_

* _either_ \((V,\phi)\) _is stable and belongs to_ \(h^{-1}(A^{\prime})\)_,_
* _or_ \((V,\phi)=(L,\phi_{1})\oplus(\sigma(L),-\phi_{1})\)_, where_ \(L\) _is a line bundle defined over the quadratic extension_ \(k^{\prime}\) _of_ \(k\)_,_ \(\sigma\) _is the nontrivial involution in_ \(\operatorname{Gal}(k^{\prime}/k)\)_, and_ \(\phi_{1}\in H^{0}\bigl{(}\overline{C}_{k^{\prime}},\omega\otimes L^{-1}\bigr{)}\smallsetminus\{0\}\) _is such that_ \(\sigma(\phi_{1})=-\phi_{1}\)_._

Proof.: Assume that \((V,\phi)\) is such that \(-\det(\phi)_{\eta}\in H^{\prime}\). Then we claim that there are no line subbundles \(L\subset V\) defined over \(k\) (where we represent \(V\) by a rank \(2\) bundle) preserved by \(\phi\). Indeed, we have \(\operatorname{tr}(\phi)=\phi|_{L}+\phi|_{V/L}=0\), so this would imply that

\[\det(\phi)=\det(\phi|_{L})\cdot\det(\phi|_{V/L})=-\det(\phi|_{L})^{2},\]

a contradiction. Hence, \((V,\phi)\) is semistable (since a destabilizing subbundle would be defined over \(k\)) and belongs to \(h^{-1}(A^{\prime})\). Conversely, if \((V,\phi)\) is in \(h^{-1}(A^{\prime})\) then \(-\det(\phi)\in H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-2})\) is not a square at the generic point, so \(-\det(\phi)_{\eta}\) is in \(H^{\prime}\).

Now assume that \((V,\phi)\) is strictly semistable and \(-\det(\phi)_{\eta}\in H^{\prime}\). Then there exists a line subbundle \(L\subset V\) over an algebraic closure of \(k\), preserved by \(\phi\), such that \(\deg(V/L)=\deg(L)=\deg(V)/2\). As we have seen above, \(L\subset V\) cannot be defined over \(k\). If \(M\subset V\) is a different line subbundle (defined over an algebraic closure) preserved by \(\phi\), with \(\deg(M)=\deg(L)\), then the composed map \(M\to V\to V/L\) is an isomorphism, so we get a decomposition of Higgs bundles

\[(V,\phi)=(L,\phi|_{L})\oplus(M,\phi|_{M}).\]

Since \(L\subset V\) has at least one Galois conjugate different from it, we do have such a decomposition. Furthermore, since \(\operatorname{tr}(\phi)=0\), we have \(\phi|_{L}=-\phi|_{M}\). This implies that there are no other line subbundles preserved by \(\phi\). Hence, \(L\subset V\) has only two conjugates under the Galois group, so it is defined over the quadratic extension \(k^{\prime}\) of \(k\), and \((M,\phi|_{M})=\sigma(L,\phi|_{L})\). Note that \(\phi|_{L}=\phi_{1}\in H^{0}\bigl{(}\overline{C}_{k^{\prime}},\omega\otimes L^{-1}\bigr{)}\smallsetminus\{0\}\), and \(\phi|_{\sigma(L)}=\sigma(\phi_{1})=-\phi_{1}\). Finally, we observe that \(-\det(\phi)=-\phi_{1}\sigma(\phi_{1})=\phi_{1}^{2}\) is not a square of an element in \(H^{0}(\overline{C},\omega_{\overline{C}}L^{-1})\) (otherwise \(\phi_{1}\) would be defined over \(k\)).
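For the reader's convenience, the first step of the above proof can be recorded in matrix form (this is just a restatement of the displayed computation): if \(\phi\) preserves a line subbundle \(L\subset V\) defined over \(k\), then locally

\[\phi=\begin{pmatrix}\phi|_{L}&*\\ 0&\phi|_{V/L}\end{pmatrix},\qquad\operatorname{tr}(\phi)=\phi|_{L}+\phi|_{V/L}=0,\qquad-\det(\phi)=-\phi|_{L}\cdot\phi|_{V/L}=(\phi|_{L})^{2},\]

so \(-\det(\phi)_{\eta}\) is a square, contradicting \(-\det(\phi)_{\eta}\in H^{\prime}\).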
Next, let us consider the piece corresponding to the nonzero nilpotent orbit, i.e., the space

\[V(C)_{n}:=\bigoplus_{\chi\neq 1}\widetilde{\mathcal{S}}_{\eta_{0},\chi}^{G(\mathcal{O})},\]

where \(\eta_{0}\) is a nonzero nilpotent element in \(\mathfrak{g}\otimes\omega_{\overline{C}}L^{-1}(\overline{F})\), and \(\chi\) runs over nontrivial characters of \(U(\overline{\mathbb{A}})/U(\overline{F})\simeq\overline{\mathbb{A}}/\overline{F}\).

**Proposition 6.11**.: _One has_

\[\dim V(C)_{n}=(q-1)\cdot\sum_{D\in|H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-1})|}N(D),\]

_where for an effective divisor \(D=\sum n_{i}p_{i}\), \(N(D)=\prod_{i}(n_{i}+1)\)._

Proof.: Let us consider the support of a function \(f\in\widetilde{\mathcal{S}}_{\eta_{0},\chi}^{G(\mathcal{O})}\) in \(U(\overline{\mathbb{A}})\backslash G(\overline{\mathbb{A}})_{\eta_{0}}/G(\overline{\mathcal{O}})\). By the Iwasawa decomposition, it is enough to consider diagonal representatives \(t=\operatorname{diag}(a,1)\), where \(a\in\overline{\mathbb{A}}^{*}\). Let us fix a nonzero \(\alpha_{0}\in H^{0}(\overline{C},\omega_{\overline{C}}L^{-1})\), and take as \(\eta_{0}\) the corresponding element (see (3.4)). Then the condition \(t\in G(\overline{\mathbb{A}})_{\eta_{0}}\) is equivalent to \(a^{-1}\alpha_{0}\in\mathfrak{g}\otimes\omega_{\overline{C}}L^{-1}(\overline{\mathcal{O}})\). Note that for \(u\in U(\overline{\mathbb{A}})\), the condition \(t^{-1}ut\in G(\overline{\mathcal{O}})\) is equivalent to \(u\in U(a\overline{\mathcal{O}})\). Thus, the condition

\[f(t)=f(t(t^{-1}ut))=f(ut)=\chi(u)f(t)\]

for such \(u\) implies that \(f(t)\neq 0\) only if \(\chi|_{a\overline{\mathcal{O}}}\equiv 1\). Conversely, for every \(t=\operatorname{diag}(a,1)\in G(\overline{\mathbb{A}})_{\eta_{0}}\) such that \(\chi|_{a\overline{\mathcal{O}}}\equiv 1\), there is a unique function \(f_{t}\in\widetilde{\mathcal{S}}_{\eta_{0},\chi}^{G(\mathcal{O})}\), supported on \(U(\mathbb{A})tG(\mathcal{O})\) and such that \(f_{t}(t)=1\). We can also rescale \(a\) by elements of \(\overline{\mathcal{O}}^{*}\), so we can assume that \(a_{p}=t_{p}^{n_{p}}\), where \(t_{p}\) are fixed generators of the maximal ideals at points \(p\in\overline{C}\).

Let us also fix a nonzero \(\beta_{0}\in H^{0}(\overline{C},\omega_{\overline{C}})\). We can write every character of \(U(\overline{\mathbb{A}})/U(\overline{F})=\overline{\mathbb{A}}/\overline{F}\) as

\[\chi(x)=\chi_{f\beta_{0}}(x)=\psi_{\overline{C}}(xf\beta_{0}).\]

Then \(\chi|_{a\overline{\mathcal{O}}}\equiv 1\) means that \(af\beta_{0}\in\omega_{\overline{C}}(\overline{\mathcal{O}})\). The conditions that \(a^{-1}\alpha_{0}\) and \(af\beta_{0}\) are regular imply that \(f\alpha_{0}\beta_{0}\) is regular, hence \(f\alpha_{0}\beta_{0}\) is a nonzero global section of \(\omega_{\overline{C}}^{2}L^{-1}\). Once we fix \(f\), the conditions on \(a\) are equivalent to

\[-v_{p}(f\beta_{0})\leq n_{p}\leq v_{p}(\alpha_{0}).\]

Thus, for every point \(p\), we have \(v_{p}(f\alpha_{0}\beta_{0})+1\) choices for \(n_{p}\). Multiplying over all points \(p\) gives \(N(D)\) choices, where \(D\) is the divisor of zeros of \(f\alpha_{0}\beta_{0}\).

Proof of Theorem E.: By Theorem 6.7, we have

\[\dim V(C)=\dim V(\overline{C})+\dim V(C)_{n}+\dim V^{1}(C),\]

where \(V(\overline{C})=\mathcal{S}_{\mathrm{cusp}}(G(\overline{F})\backslash G(\overline{\mathbb{A}})/G(\overline{\mathcal{O}}))\). We will show that \(\dim V^{1}(C)\) has the required asymptotics, while the two other dimensions are of order of magnitude \(q^{3g-3}\).
For the first term we have \(\dim V(\overline{C})\leq a(g)\cdot q^{3g-3}\) for some constant \(a(g)\) (see [13, Sec. 3.3]).

Next, we can apply Lemma 6.10 to estimate the dimension of \(V^{1}(C)\). Namely, it shows that

\[\dim V^{1}(C)=|h^{-1}(A^{\prime})(k)|+|X(k)|,\]

where \(X\) is the set of equivalence classes of pairs \((L,\alpha)\) over \(\overline{C}_{k^{\prime}}\), where \(k^{\prime}\) is the quadratic extension of \(k\), with \(\sigma(\alpha)=-\alpha\), \(\alpha\neq 0\) (the pairs \((L,\alpha)\) and \((L^{\prime},\alpha^{\prime})\) are equivalent if either \((L^{\prime},\alpha^{\prime})\simeq(L\otimes M,\alpha)\) or \((L^{\prime},\alpha^{\prime})\simeq(\sigma(L)\otimes M,-\alpha)\), where \(M\in\operatorname{Pic}(\overline{C})\)). The number of \(\alpha\in H^{0}(\overline{C}_{k^{\prime}},\omega\otimes L^{-1})\) such that \(\sigma(\alpha)=-\alpha\) is \(\leq q^{g}\). Also, \(|J_{\overline{C}}(k^{\prime})|/|J_{\overline{C}}(k)|\leq c\cdot q^{g}\) (where \(c=c(g)\)). Hence,

\[|X(k)|\leq c\cdot q^{2g}.\]

Now let \(\mathcal{U}\subset h^{-1}(A^{\prime})\) denote the open substack of stable Higgs \(\operatorname{PGL}_{2}\)-bundles with trivial automorphisms. Then \(|\mathcal{U}(k)|\) is equal to the number of \(k\)-points in the corresponding open subset in the coarse moduli space of stable \(L^{-1}\)-twisted Higgs bundles, which is a variety of dimension \(6g-6\) with two connected components (both geometrically irreducible). Hence, by the Lang-Weil estimate [12], we get

\[\Big{|}\frac{|\mathcal{U}(k)|}{q^{6g-6}}-2\Big{|}\leq b(g)q^{-1/2}.\]

Next, we need to estimate the number of stable Higgs \(\operatorname{PGL}_{2}\)-bundles with nontrivial automorphisms. We observe that if \((V,\phi)\) is such a Higgs bundle then there is an isomorphism \(V\simeq V\otimes M\) for some nontrivial line bundle \(M\) such that \(M^{\otimes 2}\simeq\mathcal{O}\). But this implies that \(V\) comes from a line bundle on the corresponding unramified covering \(\widetilde{C}\to\overline{C}\), so the number of possible \(\operatorname{PGL}_{2}\)-bundles arising in this way is \(\leq c\cdot q^{g-1}\). Hence, the number of stable Higgs \(\operatorname{PGL}_{2}\)-bundles with nontrivial automorphisms is \(\leq c\cdot q^{4g-4}\). Combining the steps above, we conclude that

\[\Big{|}\frac{\dim V^{1}(C)}{q^{6g-6}}-2\Big{|}\leq b(g)q^{-1/2},\]

for some constant \(b(g)\).

Finally, we use Proposition 6.11 to estimate the dimension of \(V(C)_{n}\). We claim that there exists a function \(c(d)\), such that \(N(D)\leq c(\deg(D))\) for any effective divisor \(D\). Indeed, let \(D=\sum_{i=1}^{s}n_{i}p_{i}\), where all \(n_{i}>0\). Then we have \(\sum n_{i}\deg(p_{i})=d\). Therefore,

\[\prod(n_{i}+1)\leq(\frac{1}{s}\sum(n_{i}+1))^{s}\leq(d+1)^{d}\]

(since \(\sum n_{i}\leq d\) and \(s\leq d\)). Since \(\dim H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-1})=3g-3\), Proposition 6.11 gives

\[\dim V(C)_{n}\leq c(g)q^{3g-3},\]

for some constant \(c(g)\).

Now let us consider the case of a Hitchin fiber \(HF_{\alpha}\) in the moduli stack of \(L^{-1}\)-twisted Higgs \(\operatorname{PGL}_{2}\)-bundles over \(\overline{C}\), associated with \(\alpha\in A^{\prime}\subset H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-2})\) that has only simple zeros. Recall that in this case the corresponding spectral curve \(\pi:C_{\alpha}\to\overline{C}\) is smooth. As is well-known, in this case we have an identification \(HF_{\alpha}(k)\simeq\operatorname{Pic}(C_{\alpha})/\pi^{*}\operatorname{Pic}(\overline{C})\), so \(HF_{\alpha}(k)\) has a group structure.
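For orientation, let us note the expected size of these fibers (a heuristic count; here we assume \(\deg(L)=0\), which is consistent with the equality \(\dim H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-1})=3g-3\) used above). Then \(\alpha\) has \(\deg(\omega_{\overline{C}}^{2}L^{-2})=4g-4\) simple zeros, and since the characteristic of \(k\) is \(\neq 2\), the Riemann-Hurwitz formula for the double cover \(\pi:C_{\alpha}\to\overline{C}\) branched at these zeros gives

\[2g(C_{\alpha})-2=2(2g-2)+(4g-4),\qquad\text{i.e.}\qquad g(C_{\alpha})=4g-3,\]

so \(\operatorname{Pic}(C_{\alpha})/\pi^{*}\operatorname{Pic}(\overline{C})\) is governed by an abelian variety of dimension \(g(C_{\alpha})-g=3g-3\); together with \(\dim A=3g-3\), this agrees with the asymptotics \(\dim V^{1}(C)\sim 2q^{6g-6}\) obtained above.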
As before, we will use an identification

\[\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}\simeq\mathcal{S}(HF_{\alpha},\mathcal{L}_{\psi}),\]

for some natural \(\mathbb{C}^{*}\)-torsor \(\mathcal{L}_{\psi}\), where \(\eta\) is an element in \(\mathfrak{sl}_{2}(\overline{F})\) with \(\det(\eta)=-\alpha\). Theorem F is implied by the following result.

**Theorem 6.12**.: _Assume the characteristic of \(k\) is \(\neq 2\), and that \(\alpha\in A^{\prime}\subset H^{0}(\overline{C},\omega_{\overline{C}}^{2}L^{-2})\) has only simple zeros. Then there exists a commutative group extension_

\[1\to U(1)\to H_{U(1)}\to HF_{\alpha}(k)\to 1,\]

_and an action of \(H_{U(1)}\) on the \(\mathbb{C}^{*}\)-torsor \(\mathcal{L}_{\psi}\) over \(HF_{\alpha}(k)\) (where \(U(1)\) acts naturally), compatible with the action of \(HF_{\alpha}(k)\) on itself by shifts. Furthermore, there is an \(\mathcal{H}_{\mathrm{PGL}_{2},C}\)-eigenbasis \((f_{\chi})\) in \(\mathcal{S}(HF_{\alpha},\mathcal{L}_{\psi})\) numbered by characters \(\chi\) of \(H_{U(1)}\) extending the identity character of \(U(1)\), where \(hf_{\chi}=\chi(h)f_{\chi}\) for \(h\in H_{U(1)}\)._

Proof.: We can identify \(\pi_{*}\mathcal{O}_{C_{\alpha}}\) with \(\mathcal{A}=\mathcal{O}_{\overline{C}}\oplus\omega_{\overline{C}}^{-1}L\cdot t\), where \(t\) is a formal variable such that \(t^{2}=\alpha\). This induces an identification of the groups of units \(\mathcal{A}^{*}(\overline{F}_{p})\) and \(\mathcal{A}^{*}(\overline{F})\) with the groups of \(\overline{F}_{p}\)- or \(\overline{F}\)-points of the stabilizer \(\widetilde{T}_{\eta}\) of \(\eta\) in \(\mathrm{GL}_{2}\), where we realize \(\mathrm{GL}_{2}(\overline{F})\) as \(\overline{F}\)-automorphisms of the generic stalk \(\overline{F}\oplus\overline{F}\cdot t\) of \(\mathcal{A}\). This induces an identification of \(\mathrm{Pic}(C_{\alpha})\) with \(\widetilde{T}_{\eta}(\overline{\mathbb{A}})/\widetilde{T}_{\eta}(\overline{\mathcal{O}})\widetilde{T}_{\eta}(\overline{F})\), and of \(\mathrm{Pic}(C_{\alpha})/\pi^{*}\,\mathrm{Pic}(\overline{C})\) with the similar group associated with \(T_{\eta}\subset\mathrm{PGL}_{2}\).

Recall that by Proposition 3.20(3) we have a commutative extension \(H_{\eta}\) of \(T_{\eta}(\overline{\mathbb{A}})\) by \(\omega_{\overline{C}}(\overline{\mathbb{A}})\) acting on the \(\mathbb{C}^{*}\)-torsor \(\mathcal{L}_{\psi}\) over \(HF_{\alpha}(k)\), such that \(\omega_{\overline{C}}(\overline{\mathbb{A}})\) acts by \(\psi_{\overline{C}}^{-1}\). It is easy to see that this action on \(\mathcal{L}_{\psi}\) factors through the action of the induced extension \(H_{U(1)}\) of \(T_{\eta}(\overline{\mathbb{A}})/T_{\eta}(\overline{\mathcal{O}})T_{\eta}(\overline{F})\simeq\mathrm{Pic}(C_{\alpha})/\pi^{*}\,\mathrm{Pic}(\overline{C})\) by \(U(1)\). Now we observe that since \(C_{\alpha}\) is smooth, the action of \(\mathrm{Pic}(C_{\alpha})/\pi^{*}\,\mathrm{Pic}(\overline{C})\) on \(HF_{\alpha}(k)\) is simply transitive. Thus, we have a basis \((f_{\chi})\) with the claimed properties. The fact that the subspaces \(\mathbb{C}\cdot f_{\chi}\) are preserved by the \(\mathcal{H}_{\mathrm{PGL}_{2},C}\)-action follows from the fact that they can be identified with the summands of the decomposition

\[\widetilde{\mathcal{S}}_{\eta}^{G(\mathcal{O})}=\bigoplus_{\chi}\widetilde{\mathcal{S}}_{\alpha,\chi}^{G(\mathcal{O})}\]

(see Proposition 3.17).

## Appendix A. Some results on groupoids

### Push-forward

For a small groupoid \(\Gamma\) we denote by \(\mathbb{C}(\Gamma)\) the space of \(\mathbb{C}\)-valued functions on the set of isomorphism classes of \(\Gamma\).
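For example (a trivial special case, included for orientation): if \(\Gamma\) has a single object with a finite automorphism group \(\Gamma_{0}\), then \(\mathbb{C}(\Gamma)=\mathbb{C}\), and for a homomorphism \(\Gamma_{0}\to\Gamma_{0}^{\prime}\) of finite groups, viewed as a functor \(\Phi\) between one-object groupoids, the push-forward formula (A.1) below reduces to

\[\Phi_{*}f=\frac{|\Gamma_{0}^{\prime}|}{|\Gamma_{0}|}\cdot f.\]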
For a functor of groupoids \(\Phi:\Gamma_{1}\to\Gamma_{2}\) the pullback map \(\Phi^{*}:\mathbb{C}(\Gamma_{2})\to\mathbb{C}(\Gamma_{1})\) is given by \(\Phi^{*}f(\gamma_{1})=f(\Phi(\gamma_{1}))\). Assume that all objects of \(\Gamma_{1}\) and \(\Gamma_{2}\) have finite groups of automorphisms, and that for every \(\gamma_{2}\in\Gamma_{2}\) there are finitely many isomorphism classes of \(\gamma_{1}\in\Gamma_{1}\) such that \(\Phi(\gamma_{1})\simeq\gamma_{2}\). Then we define the push-forward

\[\Phi_{*}f(\gamma_{2})=\sum_{\gamma_{1}:\Phi(\gamma_{1})\simeq\gamma_{2}}\frac{\left|\mathrm{Aut}(\gamma_{2})\right|}{\left|\mathrm{Aut}(\gamma_{1})\right|}f(\gamma_{1}). \tag{A.1}\]

Recall that for each object \(\gamma_{2}\), one defines the fiber groupoid \(\Phi^{-1}(\gamma_{2})\), whose objects are pairs \((\gamma_{1},\phi)\), where \(\gamma_{1}\) is an object of \(\Gamma_{1}\) and \(\phi:\Phi(\gamma_{1})\to\gamma_{2}\) is an isomorphism. This fiber shows up when computing the push-forward map on functions \(\Phi_{*}:\mathbb{C}(\Gamma_{1})\to\mathbb{C}(\Gamma_{2})\) (when it is defined).

**Lemma A.1**.: _Let \(\Phi:\Gamma_{1}\to\Gamma_{2}\) be a functor, such that for every \(\gamma_{2}\in\Gamma_{2}\) there are finitely many isomorphism classes of \(\gamma_{1}\in\Gamma_{1}\) such that \(\Phi(\gamma_{1})\simeq\gamma_{2}\). Assume also that all objects of \(\Gamma_{1}\) and \(\Gamma_{2}\) have finite groups of automorphisms. Then for a function \(f\in\mathbb{C}(\Gamma_{1})\), one has_

\[\Phi_{*}f(\gamma_{2})=\sum_{(\gamma_{1},\phi)\in\Phi^{-1}(\gamma_{2})}\frac{1}{|\operatorname{Aut}(\gamma_{1},\phi)|}f(\gamma_{1}). \tag{A.2}\]

Proof.: For fixed objects \(\gamma_{1}\) and \(\gamma_{2}\), let us consider the finite set \(H\coloneqq\operatorname{Hom}(\Phi(\gamma_{1}),\gamma_{2})\). There is a natural action of \(\operatorname{Aut}(\gamma_{1})\times\operatorname{Aut}(\gamma_{2})\) on \(H\), such that the action of the subgroup \(\operatorname{Aut}(\gamma_{2})\) on \(H\) is simply transitive, while the stabilizer subgroup of \(\phi\in H\) under the action of the subgroup \(\operatorname{Aut}(\gamma_{1})\) can be identified with \(\operatorname{Aut}(\gamma_{1},\phi)\). Hence, the size of the corresponding orbit is

\[|\operatorname{Aut}(\gamma_{1})\cdot\phi|=\frac{|\operatorname{Aut}(\gamma_{1})|}{|\operatorname{Aut}(\gamma_{1},\phi)|}.\]

Now, the coefficient of \(f(\gamma_{1})\) in the right-hand side of (A.2) is given by

\[\sum_{\phi\in H/\operatorname{Aut}(\gamma_{1})}\frac{1}{|\operatorname{Aut}(\gamma_{1},\phi)|}=\sum_{\phi\in H}\frac{1}{|\operatorname{Aut}(\gamma_{1})\cdot\phi|}\cdot\frac{1}{|\operatorname{Aut}(\gamma_{1},\phi)|}=\sum_{\phi\in H}\frac{1}{|\operatorname{Aut}(\gamma_{1})|}=\frac{|H|}{|\operatorname{Aut}(\gamma_{1})|}=\frac{|\operatorname{Aut}(\gamma_{2})|}{|\operatorname{Aut}(\gamma_{1})|}.\]

But this is equal to the coefficient of \(f(\gamma_{1})\) in \(\Phi_{*}f(\gamma_{2})\), and our assertion follows.

### Double coset groupoids

Let \(G\) be a group and let \(H,K\subset G\) be subgroups. Then the set of double cosets \(H\backslash G/K\) can be viewed as the set of isomorphism classes of a groupoid. Namely, the objects of this groupoid are elements of \(G\). A morphism from \(g_{1}\) to \(g_{2}\) is a pair of elements \((h\in H,k\in K)\) such that \(hg_{1}k=g_{2}\). The composition of \((h,k):g_{1}\to g_{2}\) with \((h^{\prime},k^{\prime}):g_{2}\to g_{3}\) is \((h^{\prime}h,kk^{\prime}):g_{1}\to g_{3}\).

**Lemma A.2**.: _Let \(\pi:B\to T\) be a surjection of groups with kernel \(U\)._
_Let \(B_{1},B_{2}\subset B\) be a pair of subgroups, and let \(T_{1}=\pi(B_{1})\), \(T_{2}=\pi(B_{2})\). Consider the induced groupoid functor_

\[\Pi:B_{1}\backslash B/B_{2}\to T_{1}\backslash T/T_{2}.\]

_Let us set \(U_{1}:=U\cap B_{1}\). For an element \(b_{0}\in B\), let us set_

\[U_{2,b_{0}}:=U\cap b_{0}B_{2}b_{0}^{-1}.\]

_Then the map \(u\mapsto ub_{0}\) naturally extends to an equivalence of groupoids_

\[U_{1}\backslash U/U_{2,b_{0}}\xrightarrow{\ \sim\ }\Pi^{-1}(\pi(b_{0})),\]

_where on the right we take the groupoid fiber._

Proof.: Set \(t_{0}=\pi(b_{0})\). By definition, the objects of \(\Pi^{-1}(t_{0})\) are triples \(b\in B\), \(t_{1}\in T_{1}\) and \(t_{2}\in T_{2}\), such that \(\pi(b)=t_{1}t_{0}t_{2}\). A morphism \((b,t_{1},t_{2})\to(b^{\prime},t_{1}^{\prime},t_{2}^{\prime})\) is given by a pair of elements \(b_{1}\in B_{1}\), \(b_{2}\in B_{2}\), such that

\[b^{\prime}=b_{1}bb_{2},\ \ t_{1}^{\prime}=\pi(b_{1})t_{1},\ \ t_{2}^{\prime}=t_{2}\pi(b_{2}).\]

Since \(T_{1}=\pi(B_{1})\), \(T_{2}=\pi(B_{2})\), we see that every object of \(\Pi^{-1}(t_{0})\) is isomorphic to one with \(t_{1}=t_{2}=1\). A morphism between such objects \((b,1,1)\to(b^{\prime},1,1)\), where \(\pi(b)=\pi(b^{\prime})=t_{0}\), is given by a pair of elements \(u_{1}\in U\cap B_{1}=U_{1}\), \(u_{2}\in U\cap B_{2}\) such that \(b^{\prime}=u_{1}bu_{2}\). If we write \(b=ub_{0}\), \(b^{\prime}=u^{\prime}b_{0}\), then this equation becomes

\[u^{\prime}=u_{1}u(b_{0}u_{2}b_{0}^{-1}).\]

Since \(U_{2,b_{0}}=b_{0}(U\cap B_{2})b_{0}^{-1}\), we get the claimed equivalence.

## Appendix B. Geometric interpretation of the constant term operator

In this section we will provide a geometric interpretation of the constant term operator for \(G=\operatorname{GL}_{2}\) and a special nilpotent extension \(C\) of \(\overline{C}\) of length \(2\). Recall that in the case of the reduced curve, the constant term operator is related to the moduli space of \(B\)-bundles, i.e., of pairs \(L\subset V\), where \(L\) is a line subbundle in a rank \(2\) bundle. In the case of a nilpotent extension of length \(2\), we will use certain generalizations of line bundles, which we call _quasi line bundles_. In Sec. B.1 we describe some basic properties of quasi line bundles. We give an adelic description of the groupoid of quasi line bundles in Sec. B.2. In Lemma B.11 and Proposition B.12 we give geometric interpretations of the double coset groupoids and maps between them involved in the constant term operator. Finally, in Proposition B.13 we provide a geometric interpretation of the constant term operator.

### Quasi line bundles

Let \(C\) be a special nilpotent extension of \(\overline{C}\) of length \(2\). We denote by \(\mathcal{N}\subset\mathcal{O}_{C}\) the nilradical, i.e., the ideal of \(\overline{C}\subset C\). Note that \(\mathcal{N}\) is a line bundle on \(\overline{C}\). When working locally we denote by \(\epsilon\in\mathcal{N}\) a generator of \(\mathcal{N}\) as a module over \(\mathcal{O}_{\overline{C}}\).

**Definition B.1**.: A coherent sheaf \(M\) on \(C\) is called a _quasi line bundle_ if locally it admits an embedding \(M\hookrightarrow\mathcal{O}_{C}\) and if \(\mathcal{N}M\neq 0\).

For a coherent sheaf \(M\) on \(C\) we set

\[\overline{M}:=\ker(M\to\underline{\operatorname{Hom}}(\mathcal{N},M):m\mapsto(x\mapsto xm)).\]

We view \(\overline{M}\) as a coherent sheaf on \(\overline{C}\). If \(\epsilon\) is a local generator of \(\mathcal{N}\) then \(\overline{M}=\ker(\epsilon:M\to M)\) and \(M/\overline{M}\simeq\epsilon M\).
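Before stating the characterizations below, it may help to compute these invariants in the basic local model (a routine verification we include for convenience). Take \(M=(\epsilon,f)\subset\mathcal{O}\), where \(f\not\equiv 0\mod(\epsilon)\). Then

\[\epsilon M=(\epsilon f),\qquad\overline{M}=\ker(\epsilon:M\to M)=(\epsilon),\qquad M/\overline{M}\simeq\epsilon M\simeq\overline{f}\mathcal{O}_{\overline{C}},\]

and the inclusion \(\mathcal{N}M=(\epsilon f)\hookrightarrow\overline{M}=(\epsilon)\) is multiplication by \(f\), so that locally \(\mathcal{N}M=\overline{M}(-D)\), where \(D\) is the divisor of zeros of \(\overline{f}\) (this is the "associated divisor" introduced after Proposition B.2 below).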
**Proposition B.2**.: _The following conditions for a coherent sheaf \(M\) are equivalent:_

1. \(M\) _is a quasi line bundle;_
2. \(\mathcal{N}M\neq 0\) _and_ \(\overline{M}\) _is a line bundle on_ \(\overline{C}\)_;_
3. \(\overline{M}\) _is a line bundle on_ \(\overline{C}\)_, and there exists a line bundle_ \(\mathcal{L}\) _on_ \(C\) _and an embedding_ \(\mathcal{L}\hookrightarrow M\) _inducing an isomorphism_ \(\mathcal{N}\mathcal{L}\xrightarrow{\ \sim\ }\mathcal{N}M\)_, or equivalently, such that_ \(\mathcal{N}\mathcal{L}=\mathcal{N}M\)_;_
4. \(\mathcal{N}M\neq 0\)_, and there exists a line bundle_ \(\mathcal{L}\) _on_ \(C\) _and an embedding_ \(M\hookrightarrow\mathcal{L}\) _such that_ \(\overline{M}=\mathcal{N}\mathcal{L}\)_;_
5. \(M\) _is locally isomorphic to an ideal_ \((\epsilon,f)\subset\mathcal{O}\)_, where_ \(f\not\equiv 0\mod(\epsilon)\)_._

Proof.: \((1)\Longrightarrow(2)\). Locally we have an embedding \(M\subset\mathcal{O}\). Thus, \(\overline{M}\) is a subsheaf in \(\overline{\mathcal{O}}=\mathcal{O}_{\overline{C}}\). Since \(\mathcal{N}M\) is contained in \(\overline{M}\), we have \(\overline{M}\neq 0\), so \(\overline{M}\) is a line bundle on \(\overline{C}\).

\((2)\Longrightarrow(3)\). Since \(\mathcal{N}M\neq 0\) is contained in \(\overline{M}\), it is also a line bundle. Thus, the embedding \(\mathcal{N}M\hookrightarrow\overline{M}\) identifies \(\mathcal{N}M\) with \(\overline{M}(-D)\) for some effective divisor \(D\subset\overline{C}\). Let us consider the coherent sheaf \(M/\mathcal{N}M\) on \(\overline{C}\). Then \(\overline{M}/\mathcal{N}M\subset M/\mathcal{N}M\) is exactly its torsion subsheaf, and the quotient by it is isomorphic to the line bundle \(M/\overline{M}\). We can choose a splitting \(\sigma:M/\overline{M}\to M/\mathcal{N}M\) on \(\overline{C}\). Now define \(\mathcal{L}\subset M\) to be the preimage of \(\operatorname{im}(\sigma)\subset M/\mathcal{N}M\). We have

\[\mathcal{L}+\overline{M}=M.\]

Hence, \(\mathcal{N}\mathcal{L}=\mathcal{N}M\), and

\[\mathcal{L}/\mathcal{N}\mathcal{L}\simeq\mathcal{L}/\mathcal{N}M\simeq\operatorname{im}(\sigma)\simeq M/\overline{M}\]

is a line bundle on \(\overline{C}\). This implies that \(\mathcal{L}\) is a line bundle with the required properties.

\((3)\Longrightarrow(4)\). Consider an embedding \(\mathcal{L}\hookrightarrow M\) with \(\mathcal{N}\mathcal{L}=\mathcal{N}M\). Then

\[M/\mathcal{L}\simeq\overline{M}/\mathcal{N}\mathcal{L}\simeq\overline{M}|_{D},\]

where \(D\) is the effective divisor on \(\overline{C}\) corresponding to the embedding of line bundles \(\mathcal{N}\mathcal{L}\to\overline{M}\). Let \(j:U\to C\) be the embedding of the complement to \(D\). Then we have \(\mathcal{L}|_{U}=M|_{U}\), so we can view \(M\) as a subsheaf in \(j_{*}(\mathcal{L}|_{U})\). Hence, \(M/\mathcal{L}\) is a subsheaf in \(j_{*}(\mathcal{L}|_{U})/\mathcal{L}\). Let us choose an effective Cartier divisor \(\widetilde{D}\subset C\) reducing to \(D\subset\overline{C}\). Then the subsheaf in \(j_{*}(\mathcal{L}|_{U})/\mathcal{L}\) annihilated by the local equations of \(\widetilde{D}\) is exactly \(\mathcal{L}(\widetilde{D})/\mathcal{L}\). Since \(M/\mathcal{L}\simeq\overline{M}|_{D}\), we deduce that \(M/\mathcal{L}\) is contained in \(\mathcal{L}(\widetilde{D})/\mathcal{L}\).
In other words, we get an inclusion \[M\subset\mathcal{L}(\widetilde{D}).\] Consider the induced embeddings \[\mathcal{N}\mathcal{L}=\overline{\mathcal{L}}\subset\overline{M}\subset\overline{\mathcal{L}(\widetilde{D})}=\overline{\mathcal{L}}(D).\] We see that \(\overline{M}=\overline{\mathcal{L}(\widetilde{D})}=\mathcal{N}\cdot\mathcal{L}(\widetilde{D})\), so the inclusion \(M\to\mathcal{L}(\widetilde{D})\) satisfies the required property.

\((4)\Longrightarrow(5)\). Locally we get an embedding \(M\hookrightarrow\mathcal{O}\) such that \(\overline{M}=\mathcal{N}=(\epsilon)\subset\mathcal{O}\). We also know that \(M\neq(\epsilon)\). Hence, \(M/(\epsilon)\subset\mathcal{O}_{\overline{C}}\) is a line bundle on \(\overline{C}\). Therefore, if \(\overline{f}\) is a local generator of \(M/(\epsilon)\), then \(M=(\epsilon,f)\) for any lifting \(f\) of \(\overline{f}\).

\((5)\Longrightarrow(1)\). This is clear.

As we have seen in the above proof, if \(M\) is a quasi line bundle then we have an embedding of line bundles on \(\overline{C}\), \[\mathcal{N}M\hookrightarrow\overline{M},\] which gives an effective divisor \(D\) on \(\overline{C}\) such that \(\mathcal{N}M=\overline{M}(-D)\).

**Definition B.3**.: For an effective divisor \(D\) on \(\overline{C}\), we define the sheaf of \(\mathcal{O}_{C}\)-algebras \[\mathcal{O}_{C}[D]\coloneqq\mathcal{O}_{C}+\mathcal{N}(D)\] (the sum is taken in \(\eta_{*}\mathcal{O}_{C,\eta}\), where \(\eta\) is the generic point). We also define the ideal \(\mathcal{O}_{C}[-D]\subset\mathcal{O}_{C}\) as \[\mathcal{O}_{C}[-D]:=\ker(\mathcal{O}_{C}\to\mathcal{O}_{D}=\mathcal{O}_{\overline{C}}/\mathcal{O}_{\overline{C}}(-D)).\] Note that \(\mathcal{O}_{C}[D]\) and \(\mathcal{O}_{C}[-D]\) are both quasi line bundles with the associated divisor \(D\).

**Lemma B.4**.: _The category of quasi line bundles with associated divisor \(D\) is equivalent to the category of locally trivial \(\mathcal{O}_{C}[D]\)-modules._

Proof.: The characterization (5) from Proposition B.2 shows that the multiplication by \(\mathcal{O}_{C}[D]\) is well defined on any quasi line bundle with the associated divisor \(D\). Furthermore, a morphism of quasi line bundles with the associated divisor \(D\) is automatically a morphism of \(\mathcal{O}_{C}[D]\)-modules (since for a quasi line bundle \(M\) the natural map \(M\to j_{*}j^{*}M\) is injective, where \(j:C\setminus D\to C\) is the open embedding).

For a coherent sheaf \(M\) on \(C\), we set \(M^{\vee}:=\underline{\mathrm{Hom}}(M,\mathcal{O})\).

**Lemma B.5**.: _(i) Let \(M=(\epsilon,f)\), where \(\epsilon\) is a local generator of \(\mathcal{N}\) and \(f\not\equiv 0\mod(\epsilon)\). Then there is an isomorphism \(M^{\vee}\simeq(\epsilon,f)\). The corresponding pairing is given by_ \[(\alpha\epsilon+\beta f,a\epsilon+bf):=(\alpha b+\beta a)\epsilon+\beta bf=(\alpha\epsilon+\beta f)(a\epsilon+bf)/f.\] _(ii) If \(M\) is a quasi line bundle with the associated divisor \(D\), then the natural map_ \[\underline{\mathrm{Hom}}_{\mathcal{O}_{C}[D]}(M,\mathcal{O}_{C}[-D])\to M^{\vee}\] _is an isomorphism. Hence, \(M^{\vee}\) is also a quasi line bundle with the associated divisor \(D\), and the natural map \(M\to M^{\vee\vee}\) is an isomorphism._
_(iii) For quasi line bundles \(L\) and \(M\) with the same associated divisor \(D\), the coherent sheaf \(L\otimes M\) is also a quasi line bundle with the associated divisor \(D\)._

Proof.: (i) This can be deduced by computing \(M^{\vee}\) using the presentation \[\mathcal{O}^{2}\xrightarrow{A}\mathcal{O}^{2}\xrightarrow{(a,b)\mapsto a\epsilon+bf}M\to 0,\quad\text{where}\quad A=\begin{pmatrix}\epsilon&-f\\ 0&\epsilon\end{pmatrix}.\] Indeed, we get that elements of \(M^{\vee}\) correspond to \((u,v)\) such that \((u,v)A=0\), i.e., \[u\epsilon=0,\ \ uf=v\epsilon.\] This easily implies that \(u\) is determined by \(v\) and that \(v\) can be any element of \((\epsilon,f)\). Thus, we get an isomorphism \(M^{\vee}\simeq(\epsilon,f)\).

(ii) The assertion is local, so it follows from the explicit form of the pairing in (i).

(iii) This follows immediately from Lemma B.4, since tensoring \(L\) and \(M\) over \(\mathcal{O}\) is the same as tensoring them over \(\mathcal{O}_{C}[D]\).

**Lemma B.6**.: _(i) Any surjective homomorphism \(V\to M\), where \(V\) is a rank 2 vector bundle and \(M\) is a quasi line bundle, extends to an exact sequence_ \[0\to L\to V\to M\to 0,\] _where \(L\) is a quasi line bundle. Furthermore, any such sequence is locally of the form_ \[0\to(\epsilon,f)\xrightarrow{i}\mathcal{O}^{2}\xrightarrow{p}(\epsilon,f)\to 0,\tag{B.1}\] _where \(f\not\equiv 0\mod(\epsilon)\), \(p(a,b)=a\epsilon+bf\), \(i(a^{\prime}\epsilon+b^{\prime}f)=(a^{\prime}\epsilon+b^{\prime}f,-b^{\prime}\epsilon)\)._

_(ii) Let \(\epsilon\) be a generator of \(\mathcal{N}\) near the divisor \(D\) associated with \(L\). An exact sequence as in (i) induces an isomorphism_ \[\delta:\overline{M}/\epsilon M\xrightarrow{\sim}\overline{L}/\epsilon L,\] _such that for a local section \(s\) of \(V\) such that \(p(s)\in\overline{M}\), one has_ \[\delta(p(s)\mod\epsilon M)=s^{\prime}\mod\epsilon L,\] _where \(s^{\prime}\) is a section of \(L\) such that \(i(s^{\prime})=\epsilon s\)._

Proof.: (i) If \(M\) is a line bundle then this is clear. Now assume that \(M=(\epsilon,f)\), where \(f\) is in the maximal ideal \(\mathfrak{m}\) of some point; then \(M=(\epsilon,f)\) has the 2-dimensional fiber \(M/\mathfrak{m}M\). Hence, any surjection \(\mathcal{O}^{2}\to M\) is locally isomorphic to \((a,b)\mapsto a\epsilon+bf\).

(ii) The long exact sequence associated with the short exact sequence of two-term complexes \[0\to(L\xrightarrow{\epsilon}L)\to(V\xrightarrow{\epsilon}V)\to(M\xrightarrow{\epsilon}M)\to 0\] gives an exact sequence \[\epsilon V\to\overline{M}\xrightarrow{\delta}L/\epsilon L\to V/\epsilon V.\] It remains to observe that the image of \(\epsilon V\to\overline{M}\) is \(\epsilon M\), while the kernel of \(L/\epsilon L\to V/\epsilon V\) is \(\overline{L}/\epsilon L\).

_Remark B.7_.: If \(L\hookrightarrow V\) is an embedding of a quasi line bundle into a rank 2 vector bundle such that \(\overline{L}\) is a subbundle in \(\overline{V}\) on \(\overline{C}\), it is not necessarily true that \(V/L\) is a quasi line bundle. For example, take \(L=(\epsilon,f)\), where \(f\not\equiv 0\mod(\epsilon)\), and consider the embedding given as the composition \[L\to\mathcal{O}\to\mathcal{O}\oplus\mathcal{O}.\]

**Lemma B.8**.: _Given an exact sequence_ \[0\to L\to V\to M\to 0,\] _where \(V\) is a rank \(2\) bundle, \(L\) and \(M\) are quasi line bundles, there is a natural pairing_ \[L\otimes M\to\bigwedge^{2}V=\det(V):l\otimes m\mapsto l\wedge\widetilde{m},\] _where \(\widetilde{m}\) is any lifting of \(m\). This pairing induces isomorphisms_ \[L\simeq\underline{\mathrm{Hom}}(M,\det(V)),\ \ M\simeq\underline{\mathrm{Hom}}(L,\det(V)).\] _Thus, for fixed \(\det(V)\), the isomorphism classes of \(L\) and \(M\) determine each other._

Proof.: It is enough to check this statement for the sequence (B.1).
The corresponding pairing is \[(\alpha\epsilon+\beta f)\otimes(a\epsilon+bf)\mapsto i(\alpha\epsilon+\beta f)\wedge\widetilde{(a\epsilon+bf)}=((\alpha\epsilon+\beta f)e_{1}-\beta\epsilon e_{2})\wedge(ae_{1}+be_{2})=[b(\alpha\epsilon+\beta f)+a\beta\epsilon]\cdot e_{1}\wedge e_{2},\] which coincides with the pairing of Lemma B.5(i).

### Adelic interpretation of quasi line bundles

Let us fix an effective divisor \(D=\sum_{p}n_{p}[p]\) on \(\overline{C}\). We want to describe the set of isomorphism classes of quasi line bundles on \(C\) with the associated divisor \(D\) (for \(D=0\) these will be the usual line bundles on \(C\)). For each point \(p\), let us consider the \(\mathcal{O}_{p}\)-subalgebra \[\mathcal{O}_{p}[D]:=\mathcal{O}_{p}+t_{p}^{-n_{p}}\overline{\mathcal{O}}_{p}\epsilon_{p}\subset F_{p},\] where \(t_{p}\) is a local uniformizer and \(\epsilon_{p}\) is a generator of the nilradical of \(\mathcal{O}_{p}\). Note that these are completions of the stalks of the sheaf of \(\mathcal{O}_{C}\)-algebras \(\mathcal{O}_{C}[D]\) (see Definition B.3). We denote by \(\mathcal{O}[D]\subset\mathbb{A}\) the corresponding subalgebra in the algebra of adeles. We also have the corresponding subgroup of invertible elements in the group of ideles, \(\mathcal{O}[D]^{*}\subset\mathbb{A}^{*}\).

**Lemma B.9**.: _The groupoid of quasi line bundles on \(C\) with the associated divisor \(D\) is equivalent to the double coset groupoid_ \[F^{*}\backslash\mathbb{A}^{*}/\mathcal{O}[D]^{*}.\]

Proof.: This follows from Lemma B.4 similarly to the case of line bundles. The equivalence is obtained by choosing local isomorphisms of a quasi line bundle with \(\mathcal{O}[D]\). More precisely, the quasi line bundle \(L\) associated with \(a=(a_{p})\in\mathbb{A}^{*}\) and the divisor \(D\) is defined as follows: \[L(U):=\{f\in F\ |\ f\in\bigcap_{p\in U}a_{p}\cdot\mathcal{O}_{p}[D]\}.\]

**Lemma B.10**.: _Let \(L\) (resp., \(M\)) be the quasi line bundle associated with the idele \(a_{1}\) (resp., \(a_{2}\)) and the divisor \(D\). Let \(t_{p}\in\mathcal{O}_{p}\) denote some local uniformizers, and let \(n_{p}\) be the multiplicity of \(p\) in \(D\). Then_

* _the quasi line bundle_ \(L\otimes M\) _is represented by_ \(a_{1}a_{2}\)_;_
* _the quasi line bundle_ \(\underline{\mathrm{Hom}}(M,L)\) _is represented by_ \(a_{1}a_{2}^{-1}\)_;_
* _the quasi line bundle_ \(L^{\vee}\) _is represented by the idele_ \(a_{1}^{-1}(t_{p}^{n_{p}})\)_._

Proof.: The first two items are clear if we think of quasi line bundles with the associated divisor \(D\) as locally trivial \(\mathcal{O}[D]\)-modules (see Lemma B.4). Since \(\mathcal{O}[-D]\) is represented by the idele \((t_{p}^{n_{p}})\), the isomorphism \(L^{\vee}\simeq\underline{\mathrm{Hom}}_{\mathcal{O}[D]}(L,\mathcal{O}[-D])\) (see Lemma B.5(ii)) shows that \(L^{\vee}\) is represented by \(a_{1}^{-1}(t_{p}^{n_{p}})\).

Now we can give a geometric interpretation of \(\mathrm{QBun}_{T}(C)=\sqcup_{D}\mathrm{QBun}_{T}(C,D)\), where \(\mathrm{QBun}_{T}(C,D)=T(F)\backslash T(\mathbb{A})/T(\mathcal{O})[D]\) (see Sec. 4.1).
**Lemma B.11**.: _One has an identification of \(\operatorname{QBun}_{T}(C)\) with the groupoid \(\operatorname{QTr}\) of triples \((L,M,\mathcal{L})\), where \(L\) and \(M\) are quasi line bundles, \(\mathcal{L}\) is a line bundle on \(C\), equipped with an isomorphism_ \[\alpha:\overline{L}/\mathcal{N}L\xrightarrow{\sim}\overline{M}/\mathcal{N}M\] _and a pairing_ \[\beta:L\otimes M\to\mathcal{L}.\] _Under this equivalence, \(\operatorname{QBun}_{T}(C,D)\) corresponds to the triples \((L,M,\mathcal{L})\) such that the divisor in \(\overline{C}\) associated with \(L\) (or \(M\)) is \(D\)._

Proof.: Recall that \(T(\mathcal{O})[D]\) consists of \(t=\operatorname{diag}(a_{1},a_{2})\), where \(a_{1},a_{2}\in\mathcal{O}[D]^{*}\), \(a_{1}a_{2}\in\mathcal{O}^{*}\), \(a_{1}-a_{2}\in\overline{\mathcal{O}}(-D)+(\epsilon)\). Let \(L\) (resp., \(M\)) be the quasi line bundle associated with the idele \(a_{1}(t_{p}^{n_{p}})\) (resp., \(a_{2}\)), and let \(\mathcal{L}\) be the line bundle associated with the idele \(a_{1}a_{2}\). In other words, \(L_{p}=a_{1,p}\mathcal{O}_{p}[-D]\) and \(M_{p}=a_{2,p}\mathcal{O}_{p}[D]\). We consider the pairing \(\beta:L\otimes M\to\mathcal{L}\) induced by the product on \(F\). Since \[\overline{\mathcal{O}[D]}=\ker(\epsilon:\mathcal{O}[D]\to\mathcal{O}[D])=\epsilon\overline{\mathcal{O}}(D),\ \ \overline{\mathcal{O}[-D]}=\epsilon\overline{\mathcal{O}},\] \[\epsilon\mathcal{O}[D]=\epsilon\overline{\mathcal{O}},\ \ \epsilon\mathcal{O}[-D]=\epsilon\overline{\mathcal{O}}(-D),\] we see that the line bundles \(\overline{L}\) and \(\mathcal{N}L\) (resp., \(\overline{M}\) and \(\mathcal{N}M\)) are associated with the ideles \(\overline{a}_{1}\) and \(\overline{a}_{1}(t_{p}^{n_{p}})\) (resp., \(\overline{a}_{2}(t_{p}^{-n_{p}})\) and \(\overline{a}_{2}\)). Thus, we have identifications \[\alpha_{1}:\mathcal{O}_{D}\xrightarrow{\sim}\overline{L}/\mathcal{N}L,\ \ \alpha_{2}:\mathcal{O}_{D}\xrightarrow{\sim}\overline{M}/\mathcal{N}M.\] We then set \(\alpha=\alpha_{2}\circ\alpha_{1}^{-1}\). This gives an object of \(\operatorname{QTr}\).

We have to check that this defines a morphism of groupoids. First, we claim that the constructed data \((L,M,\mathcal{L},\alpha,\beta)\) do not change when we multiply \(t\) with an element \(\operatorname{diag}(b_{1},b_{2})\) of \(T(\mathcal{O})[D]\). Indeed, it is clear that \(L\), \(M\), \(\mathcal{L}\) and \(\beta\) do not change since \(b_{1},b_{2}\in\mathcal{O}[D]^{*}\) and \(b_{1}b_{2}\in\mathcal{O}^{*}\). Also, \(\alpha_{2}\circ\alpha_{1}^{-1}\) depends only on the image of \(\overline{a}_{2}/\overline{a}_{1}\) in \(\mathcal{O}_{D}^{*}\), which gets multiplied by \(\overline{b}_{2}/\overline{b}_{1}\equiv 1\mod\overline{\mathcal{O}}(-D)\). Next, if we have \(t_{F}=\operatorname{diag}(c_{1},c_{2})\in T(F)\), then we get an isomorphism \((L,M,\mathcal{L},\alpha,\beta)\to(L^{\prime},M^{\prime},\mathcal{L}^{\prime},\alpha^{\prime},\beta^{\prime})\), where \((L^{\prime},M^{\prime},\mathcal{L}^{\prime},\alpha^{\prime},\beta^{\prime})\) is the object of \(\operatorname{QTr}\) associated with \(t_{F}t\). Namely, multiplication by \(c_{1}\) (resp., \(c_{2}\), resp., \(c_{1}c_{2}\)) gives an isomorphism \(L\to L^{\prime}\) (resp., \(M\to M^{\prime}\), resp., \(\mathcal{L}\to\mathcal{L}^{\prime}\)), compatible with the extra data. To see that our functor is fully faithful, we have to show that the above map identifies the automorphisms of every object \((L,M,\mathcal{L},\alpha,\beta)\) of \(\operatorname{QTr}\) with \(T(F)\cap T(\mathcal{O})[D]\). Such an automorphism is given by a pair of principal ideles \(c_{1}\in F^{*}\), \(c_{2}\in F^{*}\) which give automorphisms of \(L\) and \(M\).
Then by compatibility with the pairing \(\beta\), \(c_{1}c_{2}\) should give an automorphism of \(\mathcal{L}\). This means that \(c_{1},c_{2}\in\mathcal{O}[D]^{*}\) and \(c_{1}c_{2}\in\mathcal{O}^{*}\). The condition of compatibility with \(\alpha\) means that \(\overline{c_{1}}/\overline{c_{2}}\equiv 1\mod\overline{\mathcal{O}}(-D)\). In other words, we get that \(\operatorname{diag}(c_{1},c_{2})\in T(F)\cap T(\mathcal{O})[D]\).

For essential surjectivity, starting from the data \((L,M,\mathcal{L},\alpha,\beta)\) we first find ideles \(a_{2}\) and \(a\) representing \(M\) and \(\mathcal{L}\), and set \(a_{1}=aa_{2}^{-1}\). Then by Lemma B.10, \(L\simeq\underline{\operatorname{Hom}}(M,\mathcal{L})\simeq M^{\vee}\otimes\mathcal{L}\) will be represented by \(a_{1}(t_{p}^{n_{p}})\), so that the pairing \(\beta\) is induced by the product on \(F\). The isomorphism \(\alpha\) may differ from the isomorphism \(\alpha^{\prime}\) obtained from \((a_{1},a_{2})\) using our functor by some element \(\overline{b}\in\mathcal{O}_{D}^{*}\). Finally, we can find an integral idele \(b\in\mathcal{O}^{*}\) lifting \(\overline{b}\). Then replacing \(a_{1}\) by \(a_{1}b\) (which does not change \(L\)) we will get an element \((a_{1},a_{2})\) producing the given object of QTr.

### Geometric interpretation of the constant term operator

Recall that we introduced the element \[g_{D}\coloneqq\begin{pmatrix}1&0\\ (t_{p}^{-n_{p}})\epsilon&1\end{pmatrix}\in G(\mathbb{A}).\] Let us consider the subgroup \[B(\mathcal{O})[D]\coloneqq B(\mathbb{A})\cap g_{D}G(\mathcal{O})g_{D}^{-1}\subset B(\mathbb{A}).\] Recall that the image of this subgroup under the projection \(B(\mathbb{A})\to T(\mathbb{A})\) is the subgroup \(T(\mathcal{O})[D]\subset T(\mathbb{A})\) (see Sec. 4.1). With every \(b=(b_{p})\in B(F)\backslash B(\mathbb{A}_{C})/B(\mathcal{O})[D]\), we can canonically associate an exact sequence \[0\to L\to V\to M\to 0,\] where \(V\) is a vector bundle of rank \(2\) and \(L\) and \(M\) are quasi line bundles. Namely, there are natural morphisms of exact sequences, where \(\mathcal{O}_{p}[-D]=\mathcal{O}_{p}(-D)+\epsilon\mathcal{O}_{p}\), and \(\operatorname{diag}(a_{1},a_{2})\in T(\mathbb{A})\) is the diagonal part of \(b\). Now we define \(V=V_{b}\) and \(L=L_{b}\subset V_{b}\) by \[V(U)\coloneqq\{v\in F^{2}\ |\ v\in\bigcap_{p\in U}b_{p}\cdot g_{D,p}\mathcal{O}_{p}^{2}\},\ \ L(U)=V(U)\cap F\cdot e_{1}=\bigcap_{p\in U}(a_{1})_{p}\mathcal{O}_{p}[-D]\cdot e_{1}.\tag{B.2}\] If we change \((b_{p})\) by \((b_{p})b^{\prime}\), where \(b^{\prime}\in B(\mathcal{O})[D]\), then the pair \((V,L)\) does not change (since \(b^{\prime}g_{D}\mathcal{O}^{2}=g_{D}\mathcal{O}^{2}\)). On the other hand, if we replace \(b\) by \(b_{0}b\), where \(b_{0}\in B(F)\), then the multiplication by \(b_{0}\) induces an isomorphism between the corresponding pairs \((V,L)\). Note that \(L\) is identified with the quasi line bundle associated with the idele \(a_{1}(t_{p}^{n_{p}})\), while \(M:=V/L\) is identified with the quasi line bundle associated with \(a_{2}\). Let us denote by \(\operatorname{QBun}_{B}(C,D)\) the groupoid of pairs \(L\subset V\), where \(V\) is a rank \(2\) vector bundle on \(C\) and \(L\subset V\) is a quasi line bundle with the associated divisor \(D\), such that \(V/L\) is also a quasi line bundle.
We have a natural projection \[p:\operatorname{QBun}_{B}(C,D)\to\operatorname{Bun}_{G}(C)\] sending \(L\subset V\) to \(V\), and the functor \[q:\operatorname{QBun}_{B}(C,D)\to\operatorname{QTr}\simeq\operatorname{QBun}_{T}(C,D)\] associating with a pair \(L\subset V\) the triple \((L,M:=V/L,\mathcal{L}:=\det(V))\), together with the isomorphism \(\alpha:=\delta^{-1}\), where \(\delta:\overline{M}/\epsilon M\xrightarrow{\sim}\overline{L}/\epsilon L\) is the isomorphism of Lemma B.6(ii), and the pairing of Lemma B.8.

**Proposition B.12**.: _The construction \(b\mapsto(L_{b}\subset V_{b})\) above induces an equivalence of groupoids_ \[B(F)\backslash B(\mathbb{A})/B(\mathcal{O})[D]\simeq\operatorname{QBun}_{B}(C,D),\] _compatible with the natural functors \(p\) and \(q\) on both sides._
**Step 4**. The identification of \(p\) is clear. To identify \(q\), we observe that the pair \((L,M=V/L)\) coming from the object of \(\operatorname{QBun}_{B}(C,D)\) associated with \(b\in B(\mathbb{A})\) is exactly the pair of quasi line bundles associated with the diagonal part of \(b\), \(t=\operatorname{diag}(a_{1},a_{2})\). The line bundle \(\det(V)\) is associated with the idele \(\det(bg_{D})=a_{1}a_{2}\), and it is easy to see that the pairing \(\beta:L\otimes M\to\det(V)\) is induced by the product on \(F\). Finally, we can calculate the homomorphism \(\delta:\overline{M}/\epsilon M\to\overline{L}/\epsilon L\). The local generator \(\epsilon t_{p}^{-n_{p}}(a_{2})_{p}\) of \(\overline{M}_{p}\) can be lifted to \(x=b_{p}g_{D}e_{1}\in V_{p}\). Then \(\epsilon x\) comes from \(\epsilon(a_{1})_{p}e_{1}\in L_{p}\). Hence, the isomorphism \(\delta^{-1}\) agrees with the morphism \(\alpha\) constructed in Lemma B.11.

Now we can give a geometric interpretation of our constant term operators \(E_{D}:\mathbb{C}(\operatorname{Bun}_{G}(C))\to\mathbb{C}(\operatorname{QBun}_{T}(C,D))\).
**Proposition B.13**.: _For \(t\in\operatorname{QBun}_{T}(C,D)\), one has_ \[E_{D}f(t)=\operatorname{vol}(U_{t,D})\cdot q_{*}p^{*}f(t).\] _Furthermore, if \(t\) corresponds to the data \((L,M,\mathcal{L})\) as in Lemma B.11, then_ \[\operatorname{vol}(U_{t,D})=\frac{|H^{1}(C,\mathcal{O})|}{|H^{0}(C,\mathcal{O})|}\cdot\frac{|H^{0}(C,\underline{\operatorname{Hom}}(M,L))|}{|H^{1}(C,\underline{\operatorname{Hom}}(M,L))|}.\]

Proof.: By Proposition B.12, we can use the adelic descriptions of \(p\) and \(q\). The key observation is that applying Lemma A.2 with \(B=B(\mathbb{A})\), \(B_{1}=B(F)\), \(B_{2}=B(\mathcal{O})[D]\), we can identify the fiber groupoid \(q^{-1}(t)\), for \(t\in T(F)\backslash T(\mathbb{A})/T(\mathcal{O})[D]\), with the double coset groupoid \(U(F)\backslash U(\mathbb{A})/U_{t,D}\), where \[U_{t,D}=U(\mathbb{A}_{C})\cap tB(\mathcal{O})[D]t^{-1}\] is exactly the subgroup introduced before (see Sec. 4.1). Note that the automorphism group of any object in this groupoid is the finite group \(U(F)\cap U_{t,D}\). Hence, by Lemma A.1, we have \[q_{*}p^{*}f(t)=\frac{1}{|U(F)\cap U_{t,D}|}\cdot\sum_{u\in U(F)\backslash U(\mathbb{A})/U_{t,D}}f(utg_{D}).\] Comparing this with the formula (4.4) for \(E_{D}f\), we obtain \[E_{D}f(t)=\operatorname{vol}(U_{t,D})\cdot q_{*}p^{*}f(t).\]

It remains to compute \(\operatorname{vol}(U_{t,D})\) in terms of the quasi line bundles \(L\) and \(M\) associated with \(t\in\operatorname{QBun}_{T}(C,D)\). Let \(t=\operatorname{diag}(a_{1},a_{2})\). Then under the natural isomorphism \(U(\mathbb{A}_{C})\simeq\mathbb{A}_{C}\), the subgroup \(U_{t,D}\) corresponds to \(a_{1}a_{2}^{-1}(\mathcal{O}(-D)+\epsilon\mathcal{O})=a_{1}a_{2}^{-1}(t_{p}^{n_{p}})\mathcal{O}[D]\). Recall that \(L\) is associated with \(a_{1}(t_{p}^{n_{p}})\), while \(M\) is associated with \(a_{2}\). Hence, by Lemma B.10, the quasi line bundle \(\underline{\operatorname{Hom}}(M,L)\) is represented by the idele \(a_{1}a_{2}^{-1}(t_{p}^{n_{p}})\) (and the divisor \(D\)). Hence, we get \[U(\mathbb{A}_{C})/(U(F)\cdot U_{t,D})\simeq H^{1}(C,\underline{\operatorname{Hom}}(M,L)),\] \[U(F)\cap U_{t,D}\simeq H^{0}(C,\underline{\operatorname{Hom}}(M,L)).\] Now the exact sequence \[1\to U_{t,D}/(U(F)\cap U_{t,D})\to U(\mathbb{A})/U(F)\to U(\mathbb{A})/(U(F)\cdot U_{t,D})\to 1\] gives the identity \[\operatorname{vol}(U(\mathbb{A})/U(F))=\frac{\operatorname{vol}(U_{t,D})}{|U(F)\cap U_{t,D}|}\cdot|U(\mathbb{A})/(U(F)\cdot U_{t,D})|.\] Taking into account the above identifications, we get \[\operatorname{vol}(U_{t,D})=\operatorname{vol}(U(\mathbb{A})/U(F))\cdot\frac{|H^{0}(C,\underline{\operatorname{Hom}}(M,L))|}{|H^{1}(C,\underline{\operatorname{Hom}}(M,L))|}.\] Finally, for \(t=1\) and \(D=0\) this identity shows that \[\operatorname{vol}(U(\mathbb{A})/U(F))=\frac{|H^{1}(C,\mathcal{O})|}{|H^{0}(C,\mathcal{O})|}.\]
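For orientation, in the reduced case (\(C=\overline{C}\) a smooth curve of genus \(g\) over \(\mathbb{F}_{q}\), with the measure normalized so that \(\operatorname{vol}(U(\mathcal{O}))=1\)) the last identity recovers a familiar value: using \(U(\mathbb{A})\simeq\mathbb{A}\), the exact sequence \[0\to\mathcal{O}/(F\cap\mathcal{O})\to\mathbb{A}/F\to\mathbb{A}/(F+\mathcal{O})\to 0,\] together with \(F\cap\mathcal{O}=H^{0}(C,\mathcal{O})\) and \(\mathbb{A}/(F+\mathcal{O})\simeq H^{1}(C,\mathcal{O})\), gives \[\operatorname{vol}(\mathbb{A}/F)=\frac{|H^{1}(C,\mathcal{O})|}{|H^{0}(C,\mathcal{O})|}=\frac{q^{g}}{q}=q^{g-1}.\]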
2303.11536
Indeterminate Probability Neural Network
We propose a new general model called IPNN - Indeterminate Probability Neural Network, which combines neural network and probability theory together. In the classical probability theory, the calculation of probability is based on the occurrence of events, which is hardly used in current neural networks. In this paper, we propose a new general probability theory, which is an extension of classical probability theory, and makes classical probability theory a special case to our theory. Besides, for our proposed neural network framework, the output of neural network is defined as probability events, and based on the statistical analysis of these events, the inference model for classification task is deduced. IPNN shows new property: It can perform unsupervised clustering while doing classification. Besides, IPNN is capable of making very large classification with very small neural network, e.g. model with 100 output nodes can classify 10 billion categories. Theoretical advantages are reflected in experimental results.
Tao Yang, Chuang Liu, Xiaofeng Ma, Weijia Lu, Ning Wu, Bingyang Li, Zhifei Yang, Peng Liu, Lin Sun, Xiaodong Zhang, Can Zhang
2023-03-21T01:57:40Z
http://arxiv.org/abs/2303.11536v1
# Indeterminate Probability Neural Network ###### Abstract We propose a new general model called IPNN - Indeterminate Probability Neural Network, which combines neural network and probability theory together. In the classical probability theory, the calculation of probability is based on the occurrence of events, which is hardly used in current neural networks. In this paper, we propose a new general probability theory, which is an extension of classical probability theory, and makes classical probability theory a special case to our theory. Besides, for our proposed neural network framework, the output of neural network is defined as probability events, and based on the statistical analysis of these events, the inference model for classification task is deduced. IPNN shows new property: It can perform unsupervised clustering while doing classification. Besides, IPNN is capable of making very large classification with very small neural network, e.g. model with 100 output nodes can classify 10 billion categories. Theoretical advantages are reflected in experimental results. (Source code: [https://github.com/Starfruit007/ipnn](https://github.com/Starfruit007/ipnn))

## 1 Introduction

Humans can distinguish at least 30,000 basic object categories (Biederman, 1987); classifying all of these poses two challenges: it requires a huge number of well-labeled images, and a model with a single softmax over such a large label set is computationally expensive. Zero-Shot Learning (ZSL) (Lampert et al., 2009; Fu et al., 2018) provides an idea for solving the first problem: it is an attribute-based classification method. ZSL performs object detection based on a human-specified high-level description of the target object (such as shape, color, or even geographic information) instead of training images. But labelling attributes still requires great effort and expert experience. Hierarchical softmax can solve the computational-cost problem, but the performance degrades as the number of classes increases (Mohammed and Umaashankar, 2018).

Probability theory has achieved great successes not only in classical settings, such as the Naive Bayes method (Cao, 2010), but also in deep neural networks (VAE (Kingma and Welling, 2014), ZSL, etc.) over recent years. However, both have their shortcomings: classical probability cannot extract features from samples, while for neural networks the extracted features are usually abstract and cannot be directly used for numerical probability calculation. What if we combine them?

There are already some combinations of neural networks and Bayesian approaches, such as probability distribution recognition (Su and Chou, 2006; Kocadagli and Askgil, 2014), and Bayesian approaches used to improve the accuracy of neural modeling (Morales and Yu, 2021), etc. However, current combinations do not take advantage of the ZSL method. We propose an approach to solve the mentioned problems, and our contributions are summarized as follows:

* We propose indeterminate probability theory, which is an extension of classical probability theory and makes classical probability theory a special case of our theory.
* We interpret the output neurons of the neural network as events of discrete random variables, and indeterminate probability is defined to describe the uncertainty of the event state.
* We propose a novel unified combination of (indeterminate) probability theory and deep neural network.
The neural network is used to extract attributes which are defined as discrete random variables, and the inference model for the classification task is derived. Besides, these attributes do not need to be labeled in advance.

The rest of this paper is organized as follows: In Section 2, we first introduce a coin toss game as an example of human cognition to explain the core idea of IPNN. In Section 3, the model architecture and indeterminate probability are derived. Section 4 discusses the training strategy and related hyper-parameters. In Section 5, we evaluate IPNN and analyze the impact of its hyper-parameters. Finally, we put forward some future research ideas and conclude the paper in Section 6.

## 2 Background

Let's first introduce a small game, coin toss: a child and an adult observe the outcome of each coin toss and record the results (heads or tails) independently. The child cannot always record the results correctly, while the adult records them correctly; in addition, the child's records are also observed by the adult. After several coin tosses, the question is: supposing the adult is not allowed to watch the next coin toss, what is the probability of the outcome he infers for the next toss via the child's record?

As shown in Figure 1, random variable X is the random experiment itself, and \(X=x_{k}\) represents the \(k^{th}\) random experiment. Y and A are defined to represent the adult's record and the child's record, respectively, and \(hd,tl\) stand for heads and tails. For example, after 10 coin tosses, the records are shown in Table 1. We formulate X compactly with the ground truth, as shown in Table 2 and Table 3. Through the adult's record Y and the child's records A, we can calculate the conditional probability of Y given A, as shown in Table 4. We define this process as the observation phase. For the next coin toss (\(X=x_{11}\)), the question of this game is formulated as the calculation of the probability \(P^{A}(Y|X)\), where the superscript A indicates that Y is inferred via record A rather than directly observed by the adult.

\begin{table} \begin{tabular}{c c c c} \hline \hline Experiment & Truth & A & Y \\ \(X=x_{1}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{2}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{3}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{4}\) & \(hd\) & \(A=hd\) & \(Y=hd\) \\ \(X=x_{5}\) & \(hd\) & \(\mathbf{A=tl}\) & \(Y=hd\) \\ \(X=x_{6}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{7}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{8}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{9}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{10}\) & \(tl\) & \(A=tl\) & \(Y=tl\) \\ \(X=x_{11}\) & \(hd\) & A=? & Y=? \\ \hline \hline \end{tabular} \end{table} Table 1: Example of 10 coin-toss outcomes

\begin{table} \begin{tabular}{c c c} \hline \hline \(\frac{\#(Y,X)}{\#(X)}\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(X=hd\) & 5/5 & 0 \\ \(X=tl\) & 0 & 5/5 \\ \hline \(\frac{\#(A,X)}{\#(X)}\) & \(A=hd\) & \(A=tl\) \\ \hline \(X=hd\) & 4/5 & 1/5 \\ \(X=tl\) & 0 & 5/5 \\ \hline \hline \end{tabular} \end{table} Table 3: The adult’s and child’s records: \(P(Y|X)\) and \(P(A|X)\)

Figure 1: Example of coin toss game.

\begin{table} \begin{tabular}{c c c} \hline \hline \(\frac{\#(Y,A)}{\#(A)}\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(A=hd\) & 4/4 & 0 \\ \(A=tl\) & 1/6 & 5/6 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of observation phase: \(P(Y|A)\)

For example, given the next coin toss \(X=hd=x_{11}\), the child's record has two possible situations: \(P(A=hd|X=hd=x_{11})=4/5\) and \(P(A=tl|X=hd=x_{11})=1/5\).
With the adult's observation of the child's records, we have \(P(Y=hd|A=hd)=4/4\) and \(P(Y=hd|A=tl)=1/6\). Therefore, given the next coin toss \(X=hd=x_{11}\), \(P^{A}(Y=hd|X=hd=x_{11})\) is the summation over these two situations: \(\frac{4}{5}\cdot\frac{4}{4}+\frac{1}{5}\cdot\frac{1}{6}\). Table 5 answers the above-mentioned question.

Let's go one step further: even if the child's record is written in an unknown language (e.g., \(A\in\{ZHENG,FAN\}\)), Table 4 and Table 5 can still be calculated by the adult. The same is true if the child's record is written from the perspective of attributes, such as color, shape, etc. Hence, if we substitute the child with a neural network and regard the adult's record as the sample labels, then although the representation of the model outputs is unknown, the labels of input samples can still be inferred from these outputs. This is the core idea of IPNN.

## 3 IPNN

### Model Architecture

Let \(X\in\{x_{1},x_{2},\ldots,x_{n}\}\) be training samples (\(X=x_{k}\) is understood as the \(k^{th}\) random experiment: selecting one training sample) and \(Y\in\{y_{1},y_{2},\ldots,y_{m}\}\) consist of \(m\) discrete labels (or classes); \(P(y_{l}|x_{k})=y_{l}(k)\in\{0,1\}\) describes the label of sample \(x_{k}\). For prediction, we calculate the posterior of the label for a given new input sample \(x_{n+1}\), formulated as \(P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right)\); the superscript \(\mathbb{A}\) stands for the medium (the model outputs) via which we can infer the label \(y_{l},\ \ l=1,2,\ldots,m\). After \(P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right)\) is calculated, the \(y_{l}\) with the maximum posterior is the predicted label.

Figure 2 shows the IPNN model architecture: the output neurons of a general neural network (FFN, CNN, ResNet (He et al., 2016), Transformer (Vaswani et al., 2017), pretrained models (Devlin et al., 2019), etc.) are split into N equal or unequal parts. The split shape is given in Equation (1); hence, the number of output neurons is the summation of the split shape, see Equation (2). Next, each split part is passed to 'softmax', so each split part can be defined as a discrete random variable \(A^{j}\in\left\{a^{j}_{1},a^{j}_{2},\ldots,a^{j}_{M_{j}}\right\},j=1,2,\ldots,N\), and each neuron in \(A^{j}\) is regarded as an event. After that, all the random variables together form the N-dimensional joint sample space, denoted \(\mathbb{A}=(A^{1},A^{2},\ldots,A^{N})\), and all the joint sample points are fully connected with all labels \(Y\in\{y_{1},y_{2},\ldots,y_{m}\}\) via the conditional probability \(P\left(Y=y_{l}|A^{1}=a^{1}_{i_{1}},A^{2}=a^{2}_{i_{2}},\ldots,A^{N}=a^{N}_{i_{N}}\right)\), written more compactly as \(P\left(y_{l}|a^{1}_{i_{1}},a^{2}_{i_{2}},\ldots,a^{N}_{i_{N}}\right)\)1,2.

Footnote 1: All probabilities are formulated compactly in this paper.

Footnote 2: For reading symbols, see Appendix E.

\[\text{Split shape}:=\{M_{1},M_{2},\ldots,M_{N}\} \tag{1}\]

\[\text{Number of model output neurons}:=\sum_{j=1}^{N}M_{j} \tag{2}\]

\[\text{Number of joint sample points}:=\prod_{j=1}^{N}M_{j} \tag{3}\]

### Indeterminate Probability Theory

In classical probability theory, given a sample \(x_{k}\) (i.e., performing an experiment), an event or joint event has only two states: happened or not happened.
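A minimal Python sketch of this classical counting, assuming the ten records of Table 1 (the variable names are illustrative only), reproduces Tables 4 and 5:

```python
from collections import Counter

# The 10 recorded tosses from Table 1, as (child's record A, adult's record Y)
records = [("hd", "hd")] * 4 + [("tl", "hd")] + [("tl", "tl")] * 5

# Observation phase: P(Y|A), i.e., Table 4
count_A = Counter(a for a, _ in records)
count_AY = Counter(records)
P_Y_given_A = {(a, y): count_AY[(a, y)] / count_A[a] for (a, y) in count_AY}

# Child's behavior P(A|X) for a true outcome X, from Table 3; keys are (A, X)
P_A_given_X = {("hd", "hd"): 4 / 5, ("tl", "hd"): 1 / 5,
               ("hd", "tl"): 0.0,   ("tl", "tl"): 1.0}

# Inference phase: P^A(Y|X) = sum over A of P(A|X) * P(Y|A), i.e., Table 5
def infer(x):
    return {y: sum(P_A_given_X[(a, x)] * P_Y_given_A.get((a, y), 0.0)
                   for a in ("hd", "tl"))
            for y in ("hd", "tl")}

print(infer("hd"))  # {'hd': 4/5*4/4 + 1/5*1/6 ~ 0.833, 'tl': ~ 0.167}
```

Replacing the child by a neural network whose outputs are probabilities rather than 0/1 records is precisely what breaks this direct counting, which motivates the indeterminate probabilities introduced next.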
\begin{table} \begin{tabular}{c c c} \hline \hline \(\sum_{A}\left(\frac{\#(A,X)}{\#X}\cdot\frac{\#(Y,A)}{\#A}\right)\) & \(Y=hd\) & \(Y=tl\) \\ \hline \(X=hd=x_{11}\) & \(\frac{4}{5}\cdot\frac{4}{4}+\frac{1}{5}\cdot\frac{1}{6}\) & \(\frac{4}{5}\cdot 0+\frac{1}{5}\cdot\frac{5}{6}\) \\ \(X=tl=x_{11}\) & \(0\cdot\frac{4}{4}+\frac{5}{5}\cdot\frac{1}{6}\) & \(0\cdot 0+\frac{5}{5}\cdot\frac{5}{6}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Results of inference phase: \(P^{A}(Y|X)\)

Figure 2: IPNN – model architecture. As the number of random variables increases, the number of joint sample points increases exponentially (see Equation (3)), but \(P\left(y_{l}|a^{1}_{i_{1}},a^{2}_{i_{2}},\ldots,a^{N}_{i_{N}}\right)\) is statistically calculated, not stored as model weights.

However, for IPNN, the model only outputs the probability of an event, and its state is indeterminate; that is why this paper is called IPNN. This difference makes the calculation of probability (especially joint probability) also different. Equation (4) and Equation (5) will later formulate this difference.

Given an input sample \(x_{k}\), using Assumption 3.1 the model outputs can be formulated as: \[P\left(a_{i_{j}}^{j}\mid x_{k}\right)=\alpha_{i_{j}}^{j}(k) \tag{4}\]

**Assumption 3.1**.: Given an input sample \(X=x_{k}\), **IF** \(\sum_{i_{j}=1}^{M_{j}}\alpha_{i_{j}}^{j}(k)=1\) and \(\alpha_{i_{j}}^{j}(k)\in[0,1],k=1,2,\ldots,n\), **THEN** \(\left\{a_{1}^{j},a_{2}^{j},\ldots,a_{M_{j}}^{j}\right\}\) can be regarded as a collectively exhaustive and exclusive set of events; they form a partition of the sample space of random variable \(A^{j},j=1,2,\ldots,N\).

In the classical probability situation, \(\alpha_{i_{j}}^{j}(k)\in\{0,1\}\), which indicates that the state of the event is 0 or 1. For a joint event, given \(x_{k}\), using Assumption 3.2 and Equation (4), the joint probability is formulated as: \[P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)=\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k) \tag{5}\]

**Assumption 3.2**.: Given an input sample \(X=x_{k}\), \(A^{1},A^{2},\ldots,A^{N}\) are mutually independent.

It can easily be proved that \[\sum_{\mathbb{A}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)=1,k=1,2,\ldots,n. \tag{6}\]

In the classical probability situation, \(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\in\{0,1\}\), which indicates that the state of the joint event is 0 or 1. Equation (4) and Equation (5) describe the uncertainty of the state of the event \(\left(A^{j}=a_{i_{j}}^{j}\right)\) and the joint event \(\left(A^{1}=a_{i_{1}}^{1},A^{2}=a_{i_{2}}^{2},\ldots,A^{N}=a_{i_{N}}^{N}\right)\).

### Observation Phase

In the observation phase, the relationship between all random variables \(A^{1},A^{2},\ldots,A^{N}\) and \(Y\) is established after observing all samples; it is formulated as: \[P\left(y_{l}\mid a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\frac{P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)}{P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)} \tag{7}\]

Because the state of a joint event is not determinate in IPNN, we cannot count its occurrences as in classical probability.

Figure 3: Independence illustration of observation phase with Bayesian network
Hence, the joint probability is calculated according to the total probability theorem over all samples \(X=(x_{1},x_{2},\ldots,x_{n})\), and with Equation (5) we have: \[P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right) \tag{8}\] \[\quad=\sum_{k=1}^{n}\left(P\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)\cdot P(x_{k})\right)\] \[\quad=\sum_{k=1}^{n}\left(\prod_{j=1}^{N}P\left(a_{i_{j}}^{j}\mid x_{k}\right)\cdot P(x_{k})\right)\] \[\quad=\frac{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{n}\]

Because \(Y=y_{l}\) is the sample label and \(A^{j}=a_{i_{j}}^{j}\) comes from the model, \(A^{j}\) and Y come from different observers, so we can make Assumption 3.3 (see Figure 3).

**Assumption 3.3**.: Given an input sample \(X=x_{k}\), \(A^{j}\) and Y are mutually independent in the observation phase, \(j=1,2,\ldots,N\).

Therefore, according to the total probability theorem, Equation (5) and the above assumption, we derive: \[P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right) \tag{9}\] \[\quad=\sum_{k=1}^{n}\left(P\left(y_{l},a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\mid x_{k}\right)\cdot P(x_{k})\right)\] \[\quad=\sum_{k=1}^{n}\left(P\left(y_{l}\mid x_{k}\right)\cdot\prod_{j=1}^{N}P\left(a_{i_{j}}^{j}\mid x_{k}\right)\cdot P(x_{k})\right)\] \[\quad=\frac{\sum_{k=1}^{n}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{n}\]

Substituting Equation (8) and Equation (9) into Equation (7), we have: \[P\left(y_{l}|a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=\frac{\sum_{k=1}^{n}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)} \tag{10}\] It can be proved that \[\sum_{l=1}^{m}P\left(y_{l}\mid a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=1 \tag{11}\]

### Inference Phase

Given \(A^{j}\), label \(y_{l}\) can be inferred with Equation (10) (past experience); this inferred \(y_{l}\) does not point to any specific sample \(x_{k}\), including the new input sample \(x_{n+1}\), see Figure 4. So we can make the following assumption:

**Assumption 3.4**.: Given \(A^{j}\), \(X\) and \(Y\) are mutually independent in the inference phase, \(j=1,2,\ldots,N\).

Therefore, given a new input sample \(X=x_{n+1}\), according to the total probability theorem over the joint sample space \(\left(a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)\in\mathbb{A}\), with Assumption 3.4, Equation (5) and Equation (10), we have: \[P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right)\] \[=\sum_{\mathbb{A}}\left(P\left(y_{l},a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\mid x_{n+1}\right)\right)\] \[=\sum_{\mathbb{A}}\left(P\left(y_{l}\mid a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\right)P\left(a_{i_{1}}^{1},\ldots,a_{i_{N}}^{N}\mid x_{n+1}\right)\right)\] \[=\sum_{\mathbb{A}}\left(\frac{\sum_{k=1}^{n}\left(y_{l}(k)\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=1}^{n}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(n+1)\right) \tag{12}\] And the maximum posterior gives the predicted label of an input sample: \[\hat{y}:=\operatorname*{arg\,max}_{l\in\{1,2,\ldots,m\}}P^{\mathbb{A}}\left(y_{l}\mid x_{n+1}\right) \tag{13}\]

### Discussion

Our proposed theory is derived based on three proposed conditional mutual independence assumptions, see Assumption 3.2, Assumption 3.3 and Assumption 3.4. However, in our opinion, these assumptions can neither be proved nor falsified, and we have not found any exceptions so far. Since this theory cannot be mathematically proved, we can only validate it through experiments. Finally, our proposed indeterminate probability theory is an extension of classical probability theory, and classical probability theory is a special case of our theory. For more details on our theory, see Appendix A.

## 4 Training

### Training Strategy

Given an input sample \(x_{t}\) from a mini-batch, we use a minor modification of Equation (12): \[P^{\mathbb{A}}\left(y_{l}\mid x_{t}\right)\] \[=\sum_{\mathbb{A}}\left(\frac{\sum_{k=b\cdot t_{0}+1}^{b\cdot t_{1}}\left(y_{l}(k)\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=b\cdot t_{0}+1}^{b\cdot t_{1}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)\right)\] \[\approx\sum_{\mathbb{A}}\left(\frac{\max(H+h(t_{1}),\epsilon)}{\max(G+g(t_{1}),\epsilon)}\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)\right) \tag{14}\] where \(b\) is the batch size, \(t_{0}=\max(0,t_{1}-T)\), \(t_{1}=\left\lceil\frac{t}{b}\right\rceil,t=1,2,\ldots,n\). The hyper-parameter T controls forgetting, i.e., \(H\) and \(G\) are calculated from the most recent T batches. It is introduced because at the beginning of the training phase the result calculated with Equation (10) is not good yet. The \(\epsilon\) in the denominator avoids division by zero, and the \(\epsilon\) in the numerator yields an initial value of 1. Besides, \[h(t_{1})=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(y_{l}(k)\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right) \tag{15}\] \[g(t_{1})=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right) \tag{16}\] \[H=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}h(k),\text{for }t_{1}=2,3,\ldots \tag{17}\] \[G=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}g(k),\text{for }t_{1}=2,3,\ldots \tag{18}\] Here \(H\) and \(G\) do not require gradient updates during back-propagation. We use cross entropy as the loss function: \[\mathcal{L}=-\sum_{l=1}^{m}\left(y_{l}(t)\cdot\log P^{\mathbb{A}}\left(y_{l}\mid x_{t}\right)\right) \tag{19}\] The detailed algorithm implementation is shown in Algorithm 1. With Equation (14) we can see that \(P^{\mathbb{A}}\left(y_{l}\mid x_{1}\right)=1\) for the first input sample if \(y_{l}\) is the ground truth and the batch size is 1. Therefore, for IPNN the loss may increase at the beginning of training and fall back again later.

Figure 4: Independence illustration of inference phase with Bayesian network
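To make the pipeline of Equations (4), (5), (10) and (12) concrete, here is a minimal PyTorch-style sketch (our illustration, not the authors' released implementation; the names, shapes and the N=2 setting with \(\{M_{1}=2,M_{2}=10\}\) are assumptions):

```python
import torch

# Illustrative sketch of Eqs. (4), (5), (10), (12); M1, M2, m and all
# names are assumptions, not the released IPNN code.
M1, M2, m, eps = 2, 10, 10, 1e-6

def forward_alphas(logits):
    # Eq. (4): a softmax over each split part gives P(a^j_{i_j} | x)
    a1 = torch.softmax(logits[:, :M1], dim=-1)         # (batch, M1)
    a2 = torch.softmax(logits[:, M1:M1 + M2], dim=-1)  # (batch, M2)
    return a1, a2

def joint(a1, a2):
    # Eq. (5): joint probability as an outer product (Assumption 3.2)
    return torch.einsum('bi,bj->bij', a1, a2)          # (batch, M1, M2)

# Running sums over observed batches:
# H[l,i,j] ~ numerator of Eq. (10), G[i,j] ~ its denominator.
H = torch.zeros(m, M1, M2)
G = torch.zeros(M1, M2)

def observe(a1, a2, labels):
    # Accumulate statistics; H and G take no gradients (Sec. 4.1).
    global H, G
    jp = joint(a1, a2).detach()
    y = torch.nn.functional.one_hot(labels, m).float()  # (batch, m)
    H = H + torch.einsum('bl,bij->lij', y, jp)
    G = G + jp.sum(dim=0)

def predict(a1, a2):
    # Eq. (12): P^A(y_l|x) = sum_{i,j} P(y_l|a_i,a_j) * P(a_i,a_j|x)
    cond = H.clamp(min=eps) / G.clamp(min=eps)          # P(y | a^1, a^2)
    return torch.einsum('lij,bij->bl', cond, joint(a1, a2))
```

In actual training, per Equation (14), the current batch's \(h(t_{1})\) and \(g(t_{1})\) would stay inside the computation graph (not detached) and be combined with the \(H\), \(G\) accumulated over the most recent T batches before applying the cross-entropy loss of Equation (19).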
However, in our opinion, these assumptions can neither be proved nor falsified, and we do not find any exceptions until now. Since this theory can not be mathematically proved, we can only validate it through experiment. Finally, our proposed indeterminate probability theory is an extension of classical probability theory, and classical probability theory is one special case to our theory. More details to understand our theory, see Appendix A. ## 4 Training ### Training Strategy Given an input sample \(x_{t}\) from a mini batch, with a minor modification of Equation (12): \[P^{\mathbb{A}}\left(y_{l}\mid x_{t}\right)\] \[=\sum_{\mathbb{A}}\left(\frac{\sum_{k=b\cdot t_{0}+1}^{b\cdot t_{ 1}}\left(y_{l}(k)\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}{\sum_{k=b\cdot t _{0}+1}^{b\cdot t_{1}}\left(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right)}\prod_ {j=1}^{N}\alpha_{i_{j}}^{j}(t)\right)\] \[\approx\sum_{\mathbb{A}}\left(\frac{\max(H+h(t1),\epsilon)}{\max (G+g(t1),\epsilon)}\cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)\right) \tag{14}\] Where \(b\) is for batch size, \(t_{0}=\max(0,t_{1}-T)\), \(t_{1}=\left\lceil\frac{t}{b}\right\rceil,t=1,2,\ldots,n\). Hyper-parameter T is for forgetting use, i.e., \(H\) and \(G\) are calculated from the recent T batches. Hyper-parameter T is introduced because at beginning of training phase the calculated result with Equation (10) is not good yet. And the \(\epsilon\) on the denominator is to avoid dividing zero, the \(\epsilon\) on the numerator is to have an initial value of 1. Besides, \[h(t1)=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(y_{l}(k) \cdot\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(k)\right) \tag{15}\] \[g(t1)=\sum_{k=b\cdot(t_{1}-1)+1}^{b\cdot t_{1}}\left(\prod_{j=1} ^{N}\alpha_{i_{j}}^{j}(k)\right)\] (16) \[H=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}h(k),\text{for }t_{1}=2,3,\ldots\] (17) \[G=\sum_{k=\max(1,t_{1}-T)}^{t_{1}-1}g(k),\text{for }t_{1}=2,3,\ldots \tag{18}\] Where \(H\) and \(G\) are not needed for gradient updating during back-propagation. We use cross entropy as loss function: \[\mathcal{L}=-\sum_{l=1}^{m}\left(y_{l}(k)\cdot\log P^{\mathbb{A}}\left(y_{l} \mid x_{t}\right)\right) \tag{19}\] The detailed algorithm implementation is shown in Algorithm 1. With Equation (14) we can get that \(P^{\mathbb{A}}\left(y_{l}\mid x_{1}\right)=1\) for the first input sample if \(y_{l}\) is the ground truth and batch size is 1. Therefore, for IPNN the loss may increase at the beginning and fall back again while training. Figure 4: Independence illustration of inference phase with Bayesian network ### Multi-degree Classification (Optional) In IPNN, the model outputs N different random variables \(A^{1},A^{2},\ldots,A^{N}\), if we use part of them to form sub-joint sample spaces, we are able of doing sub classification task, the sub-joint spaces are defined as \(\Lambda^{1}\subset\mathbb{A},\Lambda^{2}\subset\mathbb{A},\ldots\) The number of sub-joint sample spaces is: \[\sum_{j=1}^{N}{N\choose j}=\sum_{j=1}^{N}\left(\frac{N!}{j!(N-j)!}\right) \tag{20}\] If the input samples are additionally labeled for part of sub-joint sample spaces3, defined as \(Y^{\tau}\in\{y_{1}^{\tau},y_{2}^{\tau},\ldots,y_{m^{\tau}}^{\tau}\}\). The sub classification task can be represented as \(\left\langle X,\Lambda^{1},Y^{1}\right\rangle,\left\langle X,\Lambda^{2},Y^{2 }\right\rangle,\ldots\) With Equation (19) we have, Footnote 3: It is labelling of input samples, not sub-joint sample points. 
\[\mathcal{L}^{\tau}=-\sum_{l=1}^{m^{\tau}}\left(y_{l}^{\tau}(t)\cdot\log P^{\Lambda^{\tau}}\left(y_{l}^{\tau}\mid x_{t}\right)\right),\tau=1,2,\ldots \tag{21}\]

Together with the main loss, the overall loss is \(\mathcal{L}+\mathcal{L}^{1}+\mathcal{L}^{2}+\dots\) In this way, we can perform multi-degree classification tasks. The additional labels can guide the convergence of the joint sample spaces and speed up the training process, as discussed later in Section 5.2.

### Multi-degree Unsupervised Clustering

If there are no additional labels for the sub-joint sample spaces, the model is actually doing unsupervised clustering while training. Every sub-joint sample space describes one kind of clustering result, and in total we have the number of clustering situations given by Equation (20).

### Design of the Joint Sample Space

As proved in Appendix B, we have the following proposition:

**Proposition 4.1**.: _IPNN converges to the global minimum only when \(P\left(y_{l}|a_{i_{1}}^{1},a_{i_{2}}^{2},\ldots,a_{i_{N}}^{N}\right)=1\) for \(\prod_{j=1}^{N}\alpha_{i_{j}}^{j}(t)>0,i_{j}=1,2,\ldots,M_{j}\). In other words, each joint sample point corresponds to a unique category. However, a category can correspond to one or more joint sample points._

**Corollary 4.2**.: _A necessary condition for achieving the global minimum is that the split shape defined in Equation (1) satisfies \(\prod_{j=1}^{N}M_{j}\geq m\), where \(m\) is the number of classes. That is, for a classification task, the number of joint sample points must be no smaller than the number of classes._

Besides, since the unsupervised clustering (Section 4.3) depends on the input sample distribution, the split shape should not conflict with the multi-degree clustering. For example, if the main attribute of a dataset shows three different colors and the split shape is \(\{M_{1}=2,M_{2}=2,\dots\}\), the unsupervised clustering will be hindered; in this case, the shape of one random variable is better set to 3. As analyzed in Appendix C, there are two local-minimum situations, and an improper split shape will drive IPNN into a local minimum. In addition, the latter part of Proposition 4.1 implies that IPNN may be able to perform further unsupervised classification; this is beyond the scope of this discussion.

## 5 Experiments and Results

To evaluate the effectiveness of the proposed approach, we conducted experiments on MNIST (Deng, 2012) and a self-designed toy dataset.

### Unsupervised Clustering

As discussed in Section 4.3, IPNN is able to perform unsupervised clustering; we evaluate it on MNIST. The split shape is set to \(\{M_{1}=2,M_{2}=10\}\), which means we have two random variables, and the first random variable is used to divide the MNIST labels \(0,1,\ldots,9\) into two clusters. The clustering results are shown in Figure 5. We find that only when \(\epsilon\) in Equation (14) is set to a relatively high value does IPNN prefer to put the numbers 1, 4, 7, 9 into one cluster and the rest into another; otherwise, the clustering results differ from one training round to the next. The reason is unknown; our intuition is that a high \(\epsilon\) makes it harder for each single category to capture a free joint sample point, so categories with similar attributes are more likely to capture one together.
\[\frac{1}{round}\cdot\sum_{i=1}^{round}\left(\text{number of samples with label }l\text{ in one cluster at the }i^{th}\text{ round}\right) \tag{22}\]

### Avoiding Local Minimum with Multi-degree Classification

We designed another experiment to check the performance of multi-degree classification (see Section 4.2): classification of binary vectors into decimal values. The model inputs are the binary vectors from '000000000000' to '111111111111', labeled from 0 to 4095. The split shape is set to \(\{M_{1}=2,M_{2}=2,\ldots,M_{12}=2\}\), which is exactly enough for a full classification. Besides, the model weights are initialized from a uniform distribution on \([-0.3,0.3]\), as discussed in Appendix C. The result is shown in Figure 6: without multi-degree classification, IPNN goes to a local minimum with only \(69.5\%\) training accuracy. With additional labels for just 12 sub-joint spaces, IPNN reaches the global minimum with \(100\%\) training accuracy. Therefore, with only \(\sum_{j=1}^{12}2=24\) output nodes, IPNN can classify 4096 categories. Theoretically, if a model with 100 output nodes is split into 10 equal parts, it can classify \(10^{10}\) (10 billion) categories. Hence, compared with a classification model with a single 'softmax' function, IPNN does not suffer from the computational-cost problem (see Section 1).

### Hyper-parameter Analysis

IPNN has two important hyper-parameters: the split shape and the forget number T. In this section, we analyze them on MNIST; the batch size is set to 64 and \(\epsilon=10^{-6}\). As shown in Figure 7, if the number of joint sample points (see Equation (3)) is smaller than 10, IPNN is not able to make a full classification, and its test accuracy is proportional to the number of joint sample points; as the number of joint sample points increases beyond 10, IPNN reaches the global minimum in all three cases. This result is consistent with our analysis. However, there are exceptions: the accuracy for the split shapes \(\{M_{1}=2,M_{2}=5\}\) and \(\{M_{1}=2,M_{2}=6\}\) is not high. From Figure 5 we know that for the first random variable, IPNN tends to put the numbers 1, 4, 7, 9 into one cluster and the rest into another, so this clustering result requires the split shape to be at least \(\{M_{1}=2,M_{2}\geq 6\}\) in order to have enough free joint sample points. That is why the accuracy for the split shape \(\{M_{1}=2,M_{2}=5\}\) is not high. (In the \(\{M_{1}=2,M_{2}=6\}\) case, only three numbers are in one cluster.)

Another test in Figure 8 shows that IPNN goes to a local minimum as the forget number T increases and cannot reach the global minimum without further actions; hence, a relatively small forget number T should be found by trial and error.

## 6 Conclusion

For a classification task, we proposed an approach to extract the attributes of input samples as random variables, and these variables are used to form a large joint sample space. After IPNN converges to the global minimum, each joint sample point will correspond to a unique category, as discussed in Proposition 4.1.

Figure 5: Unsupervised clustering results on MNIST: \(\epsilon=2\), batch size \(b=64\), forget number \(T=5\), 5 epochs per round. The test was repeated for 876 rounds with the same configuration (different random seeds) in order to check the stability of the clustering performance; each round's clustering result is aligned using Jaccard similarity (Raff and Nicholas, 2017), and the percentage is calculated with Equation (22).
Figure 6: Loss of the multi-degree classification of ‘binary to decimal’ on the training dataset. Input samples are additionally labeled with \(Y^{i}\in\{0,1\}\) according to whether the \(i^{th}\) bit is 0 or 1. \(Y^{i}\) corresponds to the sub-joint sample space \(\Lambda^{i}\) with split shape \(\{M_{i}=2\}\), \(i=1,2,\ldots,12\). Batch size is 4096, forget number \(T=5\), \(\epsilon=10^{-6}\).

As the joint sample space increases exponentially, the classification capability of IPNN will increase accordingly. We can then use the advantages of classical probability theory: for example, for a very large joint sample space, we can use the Bayesian network approach or mutual independence among variables (see Appendix D) to simplify the model and improve the inference efficiency. In this way, a more complex Bayesian network could be built for more complex reasoning tasks.

## Acknowledgment

Thanks to Mr. Su Jianlin for his good introduction to the VAE model4, which motivated the implementation of this idea.

Footnote 4: Website: [https://kexue.fm/archives/5253](https://kexue.fm/archives/5253)
2310.09241
Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI, i.e., predicting the judgment of the case in terms of case fact description. Precedents are the previous legal cases with similar facts, which are the basis for the judgment of the subsequent case in national legal systems. Thus, it is worthwhile to explore the utilization of precedents in the LJP. Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task. These can be broken down into two categories: large language models (LLMs) and domain-specific models. LLMs are capable of interpreting and generating complex natural language, while domain models are efficient in learning task-specific information. In this paper, we propose the precedent-enhanced LJP framework (PLJP), a system that leverages the strength of both LLM and domain models in the context of precedents. Specifically, the domain models are designed to provide candidate labels and find the proper precedents efficiently, and the large models will make the final prediction with an in-context precedents comprehension. Experiments on the real-world dataset demonstrate the effectiveness of our PLJP. Moreover, our work shows a promising direction for LLM and domain-model collaboration that can be generalized to other vertical domains.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, Kun Kuang
2023-10-13T16:47:20Z
http://arxiv.org/abs/2310.09241v1
# Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration

###### Abstract

Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI, i.e., predicting the judgment of the case in terms of case fact description. Precedents are the previous legal cases with similar facts, which are the basis for the judgment of the subsequent case in national legal systems. Thus, it is worthwhile to explore the utilization of precedents in the LJP. Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task. These can be broken down into two categories: large language models (LLMs) and domain-specific models. LLMs are capable of interpreting and generating complex natural language, while domain models are efficient in learning task-specific information. In this paper, we propose the precedent-enhanced LJP framework (PLJP) - a system that leverages the strength of both LLM and domain models in the context of precedents. Specifically, the domain models are designed to provide candidate labels and find the proper precedents efficiently, and the large models will make the final prediction with an in-context precedents comprehension. Experiments on the real-world dataset demonstrate the effectiveness of our PLJP. Moreover, our work shows a promising direction for LLM and domain-model collaboration that can be generalized to other vertical domains.

## 1 Introduction

Legal AI has been the subject of research for several decades, with the aim of assisting individuals in various legal tasks, including legal QA [15], court view generation [20], legal entity recognition [1], and so on. As one of the most important legal tasks, legal judgment prediction (LJP) aims to predict the legal judgment of the case based on the case fact description. The legal judgment typically includes the law article, charge and prison term.

Precedents, which refer to previous cases with similar fact descriptions, hold a crucial position within national legal systems [16]. On a more macro level, precedents are known as the collective body of judge-made laws in a nation [1]. They serve the purpose of ensuring consistency in judicial decisions, providing greater legal guidance to judges and facilitating legal progress and evolution to meet dynamic legal demands. In the Common Law system, the precedents are the mandatory basis of the judgment of the subsequent case [14]. In the Civil Law system, judge-made laws are perceived as secondary legal sources while written laws are the basic legal sources [1]. In the contemporary era, there is also a growing trend to treat the precedents as a source of "soft" law [21], and judges are expected to take them into account when reaching a decision [16]. Thus, it is worthwhile to explore the utilization of precedents in legal judgment prediction.

With the development of deep learning, many technologies have been adopted in the LJP task, which can be split into two categories: large language models (LLMs) and domain-specific models (Ge et al., 2023). Owing to extensive training, LLMs are good at understanding and generating complex natural language, as well as in-context learning. On the other hand, domain-specific models are designed to cater to specific tasks and offer cost-effective solutions.

Figure 1: An illustration of the judicial process. Our motivation is to promote the collaboration between the domain model and LLM (right part) for simulating the judicial process of the human judge (left).
However, when it comes to incorporating precedents into the LJP task, both categories of models face certain limitations. LLMs, constrained by their prompt length, struggle to grasp the meaning of numerous abstract labels and accurately select the appropriate one. For domain models, though trained with label annotations, the drawback is the limited ability to comprehend and distinguish the similarities and differences between the precedents and the given case.

In this paper, as Fig. 1 shows, we combine the LLMs with the domain-specific models and propose a novel precedent-enhanced legal judgment prediction framework (PLJP). Specifically, the domain models contribute by providing candidate labels and finding the proper precedents from the case database effectively; the LLMs make the final prediction through an in-context precedent comprehension.

Following the previous LJP works (Zhong et al., 2018; Yue et al., 2021; Dong and Niu, 2021), our experiments are conducted on the publicly available real-world legal dataset. To prevent any potential data leakage during the training of the LLMs, where the model may have already encountered the test cases, we create a new test set comprising cases that occurred after 2022. This is necessary because the LLMs we utilize have been trained on a corpus collected only until September 2021. By doing so, we ensure a fair evaluation of the PLJP framework. Remarkably, our proposed PLJP framework achieves state-of-the-art (SOTA) performance on both the original test set and the additional test set.

To sum up, our main contributions are as follows:

* We address the important task of legal judgment prediction (LJP) by taking precedents into consideration.
* We propose a novel precedent-enhanced legal judgment prediction (PLJP) framework that leverages the strength of both LLM and domain models.
* We conduct extensive experiments on the real-world dataset and create an additional test set to ensure the absence of data leakage during LLM training. The results obtained on both the original and additional test sets validate the effectiveness of the PLJP framework.
* Our work shows a promising direction for LLM and domain-model collaboration that can be generalized to other vertical domains.

We make all the codes and data publicly available to motivate other scholars to investigate this novel and interesting research direction1.

Footnote 1: The github link is hidden for the anonymous review.

## 2 Related Work

### Legal AI

Legal Artificial Intelligence (Legal AI) aims to enhance tasks within the legal domain through the utilization of artificial intelligence techniques (Zhong et al., 2020; Katz et al., 2023). Collaborative efforts between researchers in law and computer science have long been underway to explore the potential of Legal AI and its applications across various legal tasks. These tasks encompass areas such as legal question answering (QA) (Monroy et al., 2009), legal entity recognition (Cardellino et al., 2017), court view generation (Wu et al., 2020), legal summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019), legal language understanding (Chalkidis et al., 2022) and so on. In this work, we focus on the task of legal judgment prediction, which is one of the most common tasks in Legal AI.
### Legal Judgment Prediction Legal judgment prediction (LJP) aims to predict judgment results based on the fact descriptions automatically (Lin et al., 2012; Chalkidis et al., 2019; Yue et al., 2021; Xu et al., 2020; Niklaus et al., 2021; Malik et al., 2021; Feng et al., 2022; Lyu et al., 2022; Gan et al., 2022). The LJP methods in earlier years required manually extracted features (Keown, 1980), which is simple but costly. Owing to the prosperity of machine learning (Wu et al., 2022; Shen et al., 2022; Li et al., 2022, 2022; Zhang et al., 2022; Li et al., 2023; Zhang et al., 2023), researchers began to formalize the LJP problem with machine learning methods. These data-driven methods can learn the features with far less labor (e.g., only the final labels are required). Sulea et al. (2017) developed an ensemble system that averages the output of multiple SVM to improve the performance of LJP. Luo et al. (2017) utilized an attention mechanism in the LJP. Zhong et al. (2018) considered the dependency of the sub-tasks in the LJP. Yue et al. (2021) investigated the problem by separating the representation of fact description into different embedding. Liu et al. (2022) used contrastive learning in the LJP. However, these existing LJP methods tend to overlook the significance of precedents. In this study, we propose a precedent-enhanced LJP framework (PLJP) that leverages the collaboration between domain-specific models and large language models (LLMs) to address the LJP task. ### Precedent Retrieval The precedent is the basis of judgment in the Common Law system, and also an important reference for decision-making in the Civil Law system. Therefore, precedent retrieval is another valuable task in Legal AI (Althammer et al., 2021). There are two main precedent retrieval models: expert knowledge-based models and natural language processing (NLP)-based models (Bench-Capon et al., 2012). Expert knowledge-based models use the designed sub-elements to represent the legal cases (Saravanan et al., 2009), while NLP-based models mainly convert the text into embeddings and then calculate the similarity from the embedding level (Ma et al., 2021; Chalkidis et al., 2020). Most retrieval models required additional annotation so can not be directly applied to the LJP task. In our paper, we use an unsupervised dense retrieval model (Izacard et al., 2022) to get the precedents, which can be updated by other retrieval models if needed. ### Large Language Models Large language models (LLMs), such as ChatGPT, have attracted widespread attention from society (Zhao et al., 2023). With pre-training over large-scale corpora, LLMs show strong capabilities in interpreting and generating complex natural language, as well as reasoning (e.g., in-context learning). The technical evolution of LLMs has been making an important impact on the fields of natural language processing (Brown et al., 2020; Touvron et al., 2023), computer vision (Shao et al., 2023; Wu et al., 2023), and reinforcement learning (Du et al., 2023). In the legal domain, LLMs can also be used for many tasks such as legal document analysis and legal document writing (Sun, 2023). However, in the prediction tasks, which can involve dozens of abstract labels, the performance of LLMs is not as good as in generation tasks, due to the limited prompt length. In this paper, we explore the utilization of LLMs in the LJP task with the collaboration of domain-specific models. ## 3 Problem Formulation In this work, we focus on the problem of legal judgment prediction. 
We first clarify the definition of the terms as follows.

\(\bullet\)**Fact Description** refers to a concise narrative of the case, which typically includes the timeline of events, the actions or conduct of each party, and any other essential details that are relevant to the case. Here we define it as a token sequence \(f=\{w_{t}^{f}\}_{t=1}^{l_{f}}\), where \(l_{f}\) is the length.

\(\bullet\)**Judgment** is the final decision made by a judge in a legal case based on the facts and the precedents. It typically consists of the law article, the charge, and the prison term. We represent the judgment of a case as \(j=(a,c,t)\), where \(a\), \(c\), \(t\) refer to the labels of article, charge and prison term, respectively.

\(\bullet\)**Precedent** is a previous case with a similar fact. The judgments of the precedents are important references for the current case. Here, a precedent is defined as \(p=(f_{p},j_{p})\), where \(f_{p}\) is its fact description and \(j_{p}\) is its judgment. For a given case, there can be several precedents, denoted as \(P=\{p_{1},p_{2},...,p_{n}\}\), where \(n\) is the number of precedents.

Then the problem can be defined as:

**Problem 1** (Legal Judgment Prediction).: _Given the fact description \(f\), our task is to get and comprehend the precedents \(P\), then predict the judgment \(j=(a,c,t)\)._

## 4 Precedent-Enhanced LJP (PLJP)

In this section, we describe our precedent-enhanced legal judgment prediction framework (PLJP); Fig. 2 shows the overall framework.

### Case Database Construction

Before we use the precedents, we have to collect a large number of previous cases to construct a case database. Since the fact descriptions are usually long and elaborate, it is difficult for the models to get the proper precedents. To this end, we reorganize the fact descriptions of these previous cases with the help of LLMs.

#### 4.1.1 Fact Reorganization

Given a fact description of a case, we summarize it from three aspects: subjective motivation, objective behavior, and ex post facto circumstances. The reorganization doesn't require human annotation and is completed by the LLMs with the following prompt:

_"A fact description can be categorized into subjective motivation, objective behavior, and ex post facto circumstances. Subjective motivation refers to the psychological attitude of the perpetrator towards their harmful actions and their consequences, including intent, negligence, and purposes of the crime. Objective behavior pertains to the necessary conditions for constituting a crime in terms of observable activities, including harmful conduct, harmful results, and the causal relationship between the conduct and the results. Ex post facto circumstances are various factual situations considered when determining the severity of penalties. Mitigating circumstances for lenient punishment include voluntary surrender and meritorious conduct, while aggravating circumstances for harsher punishment include recidivism. Based on the provided information, your task is to summarize the following facts."_

The reorganization reduces the length of facts and makes the precedents easy to get and comprehend in the PLJP. After the reorganization, the fact description \(f\) is translated to a triplet \((sub,obj,ex)\), which indicates the subjective motivation, objective behavior, and ex post facto circumstances, respectively. Finally, a previous case in the case database is stored as a pair of reorganized facts and the judgment.
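To make the reorganization step concrete, here is a minimal sketch of how a case database entry could be produced; `call_llm` is a hypothetical text-in/text-out client, and the parsing of the reply into the \((sub,obj,ex)\) triplet assumes a labeled-section format, which is an illustration rather than the paper's specification.

```python
REORG_PROMPT = (
    "A fact description can be categorized into subjective motivation, "
    "objective behavior, and ex post facto circumstances. [...] Based on the "
    "provided information, your task is to summarize the following facts.\n\n{fact}"
)

def reorganize_fact(fact, call_llm):
    """Summarize a raw fact description into the (sub, obj, ex) triplet.

    `call_llm` is a hypothetical text-in/text-out client; the labeled-section
    reply format parsed below is an assumption, not the paper's specification."""
    reply = call_llm(REORG_PROMPT.format(fact=fact))
    triplet = {"sub": "", "obj": "", "ex": ""}
    current = None
    for line in reply.splitlines():
        low = line.strip().lower()
        if low.startswith("subjective"):
            current = "sub"
        elif low.startswith("objective"):
            current = "obj"
        elif low.startswith("ex post"):
            current = "ex"
        elif current is not None:
            triplet[current] += line.strip() + " "
    return triplet

# A case database entry pairs the reorganized facts with the judgment:
# {"facts": (sub, obj, ex), "judgment": {"article": a, "charge": c, "term": t}}
```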
### Legal Judgment Prediction

Next, we describe the collaboration of the LLM and domain models in legal judgment prediction.

#### 4.2.1 Domain Models

The domain models are trained on specific datasets, aiming to solve certain tasks. Here, we use two kinds of domain models: the predictive model and the retrieval model.

Predictive model. The predictive model takes the fact description as the input and outputs the candidate labels of the three sub-tasks (i.e., law article, charge, prison term). Since the fact description \(f=\{w_{t}^{f}\}_{t=1}^{l_{f}}\) is a sequence of words, we first transform it into an embedding sequence \(H^{f}\in\mathbb{R}^{l_{f}\times d}\) with an encoder, where \(d\) is the embedding dimension, and then apply max-pooling and a linear projection to obtain the label probability distribution \(P\in\mathbb{R}^{m}\): \[\begin{split} h^{f}&=\mathrm{MaxPooling}(H^{f}),\\ P&=\mathrm{Softmax}(W^{p}\cdot h^{f}+b^{p}),\end{split} \tag{2}\] where \(W^{p}\in\mathbb{R}^{m\times d}\) and \(b^{p}\in\mathbb{R}^{m}\) are learnable parameters. Note \(m\) varies in different sub-tasks. Then, each sub-task gets its candidate labels according to the probability distribution \(P\), and the number of candidate labels is equal to the number of precedents \(n\).

Retrieval model. The retrieval model aims to get the proper precedents of the given case based on its reorganized fact \((sub,obj,ex)\). Formally, to get the similarity score of any two texts \(D_{1}\) and \(D_{2}\), we first encode each of them independently using the same encoder: \[h_{D_{1}}=\mathrm{Encoder}(D_{1}),\quad h_{D_{2}}=\mathrm{Encoder}(D_{2}), \tag{3}\] where \(h_{D_{1}}\in\mathbb{R}^{d^{\prime}}\) and \(h_{D_{2}}\in\mathbb{R}^{d^{\prime}}\) are the embeddings of each, and \(d^{\prime}\) is the dimension. The similarity score \(s(D_{1},D_{2})\) is then the cosine similarity of \(h_{D_{1}}\) and \(h_{D_{2}}\): \[s(D_{1},D_{2})=\frac{h_{D_{1}}\cdot h_{D_{2}}}{\left\|h_{D_{1}}\right\|\left\|h_{D_{2}}\right\|}. \tag{4}\]

Here we concatenate the \(sub\), \(obj\) and \(ex\) into a whole text to calculate the similarity score of the given case and the cases in the case database. For each candidate label, we pick one case as the precedent: the case that has the highest similarity score and has the same label. For example, if the label "Theft" is in the candidate labels in the charge prediction, we will find the most similar previous case with the same label as the corresponding precedent. The one-to-one relationship between the candidate label and precedent helps the LLM distinguish the differences among the labels. In other words, the precedent serves as a supplementary explanation of the label. Finally, we get precedents \(P=\{p_{1},p_{2},...,p_{n}\}\) for the given case.

#### 4.2.2 LLMs

The large language models are models with billions of parameters, which are trained on large-scale corpora and show strong capabilities in interpreting and generating complex natural language. LLMs contribute to PLJP by fact reorganization and in-context precedent comprehension.

Fact Reorganization. The fact reorganization is described in case database construction (Sec. 4.1.1), which aims to summarize the fact description from three aspects by the LLMs. Besides the database contribution, as Fig. 2 shows, when a new test case comes, the LLMs will reorganize the fact description with the same prompt.
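A minimal sketch of Equations (2)-(4) is given below; the encoder, tensor shapes, and case-database layout are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveHead(nn.Module):
    """Eq. (2): max-pool the token embeddings H^f, then a linear layer + softmax."""
    def __init__(self, d, m):
        super().__init__()
        self.proj = nn.Linear(d, m)                 # W^p in R^{m x d}, b^p in R^m

    def forward(self, H_f):                         # H_f: (l_f, d)
        h_f = H_f.max(dim=0).values                 # MaxPooling over tokens
        return F.softmax(self.proj(h_f), dim=-1)    # label distribution P in R^m

def similarity(h1, h2):
    """Eq. (4): cosine similarity of two text embeddings (Eq. (3) supplies them)."""
    return F.cosine_similarity(h1, h2, dim=0)

def pick_precedents(candidate_labels, case_db, query_emb):
    """One precedent per candidate label: the most similar case sharing that label."""
    precedents = []
    for label in candidate_labels:
        pool = [case for case in case_db if case["label"] == label]
        if pool:
            precedents.append(
                max(pool, key=lambda c: similarity(query_emb, c["emb"]).item()))
    return precedents

# Candidate labels are the top-n entries of P, n being the number of precedents:
# candidate_labels = torch.topk(P, k=n).indices.tolist()
```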
In-Context Precedent Comprehension. Since LLMs are capable of understanding complex natural language, we stack the given case with its precedents and let the LLMs make the final prediction by an in-context precedent comprehension. Specifically, the prompt of law article prediction is designed as follows:

_"Based on the facts, we select the candidate law articles by the domain models and select the following three precedents based on the candidate law articles. Please comprehend the difference among the precedents, then compare them with the facts of this case, and choose the final label."_

Considering the topological dependencies among the three sub-tasks (Zhong et al., 2018), in the prediction of the charge we add the predicted law article to the prompt; in the prediction of the prison term we add the predicted law article and charge.

### Training

In PLJP, considering the realizability, we train the domain models on legal datasets and leave the LLMs unchanged. To train the predictive models, the cross-entropy loss is employed. As for the retrieval models, a contrastive loss is used, following Izacard et al. (2022).

## 5 Experiments

### Datasets

Our experiments are conducted on the publicly available real-world dataset CAIL2018. Each case includes a fact description accompanied by a complete judgment encompassing three labels: law articles, charges, and prison terms2.

Footnote 2: Prison terms are divided into non-overlapping intervals.

To mitigate the potential data leakage during the training of LLMs, which were trained on corpora collected until September 2021, we have compiled a new dataset called CJO22. This dataset exclusively contains legal cases that occurred after 2022, sourced from the same origin as CAIL20183. However, due to its limited size, the newly collected CJO22 dataset is inadequate for the training purposes of the domain models. Consequently, we utilize it solely as an additional test set. To facilitate meaningful comparisons, we retain only the labels that are common to both datasets, considering that the labels may not be entirely aligned.

Footnote 3: [https://wenshu.court.gov.cn/](https://wenshu.court.gov.cn/)

Tab. 1 shows the statistics of the processed datasets, and all the experiments are conducted on the same datasets. For the CAIL2018 dataset, we randomly divide it into training, validation and test sets according to the ratio of 8:1:1. The previous cases in the case database are sampled from the training set, and we set the amount to 4000.
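The sub-task chaining described in Sec. 4.2.2 can be sketched as follows; the template paraphrases the prompt quoted above, and `call_llm` together with the data layout is hypothetical.

```python
PREDICT_TEMPLATE = (
    "Based on the facts, we select the candidate {task} labels by the domain models "
    "and select the following precedents based on the candidate labels. Please "
    "comprehend the difference among the precedents, then compare them with the "
    "facts of this case, and choose the final label.\n"
    "{context}Facts: {facts}\nCandidates: {candidates}\nPrecedents:\n{precedents}"
)

def predict_judgment(facts, candidates, precedents, call_llm):
    """Run the three sub-tasks in their topological order, feeding each
    prediction into the prompts of the later sub-tasks."""
    judgment, context = {}, ""
    for task in ("law article", "charge", "prison term"):
        shown = "\n".join(f"- {p['label']}: {p['facts']}" for p in precedents[task])
        prompt = PREDICT_TEMPLATE.format(task=task, context=context, facts=facts,
                                         candidates=candidates[task], precedents=shown)
        judgment[task] = call_llm(prompt).strip()
        context += f"Predicted {task}: {judgment[task]}.\n"  # dependency chaining
    return judgment
```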
### Baselines

For domain-specific LJP baselines, we implement the following for comparison: **CNN** [11] extracts text features through convolutional operations with different kernels for text classification; **BERT** [12] is a pre-trained language model that can be easily fine-tuned on downstream tasks; **TopJudge** [23] uses multi-task learning and captures the dependencies among the three sub-tasks in LJP; **NeurJudge** (Yue et al., 2021) splits the fact description into different parts for making predictions; **R-Former** (Dong and Niu, 2021) formalizes LJP as a node classification problem over a global consistency graph and introduces relational learning; **LADAN** (Xu et al., 2020) uses graph distillation to extract discriminative features of the fact; **Retri-BERT** (Chalkidis and Kementchedjhieva, 2023) retrieves similar documents to augment the input document representation for multi-label text classification; **EPM** (Feng et al., 2022) locates event-related information essential for judgment while utilizing cross-task consistency constraints among the subtasks; **CTM** (Liu et al., 2022) establishes an LJP framework with case triple modeling from contrastive case relations.

We use the LLM baselines as follows4: **Dav003** denotes text-davinci-003 and **3.5turbo** denotes gpt-3.5-turbo. These LLMs are both from the GPT-3.5 family released by OpenAI and can understand and generate complex natural language5.

Footnote 4: We give a fixed example in the prompt to help the LLMs understand the tasks.

Footnote 5: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)

For PLJP, we take CNN and BERT as the predictive models and text-davinci-003 as the implementation of the LLM, named PLJP(CNN) and PLJP(BERT). The top-k accuracy of CNN and BERT is shown in the Appendix. Considering the length limit of the prompt, we set the number of precedents to 3. We also conduct ablation experiments as follows: **PLJP w/o p** refers to the removal of precedents, where the prediction of labels is done solely based on the candidate labels using the LLM; **PLJP w/o c** denotes that we remove the candidate labels and predict the label only with the fact description and precedents; **PLJP w/o d** means we predict the three labels independently instead of considering the dependencies among the three subtasks; **PLJP w/o r** denotes that we find precedents based on the raw fact instead of the reorganized fact; **PLJP w/ e** means we let the LLMs generate the explanation of the prediction as well. In the ablation study, PLJP means PLJP(BERT).

### Experiment Settings

Here we describe the implementation of PLJP in our experiments. Note that all the LLMs and domain models are replaceable in the PLJP framework. In the experiments, for the LLMs, we directly use the APIs provided by OpenAI. For the domain models, we use the unsupervised dense retrieval model (Izacard et al., 2022) in precedent retrieval, which gets the precedents from the case database according to the reorganized facts. For other domain models such as TopJudge and NeurJudge, we use the training settings from the original papers. For the metrics, we employ Accuracy (Acc), Macro-Precision (Ma-P), Macro-Recall (Ma-R) and Macro-F1 (Ma-F).

\begin{table}
\begin{tabular}{l c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{4}{c|}{CJO22} & \multicolumn{4}{c}{CAIL2018} \\
 & Acc & Ma-P & Ma-R & Ma-F & Acc & Ma-P & Ma-R & Ma-F \\
\hline
CNN [11] & 76.14 & 35.48 & 38.55 & 35.39 & 80.50 & 40.10 & 38.33 & 38.49 \\
BERT [14] & 82.62 & 45.89 & 47.91 & 45.83 & 82.77 & 36.82 & 35.94 & 35.82 \\
Roberta [15] & 80.32 & 42.36 & 44.22 & 41.80 & 83.08 & 48.09 & 44.25 & 44.87 \\
TopJudge [23] & 78.73 & 40.38 & 41.47 & 40.09 & 80.46 & 40.96 & 40.96 & 38.24 \\
R-Former [23] & 87.69 & 53.03 & 49.35 & 50.23 & **87.82** & 56.13 & 56.57 & 55.81 \\
LADAN [10] & 79.44 & 48.43 & 44.13 & 46.18 & 82.82 & 42.57 & 39.00 & 40.71 \\
NeurJudge [24] & 71.38 & 52.86 & 53.52 & 52.62 & 76.91 & 55.95 & 52.92 & 53.56 \\
EPM [12] & 84.19 & 47.21 & 43.79 & 44.39 & 85.80 & 49.08 & 45.76 & 47.32 \\
CTM [15] & 79.44 & 47.83 & 42.25 & 43.43 & 84.72 & 46.46 & 44.83 & 45.10 \\
Dav003 & 2.10 & 0.82 & 0.17 & 0.26 & 1.02 & 0.30 & 0.08 & 0.13 \\
3.5turbo & 9.13 & 2.54 & 1.61 & 1.53 & 4.08 & 4.95 & 3.64 & 2.30 \\
\hline
PLJP(CNN) & 87.67 & 55.21 & 55.59 & 54.37 & 86.05 & 58.08 & 56.46 & 54.92 \\
PLJP(BERT) & **94.18** & **74.65** & **76.23** & **74.84** & 87.07 & **58.81** & **57.29** & **56.63** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Results of law article prediction, the best is **bolded** and the second best is underlined.

Figure 3: The Ma-F of PLJP with different numbers of precedents.

\begin{table}
\begin{tabular}{l c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{4}{c|}{CJO22} & \multicolumn{4}{c}{CAIL2018} \\
 & Acc & Ma-P & Ma-R & Ma-F & Acc & Ma-P & Ma-R & Ma-F \\
\hline
CNN [11] & 74.91 & 74.00 & 78.12 & 73.97 & 87.52 & 88.23 & 88.31 & 88.17 \\
BERT [14] & 80.50 & 80.34 & 81.09 & 78.36 & 89.10 & 90.10 & 89.48 & 89.63 \\
Roberta [15] & 79.26 & 78.93 & 81.25 & 78.18 & 90.30 & 91.02 & 90.97 & 90.94 \\
TopJudge [23] & 76.67 & 74.00 & 77.40 & 74.62 & 87.31 & 88.68 & 87.84 & 88.20 \\
R-Former [23] & 90.71 & **93.06** & 88.66 & **89.82** & 91.54 & 91.61 & **91.96** & **91.58** \\
LADAN [10] & 79.64 & 48.43 & 44.13 & 46.18 & 88.09 & 90.12 & 88.82 & 89.47 \\
NeurJudge [24] & 71.85 & 69.37 & 71.09 & 68.66 & 82.13 & 82.71 & 82.30 & 82.36 \\
EPM [12] & 83.49 & 80.36 & 83.29 & 81.87 & 91.20 & 90.81 & 89.99 & 90.46 \\
CTM [15] & 79.33 & 82.39 & 83.12 & 82.81 & 90.28 & 90.34 & 88.08 & 86.30 \\
Dav003 & 44.65 & 52.43 & 32.93 & 35.29 & 25.85 & 35.37 & 25.09 & 22.08 \\
3.5turbo & 58.37 & 56.03 & 40.68 & 42.62 & 49.65 & 42.29 & 34.05 & 31.85 \\
\hline
PLJP(CNN) & 91.62 & 83.43 & 84.88 & 83.40 & 91.49 & 81.80 & 83.95 & 80.06 \\
PLJP(BERT) & **94.18** & 90.25 & **88.67** & 89.05 & **94.99** & **92.12** & 91.10 & 91.33 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Results of charge prediction, the best is **bolded** and the second best is underlined.

\begin{table}
\begin{tabular}{l c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{4}{c|}{CJO22} & \multicolumn{4}{c}{CAIL2018} \\
 & Acc & Ma-P & Ma-R & Ma-F & Acc & Ma-P & Ma-R & Ma-F \\
\hline
CNN LeCun et al. (1989) & 27.38 & 18.48 & 17.51 & 17.44 & 34.42 & 32.22 & 30.53 & 31.05 \\
BERT Devlin et al. (2019) & 36.80 & 29.83 & 27.50 & 27.03 & 40.00 & 37.53 & 33.66 & 33.58 \\
Roberta Liu et al. (2019) & 29.74 & 24.73 & 24.76 & 23.22 & 40.84 & 38.62 & **38.55** & **38.50** \\
TopJudge Zhong et al. (2018) & 27.14 & 19.76 & 17.69 & 17.94 & 35.54 & 33.55 & 31.08 & 32.00 \\
R-Former Dong and Niu (2021) & 38.63 & 32.63 & 32.76 & 29.51 & 40.70 & 36.09 & 36.76 & 35.04 \\
LADAN Xu et al. (2020) & 33.69 & 26.40 & 22.94 & 24.55 & 38.03 & 33.66 & 30.08 & 31.77 \\
NeurJudge Yue et al. (2021) & 26.80 & 26.81 & 26.85 & 25.97 & 33.53 & 36.46 & 37.26 & 36.53 \\
EPM Feng et al. (2022) & 36.91 & 30.65 & 31.61 & 30.20 & 40.25 & 37.96 & 37.00 & 37.34 \\
CTM Liu et al. (2022) & 36.81 & 27.10 & 25.96 & 26.46 & 39.56 & 38.66 & 38.02 & 37.84 \\
Dav003 & 0.47 & 5.56 & 0.21 & 0.41 & 0.68 & 10.38 & 0.49 & 0.94 \\
3.5turbo & 1.40 & 1.16 & 1.07 & 1.11 & 1.02 & 2.71 & 1.13 & 1.15 \\
\hline
PLJP(CNN) & 36.51 & 20.21 & 21.44 & 20.07 & 40.81 & 32.77 & 35.59 & 25.71 \\
PLJP(BERT) & **43.52** & **33.37** & **35.67** & **31.98** & **48.72** & **42.64** & 36.80 & 35.43 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Results of prison term prediction, the best is **bolded** and the second best is underlined.

\begin{table}
\begin{tabular}{l c c c c c c|c c c c c c}
\hline \hline
\multirow{3}{*}{Method} & \multicolumn{6}{c|}{CJO22} & \multicolumn{6}{c}{CAIL2018} \\
 & \multicolumn{2}{c}{Law Article} & \multicolumn{2}{c}{Charge} & \multicolumn{2}{c|}{Prison Term} & \multicolumn{2}{c}{Law Article} & \multicolumn{2}{c}{Charge} & \multicolumn{2}{c}{Prison Term} \\
 & Acc & Ma-F & Acc & Ma-F & Acc & Ma-F & Acc & Ma-F & Acc & Ma-F & Acc & Ma-F \\
\hline
w/o p & 54.65 & 28.32 & 83.48 & 76.33 & 35.81 & 20.84 & 85.03 & 51.54 & 85.03 & 70.07 & 32.31 & 22.58 \\
w/o c & 45.34 & 40.22 & 42.32 & 41.85 & 32.55 & 20.26 & 67.35 & 46.65 & 72.79 & 60.34 & 26.53 & 13.66 \\
w/o d & **94.18** & **74.84** & 85.58 & 70.50 & 39.53 & 20.31 & 87.07 & 56.63 & 87.41 & 73.45 & 38.09 & 21.44 \\
w/o r & 88.13 & 87.85 & 75.67 & 74.83 & 36.27 & 23.70 & 86.05 & 58.26 & 86.73 & 77.53 & 38.10 & 21.70 \\
w/ e & 90.70 & 67.90 & 80.70 & 66.53 & 35.35 & 20.21 & **98.0** & **61.64** & 85.37 & 68.48 & 38.44 & 23.14 \\
\hline
PLJP & **94.18** & **74.84** & **94.18** & **89.05** & **43.52** & **31.98** & 87.07 & 56.63 & **94.99** & **91.33** & **48.72** & **35.43** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Results of ablation experiments, the best is **bolded** and the second best is underlined.

### Experiment Results

We analyze the experimental results in this section.

Results of judgment prediction: From Tab. 2, Tab. 3 and Tab. 4, we have the following observations: 1) The LLMs alone do not perform well in the prediction tasks, especially when the label has no actual meaning (e.g., the index of the law article and the prison term). 2) By applying our PLJP framework with the collaboration of LLMs and domain models, the simple models (e.g., CNN, BERT) gain significant improvement. 3) The model performance on CJO22 is lower than that on CAIL2018, which shows the challenge of the newly constructed test set. 4) PLJP(BERT) achieves the best performance on almost all evaluation metrics in both the CAIL2018 and CJO22 test sets, which proves the effectiveness of the PLJP. 5) Compared to the prediction of the law article and charge, the prediction of the prison term is still a more challenging task. 6) The reported results of the LJP baselines are not as good as in the original papers; this may be because we keep all the low-frequency labels instead of removing them as the original papers did.

Results of ablation experiments: From Tab. 5, we can conclude that: 1) The performance gap between PLJP w/o p and PLJP demonstrates the effect of the precedents. 2) The results of PLJP w/o c prove the importance of the candidate labels. 3) Considering the topological dependence of the three sub-tasks benefits the model performance, as PLJP w/o d shows.
4) When we use the raw fact instead of the reorganized fact, the performance drops (e.g., the Acc of the prison term in CJO22 drops from 45.32% to 36.27%). 5) If we force the LLMs to generate the explanation of the prediction, the performance also drops a bit. We put cases with explanations in the Appendix.

From Fig. 3, we can find that the performance of PLJP improves as the number of precedents increases, which also proves the effectiveness of injecting precedents into the LJP.

### Case Study

Fig. 4 shows an intuitive comparison among the three methods in the process of charge prediction. Based on the fact description of the given case, the domain models provide candidate charges with the corresponding precedents. As the case shows, the defendant committed fraud by selling cars that were rented from other people. However, since the fact description contains the word "contract", baselines (e.g., R-Former and BERT) can be misled and predict the wrong charge of "Contract Fraud". Through an in-context precedent comprehension by the LLMs, PLJP(BERT) distinguishes the differences among the precedents and the given case (e.g., the crime does not occur during the contracting process, and the contract is only a means to commit the crime), and gives the right result of "Fraud".

Figure 4: The charge prediction of a given case. The green parts are useful information for prediction, while the red parts are content that can confuse the domain models.

## 6 Conclusion and Future Work

In this paper, we address the important task of legal judgment prediction (LJP) by taking precedents into consideration. We propose a novel framework called precedent-enhanced legal judgment prediction (PLJP), which combines the strength of both LLMs and domain models to better utilize (e.g., retrieve and comprehend) the precedents. Experiments on the real-world dataset prove the effectiveness of the PLJP. Based on the PLJP, in the future, we can explore the following directions: 1) Develop methods to identify and mitigate any biases that could affect the predictions and ensure fair and equitable outcomes. 2) Validate the effectiveness of LLM and domain-model collaboration in other vertical domains such as medicine and education.

### Ethical Discussion

With the increasing adoption of Legal AI in the field of legal justice, there has been a growing awareness of the ethical implications involved. Even minor errors or biases in AI-powered systems can lead to significant consequences. In light of these concerns, we must emphasize that our work is an algorithmic exploration and will not be directly used in court. Our goal is to provide suggestions to judges rather than to make final judgments without human intervention. In practical use, human judges should be the final safeguard protecting judicial fairness. In the future, we plan to study how to identify and mitigate potential biases to ensure the fairness of the model.

## 7 Limitations

In this section, we discuss the limitations of our work as follows:

\(\bullet\) We only interact with the LLMs for one round at a time. The LLMs are capable of multi-round interaction (e.g., Chain of Thought), which may help the LLM better understand the LJP task.

\(\bullet\) We validate the effectiveness of LLM and domain-model collaboration in the legal domain. It is worthwhile to explore such collaboration in other vertical domains such as medicine and education, as well as on other legal datasets (e.g., datasets from the Common Law system).
2303.09379
Remote gate control of topological transitions in moiré superlattices via cavity vacuum fields
Placed in cavity resonators with three-dimensionally confined electromagnetic wave, the interaction between quasiparticles in solids can be induced by exchanging virtual cavity photons, which can have a non-local characteristic. Here we investigate the possibility of utilizing this nonlocality to realize the remote control of the topological transition in mesoscopic moir\'{e} superlattices at full filling (one electron/hole per supercell) embedded in a split-ring terahertz electromagnetic resonator. We show that gate tuning one moir\'{e} superlattice can remotely drive a topological band inversion in another moir\'{e} superlattice not in contact but embedded in the same cavity. Our study of remote on/off switching of a topological transition provides a novel paradigm for the control of material properties via cavity vacuum fields.
Zuzhang Lin, Chengxin Xiao, Danh-Phuong Nguyen, Geva Arwas, Cristiano Ciuti, Wang Yao
2023-03-16T15:08:43Z
http://arxiv.org/abs/2303.09379v1
# Remote gate control of topological transitions in moire superlattices via cavity vacuum fields ###### Abstract Placed in cavity resonators with three-dimensionally confined electromagnetic wave, the interaction between quasiparticles in solids can be induced by exchanging virtual cavity photons, which can have a non-local characteristic. Here we investigate the possibility of utilizing this nonlocality to realize the remote control of the topological transition in mesoscopic moire superlattices at full filling (one electron/hole per supercell) embedded in a split-ring terahertz electromagnetic resonator. We show that gate tuning one moire superlattice can remotely drive a topological band inversion in another moire superlattice not in contact but embedded in the same cavity. Our study of remote on/off switching of a topological transition provides a novel paradigm for the control of material properties via cavity vacuum fields. In recent years, the strong interaction between light and condensed matter systems, typically realized in a cavity-embedded configuration, has attracted widespread research interest [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. One of the most salient feature of this regime is that the exchange of virtual cavity photons can mediate a plethora of virtual-excitation-dressed ground states [16], including superconductivity [17; 18], superfluidity [19] and charge-density-wave phases [20]. Most importantly, the virtual-photon-mediated interaction can possess a remarkable nonlocal character when the cavity photon mode field is discretized in energy while delocalized all over the electronic sample, as it has been pioneered in the context of the quantum Hall effect [21; 22; 23]. The nonlocality raises the intriguing possibility of remote control of a matter system. However, this has been largely overlooked as most research efforts regard cavity-embedded matter as a macroscopic system in the thermodynamic limit [3; 8; 24; 25; 26]. To uncover the remote control possibilities from the nonlocal characteristic, one has to consider mesoscopic configurations. In this respect, an interesting configuration is mesoscopic moire superlattice embedded in a metallic split-ring terahertz (THz) electromagnetic resonator [21; 22; 23; 24; 27; 28; 29; 30; 31; 32]. The THz resonator enjoys deep subwavelength mode confinement and strongly enhanced electric field vacuum fluctuations [30; 31]. moire superlattice-a platform for tailoring versatile material properties-- is suitable for exploring cavity control at frequency down to the THz range, given their meV scale mini-gaps tunable by twisting angles [33]. In experimental reality, these superlattices are mesoscopic with practically limited lattice sites, having spatial dimension much smaller than the cavity mode volume of the THz resonators. Most importantly, moire superlattices exhibit remarkable topological matter properties [34; 35; 33] and can serve as a prototype for remote control of topological transitions in matters. In this work, we demonstrate remote gate control of topological transition in a mesoscopic moire superlattice (moire 1), by gate tuning a second moire superlattice (moire 2) that shares the same cavity vacuum with moire 1. Within a mean-field description corroborated by exact diagonalization calculations for smaller size system, we find that the presence of a moire can perturb the cavity vacuum field, which, in turn, introduces a mass term to tune the topological transition of the moire minibands. 
This forms the basis of the cavity-mediated nonlocal interaction between two moire superlattices embedded in a common cavity. By tuning the interlayer bias applied on moire 2, a remote control of the mini-band Chern numbers of moire 1 can be realized, and vice versa. We emphasize that the present mechanism does not require any electronic contact between the two moire samples, which can also have different sizes and characteristic parameters. The principle can be straightforwardly extended to enable nonlocal interplay between multiple mesoscopic systems of distinct natures.

We consider the configuration where moire 1 and moire 2 are embedded in a THz resonator (Fig. 1). As an exemplary demonstration, let us assume that both moire systems are transition metal dichalcogenide (TMD) homobilayers with small twist angles near 0 degrees (R-type). The low-energy valence states of such a moire superlattice at the **K** (**K\({}^{\prime}\)**) valley can be described by a two-band tight-binding (TB) model with complex-amplitude next-nearest-neighbor hopping on hexagonal superlattice sites, due to the real-space Berry connection from the moire patterns; this is essentially a Haldane model [35; 33]. Without losing the essence of the physics to be discussed, we only consider one valley for each moire, as the remote control via the cavity vacuum is by itself valley-independent. In the basis of their Bloch eigenstates, \(\hat{H}_{1}\) and \(\hat{H}_{2}\) can be written as \(\hat{H}_{1}=\sum_{\mathbf{k}}(E_{g\mathbf{k}}\hat{c}_{g\mathbf{k}}^{\dagger}\hat{c}_{g\mathbf{k}}+E_{e\mathbf{k}}\hat{c}_{e\mathbf{k}}^{\dagger}\hat{c}_{e\mathbf{k}})\) and \(\hat{H}_{2}=\sum_{\mathbf{q}}(\mathcal{E}_{g\mathbf{q}}\hat{d}_{g\mathbf{q}}^{\dagger}\hat{d}_{g\mathbf{q}}+\mathcal{E}_{e\mathbf{q}}\hat{d}_{e\mathbf{q}}^{\dagger}\hat{d}_{e\mathbf{q}})\), where \(\hat{c}^{\dagger}\) (\(\hat{c}\)) and \(\hat{d}^{\dagger}\) (\(\hat{d}\)) are the fermionic creation (annihilation) operators of the quasiparticles in moire 1 and moire 2, respectively. Note that \(E\) and \(\mathcal{E}\) are the corresponding moire mini-band energies. The subscripts \(g\) and \(e\) respectively refer to the lower and upper bands, while **k** and **q** are the wavevectors of moire 1 and moire 2, respectively. The interlayer bias applied to such a moire superlattice creates an onsite energy difference between its two sublattices, which can tune the mini-band dispersion and topology locally. Let us now consider a single-mode cavity with the cavity field polarized along the plane of the moire superlattices. Within the TB model of the moire superlattices, the cavity coupling is enforced via the Peierls substitution. In the following, we will consider the light-matter Hamiltonian: \[\hat{H}=\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{1v}+\hat{H}_{2v}+\hat{H}_{f}, \tag{1}\] where \(\hat{H}_{f}=\hbar\omega\hat{a}^{\dagger}\hat{a}\) is the bare cavity Hamiltonian, with \(\omega\) the frequency of the cavity mode and \(\hbar\) the reduced Planck constant.
The operator \(\hat{H}_{1v}=\chi\left(\hat{a}+\hat{a}^{\dagger}\right)\hat{\mathcal{M}}\) (\(\hat{H}_{2v}=\chi\left(\hat{a}+\hat{a}^{\dagger}\right)\hat{\mathcal{N}}\)) describes the interaction between the cavity quantized field and moire 1 (moire 2), where \(\chi\) is the coupling strength, and \(\hat{\mathcal{M}}\equiv\sum_{fi\mathbf{k}}\mathcal{M}_{fi\mathbf{k}}\hat{c}_{f\mathbf{k}}^{\dagger}\hat{c}_{i\mathbf{k}}\) (\(\hat{\mathcal{N}}\equiv\sum_{fi\mathbf{q}}\mathcal{N}_{fi\mathbf{q}}\hat{d}_{f\mathbf{q}}^{\dagger}\hat{d}_{i\mathbf{q}}\)) is a Hermitian operator with \(i,f=g,e\). The Hamiltonian \(\hat{H}\) acts on a Hilbert space consisting of subspaces \(\xi_{n}\) (\(n=0,1,2,\ldots\)) in which the photon number is \(\langle\hat{a}^{\dagger}\hat{a}\rangle=n\). Following the Schrieffer-Wolff (SW) transformation [36] to eliminate the light-matter interaction (\(\hat{H}_{1v}+\hat{H}_{2v}\)) to the first order, we get a block-diagonalized Hamiltonian \(\hat{H}_{S}=e^{\hat{S}}\hat{H}e^{-\hat{S}}\) with the generator \(\hat{S}\) satisfying \([\hat{S},\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{f}]=-(\hat{H}_{1v}+\hat{H}_{2v})\). Projecting the Hamiltonian \(\hat{H}_{S}\) into the low-energy sector (see Supplementary Material) gives an effective many-body Hamiltonian \(\hat{H}_{\text{tot,eff}}=\hat{H}_{1,\text{eff}}+\hat{H}_{2,\text{eff}}+\hat{I}_{1}+\hat{I}_{2}\). While the first two terms \(\hat{H}_{1,\text{eff}}=\hat{H}_{1}+\frac{1}{2}[\hat{S}_{1},\hat{H}_{1v}]\) and \(\hat{H}_{2,\text{eff}}=\hat{H}_{2}+\frac{1}{2}[\hat{S}_{2},\hat{H}_{2v}]\) describe the interaction of quasiparticles within moire 1 and moire 2 respectively, the remaining two terms \(\hat{I}_{1}=\frac{1}{2}[\hat{S}_{1},\hat{H}_{2v}]\) and \(\hat{I}_{2}=\frac{1}{2}[\hat{S}_{2},\hat{H}_{1v}]\) denote that the quasiparticles in moire 1 interact with quasiparticles in moire 2.

Figure 1: **Sketch of the remote topological control scheme via cavity vacuum fields.** (A) Set-up with two moiré superlattices (moiré 1 and moire 2) embedded in a metallic split-ring THz electromagnetic resonator. (B) Schematic diagram of the spatial dependence of the cavity electric field concentrated on the gap of the metallic split-ring THz electromagnetic resonator (the red color denotes the part with the largest electric vacuum field). (C) Illustration of the physical mechanism providing remote control of the topological transition via cavity vacuum fields. The two moire superlattices sharing the same cavity vacuum interact by exchanging virtual photons. By gate tuning moiré 2, a topological transition is induced in moire 1.

The specific form of \(\hat{I}_{1}\) and \(\hat{I}_{2}\), Equation (2), indicates that the photons absorbed (emitted) during the transitions \(\ket{i}\rightarrow\ket{f}\) of particles in moire 1 will be emitted (absorbed) accompanying the transitions \(\ket{i^{\prime}}\rightarrow\ket{f^{\prime}}\) of particles in moire 2. These are the cavity-mediated interaction terms responsible for the remote control of moire 2 on moire 1, and vice versa. Note that for simplicity we have omitted the Coulomb electron-electron interaction terms in each moire superlattice, assuming that the Coulomb interaction is strongly screened by a dielectric substrate. Regardless, the remote control scheme would remain the same with the cavity-mediated interaction between remote moire superlattices.
Within a mean-field framework, we can approximate the bilinear terms in \(\hat{H}_{\text{tot,eff}}\) as \(\hat{O}\hat{O}^{\prime}\approx\langle O\rangle\hat{O}^{\prime}+\hat{O}\langle O^{\prime}\rangle-\langle O\rangle\langle O^{\prime}\rangle\), where \(\hat{O},\hat{O}^{\prime}=\hat{c}_{f\mathbf{k}}^{\dagger}\hat{c}_{i\mathbf{k}},\hat{d}_{f\mathbf{q}}^{\dagger}\hat{d}_{i\mathbf{q}}\). By grouping the resulting terms according to the operators \(\hat{c}_{f\mathbf{k}}^{\dagger}\hat{c}_{i\mathbf{k}}\) and \(\hat{d}_{f\mathbf{q}}^{\dagger}\hat{d}_{i\mathbf{q}}\), we get a mean-field Hamiltonian \(\hat{H}_{\text{MF}}=\hat{H}_{1,\text{MF}}+\hat{H}_{2,\text{MF}}\), where \(\hat{H}_{1,\text{MF}}\) and \(\hat{H}_{2,\text{MF}}\) respectively describe the mean-field effects on moire 1 and moire 2: \[\begin{split}\hat{H}_{1,\text{MF}}&=\sum_{\mathbf{k}}[\tilde{E}_{g\mathbf{k}}\hat{c}_{g\mathbf{k}}^{\dagger}\hat{c}_{g\mathbf{k}}+\tilde{E}_{e\mathbf{k}}\hat{c}_{e\mathbf{k}}^{\dagger}\hat{c}_{e\mathbf{k}}+(\tilde{i}_{1\mathbf{k}}\hat{c}_{g\mathbf{k}}^{\dagger}\hat{c}_{e\mathbf{k}}+h.c.)]+\varepsilon_{1},\\ \hat{H}_{2,\text{MF}}&=\sum_{\mathbf{q}}[\tilde{\mathcal{E}}_{g\mathbf{q}}\hat{d}_{g\mathbf{q}}^{\dagger}\hat{d}_{g\mathbf{q}}+\tilde{\mathcal{E}}_{e\mathbf{q}}\hat{d}_{e\mathbf{q}}^{\dagger}\hat{d}_{e\mathbf{q}}+(\tilde{i}_{2\mathbf{q}}\hat{d}_{g\mathbf{q}}^{\dagger}\hat{d}_{e\mathbf{q}}+h.c.)]+\varepsilon_{2}.\end{split} \tag{3}\] Here \(\tilde{E}_{g\mathbf{k}}\), \(\tilde{E}_{e\mathbf{k}}\), \(\tilde{i}_{1\mathbf{k}}\), \(\tilde{\mathcal{E}}_{g\mathbf{q}}\), \(\tilde{\mathcal{E}}_{e\mathbf{q}}\), \(\tilde{i}_{2\mathbf{q}}\), \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are renormalized parameters with mean-field corrections (see details in the Supplementary Material). The many-body ground state \(\ket{\Psi}=\prod_{\mathbf{k}}\left(u_{1\mathbf{k}}^{*}\hat{c}_{g\mathbf{k}}^{\dagger}+v_{1\mathbf{k}}^{*}\hat{c}_{e\mathbf{k}}^{\dagger}\right)\prod_{\mathbf{q}}\left(u_{2\mathbf{q}}^{*}\hat{d}_{g\mathbf{q}}^{\dagger}+v_{2\mathbf{q}}^{*}\hat{d}_{e\mathbf{q}}^{\dagger}\right)\ket{0}\) features interband coherence of moire 1 (moire 2) characterized by the mean-field order parameter \(\Delta_{1\mathbf{k}}=v_{1\mathbf{k}}u_{1\mathbf{k}}^{*}\) (\(\Delta_{2\mathbf{q}}=v_{2\mathbf{q}}u_{2\mathbf{q}}^{*}\)). The mean-field order parameters can be solved self-consistently through the following gap-like equations: \[\begin{split}\Delta_{1\mathbf{k}}&=-\frac{\tilde{i}_{1\mathbf{k}}}{\sqrt{4\left|\tilde{i}_{1\mathbf{k}}\right|^{2}+\left(\tilde{E}_{e\mathbf{k}}-\tilde{E}_{g\mathbf{k}}\right)^{2}}},\\ \Delta_{2\mathbf{q}}&=-\frac{\tilde{i}_{2\mathbf{q}}}{\sqrt{4\left|\tilde{i}_{2\mathbf{q}}\right|^{2}+\left(\tilde{\mathcal{E}}_{e\mathbf{q}}-\tilde{\mathcal{E}}_{g\mathbf{q}}\right)^{2}}}.\end{split} \tag{4}\] Note that these two equations are not independent: the order parameters of moire 1 are affected by the order parameters of moire 2, and vice versa. To test the validity of our mean-field approach, we have also performed exact diagonalization calculations with a small number of electrons, yielding the same qualitative results (see the Supplementary Material).
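As a numerical illustration (not the paper's code), the gap-like equations (4) can be solved by fixed-point iteration; the dependence of the renormalized couplings on the order parameters is abstracted into a caller-supplied closure, since the exact expressions live in the Supplementary Material, and the numbers below are purely illustrative.

```python
import numpy as np

def solve_gap_equations(Eg, Ee, coupling, n_iter=500, tol=1e-9):
    """Fixed-point iteration of the gap-like equation (4),
        Delta_k = -t_k / sqrt(4 |t_k|^2 + (E_{e,k} - E_{g,k})^2),
    where the off-diagonal coupling t_k depends on the order parameters
    through the renormalized mean-field parameters; that dependence is
    abstracted into the `coupling` closure (a placeholder for the
    Supplementary Material expressions)."""
    delta = np.zeros_like(Eg, dtype=complex)
    for _ in range(n_iter):
        t = coupling(delta)                                  # renormalized couplings
        new = -t / np.sqrt(4.0 * np.abs(t) ** 2 + (Ee - Eg) ** 2)
        if np.max(np.abs(new - delta)) < tol:
            return new
        delta = new
    return delta

# Toy usage: flat bands separated by a 0.2 meV gap, and a coupling whose
# strength grows with the average interband coherence (assumed form).
Eg = -0.1 * np.ones(64)
Ee = +0.1 * np.ones(64)
delta = solve_gap_equations(Eg, Ee, lambda d: 0.05 * (1.0 + d.mean().real))
```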
In the calculations presented below, to exemplify the dissimilarities of the two moire superlattices, we use a 21 by 21 superlattice for moire 1 with nearest- and next-nearest-neighbor hopping strengths of 0.29 meV and 0.06 meV respectively [33], while moire 2 is a 10 by 10 superlattice with the corresponding hopping amplitudes being 0.5 meV and 0.2 meV instead [35]. The phase of the next-nearest-neighbor hopping is \(2\pi/3\) for both moires, corresponding to a positive-flux Haldane model from valley K. We consider a THz resonator cavity mode of volume \(V=7\times 10^{6}\) nm\({}^{3}\) and quantized mode energy \(\hbar\omega=8.1\) meV, which leads to a light-matter coupling strength \(\chi=0.17\). More details are given in the Supplementary Material.

As an example, we first solve the gap equation by fixing the interlayer bias applied on moire 1 at 0.7 meV. In the absence of the cavity quantized field (i.e., \(\chi=0\)), moire 1 displays an electronic band gap at the \(\mathbf{K}\) point (Fig. 2A) and is topologically trivial. Embedding moire 1 _alone_ in the cavity, we find that a negligible change is introduced to its electronic structure at the given bias, whereas the cavity vacuum is also negligibly perturbed. When the cavity also hosts a second moire, tuning the bias of moire 2 can drastically change the electronic structure of moire 1. In Fig. 2B, we plot the mini-band transition energies of moire 1, calculated when moire 2 is biased at 0.2 meV, which now exhibits an electronic band gap at the \(\mathbf{P}\) point instead. Its ground state has a pronounced interband coherence near the \(\mathbf{K}\) point (Fig. 2C), which is reasonable according to Eq. (4), as \(\tilde{E}_{e\mathbf{k}}-\tilde{E}_{g\mathbf{k}}\propto E_{e\mathbf{k}}-E_{g\mathbf{k}}\) reaches its smallest value near the \(\mathbf{K}\) point. Notably, a small circular region (indicated by the red dashed circle in Fig. 2C) where \(\Delta_{1\mathbf{k}}\) is almost zero is surrounded by areas with maximal interband coherence (\(\Delta_{1\mathbf{k}}\sim 0.5\)). We note that the vanishing \(\Delta_{1\mathbf{k}}\) in this region is due to band inversion (\(v_{1\mathbf{k}}=1,u_{1\mathbf{k}}=0\)), which is different from the vanishing \(\Delta_{1\mathbf{k}}\) elsewhere (e.g., in the region near the \(\Gamma\) point, where \(v_{1\mathbf{k}}=0,u_{1\mathbf{k}}=1\)). This is confirmed by the band dispersion of the Hamiltonian \(\hat{H}_{1,\text{MF}}\) and by the wavefunction projections on the original Hamiltonian basis (Fig. 2D). Furthermore, the calculated Chern numbers of the lower and upper bands are found to be 1 and -1, respectively.

Figure 2: **Inter-miniband transition energy, mean-field order parameter, and topological band inversion.** (A-B) Color plot of the inter-miniband transition energy in reciprocal space for moiré 1 consisting of 21 by 21 superlattice sites. The calculation in (A) uses the bare Hamiltonian \(\hat{H}_{1}\) in the absence of the cavity, and (B) uses \(\hat{H}_{1,\text{MF}}\) in the presence of the cavity quantized field perturbed by a second superlattice (moire 2); see text. (C) Color plot of the mean-field order parameter \(\Delta_{1\mathbf{k}}\) in reciprocal space, corresponding to the calculation in (B). (D) The miniband dispersion predicted by moiré 1's mean-field Hamiltonian \(\hat{H}_{1,\text{MF}}\). The eigenstate amplitudes onto moiré 1's bare Hamiltonian basis are indicated by the size of the spheres.
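Miniband Chern numbers such as those quoted above can be computed independently with the Fukui-Hatsugai lattice method; the sketch below applies it to a Haldane-type Bloch Hamiltonian using the hopping values of moire 2, with chirality, Fourier-sign, and lattice-vector conventions chosen purely for illustration.

```python
import numpy as np

A1 = np.array([np.sqrt(3), 0.0])          # honeycomb lattice vectors (NN distance 1)
A2 = np.array([np.sqrt(3) / 2, 1.5])

def haldane_bloch(k, t1=0.5, t2=0.2, phi=2 * np.pi / 3, M=0.0):
    """Periodic two-band Bloch Hamiltonian of a Haldane-type model
    (hoppings in meV; conventions are illustrative, not the paper's)."""
    v = [A1, A2 - A1, -A2]                 # one chirality of the NNN hops
    fab = t1 * (1 + np.exp(-1j * k @ A2) + np.exp(1j * k @ (A1 - A2)))
    dA = M + 2 * t2 * sum(np.cos(k @ vi + phi) for vi in v)
    dB = -M + 2 * t2 * sum(np.cos(k @ vi - phi) for vi in v)
    return np.array([[dA, fab], [np.conj(fab), dB]])

def chern_number(band=0, N=60, **pars):
    """Fukui-Hatsugai lattice Chern number over the Brillouin-zone torus."""
    G = 2 * np.pi * np.linalg.inv(np.column_stack([A1, A2]))  # rows: G1, G2
    u = np.empty((N, N, 2), dtype=complex)
    for m in range(N):
        for n in range(N):
            k = (m / N) * G[0] + (n / N) * G[1]
            u[m, n] = np.linalg.eigh(haldane_bloch(k, **pars))[1][:, band]
    c = 0.0
    for m in range(N):                     # sum plaquette Berry phases
        for n in range(N):
            u1, u2 = u[m, n], u[(m + 1) % N, n]
            u3, u4 = u[(m + 1) % N, (n + 1) % N], u[m, (n + 1) % N]
            c += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return c / (2 * np.pi)

# |C| = 1 while |M| < |3*sqrt(3)*t2*sin(phi)|; beyond that, the band is trivial.
print(round(chern_number(M=0.0)), round(chern_number(M=2.0)))
```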
Therefore, in the presence of moire 2, the cavity-mediated interaction has provided a topological nontrivial mass term to moire 1. This topological nontrivial mass term on moire 1 arising from the cavity-mediated coupling is tunable by the interlayer bias on moire 2. As a result, gate tuning moire 2 will realize a remote control of the topological transition in moire 1. By tuning the interlayer bias on moire 2 (denoted as \(V_{2}\) hereafter) from 2 meV to 0 meV, we indeed observe the gap of moire 1 closes and reopens at a critical value of 1.2 meV of \(V_{2}\), accompanied by a corresponding step change in Chern number from zero to one (Fig. 3A). Conversely, the remote control of topological transition in moire 2 by gate tuning moire 1 (\(V_{1}\) Figure 3: **Remote gate control of topological transitions.** (A) Topological transition of moiré 1 controlled remotely by the interlayer bias of moire 2 (\(V_{2}\)), while fixing moiré 1’s own bias at 0.7 meV. The change of cavity field (\(a\)), moire 1’s Chern number and gap as function of \(V_{2}\) are shown respectively by the background color, red dots, black squares. (B) The reciprocal topological control of moiré 2 remotely by the interlayer bias of moire 1 (\(V_{1}\)), while fixing moiré 2’s bias at 2.2 meV. (C) Schematic diagram of the interaction due to the exchange of virtual photons between moire 1 and moiré 2. At two \(V_{2}\) values, the mean-field order parameter \(\Delta_{\mathbf{1k}}\) is shown, where moiré 1 is topologically trivial and nontrivial respectively. can also be realized (Fig. 3B). To reveal the physical insight of the remote control, we calculate the expectation value of field operator \(\hat{a}\) to the leading order \[\begin{split}\langle a\rangle&=\langle\Psi|e^{\hat{ S}}\hat{a}e^{-\hat{S}}|\Psi\rangle\\ &\sum_{f\hat{\mathbf{k}}}\frac{\chi\mathcal{M}_{f\hat{\mathbf{k} }}\left\langle\hat{e}_{f\mathbf{k}}\hat{e}_{\hat{\mathbf{k}}}\right\rangle}{E _{\mathbf{i}\mathbf{k}}-E_{f\mathbf{k}}-\hbar o}+\sum_{f\hat{\mathbf{i}}\mathbf{ q}}\frac{\chi\mathcal{N}_{f\hat{\mathbf{i}}\mathbf{q}}\left\langle\hat{d}_{f \mathbf{q}}\hat{d}_{\mathbf{iq}}\right\rangle}{E_{\mathbf{i}\mathbf{q}}-E_{f \mathbf{q}}-\hbar o}\end{split} \tag{5}\] as a function of the interlayer bias. As shown in Fig. 3A, when \(V_{2}\) is tuned from 2 meV to 1.8 meV, \(\langle a\rangle\) is negligibly small and the gap of moire 1 remains unchanged. Further reducing \(V_{2}\), \(\langle a\rangle\) starts to increase noticeably, and at the same time the gap of moire 1 starts to change, and eventually a topological transition occurs. Therefore, the remote control of topological transition in moire 1 is realized through modulating the cavity vacuum upon gate tuning moire 2. We find that nonzero \(\langle a\rangle\)[37] occurs simultaneously with the interband coherence of the electronic many-body ground state. In parameter regimes where \(\langle a\rangle\) vanishes, both moires have negligible interband coherence in the ground states and have no response to the remote control gate. The threshold \(\langle a\rangle\) value needed to bring a moire across the topological transition point depends on the parameters of its bare Hamiltonian without the cavity (c.f. Fig. 3A and 3B). 
We also notice that the light-matter coupling terms \(\chi(\hat{a}^{\dagger}+\hat{a})\sum_{\mathbf{k}}\mathcal{M}_{g\mathbf{k}}\hat{e}_{g\mathbf{k}}^{\dagger}\hat{e}_{g\mathbf{k}}\) and \(\chi(\hat{a}^{\dagger}+\hat{a})\sum_{\mathbf{q}}\mathcal{N}_{g\mathbf{q}}\hat{d}_{g\mathbf{q}}^{\dagger}\hat{d}_{g\mathbf{q}}\), which perturb the cavity vacuum while leaving the electronic state unaffected, are essential here. The nonzero values of \(\mathcal{M}_{g\mathbf{k}}\) and \(\mathcal{N}_{g\mathbf{q}}\) are allowed by the lack of parity in the eigenstates of the Hamiltonians \(\hat{H}_{1}\) and \(\hat{H}_{2}\), as the out-of-plane mirror symmetry is broken in twisted TMD bilayers. The expectation values \(\langle\sum_{\mathbf{k}}\mathcal{M}_{g\mathbf{k}}\hat{e}_{g\mathbf{k}}^{\dagger}\hat{e}_{g\mathbf{k}}\rangle\) and \(\langle\sum_{\mathbf{q}}\mathcal{N}_{g\mathbf{q}}\hat{d}_{g\mathbf{q}}^{\dagger}\hat{d}_{g\mathbf{q}}\rangle\) vanish in the ground states of the bare moiré Hamiltonians \(\hat{H}_{1}\) and \(\hat{H}_{2}\) respectively, but become finite in the ground states of the mean-field interaction Hamiltonians \(\hat{H}_{1,\text{MF}}\) and \(\hat{H}_{2,\text{MF}}\) under bias parameters where interband coherence spontaneously emerges.

In conclusion, we have shown that by gate tuning a remote moiré superlattice it is possible to induce a topological transition in a second mesoscopic moiré system via the cavity vacuum field. Remote cascade control of multiple moiré superlattices embedded in one cavity is possible following the same scheme. Besides topological transitions, the mesoscopic system consisting of cavity-embedded moiré superlattices may also provide an exciting platform to investigate the remote control of other physical properties, such as superconductivity and ferromagnetism.

**Acknowledgment:** We thank Hsun-Chi Chan for helping us generate Fig. 1B. **Funding:** The work is supported by the Research Grants Council (RGC) of Hong Kong SAR, China, through grants HKU SRFS2122-7S05 and AoE/P-701/20, and a grant under the ANR/RGC Joint Research Scheme sponsored by the RGC and the French National Research Agency (A-HKU705/21, ANR-21-CE30-0056-01). W.Y. also acknowledges support by the Tencent Foundation. G.A. and C.C. also acknowledge support from the Israeli Council for Higher Education - VATAT. **Data and materials availability:** All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. All data related to this study may be available from the corresponding author upon reasonable request.
2310.06172
Hypertoric 2-categories O and symplectic duality
We define 2-categories of microlocal perverse (resp. coherent) sheaves of categories on the skeleton of a hypertoric variety and show that the generators of these 2-categories lift the projectives (resp. simples) in hypertoric category $\mathcal{O}$. We then establish equivalences of 2-categories categorifying the Koszul duality between Gale dual hypertoric categories $\mathcal{O}$. These constructions give a prototype for understanding symplectic duality via the fully extended 3d mirror symmetry conjecture.
Benjamin Gammage, Justin Hilburn
2023-10-09T21:52:50Z
http://arxiv.org/abs/2310.06172v1
# Hypertoric 2-categories \(\mathcal{O}\) and symplectic duality

###### Abstract.

We define 2-categories of microlocal perverse (resp. coherent) sheaves of categories on the skeleton of a hypertoric variety and show that the generators of these 2-categories lift the projectives (resp. simples) in hypertoric category \(\mathcal{O}\). We then establish equivalences of 2-categories categorifying the Koszul duality between Gale dual hypertoric categories \(\mathcal{O}\). These constructions give a prototype for understanding symplectic duality via the fully extended 3d mirror symmetry conjecture.

###### Contents

* 0 Introduction
  * 0.1 Overview
  * 0.2 Dual toric stacks
  * 0.3 Hypertoric varieties and Lagrangian skeleta
  * 0.4 Category \(\mathcal{O}\) and decategorification
  * 0.5 Predictions for Fueter theory
  * 0.6 Notation and conventions
* 1 Hyperplanes and hyperkahler manifolds
  * 1.1 Hyperplane arrangements
  * 1.2 Gale duality
  * 1.3 Toric hyperkahler manifolds
  * 1.4 Toric holomorphic Lagrangians
* 2 Hypertoric categories \(\mathcal{O}\)
  * 2.1 de Rham category \(\mathcal{O}\)
  * 2.2 Betti category \(\mathcal{O}\) and Riemann-Hilbert
* 3 Microlocal perverse schobers
  * 3.1 Spherical functors and perverse schobers
  * 3.2 2-category \(\mathcal{O}\)
* 4 Microlocal coherent schobers
  * 4.1 Coherent sheaves of categories
  * 4.2 Singular support
  * 4.3 Monoidal structure
  * 4.4 Microlocalization
  * 4.5 The hypertoric 2-category
* 5 Proof of Theorems A & B

## 0. Introduction

This paper inaugurates a program to understand features of categories \(\mathcal{O}\) in terms of a pair of categorifications arising from symplectic and algebraic geometry, realizing predictions from 3-dimensional supersymmetric gauge theory. The B-model 2-category is comparatively well-understood: it was introduced in [13, 14, 15, 16] and subsequently developed in many places; for our purposes, the most important are [1, 17, 18, 19].
On the other hand, the A-model 2-category is less well-understood. It should be a "Fukaya-type" 2-category, defined by counting \(J\)-holomorphic 2-disks and Fueter 3-disks, with Hom categories as studied in [16], but serious analytic and conceptual difficulties have obstructed a full definition. Inspired by similar theorems [14, 15] for Fukaya categories, we have conjectured in [15] that the A-model 2-category is equivalent to one defined topologically via the perverse schobers of [13]. With 2-categories of boundary conditions for both the A- and B-models in hand, and guided by [1], we are now able to state our prediction for the 2-categorical origins of category \(\mathcal{O}\):

**Conjecture**.: _To a sectorial conical symplectic resolution \((X\supset\mathbb{L})\), there is associated a pair of 2-categories_

\[2\mathcal{O}_{\mathsf{A}}:=\mu\mathsf{PervCat}(\mathbb{L}),\qquad 2\mathcal{O}_{\mathsf{B}}:=\mu\mathsf{CohCat}(\mathbb{L})\]

_of **microlocal perverse (resp. coherent) schobers** on \(\mathbb{L},\) with canonical generators \(\mathcal{P}\) and \(\mathcal{S},\) respectively. After taking periodic cyclic homology, these become the projectives and simples in a \(\mathbb{Z}/2\)-graded version of category \(\mathcal{O}\):_

\[\mathsf{HP}(\mathsf{end}_{2\mathcal{O}_{\mathsf{A}}}(\mathcal{P}))\simeq\mathsf{end}_{\mathcal{O}_{\mathbb{Z}/2}}(P),\qquad\mathsf{HP}(\mathsf{end}_{2\mathcal{O}_{\mathsf{B}}}(\mathcal{S}))\simeq\mathsf{end}_{\mathcal{O}_{\mathbb{Z}/2}}(L).\]

The 2-categories \(2\mathcal{O}_{(\mathsf{A/B})}\) are much richer than the usual 1-category \(\mathcal{O},\) which sees only the geometry of the Lagrangian \(\mathbb{L}.\) For instance, the \(\mathbb{E}_{2}\) (i.e., Drinfeld) center of \(2\mathcal{O}_{\mathsf{B}}\) is equivalent to the whole category \(\mathsf{Coh}(X)\) of coherent sheaves on \(X;\) dually, the \(\mathbb{E}_{3}\) center of \(2\mathcal{O}_{\mathsf{A}}\) recovers the Coulomb branch construction from [1]. In terms of the above 2-categories, we can now interpret symplectic duality as one consequence of a more fundamental statement about boundary conditions in dual theories.

**Conjecture** (Fully extended 3d mirror symmetry).: _Let \((X\supset\mathbb{L})\) and \((X^{\vee}\supset\mathbb{L}^{\vee})\) be a dual pair of sectorial conical symplectic resolutions. Then there are equivalences of 2-categories_

\[2\mathcal{O}_{\mathsf{A}}\simeq 2\mathcal{O}_{\mathsf{B}}^{\vee},\qquad 2\mathcal{O}_{\mathsf{B}}\simeq 2\mathcal{O}_{\mathsf{A}}^{\vee}\]

_exchanging the canonical generators._

As a result, application of periodic cyclic homology to the diagram (0.1.2) recovers the horizontal equivalences in (0.1.1). These vertical Koszul dualities thus appear only after decategorifying from \(2\mathcal{O}\) to 1-category \(\mathcal{O}.\) In this paper, we prove the above conjectures in the case where \(X,X^{\vee}\) are hypertoric varieties.

### Dual toric stacks

Our story begins with the 3d mirror symmetry theorem formulated and proved in [15]:

**Theorem 0.1**.: _Let \(\mathcal{S}\) be the stratification of \(\mathbb{C}\) as \(\mathbb{C}=\mathbb{C}^{\times}\sqcup 0.\) Then there is an equivalence of stable 2-categories\({}^{4}\)_

Footnote 4: To simplify notation, we index our 2-categories here by the stratification \(\mathcal{S}\) rather than the union of Lagrangian conormals \(\mathbb{L}\) to \(\mathcal{S},\) as we do elsewhere in the paper.
Footnote 5: For clarity, we also write "sheaves of categories" here for what elsewhere is often denoted by the more economical word "schobers." More broadly, our notation throughout this paper differs from that in [15]: we write \(\mathsf{PervCat}\) (resp. \(\mathsf{CohCat}\)) for the 2-categories which were written there as \(\mathsf{Perv}^{(2)}\) (resp. \(\mathsf{IndCoh}^{(2)}\)).

\[\mathsf{PervCat}_{\mathcal{S}}(\mathbb{C})\simeq\mathsf{CohCat}_{\mathcal{S}}^{\mathbb{C}^{\times}}(\mathbb{C}) \tag{0.2.1}\]

_between the 2-category of "perverse sheaves of categories"_ [11] _on \(\mathbb{C}\) with a singularity at \(0\), and the 2-category of \(\mathbb{C}^{\times}\)-equivariant "coherent sheaves of categories"_ [12] _on \(\mathbb{C}\) with a singularity at \(0.\) More generally, let_

\[1\to G\to(\mathbb{C}^{\times})^{n}\to F\to 1 \tag{0.2.2}\]

_be an exact sequence of tori, and let \(\mathcal{S}\) denote the stratification of \(\mathbb{C}^{n}\) by coordinate hyperplanes. Then there is an equivalence of stable 2-categories_

\[\mathsf{PervCat}_{\mathcal{S}}^{G}(\mathbb{C}^{n})\simeq\mathsf{CohCat}_{\mathcal{S}}^{F^{\vee}}(\mathbb{C}^{n}). \tag{0.2.3}\]

Footnote: Despite the apparent symmetry of the statement, note that the copies of \(\mathbb{C}^{n}\) on the two sides of (0.2.3) are actually dual to each other, so that the right-hand \(\mathbb{C}^{n}\) is naturally a representation of \(((\mathbb{C}^{\times})^{n})^{\vee}\) and hence also of \(F^{\vee}\).

The stratifications involved in the statements of Theorem 0.1 should be understood as singular-support conditions inside the cotangent bundle \(\mathrm{T}^{*}\mathbb{C}^{n}\) -- or, more properly, inside the stacky cotangent bundles \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\) and \(\mathrm{T}^{*}(\mathbb{C}^{n}/F^{\vee}).\) A first goal of this paper is to describe the equivalence (0.2.3) microlocally, so that we may restrict to stable loci inside the stacky cotangent bundles.

### Hypertoric varieties and Lagrangian skeleta

Throughout the paper, we fix the choice of exact sequence (0.2.2). The stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\) can also be written as a stacky Hamiltonian reduction,

\[\mathrm{T}^{*}(\mathbb{C}^{n}/G)=\mu^{-1}(0)/G, \tag{0.3.1}\]

where \(\mu:\mathrm{T}^{*}\mathbb{C}^{n}\to\mathfrak{g}^{\vee}\) is the moment map for the Hamiltonian \(G\)-action on \(\mathrm{T}^{*}\mathbb{C}^{n}.\) We can pass from the stack (0.3.1) to a smooth variety by GIT. Let \(t\in\mathfrak{g}^{\vee}_{\mathbb{Z}}\) be a character for the group \(G\). The character \(t\) specifies a \(G\)-equivariant line bundle \(\mathcal{O}(t)\) on \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\) and hence on \(\mu^{-1}(0)\), which allows us to perform a GIT quotient

\[\mathfrak{M}_{G}(t):=\mu^{-1}(0)/\!/_{t}G=\mu^{-1}(0)^{t-ss}/G. \tag{0.3.2}\]

**Example 0.2**.: We highlight two special cases of the above construction:

1. For \(G=\mathbb{C}^{\times}\) embedded diagonally in \((\mathbb{C}^{\times})^{n}\), at generic parameter \(t\) the variety \(\mathfrak{M}_{G}(t)\) is \(\mathrm{T}^{*}\mathbb{P}^{n-1}\).
2. If \(G\) is the kernel of the multiplication map \((\mathbb{C}^{\times})^{n}\to\mathbb{C}^{\times}\), then at generic parameter \(t\), the variety \(\mathfrak{M}_{G}(t)\) is a resolution of the \(A_{n-1}\) surface singularity \(\mathbb{C}^{2}/(\mathbb{Z}/n)\).

The spaces \(\mathfrak{M}_{G}(t)\) were first introduced in [10], where they were called _toric hyperkahler manifolds_, and studied further in [12] and in [13, 14]; in [15] they were called _hypertoric varieties_, to avoid confusion with toric varieties.
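To make Example 0.2(1) concrete, one can unwind the GIT quotient directly from the definitions (a routine sketch, with the convention that \(t>0\) renders the locus \(x=0\) unstable): lifting the diagonal action to \(\mathrm{T}^{*}\mathbb{C}^{n}\) gives \(x\) weight \(1\) and \(y\) weight \(-1\), so

\[\mu_{G}(x,y)=\sum_{i=1}^{n}x_{i}y_{i},\qquad\mu_{G}^{-1}(0)^{t-ss}=\Big\{(x,y)\ \Big|\ x\neq 0,\ \sum_{i}x_{i}y_{i}=0\Big\},\]

and the assignment \((x,y)\mapsto[x]\in\mathbb{P}^{n-1}\), with \(y\) descending to a covector on \(T_{[x]}\mathbb{P}^{n-1}\) precisely because it annihilates the Euler direction at \(x\), identifies the quotient with \(\mathrm{T}^{*}\mathbb{P}^{n-1}\).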
These spaces are the simplest examples of symplectic resolutions, and often serve as a general testing ground for aspects of the general theory, including categories \(\mathcal{O}\) [13] and Koszul duality [13], the quantum differential equation [16], and K-theoretic quasimap counts [17]. Following the program begun in [18], we seek to derive as much of this theory as possible from an analysis of 2-categories. In this paper, we will recover category \(\mathcal{O}\), and the statement of Koszul duality for dual categories \(\mathcal{O}\). In order to describe category \(\mathcal{O}\) for these varieties, it will be helpful to reframe the discussion in terms of a certain Lagrangian subspace.

**Definition 0.3**.: Let \(\mathbb{L}\subset\mathrm{T}^{*}\mathbb{C}^{n}\) be the singular Lagrangian given by the union of conormals to intersections of coordinate hyperplanes, and let \(\mathbb{L}_{G}\) be its image in \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\).

The Lagrangian \(\mathbb{L}_{G}\) may be understood as a union of conormals to stacky hyperplane intersections in the stack \(\mathbb{C}^{n}/G\); alternatively, one may note that \(\mathbb{L}\) is contained in \(\mu^{-1}(0)\subset\mathrm{T}^{*}\mathbb{C}^{n}\), so that we can treat \(\mathbb{L}_{G}\) as its projection to \(\mu^{-1}(0)/G.\) As a singular-support condition, the conic Lagrangian \(\mathbb{L}_{G}\) imposes the condition of local constancy along toric strata of \(\mathbb{C}^{n}/G\); we may therefore rewrite (0.2.3) as the equivalence

\[\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\simeq\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee}) \tag{0.3.3}\]

between perverse (resp. coherent) sheaves of categories with singular support along the Lagrangian \(\mathbb{L}_{G}\) (resp. \(\mathbb{L}_{F^{\vee}}\)). Our goal in this paper is to move from the Lagrangian \(\mathbb{L}_{G}\), inside the stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\), to a Lagrangian contained in the stable locus \(\mathfrak{M}_{G}(t)\). To this end, we will introduce a family of Lagrangians \(\mathbb{L}_{G}(t,m)\), depending on a pair of parameters. The first of these is the stability condition \(t\in\mathfrak{g}_{\mathbb{Z}}^{\vee}\), and the second is a cocharacter \(m\in\mathfrak{f}_{\mathbb{Z}}\), which determines an inclusion \(m:\mathbb{C}^{\times}\hookrightarrow F\) and therefore a Hamiltonian action of \(\mathbb{C}^{\times}\) on the symplectic stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\).

**Definition 0.4**.: For \((t,m)\in\mathfrak{g}_{\mathbb{Z}}^{\vee}\times\mathfrak{f}_{\mathbb{Z}}\), we write

\[\mathbb{L}_{G}(t,m):=\{x\in\mathbb{L}_{G}^{t-ss}\mid\lim_{\lambda\to\infty}m(\lambda)\cdot x\text{ exists}\}\]

for the subspace of \(\mathbb{L}_{G}\) consisting of those points which are \(t\)_-semistable_ and \(m\)_-bounded_.

When \(t,m=0\), we recover our original Lagrangian \(\mathbb{L}_{G}(0,0)=\mathbb{L}_{G}\). In general, the Lagrangian \(\mathbb{L}_{G}(t,m)\) is a locally closed subset of \(\mathbb{L}_{G}\): turning on \(t\) deletes the closed subset of \(t\)-unstable points, while turning on \(m\) deletes the open subset of \(m\)-unbounded points.

**Remark 0.5**.: In the physics literature, \(t\) is called the Fayet-Iliopoulos (FI) parameter and \(m\) the mass parameter. We prefer the names _stability parameter_ and _attraction parameter_. Observe that the parameters \(t\) and \(m\) for \(\mathbb{L}_{G}\) play the opposite roles for \(\mathbb{L}_{F^{\vee}}\).
This is part of a general paradigm in mirror symmetry that parameters exchange roles under mirror duality (which in this case is implemented by dualizing the exact sequence (0.2.2)). In order to lift this duality to the level of 2-categories, we will have to understand how the 2-categories in (0.3.3) change when we replace \(\mathbb{L}_{G}\) (resp. \(\mathbb{L}_{F^{\vee}}\)) with \(\mathbb{L}_{G}(t,m)\) (resp. \(\mathbb{L}_{F^{\vee}}(m,t)\)). In other words, we need a good theory of _microlocalization_ for perverse (resp. coherent) sheaves of categories.

On the A-side, we lack even a general non-microlocal theory for perverse sheaves of categories, so a full theory of microlocalization is out of reach. Nevertheless, inspired by the microlocal behavior of the 1-category of perverse sheaves, we can define directly our expectation for microlocal perverse sheaves of categories on the Lagrangian \(\mathbb{L}_{G}\). The key fact is categorical Kirwan surjectivity, which has been established for DQ-modules in [1] and will be proven, in the hypertoric setting, for microlocal perverse sheaves in [1]. This theorem implies that the category of microlocal perverse sheaves on \(\mathbb{L}_{G}(t,m)\) admits a description purely in terms of (non-microlocal) perverse sheaves. We therefore imitate the construction of microlocal perverse sheaves on \(\mathbb{L}_{G}\), which we recall in §2.2. For each component \(\mathbb{L}_{G}^{\alpha}\) of \(\mathbb{L}_{G}\), we can define a projective object \(\mathcal{P}_{\alpha}\) in the 2-category \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) which corepresents the functor taking a perverse sheaf of categories to its microstalk at a smooth point of \(\mathbb{L}_{G}^{\alpha}\). In addition, if \(\mathbb{L}_{G}^{\prime}\subset\mathbb{L}_{G}\) is the closed embedding of a union of the components of \(\mathbb{L}_{G}\), we have an embedding \(\mathsf{PervCat}_{\mathbb{L}_{G}^{\prime}}(\mathbb{C}^{n}/G)\hookrightarrow\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\).

**Definition 0.6** (Definition 3.12).: The 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) of _microlocal perverse sheaves of categories_ on the Lagrangian \(\mathbb{L}_{G}(t,m)\) is defined as the quotient

\[\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m)):=\frac{\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}{\mathsf{PervCat}_{\mathbb{L}_{G}^{\prime}}(\mathbb{C}^{n}/G),\langle\mathcal{P}_{\alpha}\rangle_{\alpha\in m\text{-unbdd}}} \tag{0.3.4}\]

of \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) by the sub-2-category generated by the perverse sheaves of categories with \(t\)-unstable microsupport and the projective objects \(\mathcal{P}_{\alpha}\) corresponding to \(m\)-unbounded components \(\mathbb{L}_{G}^{\alpha}\) of \(\mathbb{L}_{G}\).

**Remark 0.7**.: The 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) defined in (0.3.4) depends not only on the Lagrangian \(\mathbb{L}_{G}(t,m)\) but also on the ambient cotangent stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G).\) See §0.5 for a conjectural justification for this dependence in terms of generalized Seiberg-Witten theory.

On the B-side, more is known, thanks to works in progress of Dima Arinkin and German Stefanich.
Suppose the Lagrangian \(\mathbb{L}_{F^{\vee}}\) is the image in \(\mathrm{T}^{*}Y\) of the conormal \(\mathrm{N}^{*}_{X}Y\) of a map \(X\to Y.\) Then we can present \(\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee})\) as the 2-category with a single object, whose endomorphisms are given by the monoidal category \(\mathsf{Coh}(X\times_{Y}X).\) In other words, we have a presentation

\[\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee})\simeq\mathsf{Mod}_{\mathsf{Coh}(X\times_{Y}X)}(\mathsf{St})\]

as the 2-category of module categories over \(\mathsf{Coh}(X\times_{Y}X).\) Given a conic open subset \(U\subset\mathbb{L}_{F^{\vee}},\) using Definition 4.14 we produce a singular-support condition \(\tilde{U},\){}^{6}\) in the sense of [1], for coherent sheaves on \(X\times_{Y}X.\)

Footnote 6: This is \(F^{-1}(U)\) in the notation of §4.

**Definition 0.8** (Definition 4.19).: With \(X\to Y\) as above, we define the 2-category of _microlocal coherent sheaves of categories on \(U\)_ as the 2-category

\[\mu\mathsf{CohCat}(U):=\mathsf{Mod}_{\mathsf{Coh}_{\tilde{U}}(X\times_{Y}X)}(\mathsf{St})\]

of module categories for the monoidal category \(\mathsf{Coh}_{\tilde{U}}(X\times_{Y}X)\) of coherent sheaves on the fiber product with singular support contained in \(U\).

To justify our notation, we prove that the 2-category so defined is independent of the choice of map \(X\to Y.\)

**Proposition 0.9** (Proposition 4.18 and Proposition 4.20).: _Let \(X\to Y\) and \(X^{\prime}\to Y\) be two maps whose conormals \(\mathrm{N}^{*}_{X}Y\) and \(\mathrm{N}^{*}_{X^{\prime}}Y\) have the same image in \(\mathrm{T}^{*}Y,\) and let \(U\subset\mathrm{T}^{*}Y\) be a conic open subset. Then the monoidal categories \(\mathsf{Coh}_{\tilde{U}}(X\times_{Y}X)\) and \(\mathsf{Coh}_{\tilde{U}}(X^{\prime}\times_{Y}X^{\prime})\) are Morita equivalent._

**Remark 0.10**.: As on the A-side, the above definition of microlocalization is very much provisional. Ultimately, microlocal coherent sheaves should be global sections of a sheaf of categories defined by stipulating that its stalks are calculated by the procedure of Definition 0.8. That Definition 0.8 recovers the true microlocalization in the case of hypertoric varieties would then follow from a "2-categorical spectral Kirwan surjectivity" theorem. We leave the full development of this coherent microlocal theory to future work.

Our first main theorem establishes our predicted mirror relationship between the two 2-categories we have defined.

**Theorem A**.: _There is an equivalence of stable 2-categories_

\[\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\simeq\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t)) \tag{0.3.5}\]

_between the 2-categories of microlocal perverse (resp. coherent) sheaves of categories on \(\mathbb{L}_{G}(t,m)\) (resp. \(\mathbb{L}_{F^{\vee}}(m,t)\)), identifying the canonical generators of these 2-categories._

**Remark 0.11**.: Although in the introduction we work over \(\mathbb{C}\) for the sake of symmetry, Theorem A remains true for an arbitrary choice of coefficients. As we explain in §1, the spaces on the right-hand side of (0.3.5) can be defined for a general coefficient ring \(\Bbbk\), which should then be taken as the coefficient ring for the microlocal perverse schobers we consider (which will still be defined on the _complex_ algebraic stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\)) on the left-hand side. This is part of the usual paradigm in mirror symmetry and the Langlands program; cf.
the discussion in [11] regarding the geometric Satake equivalence. Later, when we discuss category \(\mathcal{O},\) we will specialize to coefficients \(\Bbbk=\mathbb{C}\) to make contact with the original calculations of hypertoric category \(\mathcal{O}.\)

**Example 0.12** (Example 3.15 and Example 4.32).: Let \(G\simeq\mathbb{C}^{\times}\hookrightarrow(\mathbb{C}^{\times})^{2}\), embedded as the kernel of the multiplication map \((\mathbb{C}^{\times})^{2}\stackrel{{m}}{{\to}}\mathbb{C}^{\times}\simeq F.\) Then the 2-categories in (0.3.5) can be described as follows:

* An object of \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) is given by a spherical adjunction \(F:\mathcal{C}\rightleftarrows\mathcal{D}:F^{R}\) equipped with the following extra data:
  * An identification of the cotwist \(\mathsf{cofib}(FF^{R}\to\mathsf{id}_{\mathcal{D}})\) with the 2-shift automorphism \([2]\);
  * An extra monad \(M\) on \(\mathcal{C}\);
  * An identification of twists \(\mathsf{fib}(\mathsf{id}_{\mathcal{C}}\to M)\simeq\mathsf{fib}(\mathsf{id}_{\mathcal{C}}\to F^{R}F)\).

  The Hom category between two such objects is the category of functors commuting with all the above structure.
* \(\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t))\) is given by the 2-category \(\mathsf{Mod}_{\mathcal{A}}(\mathsf{St})\) of stable module categories over the monoidal category \[\mathcal{A}:=\mathsf{Coh}\left((\mathbb{P}^{1}\sqcup 0)\times_{\mathbb{P}^{1}}(\mathbb{P}^{1}\sqcup 0)\right).\]

Using Koszul duality to write \(\mathsf{Coh}(0\times_{\mathbb{P}^{1}}0)\simeq\mathsf{Perf}_{\Bbbk[\beta]}\), where \(\beta\) is a generator of degree 2, and Beilinson's exceptional collection to identify \(\mathsf{Coh}(\mathbb{P}^{1})\) with modules over the Kronecker quiver, we can rewrite \(\mathcal{A}\) as the matrix monoidal category

\[\mathcal{A}\simeq\left(\begin{array}{cc}\mathsf{Perf}_{\bullet\rightrightarrows\bullet}&\mathsf{Perf}_{\Bbbk}\\ \mathsf{Perf}_{\Bbbk}&\mathsf{Perf}_{\Bbbk[\beta]}\end{array}\right), \tag{0.3.6}\]

giving an explicit presentation of the generating objects, 1-morphisms, and 2-morphisms in the 2-category \(\mathsf{Mod}_{\mathcal{A}}(\mathsf{St})\). Under the equivalence (0.3.5), the generators of the rings whose perfect modules form the components of \(\mathcal{A}\) in (0.3.6) correspond on the A-side to the structural morphisms among the spherical adjunctions and monads on the categories \(\mathcal{C}\) and \(\mathcal{D}\): namely, the arrows in the Kronecker quiver \(\bullet\rightrightarrows\bullet\) correspond to the maps \(T\rightrightarrows\mathsf{id}_{\mathcal{C}}\) whose respective cofibers are the monads \(M\) and \(F^{R}F\) on \(\mathcal{C}\), and the degree-2 map \(\beta\) corresponds to the map \(\mathsf{id}_{\mathcal{D}}\to\mathsf{cofib}(FF^{R}\to\mathsf{id}_{\mathcal{D}})\simeq[2]\).

### Category \(\mathcal{O}\) and decategorification

As mentioned in Theorem A, each of our 2-categories comes equipped with a canonical generator. Considered as an object in the automorphic (resp. spectral) 2-category, we will denote this generator by \(\mathcal{P}\) (resp. \(\mathcal{S}\)). By taking their endomorphisms, we obtain monoidal categories \(\mathsf{end}_{\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))}(\mathcal{P})\) and \(\mathsf{end}_{\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t))}(\mathcal{S})\).
For a dualizable category \(\mathcal{C}\), we write \(\mathsf{HH}(\mathcal{C})\) for the _Hochschild homology_ of \(\mathcal{C}\), which may be understood as the trace of the identity functor \(\mathsf{id}_{\mathcal{C}}\). The Hochschild homology comes equipped with an \(S^{1}\)-action, for which we may take invariants \(\mathsf{HH}^{S^{1}}(\mathcal{C})\) or coinvariants \(\mathsf{HH}_{S^{1}}(\mathcal{C})\) to obtain cyclic homologies, or the Tate construction \(\mathsf{HH}^{tS^{1}}(\mathcal{C})\) to obtain the _periodic cyclic homology_

\[\mathsf{HP}(\mathcal{C}):=\mathsf{HH}^{tS^{1}}(\mathcal{C})\simeq\mathsf{cofib}(\mathsf{HH}_{S^{1}}(\mathcal{C})\to\mathsf{HH}^{S^{1}}(\mathcal{C})).\]

The functor \(\mathsf{HP}(-)\) (like \(\mathsf{HH}(-)\)) is symmetric monoidal, so that for \(\mathcal{C}\) a monoidal category, \(\mathsf{HP}(\mathcal{C})\) is an algebra. We will be interested in applying the functor \(\mathsf{HP}(-)\) to the endomorphism categories of the generators \(\mathcal{P},\mathcal{S}\) discussed above, and relating the resulting algebras to hypertoric categories \(\mathcal{O}\).

**Remark 0.13**.: For \(\mathcal{C}\) a monoidal category, the Hochschild homology algebra \(\mathsf{HH}(\mathcal{C})\) (referred to in [3] as the "naive trace") is a first approximation to a more sophisticated "categorical trace," in a sense which is made precise by [10, Theorem 3.8.5]. The naive trace is sufficient for our purposes here, but we will return to the calculation of categorical traces in future work, extending the categorical trace computation in [11] for the 2-categories (0.2.1).

As we have mentioned in §0.1, categories \(\mathcal{O}\) are supposed to arise physically from reductions of 3d \(\mathcal{N}=4\) gauge theories. From this perspective, one class of input data from which one may produce a category \(\mathcal{O}\) is a stack of the form \(\mathrm{T}^{*}(V/G)\), where \(V\) is a representation of the reductive group \(G\), together with a pair of parameters specifying, respectively, a GIT stable locus \(\mathfrak{M}\subset\mathrm{T}^{*}(V/G)\) and a Hamiltonian \(\mathbb{C}^{\times}\) action on this stable locus. The stable set for this \(\mathbb{C}^{\times}\) action, inside the GIT-stable locus, is a Lagrangian subspace \(\mathbb{L}\). Given the data of \((\mathfrak{M},\mathbb{L})\), one may define a pair of categories:

* The _Betti category_ \(\mathcal{O}\) is the category \(\mathcal{O}^{\mathsf{Bet}}(\mathfrak{M},\mathbb{L}):=\mu\mathsf{Perv}_{\mathbb{L}}(\mathbb{L})\) of microlocal perverse sheaves on \(\mathbb{L}\).
* The _de Rham category_ \(\mathcal{O}\) is the category \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M},\mathbb{L}):=\mathsf{DQ}^{\mathsf{reg}}(\mathbb{L})\) of regular holonomic DQ-modules on \(\mathbb{L}\).

Each of these categories \(\mathcal{O}^{(*)}(\mathfrak{M},\mathbb{L})\) has a collection of simple objects \(L_{\alpha}\), with projective covers \(P_{\alpha}.\) We write \(L:=\bigoplus L_{\alpha}\) and \(P:=\bigoplus P_{\alpha}\) for the direct sum of the simples (resp. projectives). We will use periodic cyclic homology to recover their endomorphism algebras, up to collapsing the cohomological grading modulo 2. In the following theorem, we write \(\mathcal{O}^{(*)}_{\mathbb{Z}/2}(\mathfrak{M},\mathbb{L}):=\mathcal{O}^{(*)}(\mathfrak{M},\mathbb{L})\otimes_{\mathsf{Mod}_{\Bbbk}}\mathsf{Mod}_{\Bbbk((u))}\) for the 2-periodization of category \(\mathcal{O}\).
**Theorem B**.: _Let \(\Bbbk=\mathbb{C}.\) There are equivalences of algebras_

\[\mathsf{HP}(\mathsf{end}_{\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))}(\mathcal{P}))\simeq\mathsf{end}_{\mathcal{O}^{\mathsf{Bet}}_{\mathbb{Z}/2}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))}(P), \tag{0.4.1}\]
\[\mathsf{HP}(\mathsf{end}_{\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t))}(\mathcal{S}))\simeq\mathsf{end}_{\mathcal{O}^{\mathsf{dR}}_{\mathbb{Z}/2}(\mathfrak{M}_{F^{\vee}}(m),\mathbb{L}_{F^{\vee}}(m,t))}(L). \tag{0.4.2}\]

**Example 0.14**.: We return to the 2-categories discussed in Example 0.12. In this case:

* The category (0.4.1) is the (2-periodized) category of diagrams of vector spaces \(C\overset{x}{\underset{y}{\rightleftarrows}}D\) with the following relations:
  * The sphericality condition on \(F\) imposes that the linear maps \(1_{D}-xy\) and \(1_{C}-yx\) are invertible.
  * The exact triangle \(FF^{R}\to\mathsf{id}_{\mathcal{D}}\to[2]\) decategorifies to the relation \(xy=0.\) (This implies the invertibility conditions above.)

  (The extra monad \(M\) decategorifies to \(yx\) and thus does not contain any new information.) We thus recover the usual quiver description for the category \(\mathcal{O}\) associated to the resolution \(\widetilde{\mathbb{C}^{2}/(\mathbb{Z}/2)}\) (which coincides with the classical BGG category \(\mathcal{O}\) for \(SL(2)\)).
* The category (0.4.2) is the category of modules over the ring \(\mathsf{HP}(\mathcal{A})\), where \(\mathcal{A}\) is as in Example 0.12. Using the identification [15, Theorem 1.1.2]

\[\mathsf{HP}(\mathcal{A})=\mathsf{HP}\,\mathsf{Coh}\left((\mathbb{P}^{1}\sqcup 0)\times_{\mathbb{P}^{1}}(\mathbb{P}^{1}\sqcup 0)\right)\simeq C^{BM}_{*}\left((\mathbb{P}^{1}\sqcup 0)\times_{\mathbb{P}^{1}}(\mathbb{P}^{1}\sqcup 0)\right)_{\mathbb{Z}/2},\]

we find that this category is the subcategory of (2-periodized) D-modules on \(\mathbb{P}^{1}\) generated by the D-modules \(\mathcal{O}_{\mathbb{P}^{1}}\) and \(\delta_{0}.\) These are the simple objects in category \(\mathcal{O}\) for \(\mathrm{T}^{*}\mathbb{P}^{1}\), which once again agrees with the classical category \(\mathcal{O}\) for \(SL(2)\).

**Remark 0.15**.: The decategorification in Theorem B only recovers 2-periodized versions of category \(\mathcal{O}\). One might hope for an enhanced version of Theorem B which recovers categories \(\mathcal{O}\) without collapsing to a \(\mathbb{Z}/2\)-grading. Such a result could be achieved by constructing a graded lift of the 2-categories \(\mathcal{O}\) we define in this paper. We leave this question to future work.

For a fixed hypertoric variety \(\mathfrak{M}_{G}(t)\), the two flavors of 2-category \(\mathcal{O}\) we define, namely \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) and \(\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))\), are very different, and certainly very far from being equivalent.
However, one surprising feature of Theorem B is that they agree after \(\mathsf{HP}\)-decategorification:

**Corollary 0.16**.: _There is an equivalence of categories_

\[\mathsf{Perf}_{\mathsf{HP}(\mathsf{end}_{\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))}(\mathcal{P}))}\simeq\mathcal{O}_{\mathbb{Z}/2}^{\mathsf{Bet}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\simeq\mathcal{O}_{\mathbb{Z}/2}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\simeq\mathsf{Perf}_{\mathsf{HP}(\mathsf{end}_{\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))}(\mathcal{S}))}.\]

The above identification, given by the Riemann-Hilbert correspondence \(\mathcal{O}^{\mathsf{Bet}}\simeq\mathcal{O}^{\mathsf{dR}}\), is a categorification of the fact that both matrix factorizations \(\mathsf{MF}(X,f)\) and the Fukaya-Seidel category \(\mathsf{FS}(X,f)\) categorify the vanishing cohomology \(H^{*}(X,\varphi_{f})\) of a function \(f\).

**Remark 0.17**.: Corollary 0.16 shows that the Koszul dualities among categories \(\mathcal{O}\) appear only after decategorification: after applying \(\mathsf{HP}\), the equivalence of Corollary 0.16 supplies diagonal arrows to the diagram (0.1.2). In other words, the miraculous fact underlying Koszul duality between dual categories \(\mathcal{O}\) is that the A- and B-type 2-categories \(2\mathcal{O}\) for the same space, though a priori unrelated, have the same decategorification -- the 1-category \(\mathcal{O}\) -- with canonical generators going to projectives (resp. simples) of this 1-category. We expect that essentially all the instances of Koszul duality appearing in geometric representation theory arise via decategorifications of this form (at least after passing to a \(\mathbb{Z}/2\) grading).

### Predictions for Fueter theory

Conjecturally, \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) calculates the 2-category of boundary conditions for gauged Fueter theory (i.e., generalized 3d Seiberg-Witten theory [13]) associated to \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\), with generic real FI and mass parameters \(t,m\). Note that this theory is _not_ the same as the A-twisted sigma model (i.e., ungauged Fueter theory) on the stable locus \(\mathfrak{M}(t)\subset\mathrm{T}^{*}(\mathbb{C}^{n}/G)\): the gauge theory (even with generic FI parameter) receives contributions from the unstable locus. This explains the dependence of the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) on the stack \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\) and not just the variety \(\mathfrak{M}(t).\) Indeed, the factorization homology of this 2-category on a closed 2-manifold is predicted in [1] to recover information about the moduli space of quasimaps, which depends on the stacky presentation of \(\mathfrak{M}_{G}(t)\).

So far, the Fueter 2-category associated to a symplectic variety or stack \(\mathfrak{M}\), equipped with Lagrangian skeleton \(\mathbb{L}\), is not well-understood or even defined (although preliminary work in the ungauged case is available in [12]). Heuristically, the objects of this 2-category should be holomorphic Lagrangians in \(\mathfrak{M}\); 1-morphisms should be intersection points between these (after a displacement by a wrapping Hamiltonian); and 2-morphisms should be \(J_{\theta}\)-holomorphic curves between these. Our 3d mirror symmetry comparison Theorem A thus gives us detailed predictions for the behavior of the holomorphic Lagrangians in the gauged Fueter 2-category of \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\).
The canonical generators of the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) are expected to correspond to Lagrangian cocores or linking disks to the components of the skeleton \(\mathbb{L}_{G}(t,m)\), and Theorem A computes the Hom categories (and compositions) among these.

**Example 0.18**.: We return to the situation of Example 0.12, where

\[\mathbb{L}_{G}(t,m)=\mathbb{P}^{1}\cup\mathrm{T}^{*}_{\infty}\mathbb{P}^{1}\subset\mathrm{T}^{*}\mathbb{P}^{1}\subset\mathrm{T}^{*}(\mathbb{C}^{2}/\mathbb{C}^{\times}).\]

The 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) has two canonical generators, corepresenting microstalk functors at the two components of \(\mathbb{L}_{G}(t,m)\), and we computed their endomorphism category to be

\[\mathcal{A}\simeq\left(\begin{array}{cc}\mathsf{Perf}_{\bullet\rightrightarrows\bullet}&\mathsf{Perf}_{\Bbbk}\\ \mathsf{Perf}_{\Bbbk}&\mathsf{Perf}_{\Bbbk[\beta]}\end{array}\right).\]

Up to remembering the degree shift in the bottom-right corner, we can write the matrix \(\mathcal{A}\) entirely in terms of representations of quivers: \(\mathcal{A}\simeq\mathsf{Perf}_{\mathcal{Q}}\), where

\[\mathcal{Q}\simeq\left(\begin{array}{cc}\bullet\rightrightarrows\bullet&\bullet\\ \bullet&\bullet\circlearrowleft\end{array}\right).\]

This presentation makes evident that each of the four Hom categories in \(\mathcal{A}\) has distinguished generating objects (the vertices) and generating 1-morphisms (the arrows). It is natural to conjecture that these correspond to the (wrapped) intersection points and the \(J_{s}\)-holomorphic disks, respectively, among Lagrangian cocores in the Fueter 2-category.

**Remark 0.19**.: Note that Example 0.18 gives a conjectural calculation not of the Fueter theory of \(T^{*}\mathbb{P}^{1}\) but rather of the gauged Fueter theory of \(T^{*}\mathbb{C}^{2}.\) In Example 0.12, the embedding of \(T^{*}\mathbb{P}^{1}\) into \(T^{*}\mathbb{C}^{2}\) contributed the extra monad \(M\), arising geometrically from the interaction of \(T^{*}_{\infty}\mathbb{P}^{1}\) with the unstable Lagrangian \(T^{*}_{0/\mathbb{C}^{\times}}(\mathbb{C}^{2}/\mathbb{C}^{\times}).\) That monad contributed one of the arrows in the Kronecker quiver appearing in the upper-left corner of \(\mathcal{Q}\) above. We therefore predict that the endomorphism category of the generators of the Fueter 2-category for \(T^{*}\mathbb{P}^{1}\) itself should be given by

\[\mathcal{A}^{\prime}:=\left(\begin{array}{cc}\mathsf{Perf}_{\bullet\to\bullet}&\mathsf{Perf}_{\Bbbk}\\ \mathsf{Perf}_{\Bbbk}&\mathsf{Perf}_{\Bbbk[\beta]}\end{array}\right).\]

### Notation and conventions

Our categorical conventions are largely the same as in [11], and we refer to §A there for more details. We use the "implicit \(\infty\)" convention: all constructions are understood in the homotopy-coherent sense; in particular, \((n-)\)category always means \((\infty,n)\)-category. Stable categories are always assumed \(\Bbbk\)-linear for a fixed coefficient ring \(\Bbbk.\) We write \(\mathsf{St}\) for the stable 2-category of small (\(\Bbbk\)-linear) stable categories. By default, the categories we discuss are derived categories rather than the hearts of their t-structure. If \(\mathcal{C}\) is a category with t-structure, we write \(\mathcal{C}^{\heartsuit}\) for its abelian heart. So for instance, for \(X\) an algebraic stack, we write \(\mathsf{Coh}(X)\) for the derived category of coherent sheaves on \(X\).
Throughout the paper, "stack" will always mean a derived stack which is quasi-compact, almost of finite presentation, and with affine diagonal. For \(\Lambda\subset\mathrm{T}^{*}X\) a Lagrangian, we write \(\mathsf{Perv}_{\Lambda}(X)\) for the category of compact objects inside the derived category of the heart of the perverse t-structure on the presentable stable category of sheaves on \(X\) with singular support in \(\Lambda.\) (Thus, our usage of \(\mathsf{Perv}_{\Lambda}(X)\) differs from the usual one in two ways: first, we treat this as a derived category rather than the heart of its t-structure, and second, we allow some objects with infinite-dimensional microstalks.) We retain the analogous convention for the category \(\mu\mathsf{Perv}_{\Lambda}(\Lambda)\) of microlocal perverse sheaves. Ideals in a monoidal category will always be assumed to be idempotent-complete full subcategories, closed under finite limits and colimits.

For a torus \(G\), we write \(\mathfrak{g}\) for its Lie algebra (with dual \(\mathfrak{g}^{\vee}\)), \(\mathfrak{g}^{\vee}_{\mathbb{Z}}\) for the character lattice (and \(\mathfrak{g}_{\mathbb{Z}}\) for the cocharacter lattice), and \(\mathfrak{g}^{\vee}_{\mathbb{R}}:=\mathfrak{g}^{\vee}_{\mathbb{Z}}\otimes\mathbb{R},\) \(\mathfrak{g}_{\mathbb{R}}:=\mathfrak{g}_{\mathbb{Z}}\otimes\mathbb{R}.\) When \(G\) has an action on a symplectic variety, the moment map is naturally valued in \(\mathfrak{g}^{\vee}\). If \(G\) is defined over \(\mathbb{C}\), we write \(G_{c}\) for the maximal compact subgroup; if \(G_{c}\) has a Hamiltonian action on a (real) symplectic manifold, the moment map is valued in \(\mathfrak{g}^{\vee}_{\mathbb{R}}.\)

**Notational index:** In Table 1 we collect some notation used frequently to describe the geometry of hypertoric varieties. In the table, \(\alpha\in 2^{[n]}\) denotes a sign vector, and \(I\subset 2^{[n]}\) a collection of sign vectors.

**Acknowledgements.** The authors are grateful to Aaron Mazel-Gee for collaborating on early stages of this project. BG would like to thank Elden Elmanto and Adeel Khan for suggestions about periodic cyclic homology and Laurent Cote for many long discussions about microlocal perverse sheaves and category \(\mathcal{O},\) and for collaboration on the companion paper [CGH]. JH would like to thank Mat Bullimore, Tudor Dimofte, and Davide Gaiotto for their collaborations on [1]. He would also like to thank Kevin Costello, Niklas Garner, Wenjun Niu, and Philsang Yoo for many interesting discussions about 3d \(\mathcal{N}=4\) theories, and Aleksander Doan, Ahsan Khan, and Semon Rezchikov for interesting discussions about Fueter maps. BG acknowledges the support of an NSF Postdoctoral Research Fellowship, DMS-2001897. JH is part of the Simons Collaboration on Homological Mirror Symmetry supported by Simons Grant 390287. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.

## 1. Hyperplanes and hyperkahler manifolds

Throughout this paper, we fix an exact sequence of tori

\[1\to G\xrightarrow{\ i\ }D\xrightarrow{\ p\ }F\to 1, \tag{1.0.1}\]

as well as an equivalence \(D\simeq(\mathbb{G}_{m})^{n}\).
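Since everything in this section is controlled by the integer linear algebra of (1.0.1), it can be encoded concretely: the subtorus \(G<D\) is determined by an integer matrix \(A\) whose columns span the cocharacter lattice \(\mathfrak{g}_{\mathbb{Z}}\subset\mathfrak{d}_{\mathbb{Z}}\simeq\mathbb{Z}^{n}\), and passing to the Gale dual data (§1.2 below) amounts to replacing \(A\) by an integer basis of \(\ker(A^{T})\). The following Python sketch is our own illustration; the function and variable names are not from the paper.

```python
# Encoding the exact sequence 1 -> G -> D -> F -> 1 by integer matrices.
# Columns of A span cochar(G) inside cochar(D) = Z^n; an integer basis of
# ker(A^T) spans the character lattice of F, i.e. cochar(F^dual).
# Illustration only; names are ours, not the paper's.
from math import lcm
from sympy import Matrix

def gale_dual(A: Matrix) -> Matrix:
    """Integer matrix whose columns span ker(A^T) over the rationals."""
    cols = []
    for v in A.T.nullspace():                  # rational basis vectors
        den = lcm(*[int(x.q) for x in v])      # clear denominators
        cols.append(v * den)
    return Matrix.hstack(*cols)

# Example 0.2(1): G = diagonal C^* inside (C^*)^3, so A = (1, 1, 1)^T.
A = Matrix([[1], [1], [1]])
B = gale_dual(A)
print(B.T)   # rows span {chi in Z^3 : chi_1 + chi_2 + chi_3 = 0}, i.e. the
             # kernel-of-multiplication torus of Example 0.2(2), so Gale
             # duality exchanges the two tori of that example.
print(gale_dual(B).T)   # double dual: back to the span of (1, 1, 1)
```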
### Hyperplane arrangements

From the exact sequence (1.0.1) and a choice of parameters \((t,m)\in\mathfrak{g}_{\mathbb{R}}^{\vee}\times\mathfrak{f}_{\mathbb{R}}\), we will define a polarized hyperplane arrangement in \(\mathfrak{f}_{\mathbb{R}}^{\vee}\) -- that is, a hyperplane arrangement in \(\mathfrak{f}_{\mathbb{R}}^{\vee}\) together with an affine-linear functional on \(\mathfrak{f}_{\mathbb{R}}^{\vee}\). We begin with the basic case where the parameters are \(0\) and \(G=\{e\}\) is trivial:

**Definition 1.1**.: Let \(\mathcal{H}\) be the cooriented hyperplane arrangement in \(\mathfrak{d}_{\mathbb{R}}^{\vee}\simeq\mathbb{R}^{n}\) given by the \(n\) coordinate hyperplanes, with their natural coorientations.

Observe that the exact sequence (1.0.1) induces an exact sequence

\[0\to\mathfrak{f}_{\mathbb{R}}^{\vee}\to\mathfrak{d}_{\mathbb{R}}^{\vee}\xrightarrow{\ i_{\mathbb{R}}^{\vee}\ }\mathfrak{g}_{\mathbb{R}}^{\vee}\to 0.\]

Given \(t\in\mathfrak{g}_{\mathbb{R}}^{\vee}\), we write

\[\mathfrak{f}_{\mathbb{R}}^{\vee}(t):=(i_{\mathbb{R}}^{\vee})^{-1}(t)\subset\mathfrak{d}_{\mathbb{R}}^{\vee}\]

for the affine-linear subspace of \(\mathfrak{d}_{\mathbb{R}}^{\vee}\) obtained as a translation of the subspace \(\mathfrak{f}_{\mathbb{R}}^{\vee}\) by \(t\). We will assume that \(\mathfrak{f}_{\mathbb{R}}^{\vee}(t)\) is not contained in any of the hyperplanes in the arrangement \(\mathcal{H}\).

**Definition 1.2**.: Given \((t,m)\in\mathfrak{g}_{\mathbb{R}}^{\vee}\times\mathfrak{f}_{\mathbb{R}}\), we define a polarized hyperplane arrangement \((\mathcal{H}_{G}(t),m)\), where \(\mathcal{H}_{G}(t)\) is the hyperplane arrangement on \(\mathfrak{f}_{\mathbb{R}}^{\vee}(t)\) obtained by intersecting with the coordinate hyperplane arrangement \(\mathcal{H}\), and the polarization \(m\) is the affine-linear functional on \(\mathfrak{f}_{\mathbb{R}}^{\vee}(t)\) given by \(m\in\mathfrak{f}_{\mathbb{R}}\).

We will frequently need to refer to the chambers in these hyperplane arrangements. Observe that one of the \(2^{n}\) chambers in the cooriented hyperplane arrangement \(\mathcal{H}\) may be specified by labeling which of the \(n\) coordinates on \(\mathbb{R}^{n}\) is positive and which is negative.

| Notation | Definition |
| --- | --- |
| \(X_{G}^{\alpha}\) | \(\{z_{i}=0\mid i\notin\alpha\}/G\subset\mathbb{A}^{n}/G\) |
| \(X_{G}\) | \(\bigsqcup_{\alpha\in 2^{[n]}}X_{G}^{\alpha}\) |
| \(X_{G}^{I}\) | \(\bigsqcup_{\alpha\in I}X_{G}^{\alpha}\) |
| \(\mathbb{L}_{G}^{\alpha}\) | \(\mathrm{N}_{X_{G}^{\alpha}}^{*}(\mathbb{A}^{n}/G)\subset\mathrm{T}^{*}(\mathbb{A}^{n}/G)\) |
| \(\mathbb{L}_{G}\) | \(\bigcup_{\alpha\in 2^{[n]}}\mathbb{L}_{G}^{\alpha}\) |
| \(\mathbb{L}_{G}^{I}\) | \(\bigcup_{\alpha\in I}\mathbb{L}_{G}^{\alpha}\) |

Table 1. Index of notation for the basic subvarieties and conormal Lagrangians considered in this paper

**Definition 1.3**.: A _sign vector_ is an element of the set \(2^{[n]}.\) Writing \(2=\{+,-\}\), we can think of a sign vector as a length-\(n\) word in \(\{+,-\}\), hence the name. We will also think of sign vectors as subsets of \([n]\), with \((+,\ldots,+)\) corresponding to the full set \([n]=\{1,\ldots,n\}\) and \((-,\ldots,-)\) corresponding to the empty set. For a sign vector \(\alpha\), we write \(\mathcal{H}^{\alpha}\) for the corresponding chamber of \(\mathcal{H}\), and we write \(\mathcal{H}^{\alpha}_{G}(t)\) for the restriction of this chamber to \(\mathfrak{f}^{\vee}_{\mathbb{R}}(t)\).
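Chamber combinatorics of this kind reduces to linear programming, so the emptiness and boundedness of the chambers \(\mathcal{H}^{\alpha}_{G}(t)\) (the stability and boundedness notions defined next, together with the exchange of Theorem 1.8 below) can be checked mechanically. Here is a small Python sketch for the \(n=3\) arrangement of Example 0.2; the matrices, the lifts `m_tilde`, `t_tilde`, and the convention that "bounded" means bounded below are our own illustrative choices.

```python
# Numerical check of Gale duality for n = 3: side 1 has G = diagonal C^*,
# side 2 its Gale dual.  A chamber is cut out by A^T v = t together with a
# sign pattern alpha; linear programming detects emptiness ("unstable") and
# unboundedness of the polarization functional.  The lifts and the "bounded
# below" convention are our choices; flipping signs permutes the roles.
import itertools
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0], [1.0], [1.0]])                    # cochar(G) in Z^3
B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])   # spans ker(A^T)
t, m_tilde = np.array([1.0]), np.array([1.0, -2.0, 0.0])
t_tilde = np.array([1.0, 0.0, 0.0])                    # a lift: A^T t_tilde = t

def classify(Aeq, beq, c, alpha):
    """'unstable' (empty chamber), or 'unbounded'/'bounded' for min of c.v."""
    bounds = [(0, None) if s == '+' else (None, 0) for s in alpha]
    res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=bounds)
    return {2: 'unstable', 3: 'unbounded'}.get(res.status, 'bounded')

for alpha in itertools.product('+-', repeat=3):
    side1 = classify(A.T, t, m_tilde, alpha)              # (H_G(t), m)
    side2 = classify(B.T, B.T @ m_tilde, t_tilde, alpha)  # (H_{F^dual}(m), t)
    assert (side1 == 'unstable') == (side2 == 'unbounded')  # Theorem 1.8
    assert (side1 == 'unbounded') == (side2 == 'unstable')  # its dual form
    print(''.join(alpha), side1, side2)
```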
**Definition 1.4**.: We say that a sign vector \(\alpha\in 2^{[n]}\) is \(t\)_-unstable_ if the chamber \(\mathcal{H}^{\alpha}_{G}(t)\) does not appear in \(\mathcal{H}_{G}(t)\).

Footnote 7: In [10], these sign vectors are instead called _infeasible_, using the language of linear programming. We prefer throughout to use language referencing the geometry of the hypertoric variety.

We say that a sign vector \(\alpha\in 2^{[n]}\) is \(m\)_-bounded_ if the restriction of \(m\) to the chamber \(\mathcal{H}^{\alpha}_{G}(t)\) (or equivalently, to the chamber \(\mathcal{H}^{\alpha}_{G}(0)\)) is bounded.

Recall the following features of hyperplane arrangements:

**Definition 1.5**.: A hyperplane arrangement in \(\mathbb{R}^{k}\) is _simple_ if any \(k\) hyperplanes intersect in a single point and if the intersection of any \(k+1\) hyperplanes is empty. The arrangement \(\mathcal{H}_{G}(t)\) is _unimodular_ if there is some basis for \(\mathfrak{g}_{\mathbb{Z}}\) in which \(p^{\vee}_{\mathbb{Z}}\) is given by a totally unimodular integer matrix.

**Definition 1.6**.: We say that the parameters \((t,m)\in\mathfrak{g}^{\vee}_{\mathbb{R}}\times\mathfrak{f}_{\mathbb{R}}\) are _generic_ if they satisfy the following conditions:

* \(t\) is chosen so that the hyperplane arrangement \(\mathcal{H}_{G}(t)\) is simple.
* \(m\) is chosen so that it is not constant on any 1-dimensional flat of \(\mathcal{H}_{G}(t)\) (or equivalently, on any 1-dimensional flat of \(\mathcal{H}_{G}(0)\)).

### Gale duality

Consider the exact sequence of tori obtained from (1.0.1) by taking duals:

\[1\to F^{\vee}\xrightarrow{\ p^{\vee}\ }D^{\vee}\xrightarrow{\ i^{\vee}\ }G^{\vee}\to 1. \tag{1.2.1}\]

We may repeat the constructions of §1 with \(G\) replaced by \(F^{\vee}.\) Now we use \(m\in\mathfrak{f}\) as the parameter determining a translation of the subspace \(\mathfrak{g}_{\mathbb{R}}\subset\mathfrak{d}_{\mathbb{R}}\simeq\mathbb{R}^{n}\), and by intersecting with the coordinate hyperplane arrangement in \(\mathbb{R}^{n}\) we obtain a hyperplane arrangement \(\mathcal{H}_{F^{\vee}}(m)\) in \(\mathfrak{g}_{\mathbb{R}}\).

**Definition 1.7**.: We say that the polarized hyperplane arrangements \((\mathcal{H}_{G}(t),m)\) and \((\mathcal{H}_{F^{\vee}}(m),t)\) are _Gale dual_ to one another.

The basic theorem of linear programming is that the concepts of instability and unboundedness are related to each other by this duality.

**Theorem 1.8** ([10, Theorem 2.4]).: _A sign vector \(\alpha\in 2^{[n]}\) is \(t\)-unstable for \((\mathcal{H}_{G}(t),m)\) if and only if it is \(t\)-unbounded for \((\mathcal{H}_{F^{\vee}}(m),t).\)_

### Toric hyperkahler manifolds

Before moving to hyperkahler manifolds, we begin with toric symplectic algebraic stacks. To the exact sequence (1.0.1), we associate the stack

\[\mathfrak{M}_{G}:=\mathrm{T}^{*}(\mathbb{A}^{n}/G),\]

where \(D\) is the torus of diagonal automorphisms of \(\mathbb{A}^{n}\), and \(G\) acts on \(\mathbb{A}^{n}\) as a subtorus of \(D\). This stack may be understood as the global quotient

\[\mathrm{T}^{*}(\mathbb{A}^{n}/G)=\mu_{G}^{-1}(0)/G,\]

where \(\mu_{G}:\mathrm{T}^{*}\mathbb{A}^{n}\to\mathfrak{g}^{\vee}\) is the moment map for the Hamiltonian action of the torus \(G.\) This moment map factors as the composite

\[\mathrm{T}^{*}\mathbb{A}^{n}\xrightarrow{\mu_{D}}\mathbb{A}^{n}\simeq\mathfrak{d}^{\vee}\xrightarrow{i^{\vee}}\mathfrak{g}^{\vee},\]

with the first map given by

\[\mu_{D}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n})=(x_{1}y_{1},\ldots,x_{n}y_{n}).\]

**Definition 1.9**.: Let \(t\in\mathfrak{g}_{\mathbb{Z}}^{\vee}\).
The _hypertoric variety_\({}^{8}\) associated to the exact sequence (1.0.1) and the parameter \(t\) is the GIT quotient \(\mathfrak{M}_{G}(t):=\mu_{G}^{-1}(0)/\!\!/_{t}G.\) Equivalently, we may write \(\mathfrak{M}_{G}(t)=(\mathrm{T}^{*}(\mathbb{A}^{n}/G))^{t-ss}\subset\mathrm{T}^{*}(\mathbb{A}^{n}/G)\) as the \(t\)-semistable locus inside the stack \(\mathrm{T}^{*}(\mathbb{A}^{n}/G).\) When \(\Bbbk=\mathbb{C}\), the underlying complex-analytic space of \(\mathfrak{M}_{G}(t)\) is called a _toric hyperkahler manifold_.

Footnote 8: If the hyperplane arrangement \(\mathcal{H}_{G}(t)\) is not unimodular, this is a slight misnomer, as we will treat \(\mathfrak{M}_{G}(t)\) as a DM stack in this case.

**Proposition 1.10** ([1, Theorems 3.2 & 3.3]).: _If \(t\) is generic, then the space \(\mathfrak{M}_{G}(t)\) is a smooth Deligne-Mumford stack. If moreover the hyperplane arrangement \(\mathcal{H}_{G}(t)\) is unimodular, then there are no strictly semistable points, and \(\mathfrak{M}_{G}(t)\) is a smooth variety._

When \(\Bbbk=\mathbb{C}\), the complex-analytic spaces \(\mathfrak{M}_{G}(t)\) were first introduced in [10], and further studied in [11, 12, 13]. As these spaces have traditionally been studied as complex manifolds, they are often constructed not as GIT quotients but as hyperkahler quotients, as we now recall. In the complex setting, not only is the action of \(G\) on \(T^{*}\mathbb{C}^{n}\) Hamiltonian (with moment map \(\mu_{G}\)) for the holomorphic symplectic form on \(T^{*}\mathbb{C}^{n}\), but the action of the compact torus \(G_{c}\) is also Hamiltonian for the real Kahler form on \(T^{*}\mathbb{C}^{n}=\mathbb{C}^{2n}.\) The real moment map for this action is the composite

\[T^{*}\mathbb{C}^{n}\xrightarrow{\mu_{D_{c}}}\mathbb{R}^{n}\simeq\mathfrak{d}_{\mathbb{R}}^{\vee}\xrightarrow{i_{\mathbb{R}}^{\vee}}\mathfrak{g}_{\mathbb{R}}^{\vee},\]

with the first map given by

\[\mu_{D_{c}}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n})=\left(|x_{1}|^{2}-|y_{1}|^{2},\ldots,|x_{n}|^{2}-|y_{n}|^{2}\right).\]

The following is a standard application of the Kempf-Ness theorem: see for instance [23, Ch. 3].

**Lemma 1.11**.: _For \(t\) generic, there is an isomorphism of complex manifolds (or orbifolds if \(\mathcal{H}_{G}(t)\) is not unimodular)_

\[\mathfrak{M}_{G}(t)\simeq\left(\mu_{G}^{-1}(0)\cap\mu_{G_{c}}^{-1}(t)\right)/G_{c}.\]

One very useful feature of the spaces \(\mathfrak{M}_{G}(t)\) is the presence of a dilating \(\mathbb{G}_{m}\)-action.

**Definition 1.12**.: Let \(\mathbb{S}=\mathbb{G}_{m}\) be the 1-dimensional torus which acts on \(\mathrm{T}^{*}\mathbb{A}^{n}\simeq\mathbb{A}^{2n}\) by scaling each of the coordinates with weight 1.

**Lemma 1.13**.: _The action of \(\mathbb{S}\) on \(\mathrm{T}^{*}\mathbb{A}^{n}\) descends to an action on \(\mathfrak{M}_{G}(t)\) which scales the symplectic form with weight 2._

### Toric holomorphic Lagrangians

We now discuss some interesting Lagrangian subspaces of \(\mathfrak{M}\). As usual, our discussion begins with the basic case of \(\mathfrak{M}=\mathrm{T}^{*}\mathbb{C}^{n}\).

**Definition 1.14**.: We define the following collection of conormal Lagrangians in our hypertoric spaces, indexed by sign vectors \(\alpha\).

* Let \(\alpha\in 2^{[n]}\) be a sign vector. We write

\[X^{\alpha}:=\{(z_{1},\ldots,z_{n})\in\mathbb{A}^{n}\mid z_{i}=0\text{ for }i\notin\alpha\}\subset\mathbb{A}^{n}\]

for the subspace of \(\mathbb{A}^{n}\) given by the intersection of coordinate hyperplanes.
We denote its image in \(\mathbb{A}^{n}/G\) by \(X^{\alpha}_{G}\), and we write \(\mathbb{L}^{\alpha}_{G}:=\mathrm{N}^{*}_{X^{\alpha}_{G}}(\mathbb{A}^{n}/G)\) for its conormal. * For \(I\subset 2^{[n]}\) a collection of sign vectors, we write \(X^{I}_{G}:=\bigsqcup_{\alpha\in I}X^{\alpha}_{G}\) and \(\mathbb{L}^{I}_{G}:=\bigcup_{\alpha\in I}\mathbb{L}^{\alpha}_{G}.\) * We write \(X_{G}:=\bigsqcup_{\alpha\in 2^{[n]}}X^{\alpha}_{G}\) and \(\mathbb{L}_{G}:=\bigcup_{\alpha\in 2^{[n]}}\mathbb{L}^{\alpha}_{G}.\) As \(X_{G}^{\alpha}\hookrightarrow\mathbb{A}^{n}/G\) is a closed immersion, we may understand \(\mathbb{L}_{G}^{\alpha}\) as a subspace of \(\mathrm{T}^{*}(\mathbb{A}^{n}/G).\) Similarly, we treat the singular Lagrangians \(\mathbb{L}_{G}^{I}\) and \(\mathbb{L}_{G}\) as subspaces of \(\mathrm{T}^{*}(\mathbb{A}^{n}/G)\) as well. We will be interested in descending these Lagrangians to the hypertoric varieties \(\mathfrak{M}_{G}(t),\) for generic \(t\in\mathfrak{g}_{\mathbb{Z}}^{\vee}.\) **Definition 1.15**.: We write \[\mathbb{L}_{G}(t,0):=\mathbb{L}_{G}\cap(\mathrm{T}^{*}(\mathbb{A}^{n}/G))^{t-ss }\subset\mathfrak{M}_{G}(t)\] for the locus of \(t\)-semistable points in the Lagrangian \(\mathbb{L}_{G}.\) **Proposition 1.16**.: _The locus of \(t\)-unstable points_ \[\mathbb{L}_{G}^{t\text{-unstab}}:=\mathbb{L}_{G}\setminus\mathbb{L}_{G}(t,0) \hookrightarrow\mathbb{L}_{G}\] _in the Lagrangian \(\mathbb{L}_{G}\) is given by the union \(\mathbb{L}_{G}^{t\text{-unstab}}=\bigcup_{\alpha\in t\text{-unstab}}\mathbb{L }_{G}^{\alpha},\) where the set of \(t\)-unstable sign vectors \(\alpha\) is defined in Definition 1.4._ Proof.: This follows from the stability criterion [10, Proposition 2.7]. The Lagrangian \(\mathbb{L}_{G}(t,0)\subset\mathfrak{M}_{G}(t)\) admits a simple description. 
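Before giving that description, we pause to note that the sign-vector combinatorics of Definition 1.4 entering Proposition 1.16 is elementary linear programming and is easy to check by machine. The following is a minimal sketch (our own illustration, not part of the mathematical development; it assumes NumPy and SciPy, and encodes a chamber by its sign constraints as in Definition 1.4) classifying each sign vector of a polarized arrangement as feasible and/or bounded:

```python
# Illustrative sketch: classify sign vectors of a polarized arrangement
# by linear programming.  "Bounded" here means the covector m is bounded
# above on the chamber (one common convention).
from itertools import product
import numpy as np
from scipy.optimize import linprog

def classify(A, b, m):
    """A: (n x d) matrix of hyperplane normals a_i; b: offsets, so that
    H_i = {x : <a_i, x> + b_i = 0}; m: a covector on R^d.  For each sign
    vector, test whether its chamber is nonempty and whether m is bounded
    above on it."""
    n, d = A.shape
    out = {}
    for signs in product([+1, -1], repeat=n):
        s = np.array(signs)
        # s_i * (<a_i, x> + b_i) >= 0   <=>   (-s_i * a_i) . x <= s_i * b_i
        res = linprog(-m, A_ub=-(s[:, None] * A), b_ub=s * b,
                      bounds=[(None, None)] * d)
        feasible = res.status != 2              # status 2: infeasible
        bounded = feasible and res.status != 3  # status 3: unbounded
        out[signs] = (feasible, bounded)
    return out

# Two points t_1 = 0, t_2 = 1 on a line: three chambers are nonempty,
# two of which are bounded above for m = (1).
print(classify(np.array([[1.0], [-1.0]]), np.array([0.0, 1.0]), np.array([1.0])))
```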
Observe that when \(G=\{e\}\) is trivial, the Lagrangian \(\mathbb{L}=\mathbb{L}_{G}\subset\mathrm{T}^{*}\mathbb{A}^{n}\) is a normal crossings variety, and when \(\Bbbk=\mathbb{C},\) the chambers of the coordinate hyperplane arrangement \(\mathcal{H}\) (the orthants) are precisely the moment polytopes for the components of \(\mathbb{L}\) (each of which is a copy of \(\mathbb{C}^{n}\)). This fact persists after the GIT quotient:

**Theorem 1.17** ([12, Theorem 6.5]).: _The component \((\mathbb{L}_{G}^{\alpha})^{t-ss}\) of \(\mathbb{L}_{G}(t,0)\) is a toric variety corresponding to the polytope \(\mathcal{H}_{G}^{\alpha},\) and these components meet according to the incidences of chambers in \(\mathcal{H}_{G}.\) When \(\Bbbk=\mathbb{C},\) the chambers \(\mathcal{H}_{G}^{\alpha}\) are literally the moment polytopes for the restriction to \(\mathbb{L}_{G}(t,0)\) of the real moment map \(\mu_{G_{c}}.\)_

In order to discuss (2-)categories \(\mathcal{O},\) we will need one further choice of parameter: the attraction parameter \(m\in\mathfrak{f}_{\mathbb{Z}},\) which we understand as a cocharacter \(m:\mathbb{G}_{m}\to F\) of the torus \(F\) which acts on \(\mathfrak{M}_{G}(t),\) determining a Hamiltonian action of a 1-dimensional torus \(\mathbb{G}_{m}\) on \(\mathfrak{M}_{G}(t).\)

**Definition 1.18**.: A point \(x\in\mathfrak{M}_{G}(t)\) is _\(m\)-bounded_ if there exists a \(\mathbb{G}_{m}\)-equivariant map \(\mathbb{A}_{\infty}^{1}\to\mathfrak{M}_{G}(t)\) whose image contains \(x,\) where we write \(\mathbb{A}_{\infty}^{1}\) for the weight-\((-1)\) representation of \(\mathbb{G}_{m}.\)

When \(\Bbbk=\mathbb{C},\) so the variety \(\mathfrak{M}_{G}(t)\) is a complex manifold and \(\mathbb{G}_{m}=\mathbb{C}^{\times},\) then Definition 1.18 is just the condition that the limit \(\lim_{\lambda\to\infty}m(\lambda)\cdot x\) exists. In general, we refer to [10, SS1] for more information about the geometry of \(\mathbb{G}_{m}\)-actions on algebraic spaces.

**Lemma 1.19**.: _The \(m\)-bounded locus \(\mathbb{L}_{G}^{m\text{-bdd}}\) in the Lagrangian \(\mathbb{L}_{G}\) is the union of components \(\mathbb{L}_{G}^{\alpha}\) corresponding to sign vectors \(\alpha\) which are \(m\)-bounded in the sense of Definition 1.4._

**Definition 1.20**.: Fix parameters \(t\in\mathfrak{g}_{\mathbb{Z}}^{\vee},m\in\mathfrak{f}_{\mathbb{Z}}.\) Then we write \(\mathbb{L}_{G}(t,m):=\mathbb{L}_{G}(t,0)^{m\text{-bdd}}\subset\mathfrak{M}_{G}(t)\) for the subset of \(m\)-bounded points in \(\mathbb{L}_{G}(t,0),\) and we call it the _category \(\mathcal{O}\) skeleton_ of \(\mathfrak{M}_{G}(t).\)

**Remark 1.21**.: The space of parameters \((t,m)\) is divided into chambers by a finite number of rational walls, and none of the quantities we study in this paper changes as \((t,m)\) moves within a fixed chamber. This suggests that it is reasonable to take \((t,m)\in\mathfrak{g}_{\mathbb{R}}^{\vee}\times\mathfrak{f}_{\mathbb{R}}\) to be continuous and real-valued, and indeed it is possible to make sense of the above constructions for such parameters: \(t\) still defines a closed subset of unstable points, and \(m\) defines a Hamiltonian vector field (which may not integrate to a \(\mathbb{C}^{\times}\)-action) which can still be used to define the bounded points. Nevertheless, for simplicity, throughout this paper we restrict our parameters to have integral values.

## 2. Hypertoric categories \(\mathcal{O}\)
Throughout this section, we fix a generic choice of parameters \((t,m)\in\mathfrak{g}_{\mathbb{Z}}^{\vee}\times\mathfrak{f}_{\mathbb{Z}}\), determining a Lagrangian \(\mathbb{L}_{G}(t,m)\subset\mathrm{T}^{*}(\mathbb{C}^{n}/G).\) Hypertoric category \(\mathcal{O}\), first studied in [1, 1], should be a category of regular DQ-modules or microlocal perverse sheaves on this Lagrangian. We will refer to these two flavors of category \(\mathcal{O}\) as the _de Rham_ and _Betti_ categories \(\mathcal{O}\), respectively. Here we will give a brief review of these categories. The proofs of the statements about de Rham category \(\mathcal{O}\) can be found in [1, 1], and those about Betti category \(\mathcal{O}\) will appear in [1].

### de Rham category \(\mathcal{O}\)

Let \(\mathcal{E}\) be a deformation quantization of the variety \(\mathfrak{M}_{G}(t)\), which may be produced as in [10] by quantum Hamiltonian reduction of the sheaf of microdifferential operators on \(\mathrm{T}^{*}\mathbb{C}^{n}\), followed by restriction to \(\mathfrak{M}_{G}(t).\) We write \(\mathcal{W}:=\mathcal{E}[\hbar^{-\frac{1}{2}}]\), so that \(\mathcal{W}\) is a sheaf of \(\mathbb{C}((\hbar^{\frac{1}{2}}))\)-algebras on \(\mathfrak{M}_{G}(t)\), which is moreover equivariant for the action of the torus \(\mathbb{S}\) defined in Definition 1.12, with \(|\hbar|=2\).

**Definition 2.1**.: An \(\mathcal{E}\)-module \(\mathcal{F}\) is _coherent_ if \(\mathcal{F}/\hbar\) is a coherent sheaf on \(\mathfrak{M}_{G}(t).\) A \(\mathcal{W}\)-module is _good_ if it admits a coherent \(\mathcal{E}\)-lattice. We write \(\mathsf{Good}^{\mathbb{S}}_{\mathcal{W}}\) for the derived category of \(\mathbb{S}\)-equivariant good \(\mathcal{W}\)-modules.

We refer to [10] for the theory of \(\mathbb{S}\)-equivariant good \(\mathcal{W}\)-modules.

**Definition 2.2** ([1, Definition 6.2]).: The _de Rham category_ \(\mathcal{O}\) is the full subcategory of \(\mathsf{Good}^{\mathbb{S}}_{\mathcal{W}}\) on those \(\mathcal{F}\) such that

* the support of \(\mathcal{F}\) is contained (set-theoretically) in \(\mathbb{L}_{G}(t,m)\), and
* \(\mathcal{F}\) admits an \(\mathcal{E}\)-lattice that is preserved by the action of \(m\).

We denote this category by \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\). This category is studied extensively in [1], where it is shown to be equivalent to the combinatorially-defined category introduced in [1]. An alternative approach to studying this category is given in [14, SS3]. We will summarize some facts about this category.

**Theorem 2.3** ([1]).: \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\) _is a highest weight category, with simple objects \(L^{\alpha}\) corresponding to components of the skeleton \(\mathbb{L}_{G}(t,m)\). Each \(L^{\alpha}\) has a projective cover \(P^{\alpha}\)._

**Corollary 2.4**.: \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\) _is generated by the simple objects \(L^{\alpha}\)._

The category \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\) may therefore be described explicitly by calculating the endomorphism algebra of its simple objects. The main technical tool used in performing this calculation is the _categorical Kirwan surjectivity_ theorem [1, Theorem 5.31], which says that the functor induced on deformation quantizations by pullback along the open inclusion \(\mathfrak{M}_{G}(t)\hookrightarrow T^{*}(\mathbb{C}^{n}/G)\) admits a fully faithful left adjoint.
As a result, we obtain the following description of the de Rham category:

**Proposition 2.5** ([14, Proposition 2.6]).: _There is an equivalence_

\[\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}(t),\mathbb{L}_{G}(t,m))\simeq\mathsf{Dmod}_{\mathbb{L}_{G}^{m\text{-bdd}}}(\mathbb{C}^{n}/G)/\mathsf{Dmod}_{\mathbb{L}_{G}^{m\text{-bdd},t\text{-unstab}}}(\mathbb{C}^{n}/G) \tag{2.1.1}\]

_between the de Rham category \(\mathcal{O}\) and the quotient of the category of D-modules with \(m\)-bounded microsupport by those with purely unstable microsupport._

In the quotient presentation (2.1.1), the simples in \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}(t),\mathbb{L}_{G}(t,m))\) correspond to the push-forwards of the D-modules of functions on the subvarieties \(X^{\alpha}_{G}\hookrightarrow\mathbb{C}^{n}/G\) for the \(m\)-bounded sign vectors \(\alpha.\) Before quotienting out the unstably-supported D-modules, the endomorphism algebra of these objects is given by the convolution algebra \(C^{\mathsf{BM}}_{*}(X^{m\text{-bdd}}\times_{\mathbb{C}^{n}/G}X^{m\text{-bdd}})\) of Borel-Moore chains on fiber products of the varieties \(X^{\alpha}_{G}\). The endomorphism algebra of simples in category \(\mathcal{O}\) can therefore be described as a quotient of this algebra:

**Theorem 2.6** ([17, Corollary 3.3]).: _Let \(A^{\mathsf{dR}}\) be the quotient of the algebra \(C^{\mathsf{BM}}_{*}(X^{m\text{-bdd}}\times_{\mathbb{C}^{n}/G}X^{m\text{-bdd}})\) by the ideal generated by \(C^{\mathsf{BM}}_{*}(X^{m\text{-bdd,t-unstab}}\times_{\mathbb{C}^{n}/G}X^{m\text{-bdd,t-unstab}}).\) Then there is an equivalence \(\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}(t),\mathbb{L}_{G}(t,m))\simeq\mathsf{Perf}_{A^{\mathsf{dR}}}\) between the de Rham category \(\mathcal{O}\) and the category of perfect modules over \(A^{\mathsf{dR}},\) induced by an equivalence \(A^{\mathsf{dR}}\simeq\mathsf{end}_{\mathcal{O}^{\mathsf{dR}}}(\bigoplus L^{\alpha})\) between \(A^{\mathsf{dR}}\) and the endomorphism algebra of the simple objects in \(\mathcal{O}^{\mathsf{dR}}.\)_

### Betti category \(\mathcal{O}\) and Riemann-Hilbert

The Betti category \(\mathcal{O}\) will be defined as a category of microlocal perverse sheaves: see [17] for the definition of microlocal sheaves on a general (stably polarized) Weinstein manifold (extending the original work of [16] on cotangent bundles) and [9] for the definition of the perverse t-structure on a stably holomorphically polarized holomorphic Lagrangian.

**Notation 2.7**.: In this section, we take our hypertoric variety \(\mathfrak{M}_{G}(t)\) to be defined over \(\mathbb{C}\), and we write \(\Bbbk\) for the coefficients in which our perverse sheaves are valued.

**Warning 2.8**.: As noted in SS0.6, our conventions for the category \((\mu)\mathsf{Perv}_{\Lambda}\) of (microlocal) perverse sheaves along a Lagrangian \(\Lambda\) differ in two ways from the standard ones:

1. We do not require objects to have finite-dimensional microstalks but instead write \((\mu)\mathsf{Perv}_{\Lambda}\) for the category of compact objects in the presentable category of perverse sheaves with possibly infinite-dimensional microstalks.
For instance, \(\mathsf{Perv}_{\mathbb{C}^{\times}}(\mathbb{C}^{\times})\simeq\mathsf{Loc}(\mathbb{C}^{\times})\) denotes not the category \(\mathsf{Mod}^{\mathsf{perf}/\Bbbk}_{\Bbbk[x^{\pm}]}\) of finite-dimensional \(\Bbbk[x^{\pm}]\)-modules but rather the category \(\mathsf{Perf}_{\Bbbk[x^{\pm}]}\) of perfect \(\Bbbk[x^{\pm}]\)-modules -- including the free module \(\Bbbk[x^{\pm}]\), which corresponds to the universal local system on \(\mathbb{C}^{\times}\) given by the pushforward of the constant sheaf along the universal covering map \(\mathbb{C}\to\mathbb{C}^{\times}\).

2. We denote by \((\mu)\mathsf{Perv}_{\Lambda}\) not the abelian category of (microlocal) perverse sheaves but rather its derived category.

**Definition 2.9**.: The _Betti category_ \(\mathcal{O}\) is the category

\[\mathcal{O}^{\mathsf{Bet}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m)):=\mu\mathsf{Perv}_{\mathbb{L}_{G}(t,m)}(\mathbb{L}_{G}(t,m))\]

of microlocal perverse sheaves on the Lagrangian \(\mathbb{L}_{G}(t,m)\), with stable polarization induced from the fiber polarization of \(\mathrm{T}^{*}(\mathbb{C}^{n}/G)\) by [17, Corollary 4.5].

Recall that there is a locally closed embedding \(\mathbb{L}_{G}(t,m)\hookrightarrow\mathbb{L}_{G}=\mathbb{L}/G\), where \(\mathbb{L}\subset\mathrm{T}^{*}\mathbb{C}^{n}\) is the union of conormals to intersections of coordinate hyperplanes.

**Theorem 2.10** ([11]).: _There is an equivalence between \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})\) and (compact objects in) the category of \(n\)-hypercubes of vector spaces \(\{V_{\alpha}\mid\alpha\in 2^{[n]}\}\) equipped with two families of linear maps_

\[u_{i}:V_{\alpha}\rightleftarrows V_{\alpha\cup\{i\}}:v_{i} \tag{2.2.1}\]

_such that all the operators \(u_{i}\) and \(v_{j}\) commute, and such that \(1-u_{i}v_{i}\) and \(1-v_{i}u_{i}\) are invertible._

**Example 2.11**.: In case \(n=1\), we recover the classical description of perverse sheaves on \((\mathbb{C},0)\) in terms of the data

\[\Phi\ \overset{\mathit{var}}{\underset{\mathit{can}}{\rightleftarrows}}\ \Psi\]

of their vanishing and nearby cycles, with canonical and variation maps between them. (The general case may be recovered from this one by taking a tensor product.)

**Definition 2.12**.: For \(\alpha\in 2^{[n]}\), we write \(P_{\alpha}\) for the object of \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})\) which corepresents the functor \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})\to\mathsf{Mod}_{\Bbbk}\) taking a diagram (2.2.1) to the vector space \(V_{\alpha}\). The object \(P=\bigoplus_{\alpha\in 2^{[n]}}P_{\alpha}\) is a projective generator for the category \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})\).

**Example 2.13**.: Let \(n=1.\) Then, the objects \(P_{\Phi},P_{\Psi}\) are given by the respective diagrams

\[\Bbbk[m^{\pm}]\ \overset{1-m}{\underset{\sim}{\rightleftarrows}}\ \Bbbk[m^{\pm}],\qquad\quad\Bbbk[m^{\pm}]\ \overset{\sim}{\underset{1-m}{\rightleftarrows}}\ \Bbbk[m^{\pm}],\]

in which one of the two maps is multiplication by \(1-m\) and the other is an isomorphism.

**Lemma 2.14**.: _Let \(I\subset 2^{[n]},\) and let \(\mathbb{L}^{I}=\bigcup\mathbb{L}^{\alpha}\subset\mathbb{L}\) be the union of components \(\mathbb{L}^{\alpha}\) with \(\alpha\in I.\) Then there is an equivalence_

\[\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n})\simeq\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})/\langle P_{\alpha}\mid\alpha\notin I\rangle. \tag{2.2.2}\]
_The right adjoint to the quotient functor gives a fully faithful embedding_

\[\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n})\hookrightarrow\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n}) \tag{2.2.3}\]

_whose essential image is the diagrams \((V_{\bullet},u,v)\) with \(V_{\alpha}=0\) for each \(\alpha\notin I.\)_

Proof.: The equivalence (2.2.2) is described in [10, Theorem 4.13]. On the presentable categories \(\mathsf{Ind}\mathsf{Perv},\) the quotient functor is left adjoint to the natural inclusion

\[\mathsf{Ind}\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n})\hookrightarrow\mathsf{Ind}\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n}). \tag{2.2.4}\]

To see that (2.2.4) preserves compact objects, observe that we can understand this functor as a functor \(\mathsf{Mod}_{A}\to\mathsf{Mod}_{B},\) where \(A\) and \(B\) are the endomorphism algebras of the projective generators in the respective perverse sheaf categories, and in particular, \(B\) is the path algebra of the quiver with relations described in Theorem 2.10. In these terms, \(A\) is the quotient of \(B\) by the idempotents at vertices not in \(I,\) and the functor \(\mathsf{Mod}_{A}\to\mathsf{Mod}_{B}\) is restriction of scalars along the quotient map \(B\to A.\) Preservation of compact objects now follows from the fact that, since \(A\) is the quotient of \(B\) by an idempotent, it is a projective \(B\)-module. 

The foregoing calculations have been within the cotangent bundle \(T^{*}(\mathbb{C}^{n}),\) but to study the Betti category \(\mathcal{O}\) we will need to microlocalize away from the unstable locus. As on the de Rham side, the result which will give us access to this microlocalization is a version of the categorical Kirwan surjectivity theorem, whose proof will appear in [10]:

**Theorem 2.15** ([10]).: _Let \(I^{\prime}\subset I\subset 2^{[n]}.\) Then there is an equivalence_

\[\mu\mathsf{Perv}(\mathbb{L}^{I}\setminus\mathbb{L}^{I^{\prime}})\simeq\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n})/\mathsf{Perv}_{\mathbb{L}^{I^{\prime}}}(\mathbb{C}^{n})\]

_presenting the category of microlocal sheaves on the locally closed subset \(\mathbb{L}^{I}\setminus\mathbb{L}^{I^{\prime}}\subset\mathbb{L}\) as a quotient of categories of perverse sheaves with singular support on \(\mathbb{L}^{I}\) by those on \(\mathbb{L}^{I^{\prime}}.\) The left adjoint to the quotient functor gives a fully faithful embedding_

\[\mu\mathsf{Perv}(\mathbb{L}^{I}\setminus\mathbb{L}^{I^{\prime}})\hookrightarrow\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n}) \tag{2.2.5}\]

_as the subcategory of \(\mathsf{Perv}_{\mathbb{L}^{I}}(\mathbb{C}^{n})\) generated by the objects \(P_{\alpha}\) for \(\alpha\notin I^{\prime}.\)_

**Example 2.16**.: Consider the case where \(n=1,\) and \(\mathbb{L}^{I^{\prime}}=\mathrm{T}_{0}^{*}\mathbb{C}\) is the cotangent fiber over \(0\).
In this case, we have \[\mu\mathsf{Perv}(\mathbb{L}\setminus\mathbb{L}^{I^{\prime}})\simeq\mu\mathsf{ Perv}(\mathbb{C}^{\times})\simeq\mathsf{Loc}(\mathbb{C}^{\times}), \tag{2.2.6}\] since microlocal perverse sheaves on the Lagrangian \(\mathbb{C}^{\times}\subset\mathbb{C}\subset\mathrm{T}^{*}\mathbb{C}\) are just local systems on \(\mathbb{C}^{\times}.\) In these terms, the left adjoint to the quotient functor \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C})\to\mu\mathsf{Perv}(\mathbb{L}\setminus \mathrm{T}_{0}^{*}\mathbb{C})\) is given by the proper pushforward \[\mathsf{Loc}(\mathbb{C}^{\times})\xrightarrow{j_{!}}\mathsf{Perv}_{\mathbb{L }}(\mathbb{C})\] along the open embedding \(\mathbb{C}^{\times}\xrightarrow{j}\mathbb{C}.\) The right orthogonal to this embedding is the subcategory \(\mathsf{Perv}_{T_{0}^{*}\mathbb{C}}(\mathbb{C})=\mathsf{Perv}_{0}(\mathbb{C}) \hookrightarrow\mathsf{Perv}_{\mathbb{L}}(\mathbb{C})\) of perverse sheaves supported at \(0\in\mathbb{C}.\) Finally, after our study of microlocal perverse sheaves inside of \(T^{*}\mathbb{C}^{n}\), we must pass to the \(G\)-quotient. Observe that each vector space \(V_{\alpha}\) in the hypercubical diagram described in Theorem 2.10 is equipped with \(n\) commuting automorphisms (from the \(1-u_{i}v_{i}\) or \(1-v_{i}u_{i}\)), making it a representation of the group \(\mathbb{Z}^{n}.\) The inclusion \(G\hookrightarrow(\mathbb{C}^{\times})^{n}\) induces an inclusion on cocharacter lattices \(\pi_{1}G\hookrightarrow\mathbb{Z}^{n}\), so that each \(V_{\alpha}\) is in particular a representation of the free abelian group \(\pi_{1}G\). **Lemma 2.17** ([Cgh]).: _The category \(\operatorname{\mathsf{Perv}}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) is equivalent to the data of diagrams of vector spaces \((V_{\alpha})\) as in Theorem 2.10 such that the \(\pi_{1}G\) action on each vector space \(V_{\alpha}\) is trivial._ By combining the above results, we can give an explicit presentation of the Betti category \(\mathcal{O}\). **Definition 2.18**.: Let \(Q\) be the quiver whose vertices are indexed by \(\alpha\in 2^{[n]}\) which are \(t\)-stable, and whose arrows come in pairs \[u_{i}:\alpha\rightleftarrows\alpha\cup\{i\}:v_{i} \tag{2.2.7}\] for each pair of vertices of the form \((\alpha,\alpha\cup\{i\})\). Let \(A^{\mathsf{Bet}}\) be the algebra \(\Bbbk Q\otimes\Bbbk[\pi_{1}F]/I\), where \(I\) is the ideal generated by the following relations: 1. \(e_{\alpha}=0\) for each \(\alpha\) which is \(m\)-unbounded, where we write \(e_{\alpha}\) for the idempotent (length-0 path) at \(\alpha\). 2. 
For each square in \(Q\) of the form \[\begin{array}{ccc}\alpha\cup\{j\}&\rightleftarrows&\alpha\cup\{i,j\}\\ \uparrow\downarrow&&\uparrow\downarrow\\ \alpha&\rightleftarrows&\alpha\cup\{i\},\end{array}\] the relations that the operators \(u_{i},v_{i},u_{j},v_{j}\) along its edges commute with one another, as in Theorem 2.10.

3. Monodromy relations identifying, at each vertex \(\alpha\), the automorphisms \(1-v_{i}u_{i}\) (for \(i\notin\alpha\)) and \(1-u_{i}v_{i}\) (for \(i\in\alpha\)) with the action of \(\Bbbk[\pi_{1}F]\), as dictated by Lemma 2.17.

**Theorem 2.19**.: _There is an equivalence \(\mathcal{O}^{\mathsf{Bet}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\simeq\mathsf{Perf}_{A^{\mathsf{Bet}}}\) between the Betti category \(\mathcal{O}\) and the category of perfect modules over the algebra \(A^{\mathsf{Bet}}\) of Definition 2.18._

**Example 2.21**.: Let \(G\simeq\mathbb{C}^{\times}\hookrightarrow(\mathbb{C}^{\times})^{2}\) be the kernel of the multiplication map. By Lemma 2.17, an object of \(\mathsf{Perv}_{\mathbb{L}_{G}}(\mathbb{C}^{2}/G)\) is a square of vector spaces

\[\begin{array}{ccc}V^{\{1\}}&\rightleftarrows&V^{\{1,2\}}\\ \uparrow\downarrow&&\uparrow\downarrow\\ V^{\emptyset}&\rightleftarrows&V^{\{2\}}\end{array}\]

as in Theorem 2.10, on which the monodromies act through an invertible operator \(M\) generating the \(\pi_{1}F\)-action; at the vertex \(V^{\emptyset}\), where both compositions \(v_{i}u_{i}\) are defined, these relations read

\[1-v_{1}u_{1}=M,\qquad 1-v_{2}u_{2}=M^{-1}.\]

For an appropriate choice of parameters \(t,m,\) the component \(\mathbb{L}_{G}^{\{2\}}\) becomes \(t\)-unstable, while the smooth points of \(\mathbb{L}_{G}^{\emptyset}\) become \(m\)-unbounded. Now \(\mu\mathsf{Perv}(\mathbb{L}_{G}(t,m))\) is the quotient of \(\mathsf{Perv}_{\mathbb{L}_{G}}(\mathbb{C}^{2}/G)\) by the simple object at \(V^{\{2\}}\) and the projective object at \(V^{\emptyset}.\) This has the effect of forgetting \(V^{\{2\}}\) and setting \(V^{\emptyset}\) to \(0,\) so that \(\mu\mathsf{Perv}(\mathbb{L}_{G}(t,m))\) is the category of diagrams with monodromies at \(V^{\{1\}}\) and \(V^{\{1,2\}}\) and the same relations as above (except those involving the bottom-right corner). Setting the bottom-left corner to \(0,\) together with the two relations at the upper-left vertex, forces \(v_{2}u_{2}=0,\) which also forces the final relation (invertibility of \(1-u_{2}v_{2}\)), and we conclude that \(\mu\mathsf{Perv}(\mathbb{L}_{G}(t,m))\) is the category of diagrams of vector spaces with the single relation \(v_{2}u_{2}=0.\) This is the usual description of the BGG category \(\mathcal{O}\) for \(SL(2).\)

We may reframe Theorem 2.19 as giving a presentation of Betti category \(\mathcal{O}\) as a subquotient of the category \(\mathsf{Perv}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\):

**Corollary 2.22**.: _There is an equivalence_

\[\mu\mathsf{Perv}(\mathbb{L}_{G}(t,m))\simeq\frac{\langle P_{G}^{\alpha}\mid\alpha\ t\text{-semistable}\rangle}{\langle P_{G}^{\beta}\mid\beta\ m\text{-unbounded}\rangle}.\]

Finally, we note that although in SS2.1 we described \(\mathcal{O}^{\mathsf{dR}}\) in terms of the endomorphism algebra of simples, it may also be described in terms of the endomorphism algebra of projectives. The resulting algebra, described in [1, SS3], is a quiver algebra of the same form as \(A^{\mathsf{Bet}}.\)
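The quiver \(Q\) of Definition 2.18 is determined by the same feasibility and boundedness data as in SS1. As an illustration (again our own sketch, not drawn from the text; the encoding of sign vectors as subsets of \(\{0,\ldots,n-1\}\) is hypothetical), its vertices and arrow pairs may be assembled as follows:

```python
# Illustrative sketch: vertices and arrows of the quiver Q of Definition 2.18,
# given the sets of t-semistable and m-bounded sign vectors (as frozensets).
def build_quiver(n, semistable, bounded):
    vertices = sorted(semistable, key=lambda a: (len(a), sorted(a)))
    arrows = []
    for a in vertices:
        for i in range(n):
            b = frozenset(a | {i})
            if i not in a and b in semistable:
                arrows.append(("u", i, a, b))  # u_i : alpha -> alpha u {i}
                arrows.append(("v", i, b, a))  # v_i : alpha u {i} -> alpha
    # relation (1) of Definition 2.18 kills the idempotents at these vertices:
    killed = [a for a in vertices if a not in bounded]
    return vertices, arrows, killed

# An SL(2)-type example: three semistable vertices, with the empty
# sign vector m-unbounded.
ss = {frozenset(), frozenset({0}), frozenset({0, 1})}
bd = {frozenset({0}), frozenset({0, 1})}
print(build_quiver(2, ss, bd))
```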
The following is a microlocal form of the Riemann-Hilbert correspondence.

**Proposition 2.23** ([CGH]).: _Let \(\Bbbk=\mathbb{C}.\) Then there is an equivalence_

\[\mathcal{O}^{\mathsf{dR}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m))\simeq\mathcal{O}^{\mathsf{Bet}}(\mathfrak{M}_{G}(t),\mathbb{L}_{G}(t,m)) \tag{2.2.8}\]

_between the Betti and de Rham categories \(\mathcal{O}.\)_

We refer to [CGH] for more details and an explicit form of this equivalence.

## 3. Microlocal perverse schobers

Our A-side 2-category \(\mathcal{O}\) will be a category of microlocal perverse schobers on the skeleton \(\mathbb{L}(t,m),\) categorifying the presentation of \(\mathcal{O}^{\mathsf{Bet}}\) as a category of microlocal perverse sheaves. Unfortunately, microlocal perverse schobers do not yet have a definition. However, Theorem 2.15 tells us that microlocal perverse sheaves on \(\mathbb{L}\) can be defined purely in terms of categories of perverse sheaves on \(\mathbb{C}^{n}/G\) with singular supports in closed subsets of \(\mathbb{L}.\) We will therefore define microlocal perverse sheaves of categories on \(\mathbb{L}\) in the analogous way.

### Spherical functors and perverse schobers

We begin by recalling the relevant 2-category from [10]. We first need the notion of a spherical adjunction.

**Definition 3.1** ([11]).: The _free adjunction_ is the (discrete, i.e., \((2,2)\) rather than \((\infty,2)\)) 2-category \(\mathsf{Adj}\) generated by two objects \(\Phi,\Psi\), 1-morphisms

\[L:\Phi\rightleftarrows\Psi:R, \tag{3.1.1}\]

and the 2-morphisms \(\mathsf{id}_{\Phi}\to RL,LR\to\mathsf{id}_{\Psi}\), with the relations that the compositions \(R\to RLR\to R\) and \(L\to LRL\to L\) are the identity maps \(\mathsf{id}_{R}\) and \(\mathsf{id}_{L}\), respectively.

**Theorem 3.2** ([11]).: _The 2-category \(\mathsf{Adj}\) corepresents adjunctions in 2-categories._

**Definition 3.3**.: Let \(\overline{\mathsf{Adj}}:=\Sigma^{(\infty,2)}\mathsf{Adj}\) be the \(\Bbbk\)-linear stabilization of the 2-category \(\mathsf{Adj}\). The _universal (co)twist_ is the 1-morphism

\[T_{\Phi}:=\mathsf{fib}(\mathsf{id}_{\Phi}\to RL)\qquad(\text{resp.},\,T_{\Psi}:=\mathsf{cofib}(LR\to\mathsf{id}_{\Psi}))\]

in \(\overline{\mathsf{Adj}}\) given by the (co)fiber of the (co)unit of the universal adjunction (3.1.1). The _free spherical adjunction_ is the stable 2-category

\[\mathsf{Sph}:=\overline{\mathsf{Adj}}[T_{\Phi}^{-1},T_{\Psi}^{-1}]\]

obtained from \(\mathsf{Adj}\) by localizing at the 1-morphisms \(T_{\Phi},T_{\Psi}\).9

Footnote 9: This differs from the usage in [10], where \(\mathsf{Sph}\) was synonymous with the 2-category \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C})\).

Let \(\mathcal{C}\rightleftarrows\mathcal{D}\) be an adjunction in stable categories, which (by Theorem 3.2 and the stability of \(\mathcal{C}\)) corresponds to a functor \(F:\overline{\mathsf{Adj}}\to\mathsf{Cat}\). If \(F\) admits a factorization \(\overline{\mathsf{Adj}}\to\mathsf{Sph}\to\mathsf{Cat}\), then we say that the adjunction \(\mathcal{C}\rightleftarrows\mathcal{D}\) is _spherical_, and that each of the adjoint functors is a _spherical functor_.

As usual, let \(\mathbb{L}=\bigcup_{\alpha}\mathrm{T}_{X^{\alpha}}^{*}\mathbb{C}^{n}\subset T^{*}\mathbb{C}^{n}\) be the union of conormals to intersections of coordinate hyperplanes.
**Definition 3.4**.: The 2-category of _perverse schobers on \(\mathbb{C}^{n}\)_ with singular support in \(\mathbb{L}\) (and coefficients in the 2-category \(\mathsf{St}\) of stable categories) is the 2-category \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n}):=\mathsf{Fun}(\mathsf{Sph}^{\otimes n},\mathsf{St})\) of \(\mathsf{Sph}^{\otimes n}\)-diagrams in stable categories.

**Notation 3.5**.: An object of \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n})\) is specified by the data of a _spherical categorical n-cube_ in the sense of [10, Definition 5.4.3], namely a functor \(2^{[n]}\to\mathsf{St}\) (where we treat \(2^{[n]}\) as a partially ordered set) such that each edge of the cube is a spherical functor, and each face of the cube satisfies the Beck-Chevalley condition of [10, Definition 4.5.7]. Such an object consists of a collection of categories \((\mathcal{C}^{\alpha})_{\alpha\in 2^{[n]}}\), together with spherical adjunctions

\[u_{i}:\mathcal{C}_{\alpha}\rightleftarrows\mathcal{C}_{\alpha\cup\{i\}}:v_{i}\]

satisfying the appropriate commutativity conditions. We will denote this object by \((\mathcal{C}^{\bullet},u,v)\).

The \(D=(\mathbb{C}^{\times})^{n}\) action on \(\mathbb{C}^{n}\) should make \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n})\) into a topological \(D\)-2-category (i.e., the underlying 2-category of a functor \(BD\to 2\mathsf{Pr}^{L,\mathsf{st}}\), where the target is the 3-category of presentable stable 2-categories), and indeed this structure was described in [10].

**Definition 3.6**.: The _universal twist_ is the central element \(\mathbf{T}\in\mathsf{aut}(\mathsf{id}_{\mathsf{Sph}})\) which acts on a spherical adjunction \(L:\Phi\rightleftarrows\Psi:R\) by the pair of (co)twist automorphisms \((T_{\Phi},T_{\Psi})\) of Definition 3.3.

**Lemma 3.7** ([GMGH, Proposition 3.16]).: _The universal twist is the generator of an \(\mathbb{E}_{2}\)-map \(\mathbb{Z}\to\mathsf{end}(\mathsf{id}_{\mathsf{Sph}}),\) defining a topological \(B\mathbb{Z}\)-action on \(\mathsf{Sph}.\)_

By taking products, we obtain a topological \(B(\mathbb{Z}^{n})\simeq D\)-action on \(\mathsf{Sph}^{n},\) which we can restrict along the inclusion \(G\to D\) to obtain a \(G\)-action.

**Definition 3.8**.: We write \(\mathsf{Sph}_{G}^{n}\) for the coinvariants of the \(G\)-action on \(\mathsf{Sph}^{n}.\) (See [GMGH, Lemma A.4] for the construction of these coinvariants.) The 2-category of _\(G\)-equivariant perverse schobers_ is defined as

\[\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G):=\mathsf{Fun}(\mathsf{Sph}_{G}^{n},\mathsf{St}).\]

The 2-category \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) is therefore obtained as the invariants \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n})^{G}\) for the \(G\)-action on \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n}).\) An object of the resulting 2-category may be specified by the data of a spherical \(n\)-cube equipped with trivializations of the appropriate compositions of (co)twist automorphisms on each category \(\mathcal{C}^{\alpha}.\) We will continue to denote such an object by \((\mathcal{C}^{\bullet},u,v),\) leaving the trivializations implicit in our notation.

Certain objects in \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) will play a distinguished role.
Recall that an object of \(\mathsf{Sph}^{n}\) may be specified by a choice of \(\alpha\in 2^{[n]};\) we continue to write \(\alpha\) to denote the image of this object in \(\mathsf{Sph}_{G}^{n}.\)

**Definition 3.9**.: The object \(\alpha\) in \(\mathsf{Sph}_{G}^{n}\) determines a corepresentable functor \(\mathsf{Hom}_{\mathsf{Sph}_{G}^{n}}(\alpha,-);\) we will denote this object of \(\mathsf{Fun}(\mathsf{Sph}_{G}^{n},\mathsf{St})=\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) by \(\mathcal{P}_{G}^{\alpha}.\)

Tautologically, the object \(\mathcal{P}_{G}^{\alpha}\) corepresents the functional on \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) which takes a \(G\)-invariant spherical \(n\)-cube \((\mathcal{C}^{\bullet},u,v)\) to the category \(\mathcal{C}^{\alpha}.\) The objects \(\mathcal{P}_{G}^{\alpha}\) are therefore generators of the 2-category \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)=\mathsf{Fun}(\mathsf{Sph}_{G}^{n},\mathsf{St});\) the main theorem of [GMGH], which we have stated as Theorem 0.1 above, is a calculation of their monoidal category of endomorphisms \(\mathsf{end}\left(\bigoplus\mathcal{P}_{\alpha}\right).\)

We can use the \(\mathcal{P}_{G}^{\alpha}\) to describe perverse schobers on \(\mathbb{C}^{n}/G\) with singular support in closed subsets of \(\mathbb{L}_{G}.\)

**Definition 3.10**.: Let \(I\subset 2^{[n]},\) and \(\mathbb{L}_{G}^{I}:=\bigcup_{\alpha\in I}\mathbb{L}_{G}^{\alpha}.\) Then the 2-category of perverse schobers on \(\mathbb{C}^{n}/G\) with singular support in \(\mathbb{L}_{G}^{I}\) is the quotient 2-category

\[\mathsf{PervCat}_{\mathbb{L}_{G}^{I}}(\mathbb{C}^{n}/G):=\frac{\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}{\langle\mathcal{P}_{G}^{\alpha}\rangle_{\alpha\notin I}}. \tag{3.1.2}\]

**Lemma 3.11**.: _The right adjoint to the quotient functor (3.1.2) gives a fully faithful embedding \(\mathsf{PervCat}_{\mathbb{L}_{G}^{I}}(\mathbb{C}^{n}/G)\hookrightarrow\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) as the sub-2-category of diagrams \((\mathcal{C}^{\bullet},u,v)\) where \(\mathcal{C}^{\alpha}\simeq 0\) for each \(\alpha\notin I.\)_

### 2-category \(\mathcal{O}\)

So far we have understood the 2-category \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) as a categorification of equivariant perverse sheaves on \(\mathbb{C}^{n}.\) We will now treat it microlocally, imitating the description of microlocal perverse sheaves in SS2.2.

**Definition 3.12**.: For a stability parameter \(t\) and an attraction parameter \(m,\) the 2-category of _microlocal perverse schobers_ on \(\mathbb{L}_{G}(t,m)\) is defined as the quotient

\[\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m)):=\frac{\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}{\mathsf{PervCat}_{\mathbb{L}_{G}^{t\text{-unstab}}}(\mathbb{C}^{n}/G),\ \langle\mathcal{P}_{G}^{\alpha}\mid\alpha\ m\text{-unbounded}\rangle}. \tag{3.2.1}\]
We can also present (3.2.1) as a subquotient of \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\), matching the presentation of Betti category \(\mathcal{O}\) given in Corollary 2.22:

**Lemma 3.13**.: _There is an equivalence \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\simeq\frac{\langle\mathcal{P}_{G}^{\alpha}\mid\alpha\ t\text{-semistable}\rangle}{\langle\mathcal{P}_{G}^{\beta}\mid\beta\ m\text{-unbounded}\rangle}.\)_

Proof.: As in the 1-categorical case, this follows from the fact that the left adjoint to the quotient functor

\[\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\to\frac{\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}{\mathsf{PervCat}_{\mathbb{L}_{G}^{t\text{-unstab}}}(\mathbb{C}^{n}/G)}\]

embeds the quotient as the full sub-2-category of \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\) generated by the objects \(\mathcal{P}_{G}^{\alpha}\) for \(t\)-stable \(\alpha\). 

Categorifying Definition 2.18, we can give an explicit description of objects in the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\).

**Definition 3.14**.: _Quiver data_ for \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) is the following collection of data:

* For each \(\alpha\) which is both \(t\)-stable and \(m\)-bounded, a category \(\mathcal{C}^{\alpha}\).
* For each \(\mathcal{C}^{\alpha}\), a map \(\pi_{1}(F)\to\mathsf{aut}(\mathcal{C}^{\alpha})\), which we understand as a map \(\pi_{1}(D)\to\mathsf{aut}(\mathcal{C}^{\alpha})\) with a trivialization of the composite \(\pi_{1}(G)\to\pi_{1}(D)\to\mathsf{aut}(\mathcal{C}^{\alpha}).\) We write \(T_{i}^{\alpha}\in\mathsf{aut}(\mathcal{C}^{\alpha})\) for the images of the canonical generators of \(\pi_{1}(D)\simeq\mathbb{Z}^{n}\).
* On each \(\mathcal{C}^{\alpha}\), a monad \(M_{i}^{\alpha}\) for each \(i\in\alpha\), and a comonad \(C_{i}^{\alpha}\) for each \(i\notin\alpha\).
* For each \(t\)-stable pair \((\alpha,\alpha\cup\{i\})\), a spherical adjunction \[u_{i}:\mathcal{C}^{\alpha}\rightleftarrows\mathcal{C}^{\alpha\cup\{i\}}:v_{i}\] together with equivalences \(M_{i}^{\alpha\cup\{i\}}\simeq u_{i}v_{i},C_{i}^{\alpha}\simeq v_{i}u_{i}\) of monads (resp. comonads), where we take the convention that \(\mathcal{C}^{\alpha}\simeq 0\) if \(\alpha\) is not \(m\)-bounded.
* For each \(\mathcal{C}^{\alpha}\), an equivalence \(T_{i}^{\alpha}\simeq\mathsf{fib}(\mathsf{id}_{\mathcal{C}^{\alpha}}\xrightarrow{\eta}M_{i}^{\alpha})\) for each \(i\in\alpha\), and an equivalence \(T_{i}^{\alpha}\simeq\mathsf{cofib}(C_{i}^{\alpha}\xrightarrow{\epsilon}\mathsf{id}_{\mathcal{C}^{\alpha}})[-2]\) for each \(i\notin\alpha\).

We moreover require these data to satisfy the conditions that the \(u_{i}\)'s and \(v_{i}\)'s all commute with each other and with the monads \(M_{i}^{\alpha}\) and comonads \(C_{i}^{\alpha}\).

It can be checked that quiver data as above determines an object in the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\). We expect that it is possible to define a stable 2-category directly from this quiver data, although we have not done this here. If done correctly, this combinatorially defined 2-category will agree with the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) defined in Definition 3.12.
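Definition 3.14 is, in effect, the specification of a data structure. Purely as an organizational aid (our own hypothetical sketch, with categories, functors, monads, and the required equivalences treated as opaque placeholders), its fields may be recorded as follows:

```python
# Illustrative sketch of the fields of "quiver data" (Definition 3.14);
# the entries are opaque stand-ins, and only the indexing is meaningful.
from dataclasses import dataclass

@dataclass
class VertexDatum:
    alpha: frozenset   # a t-stable, m-bounded sign vector
    C: object          # the stable category C^alpha
    T: dict            # i -> T_i^alpha, images of the generators of pi_1(D)
    monads: dict       # i in alpha -> monad M_i^alpha on C^alpha
    comonads: dict     # i not in alpha -> comonad C_i^alpha on C^alpha

@dataclass
class EdgeDatum:
    alpha: frozenset   # source vertex; the target is alpha | {i}
    i: int
    u: object          # spherical functor u_i : C^alpha -> C^(alpha u {i})
    v: object          # its adjoint v_i, with the monad and comonad
                       # identified with the composites u_i v_i and v_i u_i
```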
**Example 3.15**.: Here we consider the categorification of Example 2.21. Once again, let \(G\simeq\mathbb{C}^{\times}\hookrightarrow(\mathbb{C}^{\times})^{2}\) be the kernel of the multiplication map. Before imposing \(G\)-equivariance, we have that \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{2})\) is the 2-category of Beck-Chevalley squares of spherical functors

\[\begin{array}{ccc}\mathcal{C}^{\{1\}}&\xleftarrow{\ v_{2}\ }&\mathcal{C}^{\{1,2\}}\\ {\scriptstyle v_{1}}\downarrow&&\downarrow{\scriptstyle v_{1}}\\ \mathcal{C}^{\emptyset}&\xleftarrow{\ v_{2}\ }&\mathcal{C}^{\{2\}}\end{array} \tag{3.2.2}\]

Sphericality equips each category \(\mathcal{C}^{\alpha}\) with two automorphisms, the relevant two of

\[\mathsf{fib}(\mathsf{id}_{\mathcal{C}^{\alpha}}\to v_{2}^{R}v_{2}),\quad\mathsf{cofib}(v_{2}v_{2}^{R}\to\mathsf{id}_{\mathcal{C}^{\alpha}})[-2],\quad\mathsf{fib}(\mathsf{id}_{\mathcal{C}^{\alpha}}\to v_{1}^{R}v_{1}),\quad\mathsf{cofib}(v_{1}v_{1}^{R}\to\mathsf{id}_{\mathcal{C}^{\alpha}})[-2].\]

We obtain \(\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{2}/G)\) from this 2-category by imposing additionally an equivalence between the two automorphisms at each category \(\mathcal{C}^{\alpha}\).

Finally, to produce the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\), we must quotient by the simple object supported on \(\mathcal{C}^{\{2\}}\) and by the projective \(\mathcal{P}^{\emptyset}_{G}.\) The first of these operations has the effect of forgetting the lower-right vertex from the diagram (3.2.2) (and the functors ingoing and outgoing from it -- but not the associated monad \(v_{1}^{R}v_{1}\) on \(\mathcal{C}^{\{1,2\}}\)). The latter operation sets to \(0\) the category \(\mathcal{C}^{\emptyset},\) along with the monad \(v_{1}^{R}v_{1}\) on \(\mathcal{C}^{\{1\}},\) so that the automorphism at \(\mathcal{C}^{\{1\}}\) (which remains identified with \(\mathsf{cofib}(v_{2}v_{2}^{R}\to\mathsf{id}_{\mathcal{C}^{\{1\}}})[-2]\)) is identified with \(\mathsf{id}_{\mathcal{C}^{\{1\}}}.\)

In summary, the 2-category \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) is the 2-category of spherical functors

\[\mathcal{C}^{\{1\}}\xleftarrow{v_{2}}\mathcal{C}^{\{1,2\}}\]

together with an extra monad "\(v_{1}^{R}v_{1}\)" on \(\mathcal{C}^{\{1,2\}},\) and identifications

\[\mathsf{fib}(\mathsf{id}_{\mathcal{C}^{\{1,2\}}}\xrightarrow{\eta}``v_{1}^{R}v_{1}\,")\simeq\mathsf{fib}(\mathsf{id}_{\mathcal{C}^{\{1,2\}}}\xrightarrow{\eta}v_{2}^{R}v_{2}),\qquad\mathsf{cofib}(v_{2}v_{2}^{R}\to\mathsf{id}_{\mathcal{C}^{\{1\}}})\simeq\mathsf{id}_{\mathcal{C}^{\{1\}}}[2].\]

## 4. Microlocal coherent schobers

### Coherent sheaves of categories

Beginning with a symplectic variety \(\mathfrak{M},\) a proposal is given in [10, 11] for an algebraic 2-category whose objects are supported on Lagrangian subvarieties in \(\mathfrak{M},\) and it is further explained that when \(\mathfrak{M}=\mathrm{T}^{*}Y\) is a cotangent bundle, the part of the category supported on a conical Lagrangian \(\mathbb{L}\subset\mathrm{T}^{*}Y\) may be described in terms of coherent-sheaf categories of spaces proper over \(Y.\) (This may motivate the notation in [13] for this 2-category as \(\sqrt{\mathfrak{Coh}}(\mathrm{T}^{*}Y).\))

Given a conic Lagrangian \(\mathbb{L}\subset\mathrm{T}^{*}Y\) in the cotangent bundle of an algebraic stack \(Y\), we are therefore motivated to study the 2-category of coherent sheaves of categories over \(Y\) with singular support in \(\mathbb{L}.\) A full mathematical account of this 2-category has not yet appeared in the literature, though it is expected in forthcoming works of Stefanich and Arinkin.10 Nevertheless, enough is known about this 2-category for our present purposes.

Footnote 10: See [1] for motivation for the construction of this 2-category, and [11] for the simpler case of quasicoherent sheaves of (\(n\)-)categories.
We will need only the following two facts: first, that if \(X\) is a smooth stack with a proper map \(X\to Y,\) then \(\mathsf{Coh}(X)\) defines an object in this 2-category, and second, that the Hom category between two such objects \(\mathsf{Coh}(X)\) and \(\mathsf{Coh}(X^{\prime})\) is the category \(\mathsf{Fun}^{ex}_{\mathsf{Perf}(Y)}(\mathsf{Coh}(X),\mathsf{Coh}(X^{\prime}))\) of exact \(\mathsf{Perf}(Y)\)-linear functors between the coherent-sheaf categories, with compositions among these Hom categories given by composition of functors. These Hom categories have been computed directly in [10].

**Theorem 4.1** ([10, Theorem 1.1.3]).: _Let \(X,X^{\prime},Y\) be smooth perfect stacks, and let \(X\to Y\gets X^{\prime}\) be proper maps. Then there is an equivalence_

\[\mathsf{Fun}_{\mathsf{Perf}(Y)}(\mathsf{Coh}(X),\mathsf{Coh}(X^{\prime}))\simeq\mathsf{Coh}(X\times_{Y}X^{\prime})\]

_between the category of exact \(\mathsf{Perf}(Y)\)-linear functors and the category of coherent sheaves on the fiber product \(X\times_{Y}X^{\prime},\) associating to a coherent sheaf \(K\) on \(X\times_{Y}X^{\prime}\) the integral transform functor \(p_{X^{\prime},*}(p_{X}^{*}(-)\otimes K).\) Moreover, in the case \(X=X^{\prime},\) this is naturally a monoidal equivalence._

Theorem 4.1 justifies the following definition, due to Arinkin.

**Definition 4.2**.: Let \(X,Y\) be smooth perfect stacks and \(f:X\to Y\) a proper map, and write \(\overline{\mathrm{N}_{X}^{*}Y}\) for the image in \(T^{*}Y\) of the conormal Lagrangian \(\mathrm{N}_{X}^{*}Y.\) Then we say the 2-category of _coherent sheaves of categories on \(Y\) with singular support \(\overline{\mathrm{N}_{X}^{*}Y}\)_ is the 2-category

\[\mathsf{CohCat}_{\overline{\mathrm{N}_{X}^{*}Y}}(Y):=\mathsf{Mod}_{\mathsf{Coh}(X\times_{Y}X)}(\mathsf{St}) \tag{4.1.1}\]

of module categories for the monoidal category \(\mathsf{Coh}(X\times_{Y}X).\)

**Remark 4.3**.: The notation in Definition 4.2 may seem misleading, as a priori it is unclear that the 2-category defined by (4.1.1) depends only on the subset \(\overline{\mathrm{N}_{X}^{*}Y}\subset\mathrm{T}^{*}Y\), rather than on the choice of map \(X\to Y\). This invariance statement is promised in [1], but so far a proof has not appeared in the literature. We provide a proof below as Proposition 4.18.

### Singular support

We would like to introduce a modification of Definition 4.2 to account for non-closed singular support conditions. For this, we will need the theory of coherent singular support developed in [1]. Recall that for \(Z\) a quasi-smooth Artin stack, the _stack of singularities_ of \(Z\) is the classical Artin stack

\[\mathsf{Sing}(Z):=(\mathrm{T}^{*}[-1]Z)^{cl}\]

obtained by taking the underlying classical stack of the \((-1)\)-shifted cotangent bundle. If \(Z\) is affine, there is a map

\[\Gamma(\mathsf{Sing}(Z),\mathcal{O}_{\mathsf{Sing}(Z)})\to\mathsf{HH}^{\mathrm{even}}(Z)\]

from functions on \(\mathsf{Sing}(Z)\) to the even Hochschild cohomology of \(Z\). As a result, in general, the category \(\mathsf{Coh}(Z)\) localizes over \(\mathsf{Sing}(Z)\) (in the conic topology).

**Definition 4.4**.: Let \(\Lambda\subset\mathsf{Sing}(Z)\) be a conic closed subset, with open complement \(U\). Then we write \(\mathsf{Coh}_{\Lambda}(Z)\subset\mathsf{Coh}(Z)\) for the full subcategory of coherent sheaves supported on \(\Lambda\), and \(\mathsf{Coh}_{U}(Z):=\mathsf{Coh}(Z)/\mathsf{Coh}_{\Lambda}(Z)\) for the localization away from sheaves supported on \(\Lambda\).
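For orientation, we recall the standard basic example (not taken from the surrounding text): let \(Z=\{0\}\times_{\mathbb{A}^{1}}\{0\}=\operatorname{Spec}\Bbbk[\epsilon]\), with \(|\epsilon|=-1\), be the derived self-intersection of the origin in \(\mathbb{A}^{1}\). Then

\[\mathsf{Sing}(Z)\simeq\mathbb{A}^{1},\]

the residue field \(\Bbbk\) has singular support all of \(\mathbb{A}^{1}\), and for the conic closed subset \(\Lambda=\{0\}\) one has \(\mathsf{Coh}_{\{0\}}(Z)=\mathsf{Perf}(Z)\), so that the localization \(\mathsf{Coh}_{U}(Z)\) for \(U=\mathbb{A}^{1}\setminus\{0\}\) is the singularity category \(\mathsf{Coh}(Z)/\mathsf{Perf}(Z)\).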
**Definition 4.5**.: Given a map \(g:Z\to W\) of quasi-smooth stacks, consider the correspondence

\[\mathsf{Sing}(Z)\xleftarrow{\ dg^{*}\ }Z\times_{W}\mathsf{Sing}(W)\xrightarrow{\ \tilde{g}\ }\mathsf{Sing}(W), \tag{4.2.1}\]

which we will use to define a pushforward and pullback of singular support conditions along \(g\):

1. For a conic subset \(U\subset\mathsf{Sing}(Z)\), we define \[g_{*}U:=\tilde{g}((dg^{*})^{-1}(U))\subset\mathsf{Sing}(W).\]
2. For a conic subset \(V\subset\mathsf{Sing}(W)\), we define \[g^{!}V:=dg^{*}(Z\times_{W}V)\subset\mathsf{Sing}(Z).\]

**Lemma 4.6** ([1, Proposition 7.1.3]).: _The pushforward and pullback functors preserve singular supports, inducing functors_

\[g_{*}:\mathsf{Coh}_{\Lambda_{Z}}(Z)\to\mathsf{Coh}_{g_{*}\Lambda_{Z}}(W),\qquad g^{!}:\mathsf{Coh}_{\Lambda_{W}}(W)\to\mathsf{Coh}_{g^{!}\Lambda_{W}}(Z).\]

We refer to [1] for more information about coherent singular support conditions.

### Monoidal structure

**Notation 4.7**.: Throughout this section, we fix smooth perfect stacks \(Y,X_{i}\) and proper maps \(f_{i}:X_{i}\to Y.\) We write \(X_{ij}:=X_{i}\times_{Y}X_{j}\) for the fiber products over \(Y\), and similarly \(X_{ijk}:=X_{i}\times_{Y}X_{j}\times_{Y}X_{k}\) for the triple fiber products.

In Theorem 4.1 and Definition 4.2 we have made use of the functoriality of \(\mathsf{Coh}\) under correspondences: to a correspondence

\[X\xleftarrow{\ f\ }C\xrightarrow{\ g\ }Y \tag{4.3.1}\]

with \(g\) proper, we associate the functor

\[g_{*}f^{*}:\mathsf{Coh}(X)\to\mathsf{Coh}(Y). \tag{4.3.2}\]

In the case where the correspondence (4.3.1) is

\[X_{12}\times X_{23}\xleftarrow{\ \delta_{2}\ }X_{123}\xrightarrow{\ p_{13}\ }X_{13}, \tag{4.3.3}\]

we recover the rule for composition of integral transforms:

**Definition 4.8**.: Using (4.3.2), the correspondence (4.3.3) (together with the higher fiber products for associativity data) defines a monoidal structure on the category \(\mathsf{Coh}(X_{ii})\) together with a \(\mathsf{Coh}(X_{ii}),\mathsf{Coh}(X_{jj})\)-bimodule structure on the category \(\mathsf{Coh}(X_{ij}).\) We call this the _convolution monoidal structure._

We will need to understand how this algebraic structure on the categories \(\mathsf{Coh}(X_{ij})\) interacts with the theory of coherent singular support. This question has been previously studied in [1, 10].

**Definition 4.9**.: Let \(\Lambda_{12}\subset\mathsf{Sing}(X_{12})\) and \(\Lambda_{23}\subset\mathsf{Sing}(X_{23}),\) and consider the diagram

\[X_{12}\times X_{23}\xleftarrow{\delta_{2}}X_{123}\xrightarrow{p_{13}}X_{13}, \tag{4.3.4}\]

where we write \(\delta_{2}\) as shorthand for \((p_{12},p_{23}).\) The _convolution_ of singular support conditions \(\Lambda_{ij}\) is defined by

\[\Lambda_{12}*\Lambda_{23}:=(p_{13})_{*}(\delta_{2})^{!}(\Lambda_{12}\boxtimes\Lambda_{23}). \tag{4.3.5}\]

The singular support condition \(\Lambda_{ij}\) is \(X_{ii}\)_-stable_ if it is preserved by convolution with \(\mathsf{Sing}(X_{ii})\):

\[\mathsf{Sing}(X_{ii})*\Lambda_{ij}\subset\Lambda_{ij}.\]

**Remark 4.10**.: The operation of convolution may be more succinctly reframed in the language of Lagrangian correspondences: given a correspondence (4.3.1), the shifted conormal to \(C\) defines a Lagrangian correspondence

\[\mathrm{T}^{*}[-1]X\longleftarrow\mathrm{N}^{*}_{C}[-1](X\times Y)\longrightarrow\mathrm{T}^{*}[-1]Y. \tag{4.3.6}\]

A Lagrangian \(\Lambda\to\mathrm{T}^{*}[-1]X\) may be understood as a Lagrangian correspondence which we may compose with the Lagrangian correspondence \(\mathrm{N}^{*}_{C}[-1](X\times Y)\) to obtain a new Lagrangian

\[\Lambda\circ C:=\Lambda\times_{\mathrm{T}^{*}[-1]X}\mathrm{N}^{*}_{C}[-1](X\times Y)\to\mathrm{T}^{*}[-1]Y. \tag{4.3.7}\]
After taking classical parts \((\Lambda\circ C)^{cl},\) we recover the singular-support condition \(g_{*}f^{!}\Lambda\) obtained by transferring the singular-support condition \(\Lambda^{cl}\) along the correspondence \(C\).

Moreover, the Lagrangian correspondence (4.3.6) may be factored as the composition of a pair of Lagrangian correspondences. Given a map \(g:Z\to Z^{\prime},\) the shifted conormal to the graph of \(g\) is equivalent to the pullback of the shifted cotangent bundle,

\[\mathrm{N}^{*}_{\Gamma_{g}}[-1](Z\times Z^{\prime})\simeq\mathrm{T}^{*}[-1]Z^{\prime}\times_{Z^{\prime}}Z,\]

and therefore it defines the Lagrangian correspondence (4.2.1), with the pushforward \(g_{*}\) and pullback \(g^{!}\) given by composition with this Lagrangian correspondence and with its reverse, respectively. Given a correspondence (4.3.1), the convolution (4.3.7) is the composition of the pushforward and pullback \(g_{*}f^{!},\) implemented by the composed Lagrangian correspondence

\[\mathrm{N}^{*}_{C}[-1](X\times Y)\simeq\mathrm{N}^{*}_{\Gamma_{f}}[-1](X\times C)\circ\mathrm{N}^{*}_{\Gamma_{g}}[-1](C\times Y). \tag{4.3.8}\]

In what follows, we will often be able to simplify computations involving convolution using the language of Lagrangian correspondences, with the understanding that the Lagrangians discussed in these terms are derived schemes, and at the end of a computation we must pass to their classical parts to recover a singular support condition in the singularity space \(\mathsf{Sing}.\) We would like to relate the convolution of Lagrangians to the categorical convolution structures of Definition 4.8.

**Definition 4.11**.: We will write \(L_{i}:=\mathrm{N}^{*}_{X_{i}}Y\) for the conormal to the map \(X_{i}\to Y,\) which comes equipped with a map \(L_{i}\to\mathrm{T}^{*}Y.\) We write

\[L_{ij}:=L_{i}\times_{\mathrm{T}^{*}Y}L_{j},\qquad L_{ijk}:=L_{i}\times_{\mathrm{T}^{*}Y}L_{j}\times_{\mathrm{T}^{*}Y}L_{k}\]

for their fiber products over \(\mathrm{T}^{*}Y,\) and write \(q_{i}\) (resp. \(q_{ij}\)) for the projections to \(L_{i}\) (resp. \(L_{ij}\)).

**Lemma 4.12**.: _There is an equivalence \(\mathrm{T}^{*}[-1]X_{ij}\simeq L_{ij}.\) Taking classical parts, we obtain an equivalence \(\mathsf{Sing}(X_{ij})\simeq L_{ij}^{cl}.\)_

Proof.: Consider the cocartesian diagram of sheaves on \(X_{ij}\) in which the upper left cocartesian square is the standard formula for the cotangent complex of a pullback, the left and upper cocartesian rectangles define the relative cotangent complexes, and the outer cocartesian square is the definition of the suspension. The desired map \(\mathrm{T}^{*}[-1]X_{ij}\to L_{ij}\) comes from noticing that \(L_{i}\) and \(L_{j}\) are the total spaces of \(\mathbf{L}_{f_{i}}[-1]\) and \(\mathbf{L}_{f_{j}}[-1]\) respectively. The fact that this is a symplectomorphism is Remark 2.20 in [21]. 

**Lemma 4.13**.: _There is an equivalence of Lagrangian correspondences between the correspondence underlying the operation of convolution on Lagrangians and that given by the triple fiber product of conormals._

Proof.: There is a strongly cocartesian diagram of sheaves on \(X_{123}\).
Using the fact that the total cofiber of any face of the cube must vanish, we find

\[\mathsf{cofib}(\mathbf{L}_{X_{13}}\to\mathbf{L}_{X_{123}})\simeq\mathsf{cofib}(\mathbf{L}_{X_{3}}\to\mathbf{L}_{X_{23}})\simeq\mathsf{cofib}(\mathbf{L}_{Y}\to\mathbf{L}_{X_{2}})\simeq\mathbf{L}_{f_{2}}.\]

The \((-1)\)-shifted conormal \(\mathrm{N}^{*}_{X_{123}}[-1](X_{12}\times X_{13}\times X_{23})\) is the total space of the complex

\[\mathsf{fib}(\mathbf{L}_{X_{12}}\oplus\mathbf{L}_{X_{13}}\oplus\mathbf{L}_{X_{23}}\to\mathbf{L}_{X_{123}})[-1]\simeq\mathsf{cofib}(\mathbf{L}_{X_{12}}\oplus\mathbf{L}_{X_{13}}\oplus\mathbf{L}_{X_{23}}\to\mathbf{L}_{X_{123}})[-2]\]
\[\simeq\mathsf{cofib}(\mathbf{L}_{X_{12}}\oplus\mathbf{L}_{X_{23}}\to\mathsf{cofib}(\mathbf{L}_{X_{13}}\to\mathbf{L}_{X_{123}}))[-2]\]
\[\simeq\mathsf{cofib}(\mathbf{L}_{X_{12}}\oplus\mathbf{L}_{X_{23}}\to\mathbf{L}_{f_{2}})[-2]\]
\[\simeq\mathsf{fib}(\mathbf{L}_{X_{12}}\oplus\mathbf{L}_{X_{23}}\to\mathbf{L}_{f_{2}})[-1].\]

The total space of the last complex is \(L_{12}\times_{L_{2}}L_{23}\simeq L_{123}\), as desired. 

**Definition 4.14**.: We write

\[F:\mathsf{Sing}(X_{ij})\to\mathrm{T}^{*}Y\]

for the map induced on classical parts by the projection \(L_{ij}=L_{i}\times_{\mathrm{T}^{*}Y}L_{j}\to\mathrm{T}^{*}Y.\)

The map \(F\) just defined gives some justification for the relevance of singular support conditions, and the relation to Definition 4.2.

**Lemma 4.15**.: _The image of the map \(F\) coincides with the intersection in \(\mathrm{T}^{*}Y\) of the images of the conormals \(\mathrm{N}^{*}_{X_{i}}Y\to\mathrm{T}^{*}Y.\) In particular, when \(X_{i}=X_{j}=X,\) the image of \(F\) coincides with the image of \(\mathrm{N}^{*}_{X}Y\to\mathrm{T}^{*}Y.\)_

Proof.: This is immediate from the identification of \(\mathsf{Sing}(X_{ij})\) with the fiber product of conormals in \(\mathrm{T}^{*}Y.\) 

We now specialize to the case where \(X_{i}=X\) for all \(i\), so that we are studying the monoidal category \(\mathsf{Coh}(X\times_{Y}X).\) We will use the map \(F\) to pull back conic subsets of \(\mathrm{T}^{*}Y\) (or of the image of \(\mathrm{N}^{*}_{X}Y\) in \(\mathrm{T}^{*}Y\)) to singular support conditions on \(X\times_{Y}X.\) We must first check that such singular-support conditions are respected by the monoidal structure on \(\mathsf{Coh}(X\times_{Y}X)\):

**Lemma 4.16**.: _Let \(\Lambda\subset\mathrm{T}^{*}Y\) be a closed conic subset. Then the pullback \(F^{-1}(\Lambda)\) is stable under convolution with \(\mathsf{Sing}(X\times_{Y}X).\)_

Proof.: From Lemma 4.13, we know that the convolution of Lagrangians (or, in this case, coisotropics) may be computed by composition with the correspondence \(L_{12}\times L_{23}\leftarrow L_{123}\to L_{13},\) so that \(F^{-1}(\Lambda)*\mathsf{Sing}(X\times_{Y}X)\) is the coisotropic in \(L_{123}\) described by the fiber product

\[(F^{-1}(\Lambda)\times L_{23})\times_{L_{12}\times L_{23}}L_{123}\to L_{123}.\]

This fiber product imposes the condition on points of \(L_{123}\) that their projection to \(\mathrm{T}^{*}Y\) lives in \(\Lambda\), and its image in \(L_{13}\) again consists of points satisfying the same condition. 
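Continuing the basic example above: for \(X_{1}=X_{2}=\{0\}\hookrightarrow Y=\mathbb{A}^{1}\), the conormals \(L_{i}=\mathrm{N}^{*}_{\{0\}}\mathbb{A}^{1}\) are the cotangent fiber over the origin, so

\[L_{12}=L_{1}\times_{\mathrm{T}^{*}\mathbb{A}^{1}}L_{2},\qquad L_{12}^{cl}\simeq\mathbb{A}^{1}\simeq\mathsf{Sing}(\{0\}\times_{\mathbb{A}^{1}}\{0\}),\]

recovering Lemma 4.12 in this toy case, and \(F\) carries this singularity space onto the conormal \(\mathrm{N}^{*}_{\{0\}}\mathbb{A}^{1}\subset\mathrm{T}^{*}\mathbb{A}^{1}\), as predicted by Lemma 4.15; convolving \(F^{-1}(\Lambda)\) with the full singular support lands again in the same locus, as in Lemma 4.16.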
Our main tool in relating convolution singular supports to convolution of coherent sheaf categories will be the following calculation:

**Proposition 4.17** ([11, Proposition 3.30]).: _Let \(\Lambda_{12}\subset\mathsf{Sing}(Z_{12}),\)\(\Lambda_{23}\subset\mathsf{Sing}(Z_{23})\) be \(Z_{22}\)-stable conic subsets. Then there is an equivalence of categories_

\[\mathsf{Coh}_{\Lambda_{12}}(Z_{12})\otimes_{\mathsf{Coh}(Z_{22})}\mathsf{Coh}_{\Lambda_{23}}(Z_{23})\simeq\mathsf{Coh}_{\Lambda_{12}*\Lambda_{23}}(Z_{13}).\]

Proposition 4.17 allows us to establish the following invariance result, which justifies the notation of Definition 4.2.

**Proposition 4.18**.: _Suppose that the images in \(\mathrm{T}^{*}Y\) of the conormals \(\mathrm{N}^{*}_{X_{1}}Y\) and \(\mathrm{N}^{*}_{X_{2}}Y\) coincide. Then there is an equivalence of module 2-categories_

\[\mathsf{Mod}_{\mathsf{Coh}(X_{1}\times_{Y}X_{1})}\simeq\mathsf{Mod}_{\mathsf{Coh}(X_{2}\times_{Y}X_{2})},\]

_given by tensoring with the bimodule category \(\mathsf{Coh}(X_{1}\times_{Y}X_{2}).\)_

Proof.: To see that the bimodule \(\mathsf{Coh}(X_{12})\) defines a Morita equivalence between the monoidal categories \(\mathsf{Coh}(X_{11})\) and \(\mathsf{Coh}(X_{22})\), we will show that the bimodule \(\mathsf{Coh}(X_{21})\) gives the inverse Morita equivalence, by exhibiting an equivalence

\[\mathsf{Coh}(X_{1}\times_{Y}X_{2})\otimes_{\mathsf{Coh}(X_{2}\times_{Y}X_{2})}\mathsf{Coh}(X_{2}\times_{Y}X_{1})\simeq\mathsf{Coh}(X_{1}\times_{Y}X_{1}). \tag{4.3.9}\]

By Proposition 4.17, the left-hand side of (4.3.9) is equivalent to the category \(\mathsf{Coh}_{\Lambda^{cl}}(X_{1}\times_{Y}X_{1})\), where \(\Lambda^{cl}=\mathsf{Sing}(X_{12})*\mathsf{Sing}(X_{21})\) is given by convolution of the coisotropic full singular-support conditions for \(X_{12}\) and \(X_{21}\). \(\Lambda^{cl}\) is the underlying classical (and reduced) space of the coisotropic \(\Lambda\) given by applying the Lagrangian correspondence \(L_{121}\) to \(L_{12}\times L_{21}\):

\[\Lambda\simeq(L_{12}\times L_{21})\times_{L_{12}\times L_{21}}L_{121}\simeq L_{121}\to L_{11}.\]

By assumption, the images in \(\mathrm{T}^{*}Y\) of the conormals to \(X_{1}\) and \(X_{2}\) coincide, so that the map on sets induced by \(L_{121}\to L_{11}\) is surjective, and we conclude that \(\Lambda^{cl}=\mathsf{Sing}(X_{11})\) is the entire singular-support condition for \(X_{11}\), so that the left-hand side of (4.3.9) is equivalent to the whole category of coherent sheaves on \(X_{1}\times_{Y}X_{1}\). 

### Microlocalization

We now return to the case of a single map \(f:X\to Y\) (where \(X\) may be disconnected). Using the map \(F:\mathsf{Sing}(X\times_{Y}X)\simeq(\mathrm{N}_{X}^{*}Y\times_{\mathrm{T}^{*}Y}\mathrm{N}_{X}^{*}Y)^{cl}\to\mathrm{T}^{*}Y,\) we propose the following definition, allowing us to study coherent sheaves of categories on \(Y\) microlocally away from the zero-section in \(\mathrm{T}^{*}Y.\)

**Definition 4.19**.: Let \(X\to Y\) be as in Definition 4.2, and let \(U\subset\overline{\mathrm{N}_{X}^{*}Y}\) be an open subset of the image in \(\mathrm{T}^{*}Y\) of the conormal \(\mathrm{N}_{X}^{*}Y,\) with closed complement \(\Lambda=\overline{\mathrm{N}_{X}^{*}Y}\setminus U\). Then we define the 2-category of _microlocal coherent sheaves of categories_ on \(U\) to be the 2-category

\[\mu\mathsf{CohCat}(U):=\mathsf{Mod}_{\mathsf{Coh}_{F^{-1}(U)}(X\times_{Y}X)}(\mathsf{St})\]

of module categories for the monoidal category \(\mathsf{Coh}_{F^{-1}(U)}(X\times_{Y}X)\simeq\mathsf{Coh}(X\times_{Y}X)/\mathsf{Coh}_{F^{-1}(\Lambda)}(X\times_{Y}X)\) of coherent sheaves with singular support in \(F^{-1}(U)\).

The microlocal analogue of Proposition 4.18 remains true.

**Proposition 4.20**.: _Suppose that the images in \(T^{*}Y\) of \(\mathrm{N}_{X_{1}}^{*}Y\) and \(\mathrm{N}_{X_{2}}^{*}Y\) coincide._
Then there is an equivalence of 2-categories_ \[\mu\mathsf{CohCat}(U\cap\overline{\mathrm{N}_{X_{1}}^{*}Y})\simeq\mu\mathsf{CohCat}(U\cap\overline{\mathrm{N}_{X_{2}}^{*}Y}).\] Proof.: The monoidal category \(\overline{\mathcal{A}}_{i}:=\mathsf{Coh}_{F^{-1}(U)}(X_{i}\times_{Y}X_{i})\) is a quotient of the monoidal category \(\mathcal{A}_{i}:=\mathsf{Coh}(X_{i}\times_{Y}X_{i})\) by the ideal \(\mathcal{I}_{i}\) generated by \(\mathsf{Coh}_{F^{-1}(\Lambda)}(X_{i}\times_{Y}X_{i}).\) From Proposition 4.18, we know that the \(\mathcal{A}_{1},\mathcal{A}_{2}\)-bimodule category \(\mathcal{M}:=\mathsf{Coh}(X_{2}\times_{Y}X_{1})\) gives a Morita equivalence between \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}.\) To show that the quotient \(\mathcal{M}/\mathcal{M}\mathcal{I}_{1}\) induces a Morita equivalence between the quotient algebras \(\overline{\mathcal{A}}_{i},\) it is sufficient to show that \(\mathcal{I}_{2}\mathcal{M}\) and \(\mathcal{M}\mathcal{I}_{1}\) are equivalent subcategories of \(\mathcal{M},\) or in other words that the two convolutions \[\mathsf{Coh}_{F^{-1}(\Lambda)}(X_{2}\times_{Y}X_{2})\otimes_{\mathsf{Coh}(X_{2}\times_{Y}X_{2})}\mathsf{Coh}(X_{2}\times_{Y}X_{1}),\quad\mathsf{Coh}(X_{2}\times_{Y}X_{1})\otimes_{\mathsf{Coh}(X_{1}\times_{Y}X_{1})}\mathsf{Coh}_{F^{-1}(\Lambda)}(X_{1}\times_{Y}X_{1})\] agree. By Proposition 4.17 we are reduced to checking that the convolutions \(F^{-1}(\Lambda)*L_{21}\) and \(L_{21}*F^{-1}(\Lambda)\) determine the same singular support condition inside of \(\mathsf{Sing}(X_{2}\times_{Y}X_{1}).\) Since both of these convolutions agree with \(F^{-1}(\Lambda)\subset\mathsf{Sing}(X_{2}\times_{Y}X_{1}),\) we are done. **Remark 4.21**.: By [1, Theorem 1.2.10], the monoidal center of the monoidal category \(\mathsf{Coh}(X\times_{Y}X)\) is given by the category \(\mathsf{Coh}_{prop/Y}(\mathcal{L}Y)\) of coherent sheaves, with support proper over \(Y\), on the loop space of \(Y\). We now specialize to the situation that \(X=\bigsqcup_{\alpha}X_{\alpha}\) is a disjoint union of components, so that the monoidal category \(\mathsf{Coh}(X\times_{Y}X)\) splits (non-monoidally) into its "matrix entries": \[\mathsf{Coh}(X\times_{Y}X)\simeq\bigoplus_{(\alpha,\alpha^{\prime})}\mathsf{Coh}(X_{\alpha}\times_{Y}X_{\alpha^{\prime}}).\] The diagonal entries \[\mathsf{Coh}(X_{\alpha}\times_{Y}X_{\alpha})\simeq\mathsf{Fun}_{\mathsf{Perf}(Y)}(\mathsf{Coh}(X_{\alpha}),\mathsf{Coh}(X_{\alpha})) \tag{4.4.1}\] of this matrix are themselves monoidal categories, with unit object (corresponding to the identity functor under the equivalence (4.4.1)) given by the pushforward \((\Delta_{\alpha})_{*}\mathcal{O}_{X_{\alpha}}\) under the diagonal map \[\Delta_{\alpha}:X_{\alpha}\to X_{\alpha}\times_{Y}X_{\alpha}.\] This object plays a distinguished role: **Definition 4.22**.: With \(X=\bigsqcup_{\alpha}X_{\alpha}\) as above, we write \[E_{\alpha}:=(\Delta_{\alpha})_{*}\mathcal{O}_{X_{\alpha}}\] for the _matrix idempotent_ corresponding to \(X_{\alpha}\).
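For concreteness, here is a short justification of the terminology (a sketch we spell out explicitly; it uses only the matrix decomposition above). Since distinct components \(X_{\beta},X_{\beta^{\prime}}\) of \(X\) are disjoint, the convolution of the block \(\mathsf{Coh}(X_{\alpha}\times_{Y}X_{\beta})\) with the block \(\mathsf{Coh}(X_{\beta^{\prime}}\times_{Y}X_{\gamma})\) vanishes unless \(\beta=\beta^{\prime}\); combined with the fact that \(E_{\alpha}\) is the unit of the diagonal entry (4.4.1), this means that \(E_{\alpha}\) acts on the matrix entries exactly as the elementary matrix \(e_{\alpha\alpha}\) acts on a matrix algebra: \[E_{\alpha}*M\simeq\begin{cases}M&\text{if }M\in\mathsf{Coh}(X_{\alpha}\times_{Y}X_{\gamma}),\\ 0&\text{if }M\in\mathsf{Coh}(X_{\beta}\times_{Y}X_{\gamma}),\ \beta\neq\alpha.\end{cases}\] In particular, \(E_{\alpha}*E_{\alpha}\simeq E_{\alpha}\).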
**Proposition 4.23**.: _Let \(I=\{\alpha_{1},\ldots,\alpha_{k}\}.\) Write \(X_{I}:=\bigsqcup_{\alpha_{i}\in I}X_{\alpha_{i}}\) and let \(\Lambda_{I}\) be the image in \(\mathrm{T}^{*}Y\) of the conormal \(\mathrm{N}^{*}_{X_{I}}Y.\) Then the smallest ideal in \(\mathsf{Coh}(X\times_{Y}X)\) containing \(\langle E_{\alpha_{i}}\rangle_{\alpha_{i}\in I}\) is \(\mathsf{Coh}_{F^{-1}(\Lambda_{I})}(X\times_{Y}X).\)_ Proof.: Let \(E_{I}=\bigoplus_{\alpha_{i}\in I}E_{\alpha_{i}}.\) Since \(E_{I}\) is the monoidal unit of \(\mathsf{Coh}(X_{I}\times_{Y}X_{I}),\) the smallest ideal containing \(E_{I}\) may be written as the tensor product \[\mathsf{Coh}(X\times_{Y}X_{I})\otimes_{\mathsf{Coh}(X_{I}\times_{Y}X_{I})}\mathsf{Coh}(X_{I}\times_{Y}X). \tag{4.4.2}\] By Proposition 4.17, (4.4.2) is equivalent to \(\mathsf{Coh}_{\Lambda}(X\times_{Y}X),\) where \(\Lambda\) is the convolution of \(\mathsf{Sing}(X\times_{Y}X_{I})\) with \(\mathsf{Sing}(X_{I}\times_{Y}X).\) In the notation of the previous section, writing \(X_{1}=X_{3}=X\) and \(X_{2}=X_{I},\) we have \[\Lambda\simeq L_{12}*L_{23}\simeq(L_{12}\times L_{23})\times_{L_{12}\times L_{23}}L_{123}\simeq L_{123}\to L_{13}.\] The image of \(L_{123}\) in \(L_{13}^{cl}\simeq\mathsf{Sing}(X\times_{Y}X)\) is the set of points whose image under \(F:\mathsf{Sing}(X\times_{Y}X)\to\mathrm{T}^{*}Y\) lies in the image in \(\mathrm{T}^{*}Y\) of the conormal \(\mathrm{N}^{*}_{X_{I}}Y.\) **Definition 4.24**.: Let \(X=\bigsqcup_{\alpha}X^{\alpha}\to Y\) be as above. Then we write \(\mathscr{S}^{\alpha}\) for the \(\mathsf{Coh}(X\times_{Y}X)\)-module category \(\mathsf{Coh}(X^{\alpha}\times_{Y}X).\) For \(U\subset\overline{\mathrm{N}^{*}_{X}Y}\) open, we continue to write \(\mathscr{S}^{\alpha}\) for the image of \(\mathscr{S}^{\alpha}\) in \(\mu\mathsf{CohCat}(U).\) ### The hypertoric 2-category We now specialize to the case of interest to us. **Notation 4.25**.: As in Definition 1.14, for a sign vector \(\alpha,\) we write \(X_{G}^{\alpha}:=\{\prod_{i\in\alpha}z_{i}=0\}/G\) for the \(G\)-equivariant intersection of coordinate hyperplanes in \(\mathbb{A}^{n},\) equipped with its closed embedding \(X_{G}^{\alpha}\hookrightarrow\mathbb{A}^{n}/G.\) We write \(X_{G}:=\bigsqcup_{\alpha\in 2^{[n]}}X_{G}^{\alpha},\) equipped with the map \(X_{G}\to\mathbb{A}^{n}/G\) restricting to the standard embedding on each component. More generally, for \(I\subset 2^{[n]}\) a collection of sign vectors, we write \(X_{G}^{I}:=\bigsqcup_{\alpha\in I}X_{G}^{\alpha}.\) **Notation 4.26**.: Let \(\mathcal{A}_{G}:=\mathsf{Coh}(X_{G}\times_{\mathbb{A}^{n}/G}X_{G}),\) equipped with a monoidal structure by convolution. More generally, for \(I\subset 2^{[n]}\) a collection of sign vectors, we write \(\mathcal{A}_{G}^{I}:=\mathsf{Coh}(X_{G}^{I}\times_{\mathbb{A}^{n}/G}X_{G}^{I}).\) The relevance of \(\mathcal{A}_{G}^{I}\) is immediate from Definition 4.2: **Corollary 4.27**.: _There is an equivalence of 2-categories_ \[\mathsf{CohCat}_{\mathbb{L}_{G}^{I}}(\mathbb{A}^{n}/G)\simeq\mathsf{Mod}_{\mathcal{A}_{G}^{I}}(\mathsf{St}) \tag{4.5.1}\] _between \(\mathsf{CohCat}_{\mathbb{L}_{G}^{I}}(\mathbb{A}^{n}/G)\) and the 2-category of module categories for \(\mathcal{A}_{G}^{I}.\)_ **Example 4.28**.: Let \(n=1\) and \(G=\{1\}\subset(\mathbb{G}_{m})^{n}\) be the trivial torus.
Then \(\mathsf{CohCat}_{\mathbb{L}_{G}}(\mathbb{A}^{1})\) is the 2-category of module categories over the monoidal category \[\mathcal{A}_{G}=\left(\begin{array}{cc}\mathsf{Coh}(\mathbb{A}^{1})&\mathsf{Coh}(0)\\ \mathsf{Coh}(0)&\mathsf{Coh}(0\times_{\mathbb{A}^{1}}0)\end{array}\right). \tag{4.5.2}\] Each entry in the matrix (4.5.2) admits a single categorical generator, with a commutative ring of endomorphisms, so that we may rewrite the matrix as \[\left(\begin{array}{cc}\mathsf{Perf}_{\Bbbk[x]}&\mathsf{Perf}_{\Bbbk}\\ \mathsf{Perf}_{\Bbbk}&\mathsf{Perf}_{\Bbbk[\beta]}\end{array}\right),\] where \(\beta\) is in cohomological degree 2. In other words, the 2-category \(\mathsf{CohCat}_{\mathbb{L}_{G}}(\mathbb{A}^{1})\) has two objects, four generating 1-morphisms (namely, the rings \(\Bbbk[x],\Bbbk,\Bbbk\), and \(\Bbbk[\beta]\) above) and two generating 2-morphisms (namely, \(x\) and \(\beta\)). The relations among these can be made explicit, so that this 2-category admits a completely combinatorial description. Working in the same fashion, such a description can be given for all the 2-categories \(\mathsf{CohCat}_{\mathbb{L}_{G}}(\mathbb{A}^{n}/G)\), for a general choice of \(G\). (However, if \(G\) is nontrivial, then the set of generating 1-morphisms in this description must be allowed to be infinite.) **Example 4.29**.: For \(m\in\mathfrak{f}_{\mathbb{Z}}\) a choice of attraction parameter, we may let \(I=m\text{-bdd}\subset 2^{[n]}\) be the collection of sign vectors which are \(m\)-bounded. In this case, \(\mathbb{L}_{G}^{m\text{-bdd}}=\mathbb{L}_{G}(0,m)\), and we recover the 2-category \[\mathsf{CohCat}_{\mathbb{L}_{G}(0,m)}(\mathbb{A}^{n}/G)\simeq\mathsf{Mod}_{\mathcal{A}_{G}^{m\text{-bdd}}}(\mathsf{St}) \tag{4.5.3}\] associated to the \(m\)-bounded locus \(\mathbb{L}_{G}(0,m)=\mathbb{L}_{G}^{m\text{-bdd}}\) in the Lagrangian \(\mathbb{L}_{G}\). We are now ready to study the 2-category \(\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))\) of microlocal coherent sheaves of categories on the category \(\mathcal{O}\) Lagrangian. To apply Definition 4.19, we need to present \(\mathbb{L}_{G}(t,m)\) as an open subset of a conormal Lagrangian. There is an obvious choice, namely the open inclusion \[\mathbb{L}_{G}(t,m)\hookrightarrow\mathbb{L}_{G}(0,m)=\mathbb{L}_{G}^{m\text{-bdd}}. \tag{4.5.4}\] Applying Definition 4.19, we may therefore write \[\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))\simeq\mathsf{Mod}_{\mathsf{Coh}_{F^{-1}(\mathbb{L}_{G}(t,m))}(X^{m\text{-bdd}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd}})}(\mathsf{St}).\] **Lemma 4.30**.: _Let \(\Lambda:=\mathbb{L}_{G}(0,m)\setminus\mathbb{L}_{G}(t,m)\) be the closed complement of the embedding (4.5.4). Then \(\mathsf{Coh}_{F^{-1}(\Lambda)}(X^{m\text{-bdd}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd}})\subset\mathsf{Coh}(X^{m\text{-bdd}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd}})\) is the ideal generated by \(\mathsf{Coh}(X^{m\text{-bdd},t\text{-unstab}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd},t\text{-unstab}})\)._ Proof.: By Proposition 4.23, \(\mathsf{Coh}_{F^{-1}(\Lambda)}(X^{m\text{-bdd}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd}})\) is the smallest ideal in \(\mathsf{Coh}(X^{m\text{-bdd}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd}})\) containing the matrix idempotents \(E_{\alpha}\) for \(\alpha\in m\text{-bdd},t\text{-unstab}\). But these matrix idempotents are monoidal generators of the monoidal subcategory \(\mathsf{Coh}(X^{m\text{-bdd},t\text{-unstab}}\times_{\mathbb{A}^{n}/G}X^{m\text{-bdd},t\text{-unstab}})\).
**Corollary 4.31**.: _There is an equivalence_ \[\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))\simeq\mathsf{Mod}_{\mathcal{A}_{G}^{m\text{-bdd}}/\langle\mathcal{A}_{G}^{m\text{-bdd},t\text{-unstab}}\rangle}(\mathsf{St})\] _presenting \(\mu\mathsf{CohCat}(\mathbb{L}_{G}(t,m))\) as the 2-category of module categories for the monoidal category obtained as the quotient of \(\mathcal{A}_{G}^{m\text{-bdd}}\) by the ideal generated by the subcategory \(\mathcal{A}_{G}^{m\text{-bdd},t\text{-unstab}}\)._ Proof.: This follows immediately from Definition 4.19 and Lemma 4.30. **Example 4.32**.: We return to the situation of Example 0.12, where \(F^{\vee}\simeq\mathbb{G}_{m}\hookrightarrow(\mathbb{G}_{m})^{2}\) is the diagonal torus. Before turning on the stability or attraction parameters, the skeleton \(\mathbb{L}_{F^{\vee}}\) is the image in \(\mathrm{T}^{*}(\mathbb{A}^{2}/\mathbb{G}_{m})\) of the union of conormals to the zero section, \(x\)-axis, \(y\)-axis, and origin. The coherent 2-category admits a presentation \[\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{A}^{2}/F^{\vee})=\mathsf{Mod}_{\mathcal{A}}(\mathsf{St}),\] as modules over the monoidal category \(\mathcal{A}:=\mathsf{Coh}^{\mathbb{G}_{m}}(X\times_{Y}X)\), where we write \[X=(\mathbb{A}^{2}\sqcup\mathbb{A}^{1}_{x}\sqcup\mathbb{A}^{1}_{y}\sqcup 0)\rightarrow\mathbb{A}^{2}=Y,\] with the maps given by embeddings of closures of toric strata. In other words, presenting each component in the fiber product \(X\times_{Y}X\) as the appropriate shifted normal bundle, we have \[\mathcal{A}\simeq\mathsf{Coh}^{\mathbb{G}_{m}}\left(\begin{array}{cccc}\mathbb{A}^{2}&\mathbb{A}^{1}_{x}&\mathbb{A}^{1}_{y}&0\\ \mathbb{A}^{1}_{x}&\mathrm{N}_{\mathbb{A}^{1}_{x}}[-1]\mathbb{A}^{2}&0&\mathrm{N}_{0}[-1]\mathbb{A}^{1}_{y}\\ \mathbb{A}^{1}_{y}&0&\mathrm{N}_{\mathbb{A}^{1}_{y}}[-1]\mathbb{A}^{2}&\mathrm{N}_{0}[-1]\mathbb{A}^{1}_{x}\\ 0&\mathrm{N}_{0}[-1]\mathbb{A}^{1}_{y}&\mathrm{N}_{0}[-1]\mathbb{A}^{1}_{x}&\mathrm{N}_{0}[-1]\mathbb{A}^{2}\end{array}\right). \tag{4.5.5}\] If we pick the stability parameter \(m\in\mathfrak{f}^{\vee}_{\mathbb{Z}}=\mathbb{Z}\) to be positive, then the unstable locus in \(\mathrm{T}^{*}(\mathbb{A}^{2}/\mathbb{G}_{m})\) is \(\mathrm{T}^{*}_{0}(\mathbb{A}^{2})\), so that passage to the \(m\)-semistable locus is restriction to the open subset \(\mathcal{U}:=\mathrm{T}^{*}\mathbb{P}^{1}=\mathrm{T}^{*}((\mathbb{A}^{2}\setminus\{0\})/\mathbb{G}_{m})\) inside \(\mathrm{T}^{*}(\mathbb{A}^{2}/\mathbb{G}_{m})\). Passing to the \(m\)-semistable locus is a localization of the monoidal category \(\mathcal{A}\), so that we can write \(\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,0))\simeq\mathsf{Mod}_{\mathcal{A}|_{\mathcal{U}}}(\mathsf{St})\), where \[\mathcal{A}|_{\mathcal{U}}\simeq\mathsf{Coh}^{\mathbb{G}_{m}}\left(\begin{array}{cccc}\mathbb{A}^{2}\setminus 0&\mathbb{A}^{1}_{x}\setminus 0&\mathbb{A}^{1}_{y}\setminus 0&\emptyset\\ \mathbb{A}^{1}_{x}\setminus 0&\mathrm{N}_{\mathbb{A}^{1}_{x}\setminus 0}[-1](\mathbb{A}^{2}\setminus 0)&\emptyset&\emptyset\\ \mathbb{A}^{1}_{y}\setminus 0&\emptyset&\mathrm{N}_{\mathbb{A}^{1}_{y}\setminus 0}[-1](\mathbb{A}^{2}\setminus 0)&\emptyset\\ \emptyset&\emptyset&\emptyset&\emptyset\end{array}\right).
\tag{4.5.6}\] Using the fact that the \(\mathbb{G}_{m}\)-action on the spaces in (4.5.6) is free, and removing the final row and column (whose values are all the zero category \(\mathsf{Coh}(\emptyset)\simeq 0\) -- not to be confused with the category \(\mathsf{Coh}(0)\simeq\mathsf{Mod}_{\Bbbk}\)), we may rewrite (4.5.6) as \[\mathcal{A}|_{\mathcal{U}}\simeq\mathsf{Coh}\left(\begin{array}{ccc}\mathbb{P}^{1}&0&\infty\\ 0&T_{0}[-1]\mathbb{P}^{1}&\emptyset\\ \infty&\emptyset&T_{\infty}[-1]\mathbb{P}^{1}\end{array}\right), \tag{4.5.7}\] where we have identified the images of the \(x\)- and \(y\)-axes with the points \(0,\infty\in\mathbb{P}^{1}=(\mathbb{A}^{2}\setminus 0)/\mathbb{G}_{m}\). This monoidal category evidently admits a convolution presentation as before, where now \[\mathcal{A}|_{\mathcal{U}}\simeq\mathsf{Coh}\left((\mathbb{P}^{1}\sqcup 0\sqcup\infty)\times_{\mathbb{P}^{1}}(\mathbb{P}^{1}\sqcup 0\sqcup\infty)\right).\] Finally, we impose a nonzero attraction parameter \(t\), passing from the "TIE fighter" Lagrangian \[\mathbb{L}_{F^{\vee}}(m,0)=\mathbb{P}^{1}\cup\mathrm{T}^{*}_{0}\mathbb{P}^{1}\cup\mathrm{T}^{*}_{\infty}\mathbb{P}^{1}\] to the closed subspace \[\mathbb{L}_{F^{\vee}}(m,t)=\mathbb{P}^{1}\cup\mathrm{T}^{*}_{0}\mathbb{P}^{1}.\] At the level of categories, this has the effect of deleting the third row and column of (4.5.7), so that we have at last \[\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t))=\mathsf{Mod}_{\mathcal{A}^{\prime}}(\mathsf{St})\] where \[\mathcal{A}^{\prime}=\mathsf{Coh}\left((\mathbb{P}^{1}\sqcup 0)\times_{\mathbb{P}^{1}}(\mathbb{P}^{1}\sqcup 0)\right)\simeq\mathsf{Coh}\left(\begin{array}{cc}\mathbb{P}^{1}&0\\ 0&T_{0}[-1]\mathbb{P}^{1}\end{array}\right). \tag{4.5.8}\] **Remark 4.33**.: Example 4.32 is misleading in some ways, due to the simplicity of the stable locus \(\mathrm{T}^{*}((\mathbb{A}^{2}\setminus\{0\})/\mathbb{G}_{m}).\) As a result, the theory of coherent singular support conditions was not really necessary, as the singular support condition could be understood just as a usual support condition over \(\mathbb{A}^{2}/\mathbb{G}_{m}.\) However, if we had picked the opposite sign for our stability parameter, then (4.5.6) would be replaced with a "purely microlocal" -- that is, disjoint from the zero-section -- localization. The easiest way to compute the result of this localization on (4.5.5) would be to apply Koszul duality to replace the \((-1)\)-shifted normal bundles with \(2\)-shifted conormal bundles, and then apply the fact that Koszul duality exchanges singular-support conditions (and microlocalizations) with usual support conditions (and usual localizations). ## 5.
Proof of Theorems A & B We now prove our first main theorem, an equivalence between the (Gale dual) pair of \(2\)-categories \(\mathcal{O}\) we have defined: Proof of Theorem A.: Our starting point is the equivalence of \(2\)-categories (0.2.3) proved as [10, Theorem G], namely \[\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)\simeq\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee}), \tag{5.0.1}\] which sends the projective \(\mathcal{P}^{\alpha}_{G}\) in the left-hand side to the \(\mathsf{Coh}(X_{F^{\vee}}\times_{\mathbb{C}^{n}/F^{\vee}}X_{F^{\vee}})\)-module category \(\mathcal{S}^{\alpha}_{F^{\vee}}:=\mathsf{Coh}(X^{\alpha}_{F^{\vee}}\times_{\mathbb{C}^{n}/F^{\vee}}X_{F^{\vee}}).\) Observe that for \(I\subset 2^{[n]}\), the subcategory of \(\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee})\) generated by the \(\mathcal{S}^{\alpha}_{F^{\vee}}\) for \(\alpha\in I\) is \(\mathsf{CohCat}_{\mathbb{L}^{I}_{F^{\vee}}}(\mathbb{C}^{n}/F^{\vee}),\) where we write \(\mathbb{L}_{F^{\vee}}^{I}=\bigcup_{\alpha\in I}\mathbb{L}_{F^{\vee}}^{\alpha}\) for the union of components of \(\mathbb{L}_{F^{\vee}}\) corresponding to \(\alpha\in I.\) Now recall that Lemma 3.13 gives a description of \(\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\) as the subquotient \[\mu\mathsf{PervCat}(\mathbb{L}_{G}(t,m))\simeq\frac{\langle\mathcal{P}^{\alpha}_{G}\mid\alpha\text{ $t$-semistable}\rangle}{\langle\mathcal{P}^{\beta}_{G}\mid\beta\text{ $m$-unbounded}\rangle} \tag{5.0.2}\] of the left-hand side of (5.0.1). By the above observations, (5.0.2) is equivalent to the quotient \(2\)-category \[\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}^{t\text{-bdd}}}(\mathbb{C}^{n}/F^{\vee})/\mathsf{CohCat}_{\mathbb{L}_{F^{\vee}}^{t\text{-bdd},m\text{-unstab}}}(\mathbb{C}^{n}/F^{\vee}), \tag{5.0.3}\] and by Corollary 4.31, the quotient (5.0.3) is equivalent to the \(2\)-category \(\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t)).\) By construction, this equivalence has identified \(\mathcal{P}^{\alpha}_{G}\) with \(\mathcal{S}^{\alpha}_{F^{\vee}}.\) We now proceed to the proof of Theorem B. This theorem is really two theorems, with equivalences (0.4.1) and (0.4.2) identifying the decategorifications of the A- and B-side \(2\)-categories, respectively. We will deal with each of these separately, beginning with the B-side. **Notation 5.1**.: From now on, \(\Bbbk\) is assumed to be a field of characteristic \(0\). For a \(\Bbbk\)-algebra or \(\Bbbk\)-linear category, we write \((-)_{\mathbb{Z}/2}:=(-)\otimes_{\Bbbk}\Bbbk((u))\) for the \(2\)-periodization, where \(u\) is a variable of degree \(2\). Our main tool will be an identification of the periodic cyclic homology of coherent-sheaf categories of stacks. From [13], when \(X\) is an affine scheme there is an identification \(\mathsf{HP}(\mathsf{Perf}(X))\simeq\mathsf{C}^{*}_{\mathsf{dR}}(X)_{\mathbb{Z}/2}\) between the periodic cyclic homology of \(\mathsf{Perf}(X)\) and the \(2\)-periodized infinitesimal cohomology of \(X\). In [13], this result was generalized to the case where we replace \(\mathsf{Perf}\) with \(\mathsf{Coh}\) (allowing \(X\) more generally to be a quasi-compact and separated algebraic space): in this case the statement remains true if we replace de Rham cochains \(\mathsf{C}^{*}_{\mathsf{dR}}(X)_{\mathbb{Z}/2}\) with Borel-Moore chains \(\mathsf{C}^{\mathsf{BM},\mathsf{dR}}_{*}(X)_{\mathbb{Z}/2}.\) Generalizing from algebraic spaces to stacks presents additional difficulties.
This case was studied in [11], which gave a periodic-cyclic version of the Atiyah-Segal completion theorem. **Notation 5.2**.: Let \(G\) be a reductive group. If \(A\) is an algebra linear over the 2-periodic representation ring \(\mathsf{HP}(\mathsf{Perf}(BG))=\mathcal{R}(G)_{\mathbb{Z}/2}\) of \(G\), we write \(A_{\hat{e}}\) for the completion of \(A\) at the augmentation ideal. **Theorem 5.3** ([10, Theorem 4.3.2]).: _Let \(G\) be a reductive group acting on a smooth quasi-projective variety \(X.\) Then \(\mathsf{HP}(\mathsf{Perf}(X/G))\) is linear over \(\mathsf{HP}(\mathsf{Perf}(BG)),\) and there is an equivalence_ \[\mathsf{HP}(\mathsf{Perf}(X/G))_{\hat{e}}\simeq\mathsf{C}^{*}_{\mathsf{dR}}(X/G)_{\mathbb{Z}/2} \tag{5.0.4}\] _between the completion of \(\mathsf{HP}(\mathsf{Perf}(X/G))\) and 2-periodic de Rham cochains on \(X/G\)._ For a smooth variety, the categories of coherent sheaves and perfect complexes are equivalent. But we need a slight enhancement of this calculation beyond the smooth case. **Corollary 5.4**.: _Let \(X=\bigsqcup X^{\alpha}\to\mathbb{C}^{n}\) as in Definition 1.14, equipped with its action of \(G\subset(\mathbb{G}_{m})^{n}\) as usual, and write \(\mathfrak{X}\) for the fiber product \(\mathfrak{X}:=X\times_{\mathbb{C}^{n}}X\). Then there is an equivalence_ \[\mathsf{HP}(\mathsf{Coh}(\mathfrak{X}/G))_{\hat{e}}\simeq\mathsf{C}^{\mathsf{BM},\mathsf{dR}}_{*}(\mathfrak{X}/G)_{\mathbb{Z}/2}\] _between the completion of \(\mathsf{HP}(\mathsf{Coh}(\mathfrak{X}/G))\) and the 2-periodic Borel-Moore chains on \(\mathfrak{X}/G.\)_ Proof.: First we observe that the statement is true when \(G=D=(\mathbb{G}_{m})^{n}\), since by combining Koszul duality with a shearing equivalence, we have \[\mathsf{Coh}(\mathbb{A}^{n}[-1]/D)\simeq\mathsf{Perf}(\mathbb{A}^{n}[2]/D)\simeq\mathsf{Perf}(\mathbb{A}^{n}/D),\] so we are reduced to the case when \(\mathfrak{X}\) is smooth. From here, we can reduce to the case of general \(G\) by deequivariantization. The only other fact we will need about periodic cyclic homology is the following: **Lemma 5.5**.: \(\mathsf{HP}(-)\) _is a localizing invariant: it takes exact sequences of stable categories to fiber sequences.11_ Footnote 11: We follow the convention of [11], which differs from that of [10] in not requiring localizing invariants to commute with filtered colimits. In the conventions of [10], Hochschild and cyclic homology are localizing but periodic cyclic homology is not. Proof.: For Hochschild homology this is proved in [12] or [11, Theorem 7.1]. The result for \(\mathsf{HP}\) follows, as \(\mathsf{HP}\) is obtained from \(\mathsf{HH}\) by the exact functor \((-)^{tS^{1}}\). Proof of Theorem B (B-side).: The canonical generator \(\mathcal{S}\) of \(\mu\mathsf{CohCat}(\mathbb{L}_{F^{\vee}}(m,t))\) has endomorphism category \(\mathcal{A}_{F^{\vee}}^{t\text{-bdd}}/\mathfrak{J}\), where \(\mathfrak{J}\hookrightarrow\mathcal{A}_{F^{\vee}}^{t\text{-bdd}}\) is the ideal generated by \(\mathcal{A}_{F^{\vee}}^{t\text{-bdd},m\text{-unstab}}\) (and the monoidal categories \(\mathcal{A}_{F^{\vee}}\) are as in Notation 4.26). We therefore conclude that \(\mathsf{HP}(\mathsf{end}(\mathcal{S}))\) is equivalent to the quotient of \(\mathsf{HP}(\mathcal{A}_{F^{\vee}}^{t\text{-bdd}})\) by the ideal \(\mathsf{HP}(\mathfrak{J})\) generated by \(\mathsf{HP}(\mathcal{A}_{F^{\vee}}^{t\text{-bdd},m\text{-unstab}})\).
Observe that the augmentation ideal for \(\mathsf{HP}(\mathsf{Perf}(BF^{\vee}))\) is contained inside of \(\mathsf{HP}(\mathfrak{J})\), so that completion at the augmentation ideal does not affect the resulting quotient algebra. By Corollary 5.4, the completion of this algebra at the augmentation ideal is equivalent to the quotient of the algebra \(\mathsf{C}^{\mathsf{BM},\mathsf{dR}}_{*}(X_{F^{\vee}}^{t\text{-bdd}}\times_{\mathbb{A}^{n}/F^{\vee}}X_{F^{\vee}}^{t\text{-bdd}})_{\mathbb{Z}/2}\) by the ideal generated by \(\mathsf{C}^{\mathsf{BM},\mathsf{dR}}_{*}(X_{F^{\vee}}^{t\text{-bdd},m\text{-unstab}}\times_{\mathbb{A}^{n}/F^{\vee}}X_{F^{\vee}}^{t\text{-bdd},m\text{-unstab}})_{\mathbb{Z}/2}.\) This agrees with the description of the de Rham category \(\mathcal{O}\) from Theorem 2.6, up to 2-periodization. On the A-side, we begin with the non-microlocal computation. **Lemma 5.6**.: _Let \(\mathcal{P}=\bigoplus\mathcal{P}^{\alpha}\) be the sum of generating objects of \(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n}),\) and similarly \(P=\bigoplus P^{\alpha}\) the sum of the projectives in \(\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n}).\) Then there is an equivalence_ \[\mathsf{HP}(\mathsf{end}_{\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}^{n})}(\mathcal{P}))\simeq\mathsf{end}_{\mathsf{Perv}_{\mathbb{L}}(\mathbb{C}^{n})}(P)_{\mathbb{Z}/2}. \tag{5.0.5}\] _Moreover, this equivalence intertwines the \(G\)-actions._ Proof.: It is sufficient to prove the case \(n=1\), since the general case is recovered from this one as a tensor product. Let \(\mathcal{A}=\mathsf{end}(\mathcal{P})\) and \(A=\mathsf{end}(P).\) From the spectral decomposition \[\mathcal{A}\simeq\mathsf{Coh}^{\mathbb{G}_{m}}\left((\mathbb{A}^{1}\sqcup 0)\times_{\mathbb{A}^{1}}(\mathbb{A}^{1}\sqcup 0)\right)\simeq\mathsf{Coh}^{\mathbb{G}_{m}}\left(\begin{array}{cc}\mathbb{A}^{1}&0\\ 0&\mathrm{N}_{0}[-1]\mathbb{A}^{1}\end{array}\right), \tag{5.0.6}\] we can see that there is an equivalence \(\mathsf{HP}(\mathcal{A})\simeq\mathsf{HH}(\mathcal{A})\otimes_{\Bbbk}\Bbbk((u))\) (for which see for instance [12, Example 1.0.5]), so it is sufficient to describe an isomorphism \(A\simeq\mathsf{HH}(\mathcal{A}).\) The map \(A\to\mathsf{HH}(\mathcal{A})\) may be defined in the obvious way: observe that the algebra \(A\) is generated by the vertex idempotents \(e_{\Phi},e_{\Psi}\) and the maps \(u,v\) with \(1-uv,1-vu\) invertible, and define the map \(A\to\mathsf{HH}(\mathcal{A})\) by sending the vertex idempotents to the classes of the identity maps \([\mathsf{id}_{\Phi}],[\mathsf{id}_{\Psi}],\) and the maps \(u,v\) to the classes of the generating maps between \(\Phi\) and \(\Psi.\) The defining exact triangles for the twist and cotwist ensure that \(1-uv,1-vu\) map to invertible elements, so that the map is well-defined. To see that this map is an equivalence, observe that the Hochschild homology of the matrix category (5.0.6) may be written as \[\left(\begin{array}{cc}\Bbbk[T^{\pm}]&\Bbbk[T^{\pm}]\\ \Bbbk[T^{\pm}]&\Bbbk[T^{\pm}]\end{array}\right).\] The explicit description of \(P_{\Phi},P_{\Psi}\) given in Example 2.13 shows that \(A\) has an identical expression, and it is easy to see that the obvious bases of \(A\) and \(\mathsf{HH}(\mathcal{A})\) are sent to each other by this map. For the statement about \(G\)-actions, it is necessary to show that the universal twist map \(\mathcal{T}:\Bbbk[\pi_{1}G]\to Z(\mathsf{PervCat}_{\mathbb{L}}(\mathbb{C}))\) induces on \(\mathsf{HP}(\mathcal{A})\simeq A_{\mathbb{Z}/2}\) the monodromy automorphism.
This follows immediately from the formula for the universal twist, and the fact that our map \(A\to\mathsf{HH}(\mathcal{A})\) was defined so that the monodromy automorphisms were sent to the classes of the twist and cotwist. **Corollary 5.7**.: _There is an equivalence_ \[\mathsf{HP}(\mathsf{end}_{\mathsf{PervCat}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}(\mathcal{P}))\simeq\mathsf{end}_{\mathsf{Perv}_{\mathbb{L}_{G}}(\mathbb{C}^{n}/G)}(P)_{\mathbb{Z}/2}. \tag{5.0.7}\] Proof.: Each of the algebras in (5.0.7) is obtained from the corresponding algebra in (5.0.5) by trivializing the respective action of \(\Bbbk[\pi_{1}G],\) and we saw that the equivalence (5.0.5) intertwined these actions. Proof of Theorem B (\(A\)-side).: The theorem now follows immediately from Corollary 5.7 by matching the subquotient presentations of Corollary 2.22 and Lemma 3.13 and applying the fact that \(\mathsf{HP}\) is a localizing invariant.
2301.06627
Dissociating language and thought in large language models
Large Language Models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence -- knowledge of linguistic rules and patterns -- and functional linguistic competence -- understanding and using language in the world. We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of mechanisms specialized for formal linguistic competence, distinct from functional competence.
Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko
2023-01-16T22:41:19Z
http://arxiv.org/abs/2301.06627v3
# Dissociating language and thought in large language models: a cognitive perspective ###### Abstract **Short abstract (100 words)**: Large language models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their capabilities remain split. Here, we evaluate LLMs using a distinction between formal competence--knowledge of linguistic rules and patterns--and functional competence--understanding and using language in the world. We ground this distinction in human neuroscience, showing that these skills recruit different cognitive mechanisms. Although LLMs are close to mastering formal competence, they still fail at functional competence tasks, which often require drawing on non-linguistic capacities. In short, LLMs are good models of language but incomplete models of human thought. **Long abstract (250 words)**: Today's large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text. This achievement has led to speculation that these networks are--or will soon become--"thinking machines", capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. Drawing on evidence from cognitive neuroscience, we show that formal competence in humans relies on specialized language processing mechanisms, whereas functional competence recruits multiple extralinguistic capacities that comprise human thought, such as formal reasoning, world knowledge, situation modeling, and social cognition. In line with this distinction, LLMs show impressive (although imperfect) performance on tasks requiring formal linguistic competence, but fail on many tests requiring functional competence. Based on this evidence, we argue that (1) contemporary LLMs should be taken seriously as models of formal linguistic skills; (2) models that master real-life language use would need to incorporate or develop not only a core language module, but also multiple non-language-specific cognitive capacities required for modeling thought. Overall, a distinction between formal and functional linguistic competence helps clarify the discourse surrounding LLMs' potential and provides a path toward building models that understand and use language in human-like ways. * _The two lead authors contributed equally to this work._ ###### Contents * 1 Introduction * 2 Formal vs. functional linguistic competence * 2.1 What does linguistic competence entail? * 2.1.1 Formal linguistic competence * 2.1.2 Functional linguistic competence * 2.2 Motivation for the distinction between formal vs. 
functional linguistic competence * 2.2.1 The language network in the human brain * 2.2.2 The language network does not support non-linguistic cognition * 3 The success of large language models in acquiring formal linguistic competence * 3.1 Statistical language models: some fundamentals * 3.2 What large language models can do: a case study * 3.3 Large language models learn core aspects of human language processing * 3.3.1 LLMs learn hierarchical structure * 3.3.2 LLMs learn abstractions * 3.4 LLMs resemble the human language-selective network * 3.5 Limitations of LLMs as human-like language learners and processors * 3.5.1 Excessive reliance on statistical regularities * 3.5.2 Unrealistic amounts of training data * 3.5.3 Insufficient tests on languages other than English * 3.6 Interim Conclusions * 4 The failure of large language models in acquiring functional linguistic competence * 4.1 LLMs are great at pretending to think * 4.2 How LLMs fail * 4.3 Limitations of LLMs as real-life language users * 4.3.1 Formal reasoning * 4.3.2 World knowledge and commonsense reasoning * 4.3.3 Situation modeling * 4.3.4 Social reasoning (pragmatics and intent) * 4.4 Interim conclusions * 5 Building models that talk and think like humans * 5.1 Modularity * 5.2 Curated data and diverse objective functions * 5.3 Separate benchmarks for formal and functional competence * 6 General Conclusion ## 1 Introduction When we hear a sentence, we typically assume that it was produced by a rational, thinking agent (another person). The sentences that people generate in day-to-day conversations are based on their world knowledge ("Not all birds can fly."), their reasoning abilities ("You're 15, you can't go to a bar."), and their goals ("Would you give me a ride, please?"). Naturally, we often use other people's statements not only as a reflection of their linguistic skill, but also as a window into their mind, including how they think and reason. In 1950, Alan Turing leveraged this tight relationship between language and thought to propose his famous test [Turing, 1950]. The Turing test uses language as an interface between two agents, allowing human participants to probe the knowledge and reasoning capacities of two other agents to determine which of them is a human and which is a machine.1 Although the utility of the Turing test has since been questioned, it has undoubtedly shaped the way society today thinks of machine intelligence [French, 1990, 2000, Boneh et al., 2019, Pinar Saygin et al., 2000, Moor, 1976, Marcus et al., 2016]. Footnote 1: In later versions of the test, the number of conversation partners has been reduced to one. The popularity of the Turing test, combined with the fact that language can, and typically does, reflect underlying thoughts has led to several common fallacies related to the language-thought relationship. We focus on two of these. The first fallacy is that an entity (be it a human or a machine) that is good at language must also be good at thinking. If an entity generates long coherent stretches of text, it must possess rich knowledge and reasoning capacities. Let's call this the "good at language -> good at thought" fallacy. The rise of large language models [LLMs; Vaswani et al., 2017a, Devlin et al., 2019, Bommasani et al., 2021], most notably OpenAI's GPT-3 [Brown et al., 2020], has brought this fallacy to the forefront. 
Some of these models can produce text that is difficult to distinguish from human output, and even outperform humans at some text comprehension tasks [Wang et al., 2018, 2019a, Srivastava et al., 2022]. As a result, claims have emerged--both in the popular press and in the academic literature--that LLMs represent not only a major advance in language processing but, more broadly, in Artificial General Intelligence (AGI), i.e., a step towards a "thinking machine" (see e.g., Dale 2021 for a summary of alarmist newspaper headlines about GPT-3). Some, like philosopher of mind David Chalmers [Chalmers, 2022], have even taken seriously the idea that these models have become sentient [although Chalmers stops short of arguing that they are sentient; see also Cerullo, 2022]. However, as we show below, LLMs' ability to _think_ is more questionable. The "good at language -> good at thought" fallacy is unsurprising given the propensity of humans to draw inferences based on their past experiences. It is still novel, and thus uncanny, to encounter an entity (e.g., a model) that generates fluent sentences despite lacking a human identity. Thus, our heuristics for understanding what the language model is doing--heuristics that emerged from our language experience with other humans--are broken.2 Footnote 2: Note that people also make a related fallacy, "bad at language -> bad at thought" (see Mahowald & Ivanova, 2022). Individuals who are not native speakers of a language, who do not speak hegemonic dialects, or those suffering from disfluencies in their productions due to developmental or acquired speech and language disorders are often incorrectly perceived to be less smart and less educated [Kinzler, 2021, Kinzler et al., 2009, Hudley and Mallinson, 2015]. The second fallacy is that a model that is bad at thinking must also be a bad model of language. Let's call this the "bad at thought -> bad at language" fallacy. LLMs are commonly criticized for their lack of consistent, generalizable world knowledge [e.g. Elazar et al., 2021a], lack of commonsense reasoning abilities [e.g., the ability to predict the effects of gravity; Marcus, 2020], and failure to understand what an utterance is really about [e.g., Bender and Koller, 2020a, Bisk et al., 2020]. While these efforts to probe model limitations are useful in identifying things that LLMs can't do, some critics suggest that the models' failure to produce linguistic output that fully captures the richness and sophistication of human _thought_ means that they are not good models of human _language_. Chomsky said in a 2019 interview (Lex Fridman, 2019): "We have to ask here a certain question: is [deep learning] engineering or is it science? [...] On engineering grounds, it's kind of worth having, like a bulldozer. Does it tell you anything about human language? Zero." The view that deep learning models are not of scientific interest remains common in linguistics and psycholinguistics, and, despite a number of position pieces arguing for integrating such models into research on human language processing and acquisition [Baroni, 2021, Linzen, 2019, Linzen and Baroni, 2021, Pater, 2019, Warstadt and Bowman, 2022, Lappin, 2021], this integration still encounters resistance (e.g., from Chomsky above).
Both the "good at language -> good at thought" and the "bad at thought -> bad at language" fallacies stem from the conflation of language and thought, and both can be avoided if we distinguish between two kinds of linguistic competence: formal linguistic competence (the knowledge of rules and statistical regularities of language) and functional linguistic competence (the ability to use language in the real world, which often draws on non-linguistic capacities). Of course, language does not live in a vacuum and is fundamentally embedded and social, so the formal capacity is of limited value without being integrated in a situated context (e.g., Clark, 1996; Hudley et al., 2020; Bucholtz and Hall, 2005; Labov, 1978; Wittgenstein, 1953; Grice, 1975; Lakoff, 1972; Clark, 1992). But even solving the more restricted problem of formal linguistic competence (e.g., what counts as a valid string of a language) is far from trivial and indeed has been a major goal of modern linguistics. Our motivation for the distinction between formal and functional linguistic competence comes from the human brain. A wealth of evidence from cognitive science and neuroscience has established that language and thought in humans are robustly dissociable: the machinery dedicated to processing language is separate from the machinery responsible for memory, reasoning, and social skills (e.g., Fedorenko and Varley, 2016a, ; Section 2). Armed with this distinction, we evaluate contemporary LLM performance and argue that LLMs have promise as scientific models of one piece of the human cognitive toolbox--formal language processing--but fall short of modeling human thought. Ultimately, what "pure" LLMs can learn is necessarily constrained both by the information available in their training data and by whether that information is learnable through a word prediction mechanism. It has turned out that quite a lot of linguistic knowledge, e.g., about syntax and semantics, can be learned from language data alone (Potts, 2020; Merrill et al., 2022; Bommasani et al., 2021), in our opinion far more than most researchers in the field would have guessed 5 or 10 years ago (see Merrill et al. (2022) for an argument of how semantic information is in-principle learnable from language data, and Piantadosi and Hill (2022) for an argument that models can genuinely learn meaning). The success of these models is a major development, with far-reaching implications. But LLMs' success in developing linguistic knowledge by predicting words using massive amounts of text does not guarantee that all aspects of thought and reasoning could be learned that way (although, as we will discuss, some aspects of thought and reasoning _can_ be learned that way provided the relevant information is typically encoded in distributional patterns over words). By saying that LLMs do not, in and of themselves, model human thought, we are not suggesting that AI approaches which start from building LLMs will necessarily run up against hard limits. Indeed, at the end of this article, we discuss current modular approaches in which separate architectures or diverse objectives are combined. InstructGPT (Ouyang et al., 2022) and ChatGPT are examples of successes in this vein, in that they combine an LLM with Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017), whereby human feedback is used to iteratively adjust the trained models. 
In that sense, they are more than just LLMs and can learn based on more than just what is available in massive amounts of passively observed text. For our purposes here, we will use the term LLMs to refer primarily to "pure" language models (such as the original GPT-3) that are trained to predict held-out language tokens conditional on the immediate linguistic context, from large corpora of naturally observed language use. In the rest of the paper, we formulate an account of what we should and should not expect from a model of language and evaluate contemporary LLMs within this framework. In Section 2, we elaborate on the constructs of formal and functional linguistic competence and motivate this distinction based on the evidence from human cognitive science and neuroscience. In Section 3, we discuss the successes of LLMs in achieving formal linguistic competence, showing that models trained on word filling-in/prediction tasks capture numerous complex linguistic phenomena. Then, in Section 4, we consider several domains required for functional linguistic competence--formal reasoning, world knowledge, situation modeling, and social-cognitive abilities--on which today's LLMs fail, or at least perform much worse than humans. In Section 5, we discuss the implications of our framework for building and evaluating future models of language and Artificial General Intelligence (AGI). Finally, in Section 6, we summarize our key conclusions. ## 2 Formal vs. functional linguistic competence ### What does linguistic competence entail? #### 2.1.1 Formal linguistic competence We define formal linguistic competence as a set of core, specific capacities required to produce and comprehend a given language. Specifically, it involves the knowledge of and flexible use of linguistic rules (e.g., Chomsky, 1957; Comrie, 1989; Pinker and Jackendoff, 2005), as well as of non-rule-like statistical regularities that govern that language (e.g., Jackendoff and Pinker, 2005; Goldberg, 2019; Bybee and Hopper, 2001a). Well-recognized aspects of formal competence entail knowing a language's vocabulary and how it can be productively composed to form grammatical utterances. For example, most users of Standard Written English say, "The dogs in my bedroom are asleep" rather than "The dogs in my bedroom is asleep", because the verb "to be" must match the number of the noun that is the subject of the sentence ("the dogs"), even though that verb is closer to an intervening, singular noun ("bedroom"). Linguistic competence also requires exquisite sensitivity to the kinds of regularities that characterize idiosyncratic linguistic constructions. For instance, although English speakers know not to use the indefinite article "a" with plural nouns--making a phrase like "a days" ill-formed--they also know that it is allowed in a special construction where an adjective and a numeral intervene: "a beautiful five days in New York" [Solt, 2007, Dalrymple and King, 2019, Keenan, 2013]. 
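Looking ahead to the evaluations discussed in Section 3, acceptability contrasts like these can be operationalized for a language model by comparing the probabilities it assigns to the two members of a minimal pair. Below is a minimal sketch of such a comparison; it is our illustration of the general paradigm rather than the exact procedure of any benchmark cited here, and it assumes the Hugging Face `transformers` library, using the openly available GPT-2 as a stand-in (GPT-3 itself is not openly downloadable).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence's tokens."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predictions to recover the total.
    return -out.loss.item() * (ids.size(1) - 1)

# The agreement minimal pair from the text above: the model should prefer
# the grammatical variant despite the intervening singular noun "bedroom".
good = "The dogs in my bedroom are asleep."
bad = "The dogs in my bedroom is asleep."
print(sentence_logprob(good) > sentence_logprob(bad))  # expected: True
```

Benchmarks such as BLiMP (discussed in Section 3.1) aggregate exactly this kind of forced-choice probability comparison over thousands of automatically generated minimal pairs.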
Human language users likely learn rules (or rule-like systems) along with thousands of idiosyncratic constructions [Goldberg, 2019] through some combination of sophisticated statistical learning [Spelke, 2004, Aslin et al., 1998, Aslin, 2007, Bresnan, 2007, Bybee and Hopper, 2001b, Chater et al., 2006, Clark, 2014, Frank and Tenenbaum, 2011, Gerken, 2006, O'Donnell, 2011, Perfors et al., 2011, Saffran et al., 1996, Saffran and Thiessen, 2003] and innate conceptual, and perhaps specifically linguistic, machinery [Berwick et al., 2011b, Chomsky, 1957, Gleitman, 1993, Jackendoff and Jackendoff, 2002, Pinker, 2000, Pinker and Jackendoff, 2005, Pinker and Bloom, 1990]. The result is the human ability to understand and produce language and to make judgments of the kind of utterances that are acceptable and unacceptable in a language: "The customer ate." but not "The customer devoured.", "a beautiful five days in New York" and not "a beautiful five day in New York". #### 2.1.2 Functional linguistic competence In addition to being competent in the rules and statistical regularities of language, a competent language user must be able to use language to do things in the world [Bloom, 2002, Clark, 1996, Frank and Goodman, 2012, Grice, 1975, 1969, Slobin, 1996, Wilson and Sperber, 2002, Tomasello, 2010, Christiansen and Chater, 2016, Bucholtz and Hall, 2004]: to talk about things that can be seen or felt or heard, to reason about diverse topics, to make requests, to perform speech acts, to cajole, prevaricate, and flatter. In other words, we use language to send and receive information from other perceptual and cognitive systems, such as our senses and our memory, and we deploy words as part of a broader communication framework supported by our sophisticated social skills. A formal language system in isolation is useless to a language user unless it can interface with the rest of perception, cognition, and action. This host of capacities to use language to do things in the world is distinct from formal competence and depends crucially on aspects of non-linguistic cognition (Figure 1). Thus, we define functional linguistic competence as non-language-specific cognitive functions that are required when we use language in real-world circumstances. Figure 1: Successful use of language relies on multiple cognitive skills, some of which (required for formal competence) are language-specific and some (required for functional competence) are not. A failure to acquire a particular skill would result in a specific type of language use deficit. Determining whether a particular failure stems from a gap in formal competence or functional competence is key to evaluating and improving language models. ### 2.2 Motivation for the distinction between formal vs. functional linguistic competence As noted in Section 1, our motivation for the distinction between formal and functional linguistic competence comes from what we know about the functional architecture of the human mind. In humans, language is robustly dissociated from the rest of high-level cognition, as well as from perception and action. Below we briefly summarize a body of evidence from cognitive science and neuroscience that supports this dissociation. #### 2.2.1 The language network in the human brain Human language processing draws on a set of interconnected brain areas in the frontal and temporal lobes (typically in the left hemisphere).
This 'language network' supports both comprehension (spoken, written, and signed; e.g., Deniz et al., 2019, Fedorenko et al., 2010, MacSweeney et al., 2002, Regev et al., 2013, Scott et al., 2017) and production (Menenti et al., 2011, Hu et al., 2022b). Furthermore, the language network responds to stimulus features rather than task demands, as evidenced by i) similar responses to linguistic input under passive listening/reading and task-driven conditions (e.g., Diachek et al., 2020) and ii) similar patterns of fluctuations across participants when they process naturalistic linguistic stimuli (e.g., Wilson et al., 2008, Lerner et al., 2011, Silbert et al., 2014, Blank and Fedorenko, 2017). Further, the language network is sensitive to linguistic regularities at all levels: from phonological/sub-lexical, to word level, to phrase/sentence level (Bautista and Wilson, 2016, Blank et al., 2016, Blank and Fedorenko, 2020, Fedorenko et al., 2011, 2020, Regev et al., 2021) and supports linguistic operations that are related to both the processing of word meanings and those related to combinatorial semantic and syntactic processing (Fedorenko et al., 2020, Hu et al., 2022b). This consistent recruitment for language across a broad range of conditions, as well as the fact that damage to the language network leads to linguistic deficits (e.g., Bates et al., 2003, Broca, 1865, Damasio, 1992, Mesulam, 2001, Mesulam et al., 2014, Saffran, 2000, Wernicke, 1874, Wilson et al., 2019), indicates that this set of regions stores our linguistic knowledge representations--a set of mappings between linguistic forms and meanings. #### 2.2.2 The language network does not support non-linguistic cognition The language network is remarkably selective for language alone. Evidence of a strong dissociation between language processing and non-linguistic abilities comes from two main sources: a) behavioral investigations of individuals with aphasia--a language impairment caused by damage to the language network, typically as a result of a stroke or degeneration, and b) functional brain imaging studies of neurotypical adults. Studies of individuals with aphasia provide a unique opportunity for testing which cognitive capacities rely on linguistic representations. Of particular interest are cases of the so-called 'global aphasia', which affects both production and comprehension. Individuals with global aphasia exhibit severe linguistic deficits that, at best, spare nothing but single word comprehension for a small set of words. If some aspects of non-linguistic cognition draw on the same resources as language, then individuals with severe linguistic deficits should invariably exhibit impaired performance on the relevant non-linguistic tasks. Despite the nearly complete loss of linguistic abilities, some individuals with severe aphasia have intact non-linguistic cognitive abilities: they can play chess, compose music, solve arithmetic problems and logic puzzles, leverage their world knowledge to perform diverse tasks, reason about cause and effect, and navigate complex social situations (Basso and Capitani, 1985, Bek et al., 2010, Klessinger et al., 2007, Luria et al., 1965, Lecours and Joanette, 1980, Varley, 1998, Varley and Siegal, 2000, Varley et al., 2001, 2005, Willems et al., 2011; see Fedorenko and Varley, 2016b, for a review). Brain imaging techniques like functional MRI (fMRI) can be used to observe activity in the language network in healthy individuals in real time.
Given its high spatial resolution, fMRI is especially well-suited to study whether any two cognitive abilities draw on the same brain structures. For example, to ask whether language and mathematical reasoning recruit the same brain areas, we can have participants perform a language task and a math task while in an MRI scanner and then test whether brain regions that are active during language processing are also active when participants solve a math problem. Mirroring the findings from aphasia, this approach reveals that the language network is extremely selective for language processing: it responds robustly and reliably when people listen to, read, or generate sentences (Section 2.2.1), but not when they perform arithmetic tasks, engage in logical reasoning, understand computer programs, listen to music, categorize objects or events, watch others' actions, reason about people's mental states, or process non-verbal communicative information like facial expressions or gestures (e.g., Amalric and Dehaene, 2019, Benn et al., 2021, Blank et al., 2014, Chen et al., 2021, Deen et al., 2015, Fedorenko et al., 2011, Ivanova et al., 2020, Jouravlev et al., 2019, Liu et al., 2020, Monti et al., 2007, 2009, 2012, Paunov et al., 2019, 2022, Pritchett et al., 2018, Shain et al., 2022b). In summary, evidence from individuals with aphasia and from brain imaging studies is remarkably consistent: the mechanisms that process language in the human brain do not support non-linguistic cognitive tasks. The latter draw on distinct mechanisms, as discussed in Section 4 below. This sharp dissociation suggests that in examining language models' functionality, it is important to separate their linguistic abilities from their abstract knowledge and reasoning abilities, which can be probed--and perhaps even learned--through a linguistic interface but require much more than formal linguistic competence. ## 3 The success of large language models in acquiring formal linguistic competence In this section, we evaluate the performance of LLMs qua language models by asking whether these models have made progress towards achieving formal linguistic competence, the kind of competence that is supported by the language-selective network in the human brain. We argue that these models are surprisingly and impressively successful at mastering this specific domain--dramatically more successful than the best systems from even 5-10 years ago. This impressive advance stands in stark contrast with past claims about the limits of learning language from linguistic input alone. ### 3.1 Statistical language models: some fundamentals LLMs are the spiritual descendants of a number of earlier approaches in computational linguistics, including statistical language modeling, word embeddings, and connectionism (an earlier term for the approach that morphed into today's deep learning). Similar to earlier statistical language models, LLMs are usually trained on a word prediction task (the same task used for training n-gram models going back to at least Shannon's work in the mid-20th century; see Jurafsky and Martin [2009b] and Lee [2003b] for a historical overview). Similar to approaches in distributional semantics and word embeddings [for overviews, see Baroni and Lenci, 2010, Erk, 2012, Lenci, 2008], LLMs represent linguistic information as vectors in a high-dimensional space.
And, similar to earlier connectionist approaches [e.g., Rumelhart and McClelland, 1986, 1987, Elman, 1993, 1990], they use neural networks that are modeled off the human brain, whereby a series of model weights are learned and passed through a network in order to generate a response. All of these approaches stand in contrast to models that use explicit, structured hierarchical representations of syntactic rules [see Norvig, 2012, 2017 for a discussion of these two divergent paradigms]. N-grams and word embedding models achieved some success in various domains in natural language processing (e.g., spelling correction, spam classification, sentiment analysis; Jurafsky and Martin [2009a], Lee [2003a]). However, they never approached human-level performance on general language tasks like text generation, leading to claims that purely statistical approaches would never be able to capture the richness of natural language, particularly in complex syntactic, morphological, and semantic domains [e.g., Pinker and Prince, 1988]. For example, Everaert et al. [2015] argued against statistical approaches to understanding human language use. They specifically claim that statistical approaches, which use linear strings of words as input, are unlikely to learn complex syntactic features that require representing phrases and sentences hierarchically rather than linearly. This pessimism is now challenged by LLMs. Here, we focus on a class of LLMs known as transformers. We will use GPT-3, a popular transformer LLM, as an example to explain how the training process works for these models. First, a training set is constructed from a massive amount of text from the web. GPT-3, for example, is trained on 45 terabytes of text, or about 500 billion words. The text is broken into word piece tokens, which are either words or word components. The decision of what tokens to use is based on character co-occurrence information rather than morphological analysis. As a result, the sentence "GPT-3 can be used for linguistics" is broken into the following word pieces: _G, PT, -, 3, can, be, used, for, lingu, istics_. This is typical: short common words like _can_ and _used_ are kept whole. A novel word in the corpus (_GPT-3_, which of course did not exist when the model was trained) is broken up into smaller pieces. And the word _linguistics_ is broken into _lingu_- (allowing it to learn a potential relationship with related words like _lingual_ and _lingua_) and _-istics_ (_cladistics_, _statistics_, etc.). GPT-3 has 50k unique word piece tokens. During the training, GPT-3 has a simple training objective: predict the next word piece based on a fixed number of previous word pieces (typically a few hundred). So if the input is "GPT-3 can be used for lingu- ____," the trained model might successfully predict that the next word piece is -istics. The predicted word piece is then compared with the ground truth (which word piece actually occurred in that training sentence), and the feedback signal is propagated back through the model to update its many (>100 billion) parameters. GPT-3's architecture, like that of other transformer models, has a number of key properties that make it so successful. First, the model contains many layers, each of which can access a mixture of information from earlier layers, allowing the model to learn both low-level and high-level properties of the input at different layers [Tenney et al., 2019].
GPT-3's architecture, like that of other transformer models, has a number of key properties that make it so successful. First, the model contains many layers, each of which can access a mixture of information from earlier layers, allowing the model to learn both low-level and high-level properties of the input at different layers [Tenney et al., 2019]. Second, each layer has a gating mechanism--known as attention [Vaswani et al., 2017b]--that allows each word piece node to selectively attend to any of the preceding word piece nodes. For instance, if some part of the model is responsible for generating a pronoun, it can learn to selectively attend to earlier possible noun antecedents in order to generate the right pronoun [12, 19, 20]. Finally, in the transformer architecture, the words in the input are passed in at the same time (not linearly, word by word, as in past models) and used to generate a prediction all at once [20]. This makes training more parallelizable and efficient, which makes it feasible to train these models on the enormous amounts of data required.

By being trained for word prediction, transformer models learn a lot about the structure of language, including linguistic features that, even recently, were thought to be beyond the scope of statistical models. These models have succeeded not just on tests of general language understanding developed by the NLP community [e.g., the GLUE tasks; Wang et al., 2018], but, critically for our purposes, on tests of linguistic competence. The BLiMP benchmark [Warstadt et al., 2020], for instance, contains minimal pairs of grammatical vs. ungrammatical sentences testing a diverse range of difficult linguistic phenomena like filler-gap dependencies (_Bert knew what many writers find_ vs. *_Bert knew that many writers find_) and negative polarity licensing (_The truck has clearly tipped over._ vs. *_The truck has ever tipped over._). These examples are designed to be challenging. As Warstadt and Bowman [2022] summarize, RoBERTa-base, another transformer LLM, achieves human-level performance on 6 out of 12 item types, and GPT-3 performs successfully on most of these items. Similar results are seen on other benchmarks like SyntaxGym [Gauthier et al., 2020], a suite of syntactic language benchmarks. Although it is tempting to move the goalposts and focus on what these models are still unable to do (see Bowman [2022] for a discussion of the dangers of focusing on failures in NLP), we argue that the remarkable advances in LLMs' ability to capture various linguistic phenomena should not be overlooked.

### What large language models can do: a case study

We demonstrate some of the linguistic capabilities of GPT-3 by working through an example of its output in response to a prompt.

Figure 2: An example of output produced by GPT-3 in response to a text prompt. The model predicts one token at a time, iteratively, based on what has been generated so far. The model demonstrates remarkable linguistic competence, generating syntactically and semantically coherent output.

Consider the prompt and completion shown in Figure 2. This predicted continuation is typical of GPT-3's output quality and illustrative of the features that typify its knowledge of language. The model deftly uses a variety of complex linguistic features that depend on preceding text. The very first word of the generated text is a pronoun (_it_), which requires a prior referent. In this case, the prior referent is _GPT-3_, and the pronoun is chosen accordingly. The model also correctly uses the elliptical _do so_, which refers to _can produce text that_... from the preceding sentence. It uses consistent and correct passive voice constructions--making sure all verbs have the right number of arguments and that they are semantically plausible.
The adverb _purely_ is placed naturally and sensibly (_it is trained purely on_, not the more stilted _it purely is trained on_), and the prepositions are apt and used with plausible objects (_on the statistics_, _of the English language_). Further, GPT-3 maintains coherence with the previous sentence by correctly inferring from the reference to _English_ that the relevant statistics are the _statistics of the English language_. And it deploys a sophisticated discourse relationship (the way in which two segments of a text are logically connected, typically across sentences): the first part of the sentence _It can do so_ is connected, in a sensible way, to the second half of the sentence with the connective _even though_. Other connectives like _and_ or _because_ would be less felicitous. The end of the sentence _of syntax, semantics, or even writing_ reproduces a common English-language pattern for a series of three parallel items, in which the list begins with two closely related words (in this case, _syntax_ and _semantics_) and concludes with a third, less related item (_writing_). The use of _even_ in _even writing_ elegantly marks that pattern.

Beyond just generating text, a widespread paradigm in NLP now involves treating the LLM as a few-shot or zero-shot learner [Brown et al., 2020] and _prompting_ it to perform tasks of interest, either by giving it a few examples or just by giving instructions. GPT-3 was the first model to successfully perform a wide range of tasks through prompting alone, without task-specific training.

Of course, insofar as the goal of language is to communicate thoughts, GPT-3 is deficient, since it has no thoughts of its own. The lack of underlying meaning is particularly clear when one considers what it means when GPT-3's generated text in Figure 2 says that GPT-3 is trained with no knowledge of "writing". It is hard to interpret that phrase in a way that makes sense in the world. However, we argue that this failure is not a failure of GPT-3's formal linguistic competence, but of its functional competence (see Section 4).

Note that the current (as of January 2023) versions of GPT-3--including "ChatGPT", "InstructGPT", and the models collectively referred to as "GPT-3.5" that descend from "InstructGPT"--are importantly different from the original version of GPT-3 and its predecessor models in ways that depart from being LLMs in the sense we discuss in this article. They are trained using at least three kinds of data: (1) purely unsupervised language in a generic next-token context-conditional prediction task--the original notion of "language modeling"; (2) human reinforcement learning, with human readers giving thumbs up or down based on whether the system's responses are good; and (3) human supervised learning, where humans were asked to write correct responses to a large number of prompts entered into the GPT-3 playground [OpenAI, 2022a,b]. When we discuss LLMs, we are referring primarily to "pure LLMs" trained on the first of the above data sources. But, as we discuss in Section 5, the inclusion of these additional objective functions is a promising avenue for building better models.

### Large language models learn core aspects of human language processing

For LLMs to be useful as models of language processing in humans, we must be convinced that the models encode the abstract phonological, morphological, syntactic, and semantic rules that characterize human language.
While there are important and interesting differences between syntactic processing in LLMs and humans [Arehalli et al., 2022, Van Schijndel and Linzen, 2021], there are also important similarities. Here, we review evidence that LLMs learn two features that are argued by many to be central to human linguistic processing: hierarchical structure and abstraction [see Hupkes et al., 2020, Lovering and Pavlick, 2022, Linzen and Baroni, 2021, Manning et al., 2020, Press et al., 2022, Ettinger, 2020a, for discussions of the importance of these features in the context of LLMs]. Both of these features primarily address the syntactic aspect of formal linguistic competence; we have chosen them because they have been extensively covered in the linguistics and NLP literatures.

#### 3.3.1 LLMs learn hierarchical structure

In human languages, words combine to make compositional meanings. When a sentence has multiple words, their meanings do not simply get added linearly one by one. Instead, they are combined hierarchically [e.g., Adger, 2003, Bresnan, 1982, Chomsky, 1957, 1965, Frazier, 1979, Jackendoff and Jackendoff, 2002]. For instance, in the phrase "the keys to the cabinet", the meaning gets built up as follows: the first "the" combines with "keys", the second "the" combines with "cabinet", "the cabinet" combines with "to", and the resulting phrase "to the cabinet" combines with "the keys".

The hierarchical structure of language manifests in many ways. One prominent example is non-local feature agreement. In English and many other languages, verbs agree with their subjects. For instance, a plural subject uses the verb "are", whereas a singular subject uses "is". A non-hierarchical bigram model, which simply stores frequencies of two-word strings, could learn that "The keys are on the table" is more probable than "The keys is on the table" by knowing that "keys are" is more common than "keys is". But such a model would not be able to learn that the subject and verb must agree even when arbitrarily far apart. For instance, "The keys to the old, wooden kitchen cabinet are on the table" has six intervening words between the subject and verb, and yet "are" still agrees with "keys" and not with the nearby "cabinet". However, a model that learns the underlying hierarchical structure of English sentences will have no trouble keeping track of the subject-verb dependency (Bernardy and Lappin, 2017; Finlayson et al., 2021b; Gulordava et al., 2018; Kuncoro et al., 2018; Lakretz et al., 2019; Linzen et al., 2016; Lu et al., 2020; Lasri et al., 2022b; Mueller et al., 2022).

We highlight two main methods that have been used to test whether LLMs learn hierarchical linguistic structure (although there are many others: see Belinkov et al. (2020) for an overview). The first is to treat the models as psycholinguistic subjects and see how they handle grammatical tasks that require operating over hierarchical structure (Futrell et al., 2018; Wilcox et al., 2019, 2021; Linzen et al., 2016; Ettinger, 2020a). Assessing the ability of models to correctly compute number agreement between subjects and verbs has been a major focus of such work. The task is straightforward: given that language models produce a probability distribution over all words in their vocabulary, one can simply measure the probability assigned to the plural verb and compare it to the probability assigned to the singular verb (e.g., compare the probability of "is" vs. "are" given the prompt "The keys to the cabinet...").
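This comparison is easy to carry out with any openly available autoregressive model; below is a minimal sketch using GPT-2 (the specific probabilities will, of course, vary by model):

```python
# Sketch: long-distance subject-verb agreement as a minimal-pair test.
# A model that tracks hierarchical structure should prefer "are" (agreeing
# with "keys") over "is" (agreeing with the nearby distractor "cabinet").
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The keys to the old, wooden kitchen cabinet"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    next_probs = model(**ids).logits[0, -1].softmax(dim=-1)

for verb in [" are", " is"]:                  # leading space: single GPT-2 tokens
    verb_id = tok.encode(verb)[0]
    print(f"P({verb.strip()!r} | prompt) = {float(next_probs[verb_id]):.4f}")
```

Averaging such comparisons over many sentences, with and without intervening distractor nouns, yields the agreement accuracies reported in the studies below.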
Today's LLMs perform long-distance number agreement well above chance, even in the presence of intervening words (Gulordava et al., 2018; Linzen and Baroni, 2021), although they can be distracted by frequency effects (such as differences in frequency between the singular and plural forms; Wei et al., 2021; Yu et al., 2020) and (as of GPT-2) fall short of humans on number agreement tasks in nested sentences (Lakretz et al., 2021; although see Lampinen et al., 2022). In a similar vein, it has been shown that these models can also handle other structure-sensitive constructions, like filler-gap dependencies (Wilcox et al., 2018) or negative polarity (Warstadt et al., 2019; Marvin and Linzen, 2018). Moreover, studies that turn specific model "neurons" on and off have provided mechanistic insight into how an LLM might perform this task (Lakretz et al., 2019).

Another broad approach, typically known as probing (Tenney et al., 2019; Belinkov and Glass, 2019; Ettinger et al., 2016; Belinkov, 2022; Pimentel et al., 2020; Lasri et al., 2022a; Pimentel and Cotterell, 2021; Conneau et al., 2017; Giulianelli et al., 2018), uses an algorithm that takes the internal representations of a language model as input and learns to map them to some linguistic feature of interest. This approach shows that distances between LLM representations of individual words in a sentence align with the sentence's hierarchical structure rather than with the linear distance between the words, thereby recovering the close structural relationship between the subject and verb of a sentence even when they are linearly far apart (Hewitt and Manning, 2019).

#### 3.3.2 LLMs learn abstractions

Following Ambridge (2020), we define an abstraction as a generalized linguistic representation--such as a part-of-speech category or grammatical role--that goes beyond simple storage of input and allows for generalization. The very notion of subject-verb agreement, outlined in the previous section, relies on the abstract categories of subject and verb. Gulordava et al. (2018) give the example that, in a sentence like "dogs in the neighborhood often... (bark/barks)", a model might learn a shallow version of the agreement rule, namely, that the collocation of "dogs" and "bark" in the same sentence is more common than "dogs" and "barks". However, a model that has an abstract representation of categories like grammatical subject, grammatical number, and verb should be able to handle long-distance number agreement even for novel combinations of words. One way to test a model's knowledge of abstract rules is to use semantically nonsensical sentences, like "The colorless green ideas I ate with the chair... (sleep/sleeps)". Testing on Italian, English, Hebrew, and Russian, Gulordava et al. (2018) found that models performed well even on these semantically empty sentences.

Further, a large body of work has used the probing methodology described above to test for linguistic abstraction in LLMs. In this literature, a classifier is typically trained on top of model embeddings to ask whether an abstract category, such as part-of-speech or dependency role, can be recovered from the model. Tenney et al. (2019) argued that LLMs "rediscover the classical NLP pipeline," learning at various layers features like part-of-speech categories, parses, named entities, and semantic roles.
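In its simplest form, such a probe is just a linear classifier trained on frozen hidden states. The sketch below illustrates the idea with BERT and a toy part-of-speech task; a real probing study would use thousands of sentences, proper word-piece-to-word alignment, and held-out evaluation data:

```python
# Sketch: a linear probe for part-of-speech information in BERT embeddings.
# Toy-sized for illustration; the sentences use common words that BERT keeps
# as single word pieces, so token positions align with words (after [CLS]).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

data = [
    ("the keys are on the table", ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]),
    ("the dog is in the house", ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]),
]

X, y = [], []
for text, tags in data:
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)
    for i, tag in enumerate(tags, start=1):          # skip [CLS] at index 0
        X.append(hidden[i].numpy())
        y.append(tag)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))  # training accuracy only -- a real probe needs held-out data
```

If the probe recovers the category well above chance on held-out sentences (and a control probe on, say, shuffled labels does not), that is taken as evidence that the representations encode the category.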
However, one important limitation of such probing studies is that, even if abstract categories can be decoded from model representations, the model might not necessarily be using this knowledge (Belinkov, 2022; Hewitt and Liang, 2019; Ivanova et al., 2021; Pimentel et al., 2020; Voita and Titov, 2020; Elazar et al., 2021; Ravfogel et al., 2021; Finlayson et al., 2021; Wu et al., 2022; Tucker et al., 2022).

One possible objection is that perhaps LLMs learn syntactic information, but in a way that is closely tied to the particular lexical items used (e.g., they might learn the agreement rule for the specific verbs they encountered but not for an abstract verb category). An even more stringent test, thus, is to ask whether LLMs can apply morphosyntactic rules to novel words. For instance, Kim and Smolensky (2021) show that BERT has some ability to generalize grammatical categories. They give the model novel words, used in phrases, as input (e.g., "The blick", where _blick_ is likely a noun, and "They dax", where _dax_ is likely a verb) and test whether, based on the input, the model can generalize the part-of-speech category. That is, given the example in the preceding sentence as input, the model should know that "I went to a blick" is more probable than "I went to a dax", since _blick_ was used as a noun. They conclude that BERT succeeds partially at this task: it does learn to generalize, but only after repeated examples [but see Kim et al., 2022, Misra et al., 2022, for ways in which the word itself affects compositional ability]. More recent models, such as GPT-3, seem to be able to use a novel word appropriately right away, at least if prompted correctly (Brown et al., 2020, McCoy et al., 2021b). A probing study in the same vein (Maudslay and Cotterell, 2021) attempted to recover hierarchical sentence structure from LLM representations of sentences constructed in the style of the poem Jabberwocky, such that they are syntactically English-like but use nonce words in place of content words (e.g., "I povicated your briticists very much"). Even in this difficult case, probes performed above chance, although performance was lower than for meaningful sentences. These results suggest that, although language models may rely on lexical-semantic cues to some extent, linguistic probes recover some abstract syntactic knowledge even in a very stringent test.

Note that a human-like language model is not expected to rely solely on abstract rules. Humans use diverse cues in their language learning and processing that sometimes override or conflict with strict hierarchical syntactic processing (e.g., MacDonald et al., 1994a, 1992, Seidenberg and MacDonald, 1999, MacDonald, 1993a, MacDonald et al., 1994b, MacDonald, 2013, 1993b, Bates and MacWhinney, 1989a, 1987, 1989b, MacWhinney and MacWhinney, 1987, Tanenhaus et al., 1995, Trueswell et al., 1993, Seidenberg et al., 1982, Trueswell and Tanenhaus, 1994, Altmann and Kamide, 1999, Altmann and Steedman, 1988, Altmann and Kamide, 2007, Gibson and Pearlmutter, 1998, Rayner and Frazier, 1989, Rayner et al., 2006). This resource-constrained, "good enough" processing strategy (Ferreira et al., 2002) leads to frequent syntactic errors, including in the non-local subject-verb agreement cases discussed above (Bock and Miller, 1991, Patson and Husband, 2016, Brehm and Bock, 2013, Christianson, 2016).
Humans also rely, to varying extents, on memorizing previously seen input, as opposed to purely learning abstract rules (Ambridge, 2020, Bod, 2009, Bybee and Hopper, 2001b, Goldberg, 2019, O'Donnell, 2015, Langacker, 1988, 2010, Goldberg, 2009). Overall, it seems clear that LLMs achieve at least some degree of abstraction. The degree of that abstraction remains a matter of debate, as it does for humans, but the fact that LLMs today already show evidence of representing hierarchical structure and abstract linguistic patterns suggests that it is feasible to build powerful language models that learn linguistic rules from textual input.

### LLMs resemble the human language-selective network

Multiple studies have shown that LLMs can be used to predict activity in the language brain network in response to novel sentences (e.g., Caucheteux and King, 2022, Goldstein et al., 2022, Schrimpf et al., 2021). This correspondence between artificial and biological neural networks for language suggests that perhaps these systems carry out similar functions. A likely candidate objective for both is next-word prediction in the service of meaning extraction. Indeed, Schrimpf et al. (2021) showed that LLMs that perform better on next-word prediction (but not on other language tasks) provide a better match to human behavioral and neural data (see, e.g., Wilcox et al., 2021 for additional behavioral evidence).

Furthermore, the functional response properties of LLMs described above (Section 3.3) resemble those of the language network. Similar to LLMs, the language network is sensitive to abstract hierarchical rules, as evidenced by studies that use isolated phrases and sentences (e.g., Snijders et al., 2009, Fedorenko et al., 2010, 2016, Pallier et al., 2011, Ding et al., 2016, Law and Pylkkanen, 2021, Nelson et al., 2017, Shain et al., 2021), naturalistic narratives (e.g., Shain et al., 2020, Brennan et al., 2020, Heilbron et al., 2022, Reddy and Wehbe, 2021, Shain et al., 2022a), and syntactically well-formed but semantically empty ("jabberwocky") stimuli (e.g., Fedorenko et al., 2010, Fedorenko and Varley, 2016b, Shain et al., 2021, Pallier et al., 2011, Humphries et al., 2006, Matchin et al., 2017, 2019, Matchin and Wood, 2020). However, just like LLMs, the language network is also sensitive to specific word co-occurrences (e.g., as evidenced by its sensitivity to n-gram surprisal; Shain et al., 2020). Finally, the internal architecture of LLMs and the language network also shows a resemblance: neither system shows clear spatial segregation for syntactic and semantic processing (LLMs: e.g., Tenney et al., 2019, Durrani et al., 2020, Huang et al., 2021; brain: e.g., Dick et al., 2001, Fedorenko et al., 2020, Reddy and Wehbe, 2021), indicating that these functions are tightly functionally coupled in both.

### Limitations of LLMs as human-like language learners and processors

Although a preponderance of evidence suggests that LLMs learn some aspects of hierarchical structure and abstraction and resemble the human language system in other ways, their behavior is not fully human-like. Below, we consider three common criticisms of LLMs as models of human language processing and provide some responses to these objections.
#### 3.5.1 Excessive reliance on statistical regularities

Part of what makes LLMs succeed on language tasks is almost certainly the fact that they pick up on statistical regularities and achieve good performance without necessarily learning what we think of as the relevant linguistic information (e.g., hierarchical structure, abstract grammatical categories, etc.). In other words, the models can be "right for the wrong reason" [McCoy et al., 2019] and leverage features in the input other than the ones being tested. For instance, adding noise or distracting information can degrade model performance on a variety of tasks [e.g., Belinkov and Bisk, 2017, Khayrallah and Koehn, 2018, Wallace et al., 2019, Kassner and Schutze, 2020]. In some (but not all) of these cases, it has been shown that such noise does not similarly affect humans. Chaves [2020] makes an argument along these lines, suggesting that the models are actually learning surface statistical regularities in order to model the probabilities of various syntactic phenomena. Other evidence suggests that LLMs can be misled by simple frequency effects: in a task that requires choosing between the singular and plural form of a particular verb, they may always choose the plural form of verbs for which the plural form is much more frequent [Chaves and Richter, 2021].

These findings lead to the question: do LLMs simply store and regurgitate their training data? This does not appear to be the case: McCoy et al. [2021a] explicitly explored the extent to which GPT-2 output is regurgitated from the training set and found that, although n-grams up to length 4 often appeared in the training set, GPT-2 generated mostly novel 5-grams and above. They also showed that the model routinely generates plausible novel words that do not appear in the training set. Thus, LLMs generate output based on a combination of word co-occurrence knowledge and abstract morphosyntactic rules; whether this ratio can be adjusted to match that of humans (which is still debated; Section 3.3.2) remains to be determined.

#### 3.5.2 Unrealistic amounts of training data

Most LLMs that achieve near-human performance are trained on vastly more data than a child is exposed to. For example, van Schijndel et al. [2019] found that a model's training dataset would need to be unrealistically large in order to handle some constructions in a human-like way. Warstadt and Bowman [2022] estimate that GPT-3 sees 1000x more language data than a 10-year-old human (but observe that RoBERTa sees only 100x more and still performs well on grammaticality tasks). Therefore, even if linguistic information is, in principle, learnable from statistical regularities in the input, in practice, human language learners likely rely on pre-existing biases in order to learn quickly from sparse and noisy input--biases that today's state-of-the-art models lack.

This difference in the amount of "training data" that models vs. human language learners require is sometimes taken to imply that the resulting model representations will necessarily be fundamentally unlike human linguistic representations [see, e.g., Munn, 1950, Meister, 2022, for similar discussions in the context of using non-human animals as models of human cognition]. However, there is reason for optimism. Ongoing work is actively exploring the extent to which models that are trained on more realistic amounts and kinds of input, and/or in otherwise more realistic ways, can still learn critical aspects of language [Warstadt and Bowman, 2022].
Several studies have found that some syntactic generalizations can emerge in BERT-like architectures trained with millions (as opposed to billions) of words [Hu et al., 2020, Zhang et al., 2021]. For example, the ELECTRA model uses a binary replaced-token detection objective instead of an unconstrained word prediction task and achieves performance comparable to models trained on far more data [Clark et al., 2020]. BabyBERTa, a model trained on 5 million words of child-directed speech [Huebner et al., 2021]--similar in scale to what a human child would encounter--learned some syntactic generalizations comparable to RoBERTa, a high-performing model that is trained on 30 billion words. The performance of smaller language models is far from perfect; for example, BabyBERTa could not represent several aspects of grammar that human children find intuitive. Critically, however, improvements in language models--including the use of more cognitively-inspired architectures and more realistic learning scenarios--could lead to strong performance with orders of magnitude less training data than today's state-of-the-art LLMs [see Zhuang et al., 2021, for evidence for the success of a similar approach in vision models]. In that vein, an important question is what inductive biases are introduced by the model architectures, whether those biases resemble the ones that enable humans to learn language [McCoy et al., 2018, 2020, Ravfogel et al., 2019], and whether better architectures could better capture these biases and enable faster learning from less data.

#### 3.5.3 Insufficient tests on languages other than English

Because LLMs are data-hungry, they only work well on languages for which vast corpora are available. For most human languages, this is not the case. More worryingly, the architectures themselves may be biased towards English and other European languages [11]: not all languages are equally easy to model given the existing infrastructure [12, 13]. Thus, we should proceed with caution in assuming that the success of BERT, GPT-2/3, and other LLMs will extend to all languages. That said, evidence is growing of strong performance in a variety of languages [14, 15, 16], and of successful transfer of models to low-resource languages [15].

### Interim Conclusions

LLMs today generate highly coherent, grammatical texts that can be indistinguishable from human output. In doing so, they exhibit at least some knowledge of hierarchical structure and abstract linguistic categories, while successfully capturing human brain responses during language processing. These models are not perfect learners of abstract linguistic rules, but neither are humans. We therefore conclude that LLMs are on track to acquiring formal linguistic competence. LLMs have already overturned claims about the fundamental impossibility of acquiring certain linguistic knowledge--including hierarchical structure and abstract categories--from the input alone [but see 17]. If language modeling continues to improve (including learning from more realistic kinds and amounts of data; Section 3.5.2), this would allow testing more general versions of the poverty of the stimulus argument [1, 12, 13], including specific tests of which, if any, inductive biases are required to learn the rules and statistical regularities of human language. As such, LLMs have substantial value in the scientific study of language learning and processing.
## 4 The failure of large language models in acquiring functional linguistic competence

### LLMs are great at pretending to think

Large text corpora contain a wealth of non-linguistic information, from mathematical and scientific facts (e.g., "two plus seven is nine") to factual knowledge (e.g., "the capital of Texas is Austin") to harmful stereotypes (e.g., "women belong in the kitchen"). This is not particularly surprising, since even simple patterns of co-occurrence between words capture rich conceptual knowledge, including object properties [e.g., 14, 15, 16, 17, 18, 19], abstract analogies [12], social biases [e.g., 14, 15, 16], and expert knowledge in specialized domains [e.g., 13]. Moreover, statistical regularities extracted from language and from visual scenes exhibit a substantial degree of correspondence [11, 12], indicating that linguistic information can capture at least some aspects of experiential input [e.g., 14, 15]. As a result, language models trained on gigantic text corpora acquire large amounts of factual knowledge [e.g., 14, 15, 16, 17], succeed at some types of mathematical reasoning [e.g., 14, 15, 16], and reproduce many stereotypes and social biases [14, 15, 16, 17, 18]. All these behaviors--both positive and negative--become more prominent as models get larger [15, 16], indicating that larger storage capacity allows LLMs to learn increasingly fine-grained patterns in the input.

These wide-ranging capabilities are, in a way, unsurprising given LLMs' impressive linguistic skills that we covered in Section 3. The very same pattern learning mechanisms enable LLMs to infer that "keys" goes with "are", not "is", and that "The capital of Texas" goes with "Austin", not "Boston". But do LLMs actually use this information to reason about the world? How well do they generalize beyond specific word co-occurrences? Can they use their knowledge appropriately depending on context? In the rest of this section, we argue that LLMs, in fact, struggle with non-language-specific capabilities. If a test scenario is common enough, the model can seemingly succeed--but studies that test generalized reasoning abilities highlight major limitations of LLMs as models of human cognition.

### How LLMs fail

Any test of LLMs' ability to reason must account for their ability to use word co-occurrence patterns to "hack" the task. In some cases, researchers can circumvent this hacking behavior by constructing unusual prompts, ensuring that an answer cannot be found in (or easily constructed from) the models' training data. Consider the study by Collins, Wong, et al. (2022). The authors asked both GPT-3 and human participants to come up with solutions to a set of unusual tasks; an independent group of human raters then evaluated the goodness of the proposed solutions. A representative example is below:

**Prompt:** Get your sofa onto the roof of your house.

**GPT-3 response:** I would start by getting a very strong ladder and a very strong friend... (goodness: 3.6 out of 7)

**Human response:** You may need to rent a Genie lift large enough to carry the sofa. You will need at least one other person... (goodness: 4.8 out of 7)

To manipulate how "out-of-distribution" the query is, the prompts get progressively more constrained:

**Prompt:** Get your sofa onto the roof of your house, without using a pulley.

**GPT-3 response:** Use a rope to tie around the sofa and connect it to a car. (goodness: 3.0 out of 7)
**Human response:** I would get a giant crane... and use the crane to lift it to the roof of my house. (goodness: 5.1 out of 7)

With more and more constraints, human responses remain at approximately the same level of goodness, whereas model performance breaks down completely:

**Prompt:** Get your sofa onto the roof of your house, without using a pulley, a ladder, a crane...

**GPT-3 response:** Cut the bottom of the sofa so that it would fit through the window...break the windows to make room for the sofa. (goodness: 2.7 out of 7)

**Human response:** I will build a large wooden ramp...on the side of my house with platforms every 5 feet... (goodness: 5.0 out of 7)

GPT-3's failure to perform well on out-of-distribution problems indicates that LLMs can leverage existing patterns in the data but struggle to come up with creative solutions to novel tasks. Many other studies also highlight LLMs' limitations when it comes to non-shallow reasoning tasks [for a summary, see, e.g., Helwe et al., 2021]. Note that the formal linguistic competence of GPT-3 remains high: its output is grammatical and coherent. Only when we attempt to evaluate the merit of the proposed solution do its shortcomings become apparent.

### Limitations of LLMs as real-life language users

Why should we care whether LLMs can think? One reason, of course, is a wealth of claims that contemporary LLMs are precursors to artificial general intelligence [AGI; e.g., Romero, 2022, Yan, 2021]. However, there is another reason, which is of substantial relevance to both AI researchers and language scientists: real-life language use is impossible without non-linguistic cognitive skills. Understanding a sentence, reasoning about its implications, and deciding what to say--these skills all rely on cognitive capacities that go way beyond lexical semantics or syntax. We focus on four key capacities that are not language-specific but are nevertheless crucial for language use in real-life settings:

**i) formal reasoning**--a host of abilities including logical reasoning, mathematical reasoning, relational reasoning, computational thinking, and novel problem solving;

**ii) world knowledge**--knowledge of objects and their properties, actions, events, social agents, facts, and ideas;

**iii) situation modeling**--the dynamic tracking of protagonists, locations, and events as a narrative/conversation unfolds over time; and

**iv) social reasoning**--understanding the social context of linguistic exchanges, including what knowledge is shared, or in "common ground", what the mental states of conversation participants are, and pragmatic reasoning ability.

A simple conversation typically requires the use of all four of these capacities, yet none of them is specific to language use. Below, we provide evidence that these skills rely on non-language-specific processing mechanisms in humans and highlight LLMs' failures in each domain.

#### 4.3.1 Formal reasoning

Language allows us to discuss highly abstract ideas, turn ideas into scientific and philosophical theories, construct logical syllogisms, and engage in formal debates. Unsurprisingly, language is often considered a cornerstone of complex reasoning [e.g., Baldo et al., 2005, 2010, Dennett, 1994, Carruthers, 2002, Grigoroglou and Ganea, 2022, Hinzen, 2013]. However, both neuroscience and studies of LLMs provide evidence that language and formal reasoning dissociate in cognitive systems.

**Humans.** Despite their close interplay, language and reasoning rely on distinct cognitive and neural systems.
A line of work by Monti and colleagues [e.g., Monti et al., 2007, 2009, Coetzee and Monti, 2018] has investigated the brain basis of logical inference in a task where participants had to decide whether a conclusion followed from a premise ("If X, then Y" => "If Y, then not X"). They found that, when contrasted with grammaticality judgments, logical inference recruited a set of frontal and parietal brain regions that are distinct from the language network. These brain regions are known as the multiple demand network [Duncan, 2010, 2013, Duncan et al., 2020], named so because they support many different kinds of demanding cognitive tasks, including logic, mathematical reasoning [Amalric and Dehaene, 2016, 2019, Pinel and Dehaene, 2009, Fedorenko et al., 2013, Monti et al., 2012], physical reasoning [Fischer et al., 2016, Schwettmann et al., 2019, Pramod et al., 2022], and computer code comprehension [Ivanova et al., 2020, Liu et al., 2020]. Human patient studies have provided causal evidence for the role of the multiple demand network in logical reasoning by showing that the amount of damage to these regions correlates negatively with performance on executive function tasks [Gläscher et al., 2010, Woolgar et al., 2010, 2018]. Importantly, the multiple demand network supports reasoning even when the task is presented linguistically [Amalric and Dehaene, 2016, 2019, Ivanova et al., 2020, Monti et al., 2012]--similar to how LLMs receive their prompts.

**LLMs.** Several studies have questioned the extent to which language models can engage in tasks that require formal reasoning, such as math problems expressed in words. Patel et al. [2021] showed that, while models can appear to solve math problems [e.g., the dataset in Miao et al., 2020], they actually rely on heuristics and fail on more complicated problems. Similarly, the creators of GPT-3 show that it performs well on two-digit addition and subtraction but not on more complex tasks, such as three-digit addition or two-digit multiplication [Brown et al., 2020]. Reasoning tests that break common co-occurrence patterns in the input or require multi-step operations also lead to model failure [Chowdhery et al., 2022, Talmor et al., 2020b]. To understand why, Zhang et al. [2022] evaluate a BERT model trained from scratch on a controlled dataset where classical logic rules hold in all cases. They show that the model behaves near-perfectly on novel examples sampled from within the training distribution but fails to generalize to out-of-distribution examples that would be easy to solve with simple logic rules [for related examples, see Lake and Baroni, 2018, Loula et al., 2018].

Footnote 3: A potentially encouraging direction is to train/fine-tune language models on the intermediate computations required to arrive at the correct answer, the so-called "mental scratchpad" [Nye et al., 2021, Recchia, 2021]. However, because the content of the mental scratchpad in these examples is not linguistic, we consider these models to be outside the scope of this paper. Further, these models still suffer from the generalization issues we describe above; for instance, when trained to do addition, models trained on 1-8-digit numbers fail to generalize to 10 or more digits.

Overall, evidence from LLMs is consistent with evidence from neuroscience: language and formal reasoning are distinct cognitive capacities relying on distinct processing mechanisms.
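Arithmetic evaluations of the kind reported by Brown et al. [2020] are straightforward to reproduce; below is a minimal sketch of such a harness, using GPT-2 as an openly available model (absolute accuracies will be far lower than GPT-3's, but the two-digit vs. three-digit comparison works the same way):

```python
# Sketch: probing digit-length generalization in arithmetic with a causal LM.
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def model_answer(a: int, b: int) -> str:
    prompt = f"Q: What is {a} plus {b}? A:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=6, do_sample=False,
                             pad_token_id=tok.eos_token_id)
    return tok.decode(out[0, ids.shape[1]:])     # only the continuation

for digits in (2, 3):                            # two- vs. three-digit addition
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    pairs = [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(50)]
    acc = sum(str(a + b) in model_answer(a, b) for a, b in pairs) / len(pairs)
    print(f"{digits}-digit addition accuracy: {acc:.2f}")
```

The pattern of interest is the drop in accuracy as problems move away from frequent patterns in the training data, rather than the absolute numbers.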
#### 4.3.2 World knowledge and commonsense reasoning

After people access the meanings of words and compose them into a semantically coherent whole, they gain access to a wealth of conceptual information. For instance, after reading the sentence "Nabiha took the keys from the table", we can access a wealth of implicit information about that event (what size the keys likely are; what the motion of taking the keys from the table looks like; where the keys are now with respect to Nabiha). Should we expect LLMs to perform the same inferences? Is language knowledge inextricably tied to world knowledge?

**Humans.** Evidence from neuroscience shows a dissociation between linguistic and semantic knowledge. On one hand, individuals with aphasia struggle to produce grammatical utterances and retrieve contextually appropriate words, but their ability to reason about objects and events presented as pictures often remains intact [Varley and Siegal, 2000, Ivanova et al., 2021b, Benn et al., 2021]. On the other hand, individuals who suffer from semantic dementia, a neurodegenerative disorder affecting primarily the anterior temporal lobes, struggle with world knowledge tasks (e.g., knowing that a zebra has stripes) regardless of whether the stimuli are presented as words or as pictures [Patterson et al., 2007]. Thus, despite the tight coupling between language and world knowledge that is required for using language in real life, they rely on distinct neural circuits.

**LLMs.** As discussed in Section 4.1, language models acquire a wealth of world knowledge contained in word co-occurrence patterns. However, world knowledge in LLMs suffers from two major shortcomings. First, it is brittle. Kassner and Schutze [2020] show that BERT can be tricked by prepending a distractor prime to a question, a technique they call "mispriming". For instance, performance might degrade if the question were preceded by a distractor: "Boston? The capital of Texas is ____." Now, instead of saying Austin, the model might instead supply the contextually salient Boston. Moreover, LLMs often generate inconsistent outputs: when prompted with "Seinfeld premiered on ____" and "Seinfeld originally aired on ____", they might provide the names of different TV networks, despite the fact that the prompts have the same semantic content [Elazar et al., 2021; see also Ravichander et al., 2020]. Misra et al. [2022] show another interesting failure of even advanced models like GPT-3 on a task where they are asked to reason about the properties of novel objects. Models generally learn properties of objects, like the fact that robins can fly and penguins cannot. They can also seemingly generalize to novel objects, correctly giving high scores to "A wug is a robin. Therefore, a wug can fly." But this knowledge is brittle, and even highly performant models fail on "A wug is a robin. A dax is a penguin. Therefore, a dax can fly.", misled by the distractor.

Second, world knowledge in LLMs is biased. Gordon and Van Durme [2013] note that learning about the world from language corpora is challenging because much of world knowledge is implied: people are much more likely to communicate new or unusual information than commonly known facts. Indeed, language models have impaired knowledge of domains that are underreported, such as basic shape knowledge [e.g., "the wheels are round"; Lucy and Gauthier, 2017, Utsumi, 2020, Chersoni et al., 2021] and object size knowledge [e.g., "a table is smaller than an airplane"; Talmor et al., 2020, Liu et al., 2022b].
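The mispriming effect described above is easy to probe with any off-the-shelf masked language model; here is a minimal sketch using BERT (whether the distractor actually wins depends on the model and prompt):

```python
# Sketch: probing "mispriming" (Kassner & Schutze, 2020) with a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["The capital of Texas is [MASK].",
               "Boston? The capital of Texas is [MASK]."]:
    best = fill(prompt)[0]                      # top-ranked filler
    print(f"{prompt!r} -> {best['token_str']} (p={best['score']:.2f})")
```

A systematic study would vary the primes and target facts and measure how often the distractor displaces the correct answer.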
These shortcomings arise from the fact that these models are trained to extract statistical information about words in text, rather than a set of stable, consistent, and complete facts about the world. Any output generated by LLMs will be biased accordingly. The world knowledge gaps become particularly prominent when it comes to commonsense reasoning tasks, which often require using information that is not explicitly stated in the training data. A number of datasets have been developed for testing commonsense reasoning in language models [Huang et al., 2019, Levesque et al., 2012, Zellers et al., 2018], and, while models can succeed in some cases, they fail on more rigorous tests. For example, Elazar et al. [2021a] show that models cannot generalize on Winograd schema tasks [Levesque et al., 2012] that require commonsense reasoning about the world. For instance, when presented with a sentence like "The lawyer asked the witness a question, but he was reluctant to repeat it.", most humans would agree that "he" refers to the lawyer and not the witness, but the model performs at chance. What is worse, Elazar and colleagues show that some past studies reporting model success on commonsense reasoning relied on flawed evaluation techniques, which means that, going forward, we should be exceedingly careful in distinguishing between statistical knowledge and true world knowledge in LLMs.

#### 4.3.3 Situation modeling

People can easily follow the plot of a story that spans multiple chapters or, sometimes, multiple book volumes. We can also have a three-hour conversation with a friend, and the next day the friend will expect us to remember most of what was said. We accomplish this impressive feat not by having a dedicated memory slot for every word that we read or heard, but by abstracting linguistic information away into a situation model--a mental model of entities, relations between them, and the sequence of states they have been in [Van Dijk et al., 1983]. In addition to tracking long contexts, a situation model allows us to seamlessly integrate linguistic and non-linguistic information, as in the sentence "Can you pass me that? <points at a plate>" [Jackendoff, 2002]. Given the importance of situation modeling for language use, should we treat it as part of formal linguistic competence?

**Humans.** As discussed in Section 2.2.1, the language network is sensitive to linguistic regularities that apply to units of different 'grain size', from phonemes to morphemes to words to phrases and clauses. Curiously, it does not appear to be sensitive to structure above the clause level (e.g., Blank and Fedorenko, 2020, Jacoby and Fedorenko, 2020, Lerner et al., 2011; see also Yeshurun et al., 2017). This response profile suggests that the language network is distinct from downstream processes that aggregate phrase- and sentence-level meanings into a coherent whole. The fact that people's memory for precise linguistic forms is relatively poor [e.g., Gurevich et al., 2010, Potter and Lombardi, 1998] aligns well with the idea that the representations passed on by the language system to downstream systems are abstract and semantic in nature. A likely candidate for integrating these abstract representations is the so-called default network [e.g., Blank and Fedorenko, 2020, Ferstl and von Cramon, 2002, Kuperberg et al., 2000, Lerner et al., 2011, Simony et al., 2016; for a general review of the default network, see Buckner and DiNicola, 2019].
Crucially, the default network builds situation models for both linguistic and non-linguistic narratives [Baldassano et al., 2017, 2018], indicating that situation modeling is not a language-specific skill.

Footnote 4: Recent work has suggested that the default network may consist of two distinct interdigitated sub-networks [Braga et al., 2019, Deen and Freiwald, 2022, DiNicola et al., 2020]. One of these sub-networks appears to correspond to the theory of mind network [Saxe and Kanwisher, 2003] discussed in Section 4.3.4 below. The exact contributions of the other sub-network remain debated, with different proposals linking its functions to episodic projection [placing oneself into the past, when remembering things, or into the future, when imagining things; Buckner et al., 2008], scene construction and situation modeling [Hassabis and Maguire, 2009], or spatial cognition in general [Deen and Freiwald, 2022].

**LLMs.** Although there is some evidence that LLMs track internal representations of entities' states [Li et al., 2021], they are, by design, incapable of tracking information over long contexts. As of today, their input window is typically between 512 and 2,048 tokens--nowhere near enough to represent a full conversation, let alone a book (models with "long" context windows can cover a book chapter; e.g., Beltagy et al., 2020, Guo et al., 2021). Furthermore, LLMs struggle to accurately build situation models even over short spans of text: for instance, their outputs can refer to non-existent discourse entities ["Arthur doesn't own a dog. The dog is brown."; Schuster and Linzen, 2022]. And, of course, models trained exclusively on text strings are, by design, incapable of using this knowledge to refer to real-world entities (Bender et al., 2021; Bender and Koller, 2020; Bisk et al., 2020), meaning that they cannot use language in a physical environment the way humans do. Thus, LLMs in their current form struggle to perform an essential feat of language comprehension: integrating incoming language information into a general, multimodal, dynamically evolving situation model.

#### 4.3.4 Social reasoning (pragmatics and intent)

"Water!" Wittgenstein famously used single-word utterances like this one to show that linguistic meaning radically depends on context. Although its literal interpretation is simply a reference to a physical entity, the intended meanings are much more varied. Is the word being gasped by a thirsty person in the desert? By a hiker warning his friend of a hidden stream? An impatient diner talking to a waiter? Work in cognitive science and linguistics has come to recognize that these kinds of grounded, context-dependent aspects of language are not just peripheral but a central part of human language production and understanding (Bloom, 2002; Clark, 1996; Frank and Goodman, 2012; Grice, 1969, 1975; Slobin, 1996; Wilson and Sperber, 2002; Tomasello, 2010; Christiansen and Chater, 2016). The process of inferring the intended meaning of an utterance beyond its literal content is known as pragmatics (Levinson, 1983). These processes likely engage a variety of cognitive mechanisms (Andres-Roqueta and Katsos, 2017; Levinson, 2000; Paunov et al., 2022). Here, we focus on one core capacity required for pragmatics: social reasoning.
**Humans.** A wealth of neuroscientific evidence shows that the human brain has dedicated machinery for processing social information [e.g., Adolphs, 1999, 2009; Deen et al., 2015; Isik et al., 2017; Kanwisher et al., 1997; Lee Masson and Isik, 2021; Saxe, 2006; Tarhan and Konkle, 2020; Walbrin et al., 2018]. Perhaps most relevant to the current discussion is the theory of mind network, a set of brain regions that are engaged when a person attempts to infer somebody's mental state (Fletcher et al., 1995; Gallagher et al., 2000; Jacoby et al., 2016; Saxe and Kanwisher, 2003; Saxe et al., 2006; Saxe and Powell, 2006).

The specific contributions of the theory of mind network to language understanding can be divided into two broad categories. First, just like other functionally specialized brain modules, it is engaged when processing semantic content that is specifically related to its domain (see Section 4.3.2): narratives that require inferring the mental states of the characters engage the theory of mind network regardless of whether the actual stimuli are texts or movies (Jacoby et al., 2016; Paunov et al., 2022), and texts that require inferring the characters' intentions evoke greater activity than those that do not (Fletcher et al., 1995; Ferstl and von Cramon, 2002; Saxe and Powell, 2006). Second, the theory of mind network is engaged more strongly during nonliteral language comprehension, such as jokes, sarcasm, indirect speech, and conversational implicature (Hauptman et al., 2022; Spotorno et al., 2012; Feng et al., 2017, 2021; Jang et al., 2013; van Ackeren et al., 2012; see Hagoort and Levinson, 2014, for a review)--in other words, in situations where understanding the meaning of an utterance requires inferring the intentions of the speaker. Thus, successful language understanding and use rely on our broader, non-language-specific social inference skills.

**LLMs.** Recent versions of OpenAI's GPT-3 models show a markedly improved capacity to interpret non-literal utterances, such as metaphors and polite deceit, suggesting that they can reach human or near-human performance on at least some pragmatic tasks (Hu et al., 2022a). That said, today's LLMs are still unable to interpret sarcasm or complete jokes (Hu et al., 2022a); perhaps even more importantly, they struggle on theory of mind tasks, which require inferring the intentions behind others' actions (Sap et al., 2022). Thus, in agreement with the neural evidence, linguistic and social/pragmatic skills dissociate in LLMs.

Moreover, LLMs themselves lack communicative intent (Shanahan, 2022; Bender et al., 2021). The closest they come to intentionality is modeling a document-specific distribution of language patterns, which can result in generated strings that are overall consistent with a particular person/agent (Andreas, 2022), but the intent behind these strings is still missing. Globally speaking, these models have nothing to say. Nor should we expect them to: LLMs' training objective is maximizing next-/masked-word predictive accuracy, not generating utterances that allow them to achieve specific goals in the world.

A consequence of the lack of communicative intent is that models' attempts to generate long stretches of text often eventually degrade. Consider the example in Section 3.2, in which we asked GPT-3 to complete a prompt. It was able to generate a mostly plausible continuation for a sentence or two.
But, when prompted to continue further after the same prompt, it started saying things that were false or misleading (e.g., "GPT-3 is a recurrent neural network (RNN).") and, after enough tokens, began to ramble. For instance, it gave a caption for a non-existent figure:

Figure 2: Sample outputs from GPT-3. Top left: "It was the best of times, it was the worst of times." Top right: "The moon is made of green cheese." Bottom left: "The man saw a dog going into a house and went to the store for some milk." Bottom right: "I am here today because my father was not there yesterday."

Although all of these sentences are grammatical (and actually obey a sensible scheme for a 4-paneled figure in an academic paper), GPT-3 has no intention, no broader meaning to communicate, and so, at some point sufficiently removed from the human-generated prompt, it will start becoming incoherent. Further, even when given explicit instructions, LLMs can be easily distracted, as demonstrated by the example in Figure 3.

Figure 3: GPT-3 is unable to infer and maintain the intent underlying an interaction, making it vulnerable to so-called "prompt injection" attacks. Example by Riley Goodside: [https://threadreaderapp.com/thread/1569128808308957185.html](https://threadreaderapp.com/thread/1569128808308957185.html).

Attempts to align the model's output with the user's intent often require adding an objective other than language modeling [e.g., Ouyang et al., 2022, InstructGPT], and even those are imperfect. Overall, LLMs' inability to infer and maintain the goals of the interaction means that their outputs will often be meaningless and/or mis-specified despite high linguistic well-formedness.

### Interim conclusions

Real-life language use requires integrating language into a broader cognitive framework. In this section, we have shown that many capacities required for language comprehension and production are, in fact, not specific to language and are supported by distinct brain circuits. In line with this distinction, models that master many syntactic and distributional properties of human language still cannot use language in human-like ways. In particular, they struggle when engaging in formal reasoning, fail to acquire comprehensive and consistent world knowledge, cannot track objects, relations, and events in long inputs, and are unable to generate utterances intentionally or infer communicative intent from linguistic input. In other words, their functional language competence remains in its infancy.

This is not to say that LLMs can only ever master formal linguistic competence. Some non-linguistic abilities that contemporary LLMs succeed at include various forms of general pattern completion ("a, ab, abc, ?"), style transfer, and long- and short-term memory. Nevertheless, their failure to master the four functional competence domains described in this section is quite notable: in line with evidence from cognitive neuroscience, LLMs' behavior highlights the difference between being good at language and being good at thought. The stark dissociation between formal and functional language competence in both humans and contemporary LLMs raises a question: is it reasonable to model these diverse capabilities using a single system and a single objective function? We turn to this question next.

## 5 Building models that talk and think like humans

The distinction between formal and functional competence has several important implications for building better models of real-life language use.
Here, we discuss three ingredients required to build models that talk and think like humans: modularity, curated data combined with diverse objective functions, and separate benchmarks for formal and functional competence.

### Modularity

In this paper, we have advanced the thesis that functional competence and formal linguistic competence are distinct capabilities, recruiting different machinery in the human brain. More broadly, most biological intelligent systems--including both human and non-human minds--are highly modular [e.g., Carruthers, 2002, 2005, Cosmides and Tooby, 1994, Fedorenko et al., 2011, Kanwisher et al., 1997, Meunier et al., 2010]. What can this modularity tell us about how to build better, more human-like models? We argue that future language models can master both formal and functional linguistic competence by establishing a division of labor between the core language system and components for other cognitive processes, such as formal logic and social reasoning. We see at least two ways to implement this division of labor: explicitly building modularity into the architecture of the system (we call this Architectural Modularity) or naturally inducing modularity through the training process, both through the training data and the objective function (we call this Emergent Modularity).

**Architectural Modularity** has a long history; it involves stitching together separate components, perhaps with quite specialized architectures [e.g., Bottou and Gallinari, 1990, Ronco and Gawthrop, 1997]. More recent examples include a transformer language model paired with a separate memory module [e.g., Borgeaud et al., 2022, d'Autume et al., 2019, Liu et al., 2022a] or a model for visual question answering, which includes a language module, a vision module, and a reasoning module [e.g., Yi et al., 2018, Mao et al., 2019, Andreas et al., 2016, Hudson and Manning, 2019, Johnson et al., 2017]. Such modular models are capable of achieving high task performance, are more efficient (i.e., can be trained on smaller datasets and have lower memory demands), and show high generalizability (i.e., perform well on datasets with previously unseen properties). The modules of such models can be trained separately or together, similarly to how humans can flexibly combine different cognitive skills when learning to perform novel complex tasks.

The **Emergent Modularity** approach involves training models end-to-end (similarly to contemporary LLMs) but allows modularity to develop naturally within the model. Modular structure has been shown to spontaneously emerge in some end-to-end neural network systems in domains other than language [e.g., Yang et al., 2019, Dobs et al., 2022], suggesting that modularity may constitute an optimal solution to many complex tasks. For this approach to be successful, the model architecture must allow individual, specialized modules to develop within the model. Transformers, the most popular architecture today, satisfy this condition to some extent by allowing different attention heads to attend to different input features [e.g., Manning et al., 2020, Vaswani et al., 2017b, Vig and Belinkov, 2019]; certain approaches promote modularization even more explicitly, e.g., by endowing transformers with a mixture-of-experts architecture [Goyal et al., 2022, Kudugunta et al., 2021, Zhou et al., 2022].
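To illustrate the mixture-of-experts idea, here is a deliberately tiny sketch of an MoE feed-forward layer of the kind that could replace the standard feed-forward block in a transformer; production systems (e.g., those cited above) add sparse routing, load balancing, and much larger experts:

```python
# Sketch: a tiny (dense) mixture-of-experts layer in PyTorch.
# A learned gate decides, per token, how much each expert contributes --
# one simple way to let specialized "modules" emerge during training.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4, d_ff: int = 128):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        weights = self.gate(x).softmax(dim=-1)       # (batch, seq, n_experts)
        expert_outs = torch.stack([e(x) for e in self.experts],
                                  dim=-1)            # (batch, seq, d_model, n_experts)
        return (expert_outs * weights.unsqueeze(-2)).sum(dim=-1)

layer = TinyMoE()
tokens = torch.randn(2, 5, 64)          # a toy batch of token representations
print(layer(tokens).shape)              # torch.Size([2, 5, 64])
```

In sparse variants, each token is routed only to its top-scoring expert(s), which is what makes large mixture-of-experts models computationally practical.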
A modular language model architecture is much better aligned with the fact that real-life language use is a complex capability, requiring both language-specific knowledge (formal competence) and various non-language-specific cognitive abilities (functional competence). Whether built-in or induced to emerge, modularity can lead the models to mirror the functional organization of the human brain and, consequently, make their behavior much more humanlike.

### Curated data and diverse objective functions

We argue that the approach that has dominated the field for the last five years--training LLMs on large "naturalistic" text corpora from the web with a words-in-context prediction objective--is insufficient to induce the emergence of functional linguistic competence. First, this approach is biased toward low-level input properties, leading to unstable model behavior that depends on the particular way a prompt is phrased. Second, information contained in regular text corpora does not faithfully reflect the world: for instance, it is biased toward unusual events and contains little commonsense knowledge. Third, and perhaps most crucially, it incentivizes the models to learn patterns in the text (at various levels of abstraction) but limits their ability to generalize out-of-distribution (see Section 4.3 for example failures caused by all these issues). Finally, even in cases where LLMs succeed, the amount of naturalistic data required for non-linguistic capacities to emerge is ridiculously large (Wei et al., 2022), making this approach vastly inefficient (and environmentally irresponsible).

Today, we already see examples where adjusting the training data and/or the objective function yields improved results. One such example is the math model Minerva (Lewkowycz et al., 2022). The system is built on the PaLM language model, a transformer LLM similar to the ones we discussed here. But what makes Minerva successful on math problems is that it was fine-tuned on a specialized math corpus, with special processing to make sure the mathematical notation is machine-readable (with some additional tricks, such as chain-of-thought prompting; Wei et al., 2022). Examples of LLMs that benefit from an additional objective function are InstructGPT (Ouyang et al., 2022) and ChatGPT5, models that build upon a large GPT-style LLM but are additionally trained using human feedback. In particular, they use reinforcement learning to increase the likelihood of generating answers that a human might label as 'good', leading to outputs that humans indeed consider to be high-quality, at least at first glance (see the sketch below).

Footnote 5: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)

We believe that a model that succeeds at real-world language use would include---in addition to the core language component---a successful problem solver, a grounded experiencer, a situation modeler, a pragmatic reasoner, and a goal setter. In a way, we therefore arrive at the same conclusion as Turing (1950): a model that masters language use, not just the rules and patterns of natural language, has to be a general intelligence model.
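To make the objective-mixing idea above concrete before moving on, here is a deliberately simplified PyTorch sketch of a two-part loss: a words-in-context prediction term plus a human-feedback term. The tensor shapes, the weighting `lam`, and the REINFORCE-style surrogate are illustrative assumptions only; systems like InstructGPT rely on considerably more elaborate RL machinery (e.g., PPO with a KL penalty toward the original model).

```python
import torch
import torch.nn.functional as F

def combined_loss(lm_logits, targets, sampled_logprobs, rewards, lam=0.1):
    """Schematic two-part objective: (1) standard language modeling,
    (2) a term that raises the likelihood of sampled continuations
    that a frozen reward model scored highly."""
    # (1) words-in-context prediction: cross-entropy over the vocabulary
    lm_loss = F.cross_entropy(lm_logits.flatten(0, 1), targets.flatten())
    # (2) REINFORCE-style surrogate: push up log-probs of well-rewarded samples
    feedback_loss = -(rewards.detach() * sampled_logprobs).mean()
    return lm_loss + lam * feedback_loss

# Toy shapes: batch of 4, sequence of 8, vocabulary of 100
lm_logits = torch.randn(4, 8, 100, requires_grad=True)
targets = torch.randint(0, 100, (4, 8))
sampled_logprobs = torch.randn(4, requires_grad=True)  # log p(sampled answer)
rewards = torch.rand(4)                                # reward-model scores
combined_loss(lm_logits, targets, sampled_logprobs, rewards).backward()
```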
Furthermore, based on the functional organization of the brain, the machinery required to simulate intelligence will include both domain-general components (such as the multiple demand network; Duncan, 2010; Fedorenko et al., 2013) and domain-specific components (such as brain regions specialized for intuitive physics (Fischer et al., 2016), navigation (Epstein et al., 2017), and social reasoning (Section 4.3.4)). This modularity could be baked in by training modular models (see Section 5.1) on a mixture of carefully curated datasets using diverse objective functions (e.g., how ChatGPT combines a pure language modeling objective with an additional human feedback objective).

### Separate benchmarks for formal and functional competence

To assess progress on the road toward building models that use language in humanlike ways, it is important to develop benchmarks that evaluate both formal and functional linguistic competence. This distinction can reduce the confusion that arises when discussing these models by combating the "good at language -> good at thought" and the "bad at thought -> bad at language" fallacies. Several existing benchmarks already evaluate formal linguistic competence in LLMs (e.g., Gauthier et al., 2020; Warstadt et al., 2020) and can be complemented by additional tests of core linguistic features: hierarchy and abstraction (Section 3.3). At present, no single benchmark for evaluating functional linguistic competence exists, and datasets that target its subsets, like commonsense reasoning (e.g., WinoGrande from Sakaguchi et al., 2019; HellaSwag from Zellers et al., 2019), can often be "hacked" by LLMs by leveraging flawed heuristics (Elazar et al., 2021). This issue is likely exacerbated in large-scale heterogeneous datasets like BIG-bench (Srivastava et al., 2022). However, it is certainly possible to disentangle word-co-occurrence-based hacks and true reasoning capabilities, as evidenced, for instance, by the example in Section 4.2. Overall, we believe that developing comprehensive, separate assessments of formal linguistic competence and different aspects of functional linguistic competence in language models will enable the field to develop models that excel at both. Eventually, models should be able to solve complex tasks that require all aspects of linguistic competence, but at this still relatively early stage of building functionally competent models, it is important to target particular skills known to be separable in humans, so as to be able to interpret the models' failures.

## 6 General Conclusion

The discourse around the latest crop of language models has consisted of a curious mix of overclaiming and underclaiming [Bowman, 2022]. On the one hand, hyperbolic and fantastical articles in the press have claimed that models like GPT-3 have solved language and will make human writers redundant. On the other hand, a steady stream of articles within the academic literature have pointed out the many failures of LLMs: for instance, they get tripped up on tasks that require abstract reasoning and start to meander when the discourse gets too long. Here, we have put these seemingly inconsistent reactions in dialogue with prior and ongoing work in computational linguistics, cognitive science, and neuroscience. In particular, we argue that LLMs are remarkably successful on tasks that require a particular type of structural and statistical linguistic competence--formal linguistic competence.
Although their performance is not yet fully human-like, these models achieve an impressive degree of success in representing and using hierarchical relationships among words and building representations that are sufficiently abstract to generalize to new words and constructions. As such, these LLMs are underused in linguistics and cognitive science as candidate models of human language processing [see also Baroni, 2021, Linzen, 2019, Linzen and Baroni, 2021, Pater, 2019, Potts, 2020, Warstadt and Bowman, 2022].

We also review some of the LLMs' failures on tasks that reflect real-life language use, such as reasoning, while highlighting that the capabilities these tasks require are fundamentally distinct from formal language competence and rely on specialized machinery in the human brain. In line with Turing [1950], we conclude that a model that excels at real-life language use would need to be an AGI, and argue that a human-like AGI cannot be reached simply by getting really good at predicting upcoming words.

The many failures of LLMs on non-linguistic tasks do not undermine them as good models of language processing. After all, the set of areas that support language processing in the human brain also cannot do math, solve logical problems, or even track the meaning of a story across multiple paragraphs. If we take the human mind and brain--a good example of generalized intelligence--as a guide, we might expect that future advances in AGI will depend on combining language models with models that represent abstract knowledge and support complex reasoning, rather than expecting a single model (trained with a single word prediction objective) to do it all.

To those who have argued that most interesting aspects of human language cannot be learned from data alone, we say that LLMs compellingly demonstrate the possibility of learning complex syntactic features from linguistic input (even if, as of now, much more input is required than a typical child sees). To those who criticize LLMs for their inability to do complex arithmetic or to fully reason about the world, we say: give language models a break. Given the strict separation of language and non-linguistic capabilities in the human mind, we should evaluate these capabilities separately, recognizing success in formal linguistic competence even when non-linguistic capabilities lag behind. Finally, to those who are looking to language models as a route to AGI, we suggest that, instead of or in addition to scaling up the size of the models [Kaplan et al., 2020], more promising solutions will come in the form of modular architectures--pre-specified or emergent--that, like the human brain, integrate language processing with additional systems that carry out perception, reasoning, and planning.

## Acknowledgements

For helpful conversations, we thank Jacob Andreas, Alex Warstadt, Dan Roberts, the attendees of the Harvard LangCog journal club, and the attendees of the UT Austin Department of Linguistics SynSem seminar.

## Funding Sources

AI was supported by funds from the Quest Initiative for Intelligence. EF was supported by NIH awards R01-DC016607, R01-DC016950, and U01-NS121471 and by research funds from the Brain and Cognitive Sciences Department, McGovern Institute for Brain Research, and the Simons Foundation through the Simons Center for the Social Brain. KM acknowledges funding from NSF Grant 2104995.

## Conflicts of Interest

The authors declare no Conflicts of Interest.
2302.07069
Stable maps of curves and algebraic equivalence of 1-cycles
We show that algebraic equivalence of images of stable maps of curves lifts to deformation equivalence of the stable maps. The main applications concern $A_1(X)$, the group of 1-cycles modulo algebraic equivalence, for smooth, separably rationally connected varieties. If $K/k$ is an algebraic extension, then the kernel of $A_1(X_k)\to A_1(X_K)$ is at most ${\mathbb Z}/2{\mathbb Z}$. If $k$ is finite, then the image equals the subgroup of Galois invariant cycles. This paper replaces Sections~2--3 of 2211.15915.v.1 and Sections~2--3 of 2211.15911.v.1. The other Sections are retained in the revised versions of these papers.
János Kollár, Zhiyu Tian
2023-02-14T14:32:00Z
http://arxiv.org/abs/2302.07069v1
# Stable maps of curves and algebraic equivalence of 1-cycles

###### Abstract.

We show that algebraic equivalence of images of stable maps of curves lifts to deformation equivalence of the stable maps. The main applications concern \(A_{1}(X)\), the group of 1-cycles modulo algebraic equivalence, for smooth, separably rationally connected varieties. If \(K/k\) is an algebraic extension, then the kernel of \(A_{1}(X_{k})\to A_{1}(X_{K})\) is at most \(\mathbb{Z}/2\mathbb{Z}\). If \(k\) is finite, then the image equals the subgroup of Galois invariant cycles.

For a scheme \(X\), let \(\operatorname{Mor}(\operatorname{Curves},X)\) denote the stack of morphisms from proper, nodal curves to \(X\). If two morphisms \(\pi_{i}:C_{i}\to X\) are deformation equivalent (see Definition 12), then the image 1-cycles \((\pi_{i})_{*}[C_{i}]\) are algebraically equivalent. The converse, however, rarely holds. Nonetheless, we show that if \(X\) is smooth, then algebraic equivalence of the image cycles lifts to deformation equivalence of morphisms. We use \(A\uplus B\) to indicate that all intersection points \(A\cap B\) are nodes; see Notation 16.

**Theorem 1**.: _Let \(X\) be a smooth, projective variety over an algebraically closed field \(K\). Let \(\pi_{i}:C_{i}\to X\) (for \(i\in I\)) be finitely many morphisms of nodal curves to \(X\) such that the \((\pi_{i})_{*}[C_{i}]\) are algebraically equivalent to each other. Then there is a nodal deformation equivalence (as in Definition 12)_

\[\begin{array}{ccccc}C_{i}\uplus R_{i}&\subset&S&\stackrel{\pi}{\longrightarrow}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{1.1}\]

_such that_

* (1.2) \((\pi|_{C_{i}}:C_{i}\to X)\cong(\pi_{i}:C_{i}\to X)\) _for_ \(i\in I\)_, and_
* (1.3) _the_ \((\pi|_{R_{i}}:R_{i}\to X)\) _are isomorphic to each other for_ \(i\in I\)_._

That is, the algebraic equivalence is visible at the level of maps, not just at the level of cycles. For arithmetic applications, the following version is quite useful.

**Theorem 2**.: _Let \(k\) be a perfect field with algebraic closure \(K\), and \(k\subset L\subset K\). Let \(X\) be a smooth, projective, \(k\)-variety and \(\pi_{i}:C_{i}\to X_{L}\) (for \(i\in I\)) morphisms of nodal curves defined over \(L\), such that the \((\pi_{i})_{*}[C_{i}]\) are algebraically equivalent to each other over \(K\). Assume also that \(\chi(C_{i},\mathcal{O}_{C_{i}})\) is independent of \(i\in I\)._

_Then one can choose (1.1) such that, in addition to (1.2-3),_

* (2.1) _the_ \(\pi|_{C_{i}\uplus R_{i}}:C_{i}\uplus R_{i}\to X_{L}\) _are defined over_ \(L\) _for_ \(i\in I\)_, and_
* (2.2) _the_ \((\pi|_{R_{i}}:R_{i}\to X_{L})\) _are defined over_ \(k\)_._

**Remark 3**.: There are some unusual features of Theorem 2. First, we assume only that the \((\pi_{i})_{*}[C_{i}]\) are algebraically equivalent over \(K\). Correspondingly, the diagram (1.1) itself is defined only over \(K\). It is only the fibers over the points \(b_{i}\) that are defined over \(L\). Second, the \(R_{i}\) are defined over the smallest possible field, \(k\). Third, the assumption on \(\chi(C_{i},\mathcal{O}_{C_{i}})\) is necessary, see Example 9.

From Theorem 2 we cannot conclude that the \(C_{i}\uplus R_{i}\) are algebraically equivalent over \(k\) or \(L\), since the families \(S\to B\) are only defined over \(K\). However, we will be able to prove algebraic equivalence for certain varieties, which we define next.

**Definition 4**.: Let \(X\) be a smooth, proper variety over an algebraically closed field.
Given any \(g:\mathbb{P}^{1}\to X\), the pull-back \(g^{*}T_{X}\) decomposes as \(\sum_{i}\mathcal{O}_{\mathbb{P}^{1}}(a_{i})\). The map \(g\) is called _very free_ if \(a_{i}>0\) for every \(i\), and _almost very free_ if \(a_{i}\geq 0\) for every \(i\), with strict inequality for all but one \(a_{i}\). \(X\) is _separably rationally connected,_ or _src,_ if there is a very free \(g\); see [11, Sec.IV.3] or [1, Sec.7] for many equivalent definitions. We say that \(X\) is _separably rationally connected in codimension 1,_ or _src in codimension 1,_ if there is an almost very free \(g:\mathbb{P}^{1}\to X\). Thus src implies src in codimension 1.

The main new examples are varieties that admit a morphism to a curve \(X\to B\) whose general fibers are smooth and separably rationally connected. These 2 classes give all examples in characteristic 0. There are a few more in positive characteristic: general fibers of \(X\to B\) could be singular, but contain very free rational curves in their smooth locus. There are also rationally connected (but not src) examples, studied in [10]. Being src and being src in codimension 1 are both open conditions in smooth families of projective varieties.

For varieties that are src in codimension 1, we turn the deformation equivalences in Theorem 2 into algebraic equivalences over \(L\). This leads to a series of arithmetic consequences for 1-cycles. Further applications to the Brauer-Manin obstruction for zero cycles on geometrically rational surfaces over global function fields and the coniveau filtration are discussed in [16, 17].

**5** (Algebraic equivalence of 1-cycles).: Let \(X\) be a proper scheme over a field \(k\). We use \(A_{d}(X)\) for the group of \(d\)-dimensional cycles modulo _algebraic equivalence,_ the latter denoted by \(Z_{1}\sim_{a}Z_{2}\). (For an introduction, see [12, Sec.10.3], where this group is denoted by \(B_{d}(X)\).) Let \(L/k\) be an algebraic field extension. We have natural maps \(A_{d}(X_{k})\to A_{d}(X_{L})\). The kernel of this map is \(\deg(L/k)\)-torsion. If \(L/k\) is Galois, then the image is contained in the Galois-invariant subgroup \(A_{d}(X_{L})^{\operatorname{Gal}(L/k)}\). Thus our interest is the kernel and cokernel of the map

\[A_{1}(X_{k})\to A_{1}(X_{L})^{\operatorname{Gal}(L/k)}. \tag{5.1}\]

The following 3 theorems are proved in Section 7. The first one describes the kernel of (5.1).

**Theorem 6**.: _Let \(X_{k}\) be a smooth, projective variety over a perfect field \(k\) with algebraic closure \(K\). Assume that \(X_{K}\) is src in codimension 1. Then the kernel of the natural map \(A_{1}(X_{k})\to A_{1}(X_{K})\) is either trivial or \(\mathbb{Z}/2\mathbb{Z}\). More precisely,_

* (6.1) _the kernel is trivial if_ \(X_{k}\) _contains an odd degree 0-cycle, and_
* (6.2) _if_ \(Z=\sum d_{i}C_{i}\) _and_ \(Z_{K}\sim_{a}0\)_, then_ \(Z\sim_{a}0\) _iff the index of_ \(X\) _(Definition 49) divides_ \(\chi(Z):=\sum_{i}d_{i}\chi(C_{i},\mathcal{O}_{C_{i}})\)_._

For finite fields (5.1) is an isomorphism.

**Theorem 7**.: _Let \(k\) be a perfect field with algebraic closure \(K\), and \(X\) a smooth, projective \(k\)-variety that is src in codimension 1. Assume that every geometrically irreducible \(k\)-variety has a 0-cycle of degree 1. (For example, \(k\) is finite or pseudo algebraically closed.) Then_

\[A_{1}(X_{k})\to A_{1}(X_{K})^{\operatorname{Gal}(K/k)}\quad\text{is an isomorphism}. \tag{7.1}\]

Let \(k\) be a field, \(C\) a smooth, projective, geometrically irreducible \(k\)-curve. Then \(X:=C\times\mathbb{P}^{1}\) is src in codimension 1.
For \(c\in C(K)\), the line \(\{c\}\times\mathbb{P}^{1}\) gives a class in \(A_{1}(X_{K})^{\operatorname{Gal}(K/k)}\). If (7.1) holds then \(C\) has a 0-cycle of degree 1. So the assumption on \(k\) in Theorem 7 is necessary in all cases.

The third application concerns specialization of 1-cycles.

**Theorem 8**.: _Let \(R\) be a Henselian DVR with perfect residue field. Let \(p:X\to\operatorname{Spec}R\) be a smooth, projective morphism with closed fiber \(X_{0}\) and generic fiber \(X_{g}\). Assume that \(X_{0}\) is src in codimension 1._

_Then the specialization map \(A_{1}(X_{g})\to A_{1}(X_{0})\) is an isomorphism._

Next we give some examples concerning Theorems 2 and 6. From now on we write \(\chi(C):=\chi(C,\mathcal{O}_{C})\) for proper, 1-dimensional schemes.

**Example 9**.: Take \(X=\mathbb{P}^{3}\), choose \(m>0\) and let \(C_{1}\subset\mathbb{P}^{3}\) be a degree \(2m\) curve of genus 1 over \(\mathbb{Q}\) such that every 0-cycle has degree divisible by \(2m\). Let \(C_{2}\) be the union of \(m\) disjoint, conjugate conics, such that again every 0-cycle has degree divisible by \(2m\). Note that \(\chi(C_{1})=0\) and \(\chi(C_{2})=m\). Let \(R\) be any curve and attach it to \(C_{i}\) at \(n_{i}\) nodes, where, necessarily, \(2m\mid n_{i}\). Since \(\chi(C_{i}\uplus R)=\chi(C_{i})+\chi(R)-n_{i}\), we obtain that

\[\chi(C_{1}\uplus R)-\chi(C_{2}\uplus R)\equiv m\mod 2m.\]

The Euler characteristic is deformation invariant, so the \(C_{i}\uplus R\) are not deformation equivalent.

**Example 10**.: In connection with Theorem 6, we give 3 examples computing the kernel of \(A_{1}(X_{k})\to A_{1}(X_{K})\).

(10.1) Let \(Q_{k}\subset\mathbb{P}^{4}\) be a quadric without \(k\)-points. Let \(C_{1}\subset Q_{k}\) be a smooth conic and \(C_{2}\) the union of a conjugate pair of lines. For example, for any \(k\subset\mathbb{R}\) we can take

\[\begin{array}{l}Q=(\sum_{i}x_{i}^{2}=0)\subset\mathbb{P}^{4},\ C_{1}=(x_{3}=x_{4}=0),\quad\text{and}\\ C_{2}=(x_{0}=x_{1}-ix_{2}=x_{3}-ix_{4}=0)\cup(x_{0}=x_{1}+ix_{2}=x_{3}+ix_{4}=0).\end{array}\]

Then \(C_{1}-C_{2}\) is algebraically equivalent to 0 over \(K\), but not algebraically equivalent to 0 over \(k\) by (51), since \(\chi(C_{1})=1\) and \(\chi(C_{2})=2\). Thus the kernel of \(A_{1}(Q_{k})\to A_{1}(Q_{K})\) is \(\mathbb{Z}/2\mathbb{Z}\).

(10.2) Let \(\pi:X\to B\) be a morphism of smooth projective varieties. Assume that \(\dim B=1\) and the generic fiber has odd index. Write the index of \(B\) as \(2^{a}(\text{odd})\). Then the index of \(X\) is also \(2^{a}(\text{odd})\). Let \(Z_{1},Z_{2}\) be 1-cycles on \(X\) that are algebraically equivalent over \(K\). Then the \(Z_{i}\to B\) have the same degree, so \(\chi(Z_{1})\equiv\chi(Z_{2})\mod 2^{a}\) by (51.1). Using (49) we get that the index of \(X\) divides \(\chi(Z_{1})-\chi(Z_{2})\). So, if the generic fiber is src, then \(A_{1}(X_{k})\to A_{1}(X_{K})\) is an injection.

(10.3) Let \(\pi:X\to S\) be a morphism of smooth projective varieties. Assume that \(\dim S=2\) and the generic fiber has odd index. As before, if \(\operatorname{ind}_{S}=2^{a}(\operatorname{odd})\), then \(\operatorname{ind}_{X}=2^{a}(\operatorname{odd})\). Let \(Z_{1},Z_{2}\) be \(1\)-cycles on \(X\) that are algebraically equivalent over \(K\). Then \(\pi_{*}Z_{1}\) and \(\pi_{*}Z_{2}\) are also algebraically equivalent over \(K\). By the adjunction formula, they have the same Euler characteristic. This implies that \(\chi(Z_{1})\equiv\chi(Z_{2})\mod 2^{a}\) by (51.1).
Using (49) we get that the index of \(X\) divides \(\chi(Z_{1})-\chi(Z_{2})\). So, if \(S\) is geometrically rational or ruled, and the generic fiber is src, then \(A_{1}(X_{k})\to A_{1}(X_{K})\) is an injection. **11** (Steps of the proof).: Assume for simplicity that \(C_{1},C_{2}\subset X\) are algebraically equivalent smooth curves of the same genus. By definition, their algebraic equivalence is certified (15) by a deformation \[\begin{array}{ccccc}S_{i}&\subset&S&\stackrel{{\pi}}{{ \longrightarrow}}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{11.1}\] such that \(\pi_{*}[S_{1}]=[C_{1}]+Z\) and \(\pi_{*}[S_{2}]=[C_{2}]+Z\) for some \(1\)-cycle \(Z\). After applying semistable reduction to \(S\to B\), we may assume that all fibers of \(S\to B\) are nodal. Note that in general the \(S_{i}\) need not have irreducible components \(C_{i}^{\prime}\subset S_{i}\) that are isomorphic to the \(C_{i}\). Even if they do, and we write \(S_{i}=C_{i}^{\prime}\cup R_{i}\), the curves \(R_{i}\) and the maps \(\pi_{i}:=\pi|_{R_{i}}:R_{i}\to\operatorname{Supp}Z\) can be very different from each other. The moduli space of morphisms of curves to \(\mathbb{P}^{1}\) was studied by Hurwitz. Similarly, we view the \(R_{i}\) as 'covers' of the same \(\operatorname{Supp}Z\), though the maps \(R_{i}\to\operatorname{Supp}Z\) need not even be finite. In Section 2 we give a combinatorial description of going between various such 'covers' by gluing additional components and deformation equivalences. Next we want to realize the above combinatorial steps as deformation equivalences. The general problem is essentially the following: _Question 11.2_.: Write a reducible curve as \(C_{1}\cup C_{2}\) and construct deformations of each \(C_{i}\). Can they be glued to a deformation of \(C_{1}\cup C_{2}\)? If the \(C_{i}\) are disjoint then yes, but we run into non-flatness at the intersection points \(C_{1}\cap C_{2}\). However, flatness can be restored by adding an extra component. The general framework is discussed in Section 3 and applied to our current setting in Section 4. The main technical result is (31), which replaces (11.1) with another certificate \[\begin{array}{ccccc}C_{i}\uplus R_{i}&\subset&S_{R}&\stackrel{{ \pi_{R}}}{{\longrightarrow}}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{11.3}\] such that now \((\pi_{R}:R_{1}\to X)\cong(\pi_{R}:R_{2}\to X)\). (31) also gives very detailed information on how \(S_{R}\) is obtained from \(S\). This completes the geometric part of the argument, proving Theorem 1. If the \(C_{i}\) are defined over some field \(k\), then we first construct (11.3) over the algebraic closure \(K\), and then in (36) construct another certificate, where \[C_{i}+R_{i}\quad\text{is replaced by}\quad C_{i}+(\text{all Galois conjugates of $R_{i}$}).\] This gives us Theorem 2 and completes the part of the proof that holds for all smooth, projective varieties. The base curve \(B\) of the deformation in (11.3) is geometrically connected, but usually not definable over \(k\), so it says nothing about algebraic equivalence over \(k\). Now we face the following _Question 11.4_.: Let \(Y\) be a \(k\)-variety, and \(Z_{i}\subset Y\) curves such that \([Z_{1}]\) and \([Z_{2}]\) are in the same geometrically connected component of \(\operatorname{Hilb}_{1}(Y)\). Are \([Z_{1}]\) and \([Z_{2}]\) algebraically equivalent over \(k\)? The answer is negative in general, and this is where we need that \(X\) be src in codimension 1. 
We show in (45) that there is a third curve \(T\) such that \([Z_{1}\cup T]\) and \([Z_{2}\cup T]\) are smooth points on the same irreducible component of \(\operatorname{Hilb}_{1}(Y)\) that is also geometrically irreducible. Thus \([Z_{1}\cup T]\) and \([Z_{2}\cup T]\) are algebraically equivalent over \(k\), and so are \([Z_{1}]\) and \([Z_{2}]\). This gives us Theorem 6, while Theorems 7-8 require a more careful understanding of the above procedure.

### Acknowledgments

We thank Jean-Louis Colliot-Thélène, Rahul Pandharipande and Olivier Wittenberg for many helpful and constructive comments. Partial financial support to JK was provided by the NSF under grant number DMS-1901855, and to ZT by NSFC grants No.11890660 and No.11890662.

## 1. Notation and definitions

**Definition 12**.: Let \(X\) be a proper scheme over an algebraically closed field \(K\). A _deformation equivalence_ (of curves mapping to \(X\)) is a diagram

\[\begin{array}{ccccc}S_{i}&\subset&S&\stackrel{\pi}{\longrightarrow}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{12.1}\]

such that

1. \(B\) is a connected curve with smooth marked points \(b_{i}\),
2. \(S\to B\) is flat, proper, of pure relative dimension 1, and
3. \(S_{i}:=S_{b_{i}}\) is the fiber over \(b_{i}\).

The \(b_{i}\) and the intersection points of different irreducible components of \(B\) are the _pivot points_. Such a deformation is called _nodal_ (or 2-nodal) if all fibers are nodal curves. We call a deformation _3-nodal_ if all but finitely many fibers are nodal curves, and the pivot fibers have only nodes (like the 2 coordinate axes \((xy=0)\subset\mathbb{A}^{2}\)) and triple points (like the 3 coordinate axes \((xy=yz=zx=0)\subset\mathbb{A}^{3}\)). Our constructions naturally produce 3-nodal deformations; we then make them 2-nodal at the end using (17).

_Note on terminology._ 3-nodal is not standard. These singularities are sometimes called ordinary triple points, but that name usually refers to planar triple points whose tangent cone is 3 distinct lines.

**13** (Base change).: Let \((B^{\prime},b^{\prime}_{i})\) be another connected curve with smooth marked points, and \(r:B^{\prime}\to B\) a morphism such that \(r(b^{\prime}_{i})=b_{i}\). By pull-back we get a new deformation equivalence

\[B^{\prime}\gets S^{\prime}:=B^{\prime}\times_{B}S\to B^{\prime}\times X.\]

If \(S\to B\) is nodal, then so is \(S^{\prime}\to B^{\prime}\). If \(S\to B\) is 3-nodal, and the fibers of \(S\to B\) over the images of the pivot points of \(B^{\prime}\) are 3-nodal, then \(S^{\prime}\to B^{\prime}\) is also 3-nodal. (For us \(B^{\prime}\to B\) will usually come from a general choice, so the extra condition will not be a problem.) During the proofs we have to make many base changes, and usually we use \(B\) to denote all base curves.

**14** (Improving the singularities of \(S\to B\)).: Let \(p:S\to B\) be a flat, projective morphism from a surface to an irreducible curve. The semistable reduction theorem [13] says that there is a base change \(B^{\rm ss}\to B\), and a resolution of singularities \(S^{\rm ss}\to B^{\rm ss}\times_{B}S\), such that the resulting \(S^{\rm ss}\to B^{\rm ss}\) is smooth over generic points, and the singular fibers are all nodal. If \(B\) is reducible, we could apply this to each irreducible component \(S_{i}\to B_{i}\). However, the fibers of \(S^{\rm ss}_{i}\to B^{\rm ss}_{i}\) and of \(S^{\rm ss}_{j}\to B^{\rm ss}_{j}\) over \(B_{i}\cap B_{j}\) are usually not isomorphic.
So the families \(S^{\rm ss}_{i}\to B^{\rm ss}_{i}\) cannot be glued together into a flat family over a connected curve. Nonetheless, if we have a deformation equivalence as in (12.1) where all but finitely many fibers are nodal, then, after a suitable base change \(B^{\prime}\to B\), we can get a new family \(S^{\prime}\to B^{\prime}\) where all fibers over smooth points of \(B^{\prime}\) are nodal and the fibers over the pivot points are unchanged. For this reason, we can and will ignore possible badly singular fibers over smooth points of \(B\).

**Definition 15**.: Let \(X\) be a proper scheme over a field \(k\) and \(Z\) a 1-cycle on \(X\). We say that the deformation (12.1) _certifies_ that \(Z\sim_{a}0\) iff \(\pi_{*}[S_{1}]-\pi_{*}[S_{2}]=Z\), as 1-cycles on \(X\). By definition, if \(Z\sim_{a}0\), there is always a certificate \(B\gets S\to B\times X\) where \(B\) is a smooth, irreducible curve. We can apply stable reduction to \(B\gets S\) and get a certificate \(B\gets S^{\rm nodal}\to B\times X\) where the fibers of \(S^{\rm nodal}\to B\) are nodal curves. We can also assume that the general fibers are smooth, but this usually makes the \(S_{i}\) disconnected. For us, connectedness is more important.

**Notation 16**.: Let \(X_{i}\) be schemes, \(Z_{i}\subset X_{i}\) subschemes and \(\sigma:Z_{1}\cong Z_{2}\) an isomorphism. We use \(X_{1}\amalg X_{2}\) to denote the scheme obtained by identifying \(Z_{1}\) and \(Z_{2}\). Let \(C_{i}\) be curves, \(Z_{i}\subset C_{i}\) sets of smooth points and \(\sigma:Z_{1}\cong Z_{2}\) an isomorphism. Instead of \(C_{1}\amalg C_{2}\) we use \(C_{1}\uplus_{\sigma}C_{2}\) (where the \(\times\) reminds us that we get a node). We use \(C_{1}\uplus C_{2}\) if the \(Z_{i}\) and \(\sigma\) are clear or not important.

**17** (From 3-nodes to nodes).: Let \(C_{3}\subset\mathbb{P}^{3}\) be 3 general lines meeting at a point \(p\). Using (38.1-2) we compute that the Hilbert scheme of \(\mathbb{P}^{3}\) is smooth of dimension 12 at \([C_{3}]\), and includes the (3-dimensional) universal deformation of the singularity at the intersection point. Blow up \(p\), and let \(E\cong\mathbb{P}^{2}\) be the exceptional divisor. Let \(\bar{C}_{3}\) be a curve consisting of the birational transform \(C^{\prime}_{3}\) of \(C_{3}\), and a conic \(C_{0}\subset E\) passing through the 3 points \(E\cap C_{3}^{\prime}\). Although the conic is not unique, \(\pi:\bar{C}_{3}\to C_{3}\) is unique as an abstract curve. The space of morphisms \(\operatorname{Mor}(\operatorname{Curves},\mathbb{P}^{3})\) is also smooth of dimension \(12\) at \([\pi:\bar{C}_{3}\to\mathbb{P}^{3}]\); cf. [1, 15]. Since \(R^{1}\pi_{*}\mathcal{O}_{\bar{C}_{3}}=0\), \(\pi\) extends to a morphism

\[\Pi:\operatorname{Mor}(\operatorname{Nodal\ curves},\mathbb{P}^{3})^{\circ}\to\operatorname{Hilb}_{1}(\mathbb{P}^{3})^{\circ} \tag{17.1}\]

in suitable open neighborhoods of \([\pi:\bar{C}_{3}\to\mathbb{P}^{3}]\) and \([C_{3}]\). Thus \(\Pi\) is a local isomorphism. That is, every flat deformation of the \(3\)-node \(C_{3}\) can be lifted to a flat deformation of the morphism \(\pi:\bar{C}_{3}\to C_{3}\). Since \(\bar{C}_{3}\) has only nodes, this replaces \(3\)-nodal families with nodal ones. In general, for any \(X\) we get

\[\Pi_{X}:\operatorname{Mor}(\operatorname{Nodal\ curves},X)\dashrightarrow\operatorname{Hilb}_{1}(X), \tag{17.2}\]

which restricts to an isomorphism over the \(3\)-nodal locus of \(\operatorname{Hilb}_{1}(X)\).
(In fact, nodes and \(3\)-nodes are the only singularities over which \(\Pi_{X}\) is a local isomorphism.)

The following explicit example may be useful. It shows how to 'jump over' the nodes as we slide an auxiliary curve \(L\) from one irreducible component of a nodal curve \(C\) to another. Consider a family of nodes parametrized by a node:

\[\mathbb{A}^{4}_{xyuv}\supset S:=(xy=uv=0)\stackrel{\pi}{\longrightarrow}(xy=0)=:B\subset\mathbb{A}^{2}_{xy}.\]

Let \(D\subset S\) be the image of the section \(\sigma(x,y)=(x,y,x,y)\). Thus, along the \(x\)-axis we move in the \(u\)-axis, along the \(y\)-axis we are in the \(v\)-axis. At the origin (\(x=y=0\)) we hit the node (\(u=v=0\)). Let \(D^{\prime}\subset S\) be the image of the section \(\sigma^{\prime}(x,y)=(x,y,y,x)\). Its equation is \((v-x=u-y=0)\). Note that \(D+D^{\prime}\) is a Cartier divisor, with defining equation \(x+y=u+v\).

Blow up \(D^{\prime}\subset S\) to get \(\bar{S}\to S\). The restriction of \(D^{\prime}\) to the (\(x=v=0\)) and (\(y=u=0\)) planes is a Cartier divisor, so on these the blow-up is the identity. The restriction of \(D^{\prime}\) to the (\(x=u=0\)) and (\(y=v=0\)) planes is the origin, so we get their usual blow-up. The \(2\) exceptional curves are identified in \(\bar{S}\) to get \(\bar{E}\subset\bar{S}\). The new central fiber consists of \(\bar{E}\) and the birational transforms of \((0,0,u,0)\) and \((0,0,0,v)\), meeting \(\bar{E}\) at distinct points \(p_{u},p_{v}\). The birational transform of \(D\) is still a section \(\bar{D}\), meeting \(\bar{E}\) at a point different from \(p_{u},p_{v}\). Thus in the new family \(\bar{S}\to B\), the section \(\bar{D}\) is a Cartier divisor that does not pass through any of the nodes of the fibers.

## 2. Maps to curves

Hurwitz proved that for fixed \((g,d)\), all degree \(d\) morphisms from genus \(g\) smooth curves to \(\mathbb{P}^{1}\) form an irreducible family. We discuss what happens when \(\mathbb{P}^{1}\) is replaced by an arbitrary (possibly reducible) curve \(C\), and we study maps from nodal curves to \(C\). Irreducibility fails already in simple cases. For example, let \(C:=(xyz=0)\subset\mathbb{P}^{2}\) be a cycle of \(3\) lines. Separating one of its nodes we get \(3\) morphisms \(\pi_{i}:A\to C\), where \(A\) is a chain of \(3\) rational curves, so \(\chi(A)=1\). These \(3\) maps are distinct, rigid, and all their numerical invariants agree. We aim to prove that by adding more irreducible curves to \(A\), we get a connected moduli space.

**Definition 18**.: Let \(k\) be an algebraically closed field, \(C\) a proper curve, \(A,B\) proper nodal curves and \(\pi_{A}:A\to C,\pi_{B}:B\to C\) morphisms. The following operations are the _elementary changes_ of these data.

* (18.1) Take a smooth, projective, irreducible curve \(\tau:E\to C\) and attach it to \(A\) and \(B\) along two subsets of the same cardinality, to get \(\pi^{\prime}_{A}:A^{\prime}:=A\uplus E\to C\) and \(\pi^{\prime}_{B}:B^{\prime}:=B\uplus E\to C\).
* (18.2) Take constant morphisms \(\tau_{A}:\mathbb{P}^{1}\to C,\tau_{B}:\mathbb{P}^{1}\to C\) and attach them to \(A\) (resp. \(B\)) along two subsets of the same cardinality, to get \(\pi^{\prime}_{A}:A^{\prime}:=A\uplus\mathbb{P}^{1}\to C\) and \(\pi^{\prime}_{B}:B^{\prime}:=B\uplus\mathbb{P}^{1}\to C\).
* (18.3) Write \(A=E\uplus F\). Choose \(\pi^{\prime}_{F}:F^{\prime}\to C\) to be deformation equivalent to \(\pi|_{F}\) fixing \(E\cap F\).
Set

\[\pi^{\prime}_{A}:=(\pi|_{E},\pi^{\prime}_{F}):A^{\prime}:=E\uplus F^{\prime}\to C\quad\text{and}\quad\pi^{\prime}_{B}=\pi_{B}:B\to C.\]

* (18.4) As in (18.3) with the roles of \(A,B\) reversed.

The main result of this section is the following.

**Proposition 19**.: _Let \(k\) be an algebraically closed field, \(C\) a proper curve, \(A^{1},B^{1}\) proper nodal curves and \(\pi^{1}_{A}:A^{1}\to C,\pi^{1}_{B}:B^{1}\to C\) morphisms. The following are equivalent._

1. \((\pi^{1}_{A})_{*}[A^{1}]=(\pi^{1}_{B})_{*}[B^{1}]\) _and_ \(\chi(A^{1})=\chi(B^{1})\)_._
2. _There is a sequence of elementary changes (18.1-4)_

\[\left(\pi^{1}_{A}:A^{1}\to C,\pi^{1}_{B}:B^{1}\to C\right)\dashrightarrow\cdots\dashrightarrow\left(\pi^{r}_{A}:A^{r}\to C,\pi^{r}_{B}:B^{r}\to C\right)\]

_such that_ \(\left(\pi^{r}_{A}:A^{r}\to C\right)\cong\left(\pi^{r}_{B}:B^{r}\to C\right)\)_._

Proof. It is clear that elementary changes preserve both \((\pi^{i}_{A})_{*}[A^{i}]-(\pi^{i}_{B})_{*}[B^{i}]\) and \(\chi(A^{i})-\chi(B^{i})\), so (2) \(\Rightarrow\) (1).

For the converse, we may assume that \(\pi^{1}_{A},\pi^{1}_{B}\) are dominant. The claim is clear if \(A^{1}\) or \(B^{1}\) are empty. Let \(C_{j}\) be the irreducible components of \(C\) with normalizations \(\bar{C}_{j}\to C_{j}\). Take curves \(\bar{A}^{1}_{j}\to\bar{C}_{j}\) and \(\bar{B}^{1}_{j}\to\bar{C}_{j}\) such that \(A^{1}\) (resp. \(B^{1}\)) is obtained from \(\amalg_{j}\bar{A}^{1}_{j}\) (resp. \(\amalg_{j}\bar{B}^{1}_{j}\)) by gluing some point pairs to nodes. These \(\bar{A}^{1}_{j},\bar{B}^{1}_{j}\) are almost unique, except for irreducible components of \(A^{1}\cup B^{1}\) that lie over a singular point of \(C\). For these we have finitely many possible choices; we make any one of them.

Our plan is to show that our claim holds for each \(\left(\pi^{1}_{A}:\bar{A}^{1}_{j}\to\bar{C}_{j},\pi^{1}_{B}:\bar{B}^{1}_{j}\to\bar{C}_{j}\right)\). However, \(\chi(\bar{A}^{1}_{j})\) can be different from \(\chi(\bar{B}^{1}_{j})\). So first we need a series of elementary changes to ensure that \(\chi(\bar{A}^{i}_{j})=\chi(\bar{B}^{i}_{j})\) for some \(i\geq 1\). It will be convenient to add disjoint copies of \(\amalg_{j}\bar{C}_{j}\) to both \(A^{1}\) and \(B^{1}\), in order to avoid some trivial special cases.

We have to pay special attention to the nodes of \(A^{1}\) and \(B^{1}\) that get separated when we pass to \(\amalg_{j}\bar{A}^{1}_{j}\) and \(\amalg_{j}\bar{B}^{1}_{j}\). So let \(c\in C\) be a singular point and \(C_{j}(c)\) the local branches of \(C\) through \(c\). Let \(N^{1}_{A}(c,j_{1},j_{2})\subset A^{1}\) be the set of all nodes of \(A^{1}\) over \(c\) whose 2 branches end up on \(C_{j_{1}}(c)\) and \(C_{j_{2}}(c)\) when we pass to \(\amalg_{j}\bar{A}^{1}_{j}\). We call these _separating nodes_. We start with some elementary changes to achieve that \(\#N^{i}_{A}(c,j_{1},j_{2})=\#N^{i}_{B}(c,j_{1},j_{2})\) for every \(c,j_{1},j_{2}\). So, let us see how to create a new node on \(A^{1}\) by elementary changes. This is done in 2 steps.

(19.1) First attach a copy of \(E_{1}\cong\bar{C}_{j_{1}}\) to both \(A^{1}\) and \(B^{1}\) at a point, as in (18.1).

(19.2) Then attach a copy of \(E_{2}\cong\bar{C}_{j_{2}}\) to \(A\uplus E_{1}\) at a point of \(A\) and at the point of \(E_{1}\) lying over \(c\). Also attach a copy of \(E_{2}\cong\bar{C}_{j_{2}}\) to \(B\uplus E_{1}\) at \(2\) points of \(B\).

After finitely many such steps, we achieve that \(\#N^{i}_{A}(c,j_{1},j_{2})=\#N^{i}_{B}(c,j_{1},j_{2})\) for every \(c,j_{1},j_{2}\).
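As a quick consistency check (bookkeeping left implicit in the proof, using the same gluing formula as in Example 9 and Lemma 37): attaching a curve \(E\) to a nodal curve \(A\) along \(a\) points changes the Euler characteristic by

\[\chi(A\uplus E)=\chi(A)+\chi(E)-a.\]

Thus step (19.1) changes \(\chi\) on both sides by \(\chi(E_{1})-1\), step (19.2) changes it on both sides by \(\chi(E_{2})-2\), and the image cycles on both sides gain the same contributions (the images of \(E_{1}\) and \(E_{2}\)). Hence \(\chi(A^{i})-\chi(B^{i})\) and \((\pi^{i}_{A})_{*}[A^{i}]-(\pi^{i}_{B})_{*}[B^{i}]\) are indeed preserved, as claimed at the start of the proof.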
From now on we will not create more separating nodes. Next we arrange that \(\chi(\bar{A}^{i}_{j})=\chi(\bar{B}^{i}_{j})\) for every \(j\). Fix a curve \(C_{1}\). If, say, \(\chi(\bar{A}^{i}_{j})>\chi(\bar{B}^{i}_{j})\) for some \(j\neq 1\), we use (18.2) to attach a \(\mathbb{P}^{1}\) at \(2\) points of \(\bar{A}^{i}_{j}\) and at \(2\) points of \(\bar{B}^{i}_{1}\). This decreases \(\chi(\bar{A}^{i}_{j})\) by \(1\) but leaves \(\chi(\bar{B}^{i}_{j})\) unaltered. Repeating as necessary, and increasing the value of \(i\), we achieve that \(\chi(\bar{A}^{i}_{j})=\chi(\bar{B}^{i}_{j})\) for every \(j\neq 1\). Next note that

\[\chi(A^{i})=\sum_{j}\chi(\bar{A}^{i}_{j})-\sum\#N^{i}_{A}(c,j_{1},j_{2}),\quad\text{and}\quad\chi(B^{i})=\sum_{j}\chi(\bar{B}^{i}_{j})-\sum\#N^{i}_{B}(c,j_{1},j_{2}).\]

We arranged that all but \(1\) of the terms in these identities match up, thus \(\chi(\bar{A}^{i}_{1})=\chi(\bar{B}^{i}_{1})\) also holds.

Next choose embeddings \(A^{i}\subset C\times\mathbb{P}^{3}\) and \(B^{i}\subset C\times\mathbb{P}^{3}\) that have the same bidegree and such that \(N^{i}_{A}(c,j_{1},j_{2})=N^{i}_{B}(c,j_{1},j_{2})\) for every \(c,j_{1},j_{2}\). From now on we work separately with these \(\big(\pi^{i}_{A}:\bar{A}^{i}_{j}\to\bar{C}_{j},\pi^{i}_{B}:\bar{B}^{i}_{j}\to\bar{C}_{j}\big)\). In (20) we prove a slightly stronger version of our claim, where all the steps leave the set of separated nodes fixed. The \(2\)-nodal deformation equivalence at the end fixes all the separated nodes. If this holds, then the \(2\)-nodal deformation equivalences over the individual \(\bar{C}_{j}\) glue together to a \(2\)-nodal deformation equivalence over \(C\), as claimed.

**Lemma 20**.: _Let \(C\) be a smooth, projective, irreducible curve and \(A^{1},B^{1}\subset C\times\mathbb{P}^{n}\) two nodal subcurves such that \(\big(\mathrm{bideg}(A^{1}),\chi(A^{1})\big)=\big(\mathrm{bideg}(B^{1}),\chi(B^{1})\big)\). Let \(P\subset A^{1}\cap B^{1}\) be a finite set of points that are smooth on both \(A^{1}\) and \(B^{1}\). Then there is a sequence of elementary changes (18.1-3) fixing \(P\) such that \(\big(\pi^{r}_{A}:A^{r}\to C\big)=\big(\pi^{r}_{B}:B^{r}\to C\big)\)._

Proof. First we achieve that \(A^{i}\to C\) is finite and separable. To do this, choose a smooth complete intersection surface \(A^{1}\subset S\subset C\times\mathbb{P}^{3}\) and \(m>0\) such that \(\mathcal{O}_{S}(m)(-A^{1})\) is very ample. A general section of it gives a smooth curve \(F\subset S\). We attach \(F\) to \(A^{1}\) along \(F\cap A^{1}\), and to \(B^{1}\) along the same number of points, as in (18.1). Now \(A^{2}=A^{1}\uplus F\) is a very ample curve on \(S\). It is thus linearly equivalent to a curve \(A^{3}\) that is smooth, separable over \(C\), and contains \(P\). This is (18.3) with \(E=\emptyset\). We can now do the same for the \(B\)-curves.

Once \(A^{i},B^{i}\) are both finite and separable over \(C\), in characteristic \(0\) we could use [10, 1.1]. It should work in any characteristic, but here is a weaker version that is good enough for us. Choose a general projection to \(C\times\mathbb{P}^{1}\). Let the images be \(A^{\prime},B^{\prime}\), and \(P^{\prime}\subset A^{\prime}\cap B^{\prime}\) the image of \(P\). We may assume that \(A^{\prime},B^{\prime}\) are smooth at \(P^{\prime}\). Since \(A^{\prime},B^{\prime}\) have the same bidegree, they have the same arithmetic genus. Thus we get the same number of new nodes \(N^{\prime}_{A},N^{\prime}_{B}\).
After adding the same number of vertical lines \(L^{\prime}_{A}=\cup_{c}\{c\}\times\mathbb{P}^{1}\), we may assume that \(A^{\prime}+L^{\prime}_{A}\) and \(B^{\prime}+L^{\prime}_{B}\) are linearly equivalent, and such that \(|A^{\prime}+L^{\prime}_{A}|(-P^{\prime})\) is very ample. This gives the deformation equivalence of \(A^{\prime}+L^{\prime}_{A}\) and of \(B^{\prime}+L^{\prime}_{B}\). The lines \(L^{\prime}_{A},L^{\prime}_{B}\) can be lifted to \(L_{A},L_{B}\), but we have a problem at the nodes \(N^{\prime}\) that are smoothed in this deformation equivalence. However, these can be lifted to the required deformation equivalence, after a ramified double cover and adding lines that connect the preimages of the nodes as in (21).

**Example 21**.: Consider the smoothing of the node \((xy=t)\subset\mathbb{A}_{xyt}^{3}\). Take the ramified double cover \(t=s^{2}+s^{3}\). (The cube is needed only in \(\operatorname{char}2\).) Now we have \((xy=s^{2}+s^{3})\subset\mathbb{A}_{xys}^{3}\). Blowing up the origin, the central fiber is the birational transforms of the \(x\)- and \(y\)-axes, connected by the exceptional curve of the blow-up.

## 3. Gluing of deformations

**22** (Gluing deformations).: As in (12.1), consider \(2\) deformations

\[B\longleftarrow S^{j}\overset{\pi^{j}}{\longrightarrow}B\times X \tag{22.1}\]

with sections \(\sigma^{j}:B\to S^{j}\). If \(\pi^{1}\circ\sigma^{1}\equiv\pi^{2}\circ\sigma^{2}\) then we can glue the \(2\) deformations along the isomorphism

\[S^{1}\supset\sigma^{1}(B)\cong B\cong\sigma^{2}(B)\subset S^{2}\]

to get a new deformation

\[B\longleftarrow S^{1}\amalg_{\sigma}S^{2}\overset{\pi}{\longrightarrow}B\times X, \tag{22.2}\]

whose fiber over \(b\in B\) is \(S^{1}_{b}\amalg_{\sigma_{b}}S^{2}_{b}\). Furthermore, if the deformations (22.1) are \(3\)-nodal, \(\sigma^{j}(b)\) is never a \(3\)-node, and \(\sigma^{1}(b),\sigma^{2}(b)\) are never both \(2\)-nodes, then (22.2) is also \(3\)-nodal.

The condition \(\pi^{1}\circ\sigma^{1}\equiv\pi^{2}\circ\sigma^{2}\) is, however, very restrictive, so we are unlikely to be able to glue right away. However, we can always connect any \(2\) points of \(X\) by some auxiliary curve. Our plan is to glue both \(S^{j}\) to the auxiliary family.

**23** (Preliminary steps).: _Creating sections 23.1._ Start with a deformation as in (12.1) such that \(S\to B\) has connected fibers. Let \(p_{i}\in S_{i}\) be smooth points. Let \(B^{\prime}\) be a general member of a sufficiently ample linear system on \(S\) passing through the \(p_{i}\). After base change to \(B^{\prime}\to B\), the family \(S^{\prime}\to B^{\prime}\) has a section connecting the points \(p^{\prime}_{i}\). If \(S\to B\) is \(3\)-nodal, then so is \(S^{\prime}\to B^{\prime}\). (Note that \(B^{\prime}\) needs to avoid the \(3\)-nodes, and that we pick up new pivot points where \(B^{\prime}\) intersects the double curve of \(S\).)

If the \(p_{1},p_{2}\) are on different irreducible components of \(S\), then \(B^{\prime}\) is reducible. However, if \(B\) is irreducible and \(p_{1},p_{2}\) are on the same irreducible component of \(S\), then \(B^{\prime}\) is irreducible. Iterating this we can create any number of sections. We may assume that they meet only at smooth points and transversally. We can now blow up the intersection points to get a deformation with disjoint sections.
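A step in (23.1) that is worth making explicit (a routine verification under the setup above): the section exists tautologically. Since \(B^{\prime}\subset S\) and \(S^{\prime}=B^{\prime}\times_{B}S\), the identity of \(B^{\prime}\) and the inclusion \(B^{\prime}\hookrightarrow S\) agree after composing with the maps to \(B\), so the universal property of the fiber product yields

\[\sigma:B^{\prime}\to S^{\prime}=B^{\prime}\times_{B}S,\qquad b^{\prime}\mapsto(b^{\prime},b^{\prime}),\]

a section of \(S^{\prime}\to B^{\prime}\) passing through the preimages of the chosen points \(p_{i}\).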
_Creating isomorphic base curves 23.2._ Given pointed, connected curves \((B^{1},b^{1}_{i})\) and \((B^{2},b^{2}_{i})\), let \(B\) be a general member of a sufficiently ample linear system on \(B^{1}\times B^{2}\) passing through the \(b_{i}:=(b^{1}_{i},b^{2}_{i})\). The projections give base changes \(\beta^{1}:(B,b_{i})\to(B^{1},b^{1}_{i})\) and \(\beta^{2}:(B,b_{i})\to(B^{2},b^{2}_{i})\). Thus, given \(2\) families \(S^{j}\to B^{j}\), after a base change we may assume that \(B^{1}=B^{2}\). If the \(S^{j}\to B^{j}\) are \(3\)-nodal, so are the new families over \(B\).

**24** (Joining \(2\) points).: Let \(\tau_{i}:B\to\mathbb{P}^{n}\) be \(2\) morphisms from a curve \(B\) to \(\mathbb{P}^{n}\). For \(b\in B\), let \(L_{b}\subset\mathbb{P}^{n}\) be the line joining \(\tau_{1}(b)\) and \(\tau_{2}(b)\). This gives \(T:\mathbb{P}^{1}\times B\to\mathbb{P}^{n}\) whose restrictions to the \(0\) and \(\infty\) sections give the \(\tau_{i}\). There are, however, some degenerate cases. First, \(\tau_{1}(b)=\tau_{2}(b)\) may hold for every \(b\). Second, if \(\tau_{1}(b)=\tau_{2}(b)\) holds only for finitely many \(b_{i}\), we get the lines over \(B\setminus\{b_{i}\}\), and then taking the limit should give the lines over the \(b_{i}\). This works if the \(b_{i}\) are smooth points, but not otherwise.

We need a similar systematic way to connect 2 points in any variety \(X\). Instead of lines, we use complete intersection curves. In order to avoid the above problems with coinciding points, we use complete intersection curves in \(X\times\mathbb{P}^{1}\).

**25** (\(H\)-complete intersection curves).: Let \(|H|\) be a sufficiently ample linear system on \(X\) and \(n:=\dim X\). (For example, any \(H\sim mA\) works, where \(A\) is very ample and \(m\geq 2\).) Consider \(X\times\mathbb{P}^{1}\) with coordinate projections \(p_{i}\). We have the linear systems \(p_{1}^{*}|H|\) and \(|H^{*}|:=p_{1}^{*}|H|+p_{2}^{*}|\mathcal{O}_{\mathbb{P}^{1}}(2)|\). We have a universal family

\[M_{H}\xleftarrow{u}(T_{H}\subset U_{H})\stackrel{\rho}{\longrightarrow}X, \tag{25.1}\]

where \(M_{H}\) parametrizes objects \((T\subset L=H_{1}\cap\cdots\cap H_{n-1}\cap H_{n}^{*})\), where \(T\) is a length 2 subscheme of \(X\times\mathbb{P}^{1}\), and \(L\) is a smooth, complete intersection of \(H_{1},\ldots,H_{n-1}\in p_{1}^{*}|H|\) and of \(H_{n}^{*}\in|H^{*}|\). We refer to this as the \(H^{\rm ci}\) _family._ Its members are double covers of \(H\)-complete intersection curves in \(X\).

Let \(Y\) be a smooth, proper variety. The Hilbert scheme of length 2 subschemes with a marked point is \(\operatorname{Hilb}_{(2)}(Y)\cong B_{\Delta}(Y\times Y)\), the blow-up of the diagonal of \(Y\times Y\). For the \(H^{\rm ci}\) family, \(M_{H}\) is an open subset of a fiber bundle over \(\operatorname{Hilb}_{(2)}(X\times\mathbb{P}^{1})\) whose fiber is a product of projective spaces, and the induced map \(M_{H}\to\operatorname{Hilb}_{(2)}(X\times\mathbb{P}^{1})\) is surjective. In particular, we have the following lifting property.

_Claim 25.2_.: Let \(B\) be a reduced curve, \(b_{j}\in B\) a finite set of points and \(g:B\to\operatorname{Hilb}_{(2)}(X\times\mathbb{P}^{1})\) a morphism. For each point \(b_{j}\), choose \(m_{j}\in M_{H}\) such that \(\rho(T_{m_{j}})=g(b_{j})\). Then there is a rational map \(g^{\prime}:B\dashrightarrow M_{H}\) such that

1. \(\rho\circ g^{\prime}=g\), and
2. \(g^{\prime}\) is defined at the points \(b_{j}\), and \(g^{\prime}(b_{j})=m_{j}\).
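For orientation, the identification \(\operatorname{Hilb}_{(2)}(Y)\cong B_{\Delta}(Y\times Y)\) used above can be unwound as follows (a standard description, recorded here for the reader's convenience). A point of \(\operatorname{Hilb}_{(2)}(Y)\) is a pair \((T,p)\) with \(T\subset Y\) of length 2 and \(p\in T\). Sending \((T,p)\) to the ordered pair of points of \(T\), starting with \(p\), identifies the locus of reduced \(T\) with \((Y\times Y)\setminus\Delta\); the non-reduced \(T\) (a point together with a tangent direction) fill in the exceptional divisor of the blow-up, the projectivized tangent bundle \(\mathbb{P}(T_{Y})\) over the diagonal.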
**Corollary 26**.: _Let \(X\) be a smooth, projective variety and fix an \(H^{\rm ci}\) family as in (25.1). Let \(B\) be a reduced curve, \(b_{i}\in B\) a finite set of points and \(\tau^{j}:B\to X\) two morphisms. Choose liftings \(\tilde{\tau}^{j}:B\to X\times\mathbb{P}^{1}\) such that \(\tilde{\tau}^{1}(b)=\tilde{\tau}^{2}(b)\) happens only at finitely many smooth points. Thus we have \(g:=(\tilde{\tau}^{1},\tilde{\tau}^{2}):B\to\operatorname{Hilb}_{(2)}(X \times\mathbb{P}^{1})\). For each \(b_{i}\) choose \(H^{\rm ci}\) curves \(L_{i}\) through \(g(b_{i})\)._ _Then there is a deformation_ \[\begin{array}{ccccc}H(\tau)_{i}&\subset&H(\tau)&\stackrel{{\pi }}{{\longrightarrow}}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{26.1}\] _with 2 sections \(\sigma^{j}:B\to H(\tau)\) such that_ 1. \(\tau^{j}=\pi\circ\sigma^{j}\) _for_ \(j=1,2\)_,_ 2. _all but finitely many fibers of_ \(H(\tau)\to B\) _are_ \(H^{\rm ci}\) _curves, and_ 3. \(H(\tau)_{i}=L_{i}\) _for every_ \(i\)_._ Proof. The lifting of \(g:B\to\operatorname{Hilb}_{(2)}(X\times\mathbb{P}^{1})\) to \(g^{\prime}:B\to M_{H}\) is given by (25.2). The pull-back of the universal family by \(g^{\prime}\) gives \(H(\tau)\). We use \(H^{\mathrm{ci}}\) families to glue curves together. The situation is clear for disjoint curves. **Definition 27**.: Let \(\pi_{i}:C_{i}\to X\) be reduced curves and \[\mathbf{L}:=\left\{(\tau_{L}:L\to X,p_{1}\neq p_{2}\in L)\right\}\] a set of 2-pointed curves. Let \(\operatorname{Join}(C_{1},r\mathbf{L},C_{2})\) denote the set of all curves of the form \[C_{1}\cup_{\sigma_{1}}(\amalg_{j\in J}L_{j})\cup_{\sigma_{2}}C_{2}\] where \(\#J=r\), the \(p_{ij}\in L\) are smooth, and \(\sigma_{i}:\amalg_{j}\{p_{ij}\}\hookrightarrow C_{i}^{\mathrm{nodal}}\) are injections such that \(\tau_{L_{j}}(p_{ij})=\pi_{i}(\sigma_{i}(p_{ij}))\). Let \(\operatorname{Join}^{\mathrm{nodal}}(C_{1},r\mathbf{L},C_{2})\) be the subset of those curves for which the \(\sigma_{i}(p_{ij})\in C_{i}\) are smooth. **Corollary 28**.: _Let \(C_{i}\to X\) be reduced curves and \(H\) an ample divisor class as in (25). Assume that the nodal loci of the \(C_{i}\) are connected. Then_ 1. \(\operatorname{Join}(C_{1},rH^{\mathrm{ci}},C_{2})\) _is connected, and_ 2. \(\operatorname{Join}^{\mathrm{nodal}}(C_{1},rH^{\mathrm{ci}},C_{2})\) _is a dense, open subset of it._ For families with some intersection points, we need to be more careful. **Theorem 29**.: _Let \(X\) be a smooth, projective variety over an algebraically closed field \(K\), and \(\pi_{i}:(C_{i}\uplus D_{i})\to X\) nodal curves mapping to \(X\) for \(i=1,2\). Assume that the \(C_{i},D_{i}\) are connected, \(\#(C_{1}\cap D_{1})=\#(C_{2}\cap D_{2})\) (call this number \(r\)) and we have 3-nodal deformation equivalences_ \[\begin{array}{ccccccccc}C_{i}&\subset&S_{C}&\stackrel{{\pi_{C} }}{{\longrightarrow}}&B_{C}\times X&&D_{i}&\subset&S_{D}&\stackrel{{ \pi_{D}}}{{\longrightarrow}}&B_{D}\times X\\ \downarrow&&\downarrow&&\downarrow&&\downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B_{C}&=&B_{C},&&b_{i}&\in&B_{D}&=&B_{D}.\end{array}\] _Fix an \(H^{\mathrm{ci}}\) family of curves as in (25.1)._ _Then, after a base change \(\beta_{C}:B\to B_{C}\) and \(\beta_{D}:B\to B_{D}\), there is a 3-nodal deformation equivalence glued from \(\beta_{C}^{*}(S_{C})\to B\), \(\beta_{D}^{*}(S_{D})\to B\) and \(r\) different \(H^{\mathrm{ci}}\) families._ Proof. 
By (23), after a base change \(\beta_{C}:B\to B_{C}\) and \(\beta_{D}:B\to B_{D}\), we may assume that there are disjoint sections \(\sigma_{j}^{C}:B\to S_{C}\) and \(\sigma_{j}^{D}:B\to S_{D}\) connecting the points in \(C_{1}\cap D_{1}\) with the points in \(C_{2}\cap D_{2}\). This gives \(\tau_{j}^{C}:=\pi_{C}\circ\sigma_{j}^{C}:B\to X\) and \(\tau_{j}^{D}:=\pi_{D}\circ\sigma_{j}^{D}:B\to X\).

We apply (26) to the pair of morphisms \(\tau_{j}^{C},\tau_{j}^{D}\). We choose liftings \(\tilde{\tau}_{j}^{C}:B\to X\times\mathbb{P}^{1}\) and \(\tilde{\tau}_{j}^{D}:B\to X\times\mathbb{P}^{1}\) such that \(\tilde{\tau}_{j}^{C}(b_{1})=\tilde{\tau}_{j}^{D}(b_{2})\) for every \(j\). Now use (26) to create \(H^{\mathrm{ci}}\) families \(H^{j}\to B\) with 2 sections \(\sigma_{j}^{HC}:B\to H^{j}\) and \(\sigma_{j}^{HD}:B\to H^{j}\). First glue \(S_{C}\) to each \(H^{j}\) using the \(r\) section pairs \((\sigma_{j}^{C},\sigma_{j}^{HC})\); then glue the resulting surface to \(S_{D}\) using the \(r\) section pairs \((\sigma_{j}^{D},\sigma_{j}^{HD})\).

**Complement 30**.: It may be worthwhile to write down explicitly what happens at the points \(C_{i}\cap D_{i}\). By construction, at each point we glue 3 smooth surfaces. In suitable (formal or étale) local coordinates the gluing problem becomes

\[\begin{array}{l}S_{C}\cong\mathbb{A}_{xt}^{2}\ni(0,t)\leftrightarrow(g(t),t)\in\mathbb{A}_{zt}^{2}\cong H^{j},\quad\text{and}\\ S_{D}\cong\mathbb{A}_{yt}^{2}\ni(0,t)\leftrightarrow(h(t),t)\in\mathbb{A}_{zt}^{2}\cong H^{j},\end{array}\]

where \(g(0)=h(0)=0\). The \((t=0)\) fiber is the 3 coordinate axes, which can be given as

\[\left(\operatorname{rank}\left(\begin{array}{ccc}x&0&z\\ 0&y&z\end{array}\right)\leq 1\right)\subset\mathbb{A}_{xyz}^{3}.\]

By [1, Sec.5], any flat deformation is obtained by varying the entries of the above matrix. In our case, the glued surface can be written as

\[\left(\operatorname{rank}\left(\begin{array}{ccc}x&0&z{-}g(t)\\ 0&y&z{-}h(t)\end{array}\right)\leq 1\right)\subset\mathbb{A}_{xyzt}^{4}.\]

## 4. Modifying deformation equivalences

**Theorem 31**.: _Let \(X\) be a smooth, projective variety over an algebraically closed field \(k\), and_

\[\begin{array}{ccccc}C_{i}\uplus D_{i}&\subset&S&\stackrel{\pi}{\longrightarrow}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B\end{array}\]

_a 3-nodal deformation equivalence. Assume that the \(C_{i},D_{i}\) are connected, \(\chi(D_{1})=\chi(D_{2})\) and \(\pi_{*}[D_{1}]=\pi_{*}[D_{2}]\)._

_Then, there is a 3-nodal deformation equivalence_

\[\begin{array}{ccccc}C_{i}\uplus R_{i}&\subset&S_{R}&\stackrel{\pi_{R}}{\longrightarrow}&B_{R}\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B_{R}&=&B_{R},\end{array}\]

_such that_

* (31.1) \((\pi_{R}:R_{1}\to X)\cong(\pi_{R}:R_{2}\to X)\)_,_
* (31.2) \(R_{i}\cap C_{i}=D_{i}\cap C_{i}\)_, and_
* (31.3) \(S_{R}\) _is glued from 4 types of pieces: pull-backs of \(S\), deformation equivalences over \(\operatorname{Supp}\pi(D_{1})=\operatorname{Supp}\pi(D_{2})\), trivial families \(C_{i}\times B^{\prime}\to B^{\prime}\), and \(H^{\mathrm{ci}}\) families._

Proof. We apply (19) to

\[\pi|_{D_{1}}:D_{1}\to\operatorname{Supp}\pi(D_{1})\quad\text{and}\quad\pi|_{D_{2}}:D_{2}\to\operatorname{Supp}\pi(D_{2})=\operatorname{Supp}\pi(D_{1}).\]

Then we show that each of the steps (18.1-4) can be lifted to 3-nodal deformation equivalences. For (18.1) this is achieved using (29): we glue \(S\to B\) to the trivial family \(E\times B\to B\) using additional \(H^{\mathrm{ci}}\) families.
For (18.2) we glue \(S\to B\) to the trivial family \(\mathbb{P}^{1}\times D_{1}\to D_{1}\), again using additional \(H^{\mathrm{ci}}\) families. Steps (18.3-4) are already deformation equivalences.

This is almost what we need, except that in (29) we added a number of \(H^{\mathrm{ci}}\) families. Thus the fibers over \(b_{1}\) and \(b_{2}\) are of the form

\[\left(C_{i}\uplus R_{i}\right)\amalg_{\sigma}\left(\cup_{j\in J}L_{ij}\right)\]

where the \(L_{ij}\) are \(H^{\mathrm{ci}}\) curves attached to nodes of \(C_{i}\uplus R_{i}\) by \(\sigma\). By construction, we have the same number of \(H^{\mathrm{ci}}\) curves for \(i=1,2\). We can now slide these \(H^{\mathrm{ci}}\) curves away from the nodes using (28), to get the same \(H^{\mathrm{ci}}\) curves for \(i=1,2\). Thus the fibers over \(b_{1}\) and \(b_{2}\) are now

\[\left(C_{i}\uplus R_{i}\right)\uplus\left(\cup_{j\in J}L_{j}^{\prime}\right),\]

which is what we wanted.

**Corollary 32**.: _Let \(X\) be a smooth, projective variety over an algebraically closed field \(K\), and \(\pi_{i}:A_{i}\to X\) connected, nodal curves mapping to \(X\). Assume that \(\chi(A_{1})=\chi(A_{2})\) and \((\pi_{1})_{*}[A_{1}]=(\pi_{2})_{*}[A_{2}]\). Then there is a 3-nodal deformation equivalence_

\[\begin{array}{ccccc}A_{i}\uplus L_{i}\uplus R_{i}&\subset&S&\stackrel{\pi}{\longrightarrow}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{32.1}\]

_such that_

* (32.2) \((\pi:R_{1}\to X)\cong(\pi:R_{2}\to X)\)_,_
* (32.3) \(L_{i}\) _is an_ \(H^{\rm ci}\) _curve, meeting both_ \(A_{i}\) _and_ \(R_{i}\) _at a single smooth point, and_
* (32.4) \(S\) _is glued from 4 types of pieces as in (31.3) (with_ \(A_{i}\) _replacing_ \(D_{i}\)_)._

Proof. Pick any \(H^{\rm ci}\) curve \(L\) meeting both \(A_{1}\) and \(A_{2}\) at a single smooth point. There is a trivial deformation equivalence \((A_{1}\uplus L\uplus A_{2})\times\mathbb{P}^{1}\to\mathbb{P}^{1}\). Apply (31) with \(D_{1}=A_{1}\) and \(D_{2}=A_{2}\).

It is worth noting that the fibers of \(S\to B\) move very little in \(\operatorname{Chow}_{1}(X)\). To state this, let \(|H^{\rm ci}|\subset\operatorname{Chow}_{1}(X)\) denote the closure of the locus of images of \(H^{\rm ci}\) curves.

**Complement 33**.: _We can choose (32.1) with the following property. There is an effective 1-cycle \(Z\) and morphisms \(\tau_{i}:B\to|H^{\rm ci}|\) such that_

\[\pi_{*}[S_{b}]=Z+\sum_{i}L(\tau_{i}(b)) \tag{33.1}\]

_for every \(b\in B\). (We stress that (33.1) is an actual equality of cycles.)_

## 5. Proof of Theorems 1-2

We start with a key special case, where we have a more precise answer.

**Proposition 34**.: _Let \(X\) be a smooth, projective variety over an algebraically closed field \(K\), and \(\pi_{i}:C_{i}\to X\) connected, nodal curves mapping to \(X\). Assume that \(\chi(C_{1})=\chi(C_{2})\) and \((\pi_{1})_{*}[C_{1}]\sim_{a}(\pi_{2})_{*}[C_{2}]\). Then there is a 3-nodal deformation equivalence_

\[\begin{array}{ccccc}C_{i}\uplus L_{i}\uplus R_{i}&\subset&S&\stackrel{\pi}{\longrightarrow}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{34.1}\]

_such that_

* (34.2) \((\pi:L_{1}\uplus R_{1}\to X)\cong(\pi:L_{2}\uplus R_{2}\to X)\)_, and_
* (34.3) \(L_{i}\) _is an_ \(H^{\rm ci}\) _curve, meeting both_ \(C_{i}\) _and_ \(R_{i}\) _at a single smooth point._

_Furthermore, if the \(C_{i}\) have a common irreducible component \(D\), then we can also achieve that_

* (34.4) \(L_{i}\) _meets_ \(C_{i}\) _at a point on_ \(D\)_._

Proof.
Let \(B\gets T\to B\times X\) be a nodal deformation equivalence certifying \((\pi_{1})_{*}[C_{1}]\sim_{a}(\pi_{2})_{*}[C_{2}]\). We may assume that \(T\to B\) has connected fibers. We also have the trivial deformation equivalences \(C_{i}\times B\to B\). By (29) we can glue these 3 together, using 2 different \(H^{\rm ci}\) families to get a 3-nodal deformation equivalence \[\begin{array}{ccccc}T^{\prime}_{i}&\subset&T^{\prime}&\stackrel{{\pi^{\prime}}}{{\longrightarrow}}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{array} \tag{34.2}\] such that \(T^{\prime}_{i}=C_{1}\uplus L_{1}\uplus T_{i}\uplus L_{2}\uplus C_{2}\). Apply (31) with \(D_{1}=L_{1}\uplus T_{i}\uplus L_{2}\uplus C_{2}\) and \(D_{2}=C_{1}\uplus L_{1}\uplus T_{i}\uplus L_{2}\).

**35** (Proof of Theorem 1).: First apply (37) to get morphisms of nodal curves \(\bar{\pi}_{i}:C_{i}\uplus D\to X\) such that the \(C_{i}\uplus D\) are geometrically connected, and \(\chi(C_{i}\uplus D)\) is independent of \(i\). Now we are in the situation of (34). Thus, for any two \(i,j\) we have deformation equivalences between curves of the form \[(C_{i}\uplus D)\uplus L_{ij}\uplus R_{ij}\quad\text{and}\quad(C_{j}\uplus D)\uplus L_{ij}\uplus R_{ij}.\] We can choose the \(L_{ij}\) to meet the \(C_{i}\uplus D\) and \(C_{j}\uplus D\) at a point of \(D\) by (34.4). Finally, for any \(\ell\), take \[(C_{\ell}\uplus D)\uplus\big{(}\amalg_{ij}L_{ij}\big{)}\uplus\big{(}\amalg_{ij}R_{ij}\big{)}. \tag{35.1}\] We claim that they are \(3\)-nodal deformation equivalent to each other. By symmetry consider \(\ell=1,2\). Then we have \[(C_{\ell}\uplus D)\uplus\big{(}\amalg_{ij}L_{ij}\big{)}\uplus\big{(}\amalg_{ij}R_{ij}\big{)}=R_{12}\uplus L_{12}\uplus(C_{\ell}\uplus D)\uplus\big{(}\amalg_{ij}^{\prime}L_{ij}\big{)}\uplus\big{(}\amalg_{ij}^{\prime}R_{ij}\big{)} \tag{35.2}\] where \(\amalg^{\prime}\) means that we omit the pair \(ij=12\). Here \[R_{12}\uplus L_{12}\uplus(C_{1}\uplus D)\quad\text{and}\quad R_{12}\uplus L_{12}\uplus(C_{2}\uplus D)\] are \(3\)-nodal deformation equivalent by construction. We can now use (28) to add the \(L_{ij}\uplus R_{ij}\) one at a time (for \(ij\neq 12\)). This gives a deformation equivalence that has some \(3\)-nodal fibers; they can be eliminated using (17).

**36** (Proof of Theorem 2).: We follow the arguments in (35). First apply (37) to get morphisms of nodal \(k\)-curves \(\bar{\pi}_{i}:C_{i}\uplus D\to X\) such that the \(C_{i}\uplus D\) are geometrically connected and \(\chi(C_{i}\uplus D)\) is independent of \(i\). We are in the situation of (34-35). Thus, working over the algebraic closure \(K\supset k\), we have the curves \[(C_{\ell}\uplus D)\uplus\big{(}\amalg_{ij}L_{ij}\big{)}\uplus\big{(}\amalg_{ij}R_{ij}\big{)}\] as in (35.1). Set \(p_{ij}=D\cap L_{ij}\). Next fix a pair \(ij\). The plan is to replace \(L_{ij}\uplus R_{ij}\) by all of its Galois conjugates \((L_{ij}\uplus R_{ij})^{\sigma}\) and attach them to \(D\) at the points \(p_{ij}^{\sigma}\). In order to keep the new curve nodal, we need the various \(p_{ij}^{\sigma}\) to be different from each other and from the points in \(C_{\ell}\cap D\). Here it is important that we are free to move the curves \(L_{ij}\), and use (61) to guarantee a choice for which \(L_{ij}\uplus R_{ij}\) is defined over \(k(p_{ij})\), while avoiding any given finite subset of points of \(D\). We can now do this for all the pairs \(ij\) in some order.
Of course we need to make sure that a new \(p_{ij}\) is distinct from all nodes created earlier. In this construction, the degrees \(k(p_{ij})/k\) end up different. With a more careful choice in (61) we can achieve that these degrees are the same. It is, however, unlikely that one can choose the fields \(k(p_{ij})\) the same.

**Lemma 37**.: _Let \(k\) be a perfect field and \(L/k\) a finite field extension. Let \(X\) be a smooth, projective, \(k\)-variety and \(\pi_{i}:C_{i}\to X_{L}\) (for \(i\in I\)) finitely many morphisms of reduced curves defined over \(L\). Assume that either \(\operatorname{ind}_{C_{i}}=1\) for every \(i\in I\), or \(\chi(C_{i})\) is independent of \(i\). Then there is a smooth, geometrically irreducible \(k\)-curve \(\pi_{D}:D\to X\) and morphisms of curves \(\bar{\pi}_{i}:C_{i}\uplus D\to X_{L}\) such that_ 1. _the_ \((C_{i}\uplus D)^{\mathrm{nodal}}\) _are geometrically connected,_ 2. \(\chi(C_{i}\uplus D)\) _is independent of_ \(i\)_, and_ 3. \((\bar{\pi}_{i}|_{D}:D\to X)\cong(\pi_{D}:D\to X)\)_._

Note that the assumption on \(\chi(C_{i})\) is necessary by Example 9.

Proof. Working in \(Y:=X\times\mathbb{P}^{3}\), we may assume that the \(C_{i}\subset Y\) are disjoint. Let \(Z_{i}\subset C_{i}^{\mathrm{sm}}\) be reduced subschemes that have at least \(1\) point on each irreducible component. Let \(D\subset Y\) be a smooth, complete intersection curve, defined over \(k\), that contains the \(Z_{i}\). Now attach \(D\) to \(C_{i}\) at \(Z_{i}\) to get \(C_{i}\uplus D\). Note that \[\chi(C_{i}\uplus D)=\chi(C_{i})+\chi(D)-\deg Z_{i}.\] If \(\mathrm{ind}_{C_{i}}=1\) for every \(i\) then we can choose the \(Z_{i}\) such that \(\chi(C_{i}\uplus D)\) is independent of \(i\). We can always choose the \(\deg Z_{i}\) to be the same. Then \(\chi(C_{i}\uplus D)\) is independent of \(i\), and the other conditions are clear.

## 6. Smoothing combs

Let \(X\) be a smooth projective variety and \(C\subset X\) a smooth curve with normal bundle \(N_{C}\). If \(N_{C}\) is generated by global sections and \(H^{1}(C,N_{C})=0\), then \(C\) can be deformed in all directions inside \(X\), but otherwise \(C\) may have no deformations at all. The technique of combs, which originated in [10, 11], relies on the observation that if we attach enough smooth rational curves \(R_{i}\) to \(C\), and the \(N_{R_{i}}\) are generated by global sections, then the resulting nodal curve \(C\uplus\amalg_{i}R_{i}\) has a better deformation theory than \(C\) itself. We call \(C\) the _handle_ and the \(R_{i}\) the _teeth_ of the _comb_ \(C\uplus\amalg_{i}R_{i}\). (See also [1, Sec.6] for an introductory treatment.) The proof uses \(2\) properties of the curves \(R\). At a general point \(c\in C\),

* the tangent space \(T_{c}X\) is spanned by \(T_{c}C\) and the \(T_{c}R\) for all \(R\ni c\), and
* \(H^{1}(R,N_{R}(-c))=0\) for all \(R\ni c\).

If \(X\) is src, then there are rational curves \(R\) through every point and in every direction with ample \(N_{R}\), as needed. If \(X\to B\) has smooth src general fibers, then we get the needed rational curves in the smooth fibers, but not in the singular fibers. So the method only works for handles that dominate \(B\). Thus, in this case, we proceed in \(2\) steps. First we attach rational curves to \(H^{\mathrm{ci}}\) curves to achieve the needed properties. Then we use these (reducible, high genus) curves as teeth for handles that are contained in the singular fibers. To accommodate both steps, we work out the details in a general setting.
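As a concrete illustration of the freeness condition above (a standard textbook example, not taken from this paper): for a line \(L\subset\mathbb{P}^{3}\) the normal bundle is \[N_{L/\mathbb{P}^{3}}\cong\mathcal{O}_{L}(1)\oplus\mathcal{O}_{L}(1),\] which is generated by global sections and satisfies \(H^{1}(L,N_{L/\mathbb{P}^{3}})=0\). Accordingly \(h^{0}(L,N_{L/\mathbb{P}^{3}})=4\), matching the dimension of the Grassmannian of lines in \(\mathbb{P}^{3}\), so \(L\) deforms in all directions; curves failing either condition may be rigid or obstructed.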
Each projective family of nodal curves \(B_{R}\gets S_{R}\to B_{R}\times X\) can be realized as an embedded surface in \(B_{R}\times Y\) where \(Y:=X\times\mathbb{P}^{3}\). We can thus view the deformation as a geometrically connected curve \(B_{R}\hookrightarrow\mathrm{Hilb}_{1}(Y)\) that parametrizes nodal curves.

**38** (Hilbert scheme).: For a projective \(k\)-scheme \(Y\), let \(\mathrm{Hilb}_{1}(Y)\) denote the Hilbert scheme of \(1\)-dimensional, closed subschemes. Let \(\mathrm{Hilb}_{1}^{\mathrm{nodal}}(Y),\mathrm{Hilb}_{1}^{3-\mathrm{nodal}}(Y)\) denote the subschemes parametrizing nodal (resp. \(3\)-nodal) curves. We use \(\mathrm{Hilb}_{1}^{\mathrm{unobst}}(Y)\) to denote the open subscheme parametrizing reduced curves, all of whose singularities have unobstructed deformations. For a curve \([C]\in\mathrm{Hilb}_{1}^{\mathrm{unobst}}(Y)\), let \(N_{C}:=\mathcal{H}om_{C}(I_{C}/I_{C}^{2},\mathcal{O}_{C})\) denote its _normal sheaf_. (It is locally free for nodes, but not for \(3\)-nodes.) Then (38.1) \(H^{0}(C,N_{C})\) is the tangent space of \(\operatorname{Hilb}_{1}(Y)\) at \([C]\), and (38.2) \(H^{1}(C,N_{C})\) is the obstruction space for deforming \(C\subset Y\); see, for example, [11, I.2.14]. We say that \(C\) is _free_ if \(N_{C}\) is generated by global sections and \(H^{1}(C,N_{C})=0\). Let \(C=C_{1}+C_{2}\). Restriction gives a natural injection \(I_{C}/I_{C}^{2}\hookrightarrow I_{C_{1}}/I_{C_{1}}^{2}+I_{C_{2}}/I_{C_{2}}^{2}\), hence we get \[0\to N_{C_{1}}+N_{C_{2}}\to N_{C_{1}+C_{2}}\to T\to 0,\] where \(T\) is supported at \(C_{1}\cap C_{2}\). In particular, we get the following.

_Claim 38.3_.: If \(C_{1},C_{2}\) are free and \(C_{1}+C_{2}\) is locally unobstructed, then \(C_{1}+C_{2}\) is also free.

The open subscheme parametrizing free curves is denoted by \(\operatorname{Hilb}_{1}^{\operatorname{free}}(Y)\). Note that \(\operatorname{Hilb}_{1}^{\operatorname{free}}(Y)\) is smooth. It has the following general properties.

_Claim 38.4_.: Every irreducible component of \(\operatorname{Hilb}_{1}^{\operatorname{free}}(Y)\) is a connected component.

_Claim 38.5_.: Assume that \([Z_{1}]\) and \([Z_{2}]\) are smooth \(k\)-points on the same irreducible component of \(\operatorname{Hilb}_{1}(Y)\). Then \([Z_{1}]\sim_{a}[Z_{2}]\) over \(k\).

**39** (Families of teeth).: Let \(X\) be a proper variety. A _family of pointed nodal curves_ over \(X\) is a diagram \[M\stackrel{{\pi}}{{\longleftarrow}}C_{M}\stackrel{{\tau}}{{\longrightarrow}}X \tag{39.1}\] where \(\pi:C_{M}\to M\) is a proper, flat morphism with nodal curve fibers, plus a section \(s:M\to C_{M}\) such that \(\pi\) is smooth along \(s(M)\). For \(p\in M\) let \(\tau_{p}:C_{p}\to X\) denote the restriction of \(\tau\) to the fiber over \(p\). If every \(\tau_{p}\) is an embedding, we have a flat family of nodal curves in \(X\). For \(x\in X\), the condition \(\tau\circ s(p)=x\) defines \(M_{x}\subset M\) and the subfamily \(C_{M_{x}}\to M_{x}\). If \(C_{p}\to X\) is an immersion at \(s(p)\) for every \(p\in M\), then the derivative of \(\tau_{p}\) at \(s(p)\) gives \(d\tau:M\to\mathbb{P}_{X}(T_{X})\). If \(\tau_{p}:C_{p}\to X\) is an embedding, then we have the normal bundle \(N_{C_{p}}\). The following properties are various versions of saying that the curves \(\tau_{p}:C_{p}\to X\) 'move freely' in \(X\). 2. For every \(x\in X\), \(\tau|_{M_{x}}:C_{M_{x}}\setminus s(M_{x})\to X\) is equidimensional. 3. \(d\tau:M\to\mathbb{P}_{X}(T_{X})\) is smooth and surjective. 4.
\(d\tau\) has geometrically irreducible fibers. 5. \(H^{1}\big{(}C_{p},N_{C_{p}}(-s(p))\big{)}=0\) for every \(p\in M\). Our discussions in (25) show that the family of pointed, smooth \(H^{\operatorname{ci}}\) curves satisfies (2-4), but almost never satisfies (5). However, if \(X\) is separably rationally connected, then there is a way to create a family that also satisfies (5).

**40** (Combs).: Let \(X\) be a proper variety over an algebraically closed field \(K\). Let \(\mathbf{W}\) be a set of curves on \(X\) and \[M\stackrel{{\pi}}{{\longleftarrow}}T_{M}\stackrel{{\tau}}{{\longrightarrow}}X\quad\text{with}\quad s:M\to T_{M}\] a family of pointed curves as in (39.1); called teeth from now on, and denoted by \(\mathbf{T}\). A _comb_ with handle in \(\mathbf{W}\) and \(r\) teeth in \(\mathbf{T}\) is a curve \(C\uplus(\amalg_{i\in I}T_{p_{i}})\), where \(C\in\mathbf{W}\), \(p_{i}\in M\) and \(\sigma:\amalg_{i}s(p_{i})\hookrightarrow C^{\mathrm{sm}}\) is a gluing morphism such that \(\tau(s(p_{i}))=\sigma(s(p_{i}))\) for every \(i\). If we allow \(\sigma:\amalg_{i}s(p_{i})\hookrightarrow C^{\mathrm{nodal}}\), we get a _3-nodal comb_. It is an _embedded comb_ if the natural map to \(X\) is an embedding. Let \(\mathrm{EComb}(\mathbf{W},r\mathbf{T})\) denote the set of all such subschemes. A comb is called _balanced_ if each irreducible component of \(C\) has at least \(r/(2c)\) teeth attached at smooth points, where \(c\) is the number of irreducible components of \(C\). (2 is a pretty arbitrary choice; any larger constant would work for us.) These form a subset \(\mathrm{EComb}^{\mathrm{bal}}(\mathbf{W},r\mathbf{T})\subset\mathrm{EComb}(\mathbf{W},r\mathbf{T})\). If \(\mathbf{W},\mathbf{T}\) are constructible subsets of \(\mathrm{Hilb}_{1}(X)\), then so are \(\mathrm{EComb}(\mathbf{W},r\mathbf{T})\) and \(\mathrm{EComb}^{\mathrm{bal}}(\mathbf{W},r\mathbf{T})\). The following two theorems are slight generalizations of [1, Sec.2], see also [1, Sec.6].

**Theorem 41**.: _Let \(X\) be a smooth, projective variety over a perfect field \(k\). Assume that either \(X\) is src and \(\dim X\geq 3\), or src in codimension 1 and \(\dim X\geq 4\). Let \(\mathbf{R}\) be an irreducible component of the space of almost very free morphisms \(\mathbb{P}^{1}\to X\)._ 1. _Let_ \(\mathbf{W}\) _be any_ \(H^{\mathrm{ci}}\) _family. Then there is an_ \(r>0\) _and a geometrically irreducible open subset_ \(\mathbf{T}\subset\mathrm{EComb}(\mathbf{W},r\mathbf{R})\) _that satisfies (39.2-5)._

Proof. As we noted, the \(H^{\mathrm{ci}}\) family satisfies (39.2-3), which implies the same for \(\mathrm{EComb}(\mathbf{W},r\mathbf{R})\). The key property (39.5) is proved in [1, Sec.2]. One needs to pay some attention to (39.4). \(\mathbf{R}\) parametrizes certain maps \(g:\mathbb{P}^{1}\to X\). Their lifts \(\bar{g}:\mathbb{P}^{1}\to X\times\mathbb{P}^{1}\) are parametrized by \(\bar{\mathbf{R}}:=\mathbf{R}\times\mathrm{PGL}_{2}\). We have the universal rational curve map \(\rho:\mathbb{P}^{1}_{\bar{\mathbf{R}}}\to X\times\mathbb{P}^{1}\). Given an \(H^{\mathrm{ci}}\) curve \(L\subset X\times\mathbb{P}^{1}\), the curves \(R\uplus L\) are parametrized by \(\rho^{-1}(L)\). By Bertini's connectedness theorem, \(\rho^{-1}(L)\) is geometrically irreducible for general \(L\); see [12, 4.10 and 6.10]. If \(\mathbf{R}\) itself is geometrically irreducible, then \(\mathrm{EComb}(\mathbf{W},r\mathbf{R})\) is geometrically irreducible, and we are done. (In fact, one can always choose such an \(\mathbf{R}\).)
In general, if \(\mathbf{R}\) decomposes into geometrically irreducible components \(\mathbf{R}_{i}\), then \(\mathbf{T}\) consists of those combs that have the same number of teeth from each \(\mathbf{R}_{i}\). Next we use \(\mathbf{T}\) obtained in (41) as teeth of the next comb construction, but first a definition.

**Definition 42**.: A comb \(C\uplus(\amalg_{i}T_{i})\subset X\) is _\(d\)-free along the handle_ if for every subcurve \(C_{1}\subset C\), \[H^{1}\big{(}C_{1}\uplus(\amalg_{i}T_{i}),N_{C_{1}\uplus(\amalg_{i}T_{i})}(-D)\big{)}=0 \tag{42.1}\] for every effective Cartier divisor \(D\subset C_{1}^{\mathrm{sm}}\) of degree \(\leq d\). Given \(\mathbf{W},\mathbf{T}\), let \(\mathrm{EComb}^{\mathrm{d-free}}(\mathbf{W},r\mathbf{T})\) denote the set of all balanced, 3-nodal combs that are free and \(d\)-free along their handle.

_Comments._ Requiring (42.1) for all \(C_{1}\subset C\) will be important in (55). It would have been simpler to require (42.1) for every \(D\subset\big{(}C\uplus(\amalg_{i}T_{i})\big{)}^{\mathrm{sm}}\). However, the teeth \(T_{i}\) that we use contain the rational curves \(R_{i}\) of (41) as irreducible components. If \(X\) is src, we can choose these \(R_{i}\) to have arbitrarily positive normal bundle. If \(X\) is only src in codimension 1, then the \(R_{i}\) are contained in the fibers of \(X\to B\), thus their normal bundle always has a trivial summand. So \(H^{1}\big{(}R_{i},N_{R_{i}}(-2)\big{)}\) is never 0. We could fix this by replacing the reducible curves \(\mathbf{T}\) obtained in (41) by their general smooth deformations. This would necessitate some extra steps to work over nonclosed fields.

**Theorem 43**.: _Let \(Y\) be a smooth, projective variety and \(\mathbf{W}\subset\operatorname{Hilb}^{\operatorname{unobst}}_{1}(Y)\) a bounded set. Let \(\mathbf{T}\) be a family of teeth satisfying (39.2-5) and fix \(d>0\)._ _Then there is an \(r>0\) such that, for every \(C\in\mathbf{W}\), \(\operatorname{EComb}^{\operatorname{d-free}}(C,r\mathbf{T})\) is open and dense in \(\operatorname{EComb}^{\operatorname{bal}}(C,r\mathbf{T})\)._

Proof. Choose \(C\in\mathbf{W}\) and let \(D\subset C^{\operatorname{sm}}\) be an effective Cartier divisor of degree \(\leq d\). \(H^{1}\big{(}C,N_{C}(-D)\big{)}\) is dual to \(\operatorname{Hom}_{C}\big{(}N_{C}(-D),\omega_{C}\big{)}\). The computation of [10, Sec.2], which is done at smooth points of \(C\), shows that if \(\eta\in\operatorname{Hom}_{C}\big{(}N_{C}(-D),\omega_{C}\big{)}\), then attaching a tooth \(T_{c}\) at a point \(c\in\operatorname{Supp}(\eta)\) kills \(\eta\), provided \(\eta\) is nonzero on the tangent space of \(T_{c}\) at \(c\). Thus there is an \(r_{0}\) such that if we attach at least \(r_{0}\) general teeth at general points to all irreducible components of \(C\), then \(H^{1}\big{(}C,N_{C}(-D)\big{)}\) gets killed. Now take \(r\geq 2cr_{0}\), where \(c\) is the number of irreducible components of \(C\). Finally note that both the dimension of \(H^{1}\big{(}C,N_{C}(-D)\big{)}\) and the number of irreducible components are uniformly bounded in a bounded set of curves.

The main technical result is the following.

**Theorem 44**.: _Let \(Y\) be a smooth, projective variety of dimension \(\geq 3\) over a perfect field \(k\). Let \(\mathbf{W}\subset\operatorname{Hilb}^{\operatorname{nodal}}_{1}(Y)\) be a bounded, locally closed subset, and \(\mathbf{T}\) a family of pointed curves satisfying (39.2-5).
Fix \(d>0\)._ _Then there is an \(r>0\) such that the forgetful map_ \[\Pi:\operatorname{EComb}^{\operatorname{d-free}}(\mathbf{W},r\mathbf{T})\longrightarrow\mathbf{W}\] 1. _has connected fibers,_ 2. _satisfies the curve lifting property (48), and_ 3. _is surjective on \(L\)-points for every \(k\subset L\subset K\)_.

**Corollary 45**.: _Using the notation of (44), assume in addition that \(\mathbf{W}\) is geometrically connected. Then \(\operatorname{EComb}^{\operatorname{d-free}}(\mathbf{W},r\mathbf{T})\) is contained in a single geometrically irreducible component \(\operatorname{Env}(\mathbf{W},r\mathbf{T})\) of \(\operatorname{Hilb}_{1}(Y)\); called its envelope._

Proof. \(\operatorname{EComb}^{\operatorname{d-free}}(\mathbf{W},r\mathbf{T})\) is geometrically connected by (48). Thus the unique connected component of \(\operatorname{Hilb}^{\operatorname{free}}_{1}(Y)\) containing it is geometrically irreducible by (38.4).

**Corollary 46**.: _Using the notation of (44), assume in addition that \(\mathbf{W}\) is geometrically connected. If \([C_{1}],[C_{2}]\in\mathbf{W}(L)\), then \(C_{1}\) and \(C_{2}\) are algebraically equivalent over \(L\)._

Proof. By (44.3) there are free combs \(C_{i}\uplus(\amalg_{j}T_{ij})\) defined over \(L\). By (38.5), the \(C_{i}\uplus(\amalg_{j}T_{ij})\) are algebraically equivalent over \(L\). The \(\amalg_{j}T_{ij}\) are also smooth \(L\)-points of the \(r\)th symmetric power of \(\mathbf{T}\), hence the \(\amalg_{j}T_{ij}\) are also algebraically equivalent to each other over \(L\).

**47** (Proof of 44).: Let \(C\in\mathbf{W}\) with irreducible components \(\{C_{i}:i\in I\}\). The combs where we attach \(m_{i}\) teeth to \(C_{i}\) form an irreducible family. By (43) there is an \(r_{0}\) such that general such combs are \(d\)-free if \(m_{i}\geq r_{0}\) for every \(i\). If \(m_{i}>r_{0}\) for every \(i\), then, as we slide one of the teeth across a node of \(C\), we get a \(3\)-nodal comb, which is still \(d\)-free by (38.3). Thus, if \(r\geq(r_{0}+1)\cdot\#I\), then \(\operatorname{EComb}^{\operatorname{d-free}}(C,r\mathbf{T})\) is connected. In order to check the curve lifting property (48), let \((B,b_{1},b_{2})\) be an irreducible, smooth, \(2\)-pointed curve, and \(\tau:(B,b_{1},b_{2})\rightarrow\mathbf{W}\) a morphism. By pull-back we get a family \(C_{B}\to B\). Attach \(m\) general teeth to each irreducible component of the \(C_{i}\) at the points \(c_{ij}\in C_{i}\). After a further base change, we may assume that we have sections \(\sigma_{ij}:B^{\prime}\to C^{\prime}_{B}\) such that \(\sigma_{ij}(b^{\prime}_{i})=c_{ij}\). We keep \(B^{\prime}\) irreducible, thus we cannot control which irreducible components of \(C_{3-i}\) the points \(\sigma_{3-i,j}(b_{i})\) lie on. However, after attaching further teeth, we have a family in \(\operatorname{EComb}^{\operatorname{d-free}}(C,r\mathbf{T})\). Assume next that \(C\) is defined over a field \(L\) and \(C\uplus(\amalg_{j}T_{j})\) is \(d\)-free and defined over \(K\). Adding all the \(\operatorname{Gal}(K/L)\)-conjugate teeth, we get a comb \(C\uplus(\amalg_{j,\sigma}T^{\sigma}_{j})\) which is \(d\)-free and defined over \(L\). We need to make sure that the \(T^{\sigma}_{j}\) are attached to \(C\) at distinct points and that their number is uniformly bounded. Such a choice of the \(T_{j}\) is possible by (60).

**48** (Curve lifting property).: Let \(K\) be algebraically closed and \(g:Y\to X\) a morphism of \(K\)-schemes of finite type.
If \(g\) is not proper, it can happen that \(X\) is connected, the fibers of \(g\) are nonempty and connected, but \(Y\) is not. The implication, however, holds if \(g\) has the following property. Let \((B,b_{1},b_{2})\) be a 2-pointed, irreducible (not necessarily proper) curve and \(\tau:(B,b_{1},b_{2})\to X\) a morphism. Then there is a 2-pointed, irreducible curve \((B^{\prime},b^{\prime}_{1},b^{\prime}_{2})\) and a commutative diagram \[\begin{array}{ccc}(B^{\prime},b^{\prime}_{1},b^{\prime}_{2})&\stackrel{{\tau^{\prime}}}{{\to}}&Y\\ \downarrow&&\downarrow g\\ (B,b_{1},b_{2})&\stackrel{{\tau}}{{\to}}&X.\end{array}\]

## 7. Arithmetic applications

First we discuss the index and the genera of curves.

**49** (Euler characteristic of 1-cycles).: Let \(k\) be a field, \(X\) a proper \(k\)-scheme and \(\operatorname{ind}_{X}\) the _index_ of \(X\), that is, the gcd of the degrees of all 0-cycles on \(X\). Let \(C\subset X\) be an irreducible curve. We check in (50) that \(\operatorname{ind}_{X}\mid 2\chi(C)\) and set \[\chi^{*}(C):=\tfrac{2\chi(C)}{\operatorname{ind}_{X}}\mod 2.\] Thus, if \(\operatorname{ind}_{X}\) is odd then \(\chi^{*}(C)=0\) for every \(C\). We extend \(\chi^{*}\) to reducible curves and 1-cycles \(Z=\sum_{i}d_{i}C_{i}\) by linearity. We check in (51) that if \(Z\) is algebraically equivalent to 0 then \(\chi^{*}(Z)=0\). Thus \(\chi^{*}\) descends to a homomorphism \(\chi^{*}:A_{1}(X)\to\mathbb{Z}/2\mathbb{Z}\).

**50**.: Let \(k\) be a field, \(X\) a proper \(k\)-scheme and \(\operatorname{ind}_{X}\) the index of \(X\). Let \(p:C\to X\) be a proper, reduced curve mapping to \(X\) and \(\pi:\bar{C}\to C\) the normalization. Then \(p_{*}(\pi_{*}\mathcal{O}_{\bar{C}}/\mathcal{O}_{C})\) is a 0-cycle on \(X\), hence its degree is divisible by \(\operatorname{ind}_{X}\). Thus \(\chi(\bar{C})\equiv\chi(C)\mod\operatorname{ind}_{X}\). The canonical class of \(\bar{C}\) is represented by a 0-cycle, and \(\deg K_{\bar{C}}=-2\chi(\bar{C})\). Thus \(\operatorname{ind}_{X}\mid 2\chi(\bar{C})\). We have 2 possibilities: 1. if \(\operatorname{ind}_{X}\) is odd then \(\chi(C)\equiv 0\mod\operatorname{ind}_{X}\), and 2. if \(\operatorname{ind}_{X}\) is even then \(\chi(C)\equiv\chi^{*}(C)(\operatorname{ind}_{X}/2)\mod\operatorname{ind}_{X}\). (This is related to the elw indices defined in [11]. For \(d\)-cycles, we get a similar invariant modulo \(\gcd(\operatorname{elw}_{d-1},\mu(\operatorname{Td}_{d}))\) where \(\mu(\operatorname{Td}_{d})\) is the denominator appearing in the Todd class in dimension \(d\).)

**Proposition 51**.: _(cf. [11, 3, 4, 5]) Let \(X\) be a proper \(k\)-scheme. Then the function on 1-cycles \(Z\mapsto\chi^{*}(Z)\in\mathbb{Z}/2\mathbb{Z}\) is preserved by algebraic equivalence._

Proof. Let \(p:C\to X\) be a proper morphism from a nonsingular curve to \(X\) such that \(p_{*}[C]=[Z]\). The Euler characteristic of fibers of flat morphisms is a locally constant function on the base, so it is enough to prove that \[\chi(C)\equiv\sum_{i}d_{i}\chi(Z_{i})\mod\operatorname{ind}_{X}. \tag{51.1}\] By linearity, it is enough to prove this for a morphism \(\tau:C\to B\) of nonsingular, irreducible curves over \(X\). If \(\tau(C)\) has dimension 0, then \(\chi(C)=\deg\tau_{*}\mathcal{O}_{C}-\deg R^{1}\tau_{*}\mathcal{O}_{C}\), and both terms on the right hand side are divisible by \(\operatorname{ind}_{X}\).
If \(\tau\) is finite then \[\chi(C)=\chi(\tau_{*}\mathcal{O}_{C})=\deg\tau_{*}\mathcal{O}_{C}+\deg\tau\cdot\chi(\mathcal{O}_{B}).\] Here \(\deg\tau_{*}\mathcal{O}_{C}\) is divisible by \(\operatorname{ind}_{X}\).

**Lemma 52**.: _Let \(X\) be a smooth, projective variety over a perfect field \(k\) with 1-cycles \(Z_{1},Z_{2}\) such that \(\chi^{*}(Z_{1})=\chi^{*}(Z_{2})\). Let \(A\subset X\) be a smooth curve meeting all irreducible components of \(Z_{1}\cup Z_{2}\) at smooth points and such that \(\operatorname{ind}_{X}=\operatorname{ind}_{A}\)._ _Then there are connected, nodal \(k\)-curves \(\pi_{i}:C_{i}\to X\) such that \((\pi_{i})_{*}[C_{i}]=Z_{i}+2A\) and \(\chi(C_{1})=\chi(C_{2})\)._

Proof. We have connected, nodal \(k\)-curves \(\rho_{i}:D_{i}\to X\) such that \((\rho_{i})_{*}[D_{i}]=\operatorname{red}(Z_{i})+A\). If \(D_{ij}\subset D_{i}\) is an irreducible component, and \((\rho_{i})_{*}[D_{ij}]\) appears in \(Z_{i}\) with multiplicity \(d_{ij}\), then we replace \(D_{ij}\) with a degree \(d_{ij}\) cyclic cover \(C_{ij}\to D_{ij}\) that ramifies along all the nodes. Now we have connected, nodal \(k\)-curves \(C^{\prime}_{i}=\cup C_{ij}\) such that \((\pi^{\prime}_{i})_{*}[C^{\prime}_{i}]=Z_{i}\). Note that \(\operatorname{ind}_{X}\) divides \(\chi(C^{\prime}_{1})-\chi(C^{\prime}_{2})\). Also, since \(\operatorname{ind}_{X}=\operatorname{ind}_{A}\), the curve \(A\) has double covers \(A_{i}\to A\) such that \(\chi(A_{2})-\chi(A_{1})=\chi(C^{\prime}_{1})-\chi(C^{\prime}_{2})\). We now get \(C_{i}\) by gluing \(C^{\prime}_{i}\) to \(A_{i}\).

The stronger form of Theorem 6 is the following.

**Theorem 53**.: _Let \(X_{k}\) be a smooth, projective variety over a perfect field \(k\) with algebraic closure \(K\). Assume that \(X_{K}\) is src in codimension 1. Then_ \[A_{1}(X_{k})\to A_{1}(X_{K})\oplus\mathbb{Z}/2\mathbb{Z}\quad\text{given by}\quad[Z_{k}]\mapsto\big{(}[Z_{K}],\chi^{*}(Z_{k})\big{)}\] _is injective._

Proof. If \(Z_{k}\sim_{a}0\) then also \(Z_{K}\sim_{a}0\), and \(\chi^{*}(Z)=0\) by (51). Conversely, assume that \(Z_{K}\sim_{a}0\) and \(\chi^{*}(Z)=0\). By (52) there are connected, nodal \(k\)-curves \(\pi_{i}:C_{i}\to X\) such that \(Z\sim_{a}(\pi_{1})_{*}[C_{1}]-(\pi_{2})_{*}[C_{2}]\) and \(\chi(C_{1})=\chi(C_{2})\). Then, by Theorem 2, there is a nodal deformation equivalence \[\begin{matrix}C_{i}\uplus R&\subset&S&\stackrel{{\pi}}{{\longrightarrow}}&B\times X\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B,\end{matrix}\] where the \(\pi|_{C_{i}\uplus R}:C_{i}\uplus R\to X\) are defined over \(k\). Note that \[(\pi_{1})_{*}[C_{1}\uplus R]-(\pi_{2})_{*}[C_{2}\uplus R]=(\pi_{1})_{*}[C_{1}]-(\pi_{2})_{*}[C_{2}]\sim_{a}Z.\] Finally \((\pi_{1})_{*}[C_{1}\uplus R]-(\pi_{2})_{*}[C_{2}\uplus R]\sim_{a}0\) by (46) applied to \(C_{i}\uplus R\).

**54** (Proof of Theorem 7).: Injectivity follows from (6.1). Let \(Z\) be a 1-cycle on \(X_{K}\) such that \(Z\sim_{a}Z^{\sigma}\) for every \(\sigma\in\operatorname{Gal}(K/k)\). By Theorem 2 we may certify \(Z\sim_{a}Z^{\sigma}\) by a nodal curve \(C\uplus R\subset X\times\mathbb{P}^{3}\) that is nodal deformation equivalent to \(C^{\sigma}\uplus R\) for every \(\sigma\in\operatorname{Gal}(K/k)\). The union of these nodal equivalences and their \(\operatorname{Gal}(K/k)\)-conjugates gives a geometrically connected \(\mathbf{W}\subset\operatorname{Hilb}_{1}(X\times\mathbb{P}^{3})\) which contains all the \(C^{\sigma}\uplus R\). Choose a geometrically irreducible family of teeth \(\mathbf{T}\), and apply (45).
We get the envelope \(\operatorname{Env}(\mathbf{W},r\mathbf{T})\) that contains a free comb \((C\uplus R)\uplus(\amalg_{j}T_{j})\) and all of its \(\operatorname{Gal}(K/k)\)-conjugates \(\bigl{(}(C\uplus R)\uplus(\amalg_{j}T_{j})\bigr{)}^{\sigma}\). Thus \(\operatorname{Env}(\mathbf{W},r\mathbf{T})\) is geometrically irreducible. By assumption there are \(0\)-cycles \(P\) on \(\mathbf{T}\) and \(Q\) on \(\operatorname{Env}(\mathbf{W},r\mathbf{T})\) of degree \(1\). Let \(T_{P}\) and \(E_{Q}\) be the corresponding \(1\)-cycles on \(X\times\mathbb{P}^{3}\). Then \([E_{Q}]-r[T_{P}]\) is a \(1\)-cycle on \(X\times\mathbb{P}^{3}\) that is algebraically equivalent to \(C\uplus R\) over \(K\). Thus \([E_{Q}]-r[T_{P}]-[R]\) is algebraically equivalent to \(C\) over \(K\). Projecting it to \(X\) gives a \(1\)-cycle on \(X\) that is algebraically equivalent to \([Z]\) over \(K\).

**55**.: Pick \([Z_{0}]\in A_{1}(X_{0})\). We can represent it as the projection of a nodal curve \(C_{0}\subset Y:=X\times\mathbb{P}^{3}\). By (43) there is a free comb \(C_{0}\uplus(\amalg_{j}T_{j})\) where \(\amalg_{j}T_{j}\) is defined over \(k\). Thus \(\operatorname{Hilb}_{1}(Y/R)\to\operatorname{Spec}R\) is smooth at \([C_{0}\uplus(\amalg_{j}T_{j})]\) and also at \([\amalg_{j}T_{j}]\). Since \(R\) is Henselian, these lift to \([E_{g}],[T_{g}]\in\operatorname{Hilb}_{1}(Y_{g})\). Thus \([E_{g}]-[T_{g}]\) shows that \(A_{1}(X_{g})\to A_{1}(X_{0})\) is surjective. For injectivity, pick \(Z_{g}^{1},Z_{g}^{2}\) with specializations \(Z_{0}^{1}\sim_{a}Z_{0}^{2}\). Then \(\operatorname{ind}_{X_{0}}\mid\chi(Z_{0}^{1})-\chi(Z_{0}^{2})\) by (6.2). Note that \(\operatorname{ind}_{X_{0}}=\operatorname{ind}_{X_{g}}\), so \(\operatorname{ind}_{X_{g}}\mid\chi(Z_{g}^{1})-\chi(Z_{g}^{2})\). Thus, by (6.2), it is enough to show that \(Z_{g}^{1}\) and \(Z_{g}^{2}\) become algebraically equivalent after a field extension of \(k(g)\). By (37) we can choose nodal representatives \(C_{g}^{i}\to Z_{g}^{i}\) such that \(\chi(C_{g}^{1})=\chi(C_{g}^{2})\). After semistable reduction, we may replace the \(C_{g}^{i}\) by flat families of nodal curves \(C_{R}^{i}\subset X_{R}\times_{R}\mathbb{P}_{R}^{3}\). We apply Theorem 1 to get \[\begin{array}{ccccc}C_{0}^{i}\uplus E&\subset&S&\hookrightarrow&B\times X_{0}\times\mathbb{P}^{3}\\ \downarrow&&\downarrow&&\downarrow\\ b_{i}&\in&B&=&B.\end{array}\] Now apply (45) to \(\mathbf{W}:=B\) with \(d:=\max\{\#(C_{0}^{1}\cap E),\#(C_{0}^{2}\cap E)\}\). We get \(d\)-free combs \((C_{0}^{i}\uplus E)\uplus(\amalg_{j}T_{j}^{i})\) that are in the same irreducible component of \(\operatorname{Hilb}_{1}^{\operatorname{free}}(Y_{0})\). The \(C_{0}^{i}\cap E\) are smooth points on \(C_{0}^{i}\), so they can be lifted to a union of sections \(\Sigma^{i}\subset C_{R}^{i}\). We can also view \(C_{0}^{i}\cap E\) as a Cartier divisor on \(E\). Since the comb is \(d\)-free, \[H^{1}\bigl{(}E\uplus(\amalg_{j}T_{j}^{i}),N_{E\uplus(\amalg_{j}T_{j}^{i})}(-(C_{0}^{i}\cap E))\bigr{)}=0,\] hence every deformation of \(C_{0}^{i}\cap E\) extends to a deformation of \(E\uplus(\amalg_{j}T_{j}^{i})\). In particular, each \(E\uplus(\amalg_{j}T_{j}^{i})\) can be lifted to a flat family of curves \(V_{R}^{i}\) containing \(\Sigma^{i}\). Then \(C_{R}^{i}\cup V_{R}^{i}\) is flat over \(R\). Since the \(E\uplus(\amalg_{j}T_{j}^{i})=V_{0}^{i}\) are smooth points on the same irreducible component of \(\operatorname{Hilb}_{1}^{\operatorname{free}}(Y_{0})\), the same holds for the general fibers \(V_{g}^{i}\). Hence \(V_{g}^{1}\sim_{a}V_{g}^{2}\).
The same argument shows that \(C_{g}^{1}\cup V_{g}^{1}\sim_{a}C_{g}^{2}\cup V_{g}^{2}\). Therefore \(C_{g}^{1}\sim_{a}C_{g}^{2}\).

## 8. Loops in \(\operatorname{Chow}_{1}\)

**Definition 56**.: Let \(X\) be a scheme. A length \(m\) _chain_ in \(X\) is a collection of morphisms \[\mathcal{L}:=\bigl{\{}\pi^{i}:(W^{i},Z_{1}^{i},Z_{2}^{i})\to X\colon i=1,\dots,m\bigr{\}},\] where the \(W^{i}\) are irreducible schemes, \(Z_{1}^{i},Z_{2}^{i}\) are irreducible, closed subsets and \(\pi^{i}(Z_{2}^{i})=\pi^{i+1}(Z_{1}^{i+1})\) (as sets). \(\mathcal{L}\) is a _loop_ if \(\pi^{m}(Z_{2}^{m})=\pi^{1}(Z_{1}^{1})\) also holds. A loop is _contractible_ if all the \(\pi^{i}(W^{i})\) are contained in the same irreducible component of \(X\). A connected, pointed curve \(\pi^{i}_{B}:(B^{i},p^{i}_{1},p^{i}_{2})\to(W^{i},Z_{1}^{i},Z_{2}^{i})\) is a _representative_. Assume that we also have connected, pointed curves \(\pi^{i}_{C}:(C^{i},q^{i}_{1},q^{i}_{2})\to\pi^{i}(Z_{2}^{i})\) such that \(\pi^{i}\circ\pi^{i}_{B}(p^{i}_{2})=\pi^{i}_{C}(q^{i}_{1})\) and \(\pi^{i+1}\circ\pi^{i+1}_{B}(p^{i}_{1})=\pi^{i}_{C}(q^{i}_{2})\) (indexing modulo \(m\)). Alternating the \(B^{i}\) with the \(C^{i}\) gives a loop in \(X\). We call it a _representative_ of \(\mathcal{L}\).

**Definition 57**.: Let \(X\) be a proper scheme over a field and \[\mathcal{L}:=\big{\{}\pi^{i}:(W^{i},Z_{1}^{i},Z_{2}^{i})\to X\colon i=1,\dots,m\big{\}},\] a loop in \(\operatorname{Chow}_{1}(X)\). Let \(Z_{V}\subset V\subset\operatorname{Chow}_{1}(X)\) be irreducible subvarieties. The _translation_ of \(\mathcal{L}\) by \((V\supset Z_{V})\) is the loop made from the \[(W^{i}\oplus V,Z_{1}^{i}\oplus Z_{V},Z_{2}^{i}\oplus Z_{V})\to\operatorname{Chow}_{1}(X),\] where for \(U_{1},U_{2}\subset\operatorname{Chow}_{1}(X)\), we set \(U_{1}\oplus U_{2}:=\{Z_{1}+Z_{2}:Z_{i}\in U_{i}\}\).

**Theorem 58**.: _Let \(X\) be a smooth, projective variety, and \(\mathcal{L}\) a loop in \(\operatorname{Chow}_{1}(X)\). Then there is a 1-cycle \(Z\) and \(r>0\) such that the translation of \(\mathcal{L}\) by \([Z]\oplus r|H^{\operatorname{ci}}|\) is representable by a loop in \(\operatorname{Hilb}_{1}^{\operatorname{nodal}}(X\times\mathbb{P}^{3})\)._

Proof. Choose points \(z_{j}^{i}\in Z_{j}^{i}\). By (15), there are 2-nodal deformation equivalences \[(B^{i},b_{1}^{i},b_{2}^{i})\longleftarrow S^{i}\stackrel{{\pi^{i}}}{{\longrightarrow}}B^{i}\times X\] representing \((W^{i},z_{1}^{i},z_{2}^{i})\). After adding a suitable 1-cycle \(D\) and using (37), we may assume that the \(S^{i}\to B^{i}\) have connected fibers and their Euler characteristic is independent of \(i\). The problem is that, while \(S_{2}^{i}\to X\) and \(S_{1}^{i+1}\to X\) give the same 1-cycles on \(X\), they are not isomorphic as maps. Next we apply (32-33) to each \(\pi^{i}:S_{2}^{i}\to X\) and \(\pi^{i+1}:S_{1}^{i+1}\to X\). We get nodal curves \(R_{i}\) and 3-nodal deformation equivalences \[(C^{i},c_{1}^{i},c_{2}^{i})\longleftarrow T^{i}\stackrel{{\tau^{i}}}{{\longrightarrow}}C^{i}\times X\] such that \[T_{1}^{i}\cong S_{2}^{i}\uplus L_{i}\uplus R_{i}\quad\text{and}\quad T_{2}^{i}\cong S_{1}^{i+1}\uplus L_{i}\uplus R_{i}.\] We can now assemble a 3-nodal representative loop from the pieces \[S^{i}\uplus_{j}\big{(}L_{j}\uplus R_{j}\big{)}\quad\text{and}\quad T^{i}\uplus_{j\neq i}\big{(}L_{j}\uplus R_{j}\big{)}.\] Now take \([Z]=[D]+\sum_{j}[\tau_{*}^{j}(R_{j})]\). We can turn it nodal using (17). Combining this with (45) we get the following.
**Corollary 59**.: _Let \(X\) be a smooth, projective variety, and \(\mathcal{L}\) a loop in \(\operatorname{Chow}_{1}(X)\). Assume that \(X\) is src in codimension 1._ _Then a suitable translate of \(\mathcal{L}\) is contractible._

## 9. Controlling residue fields

In Sections 5-6 we need the following to control residue fields.

**Lemma 60**.: _Let \(k\) be a perfect field and \(\{p_{i}:X_{i}\to Y_{i}:i\in I\}\) finitely many nonconstant morphisms of \(k\)-schemes of finite type. Then there are closed points \(x_{i}\in X_{i}\) with images \(y_{i}=p_{i}(x_{i})\) such that the residue fields \(k(x_{i}),k(y_{i})\) are all isomorphic. Moreover, the following hold._ 1. _If_ \(k\) _is infinite, then there is an infinite set_ \(D\subset\mathbb{N}\) _such that, for every_ \(d\in D\)_, there are infinitely many such points with_ \(\deg(k(x_{i})/k)=d\)_._ 2. _If_ \(k\) _is finite, then there is an infinite set_ \(D\subset\mathbb{N}\) _such that, for every_ \(d\in D\)_, there are_ \(\geq\frac{1}{2}|k|^{d}\) _such points with_ \(\deg(k(x_{i})/k)=d\)_._ 3. _If the_ \(\{p_{i}:X_{i}\to Y_{i}:i\in I\}\) _are chosen from a bounded family, and_ \(\#I\) _is bounded, then a fixed_ \(D\subset\mathbb{N}\) _works for all of them._

Proof. We may assume that the \(X_{i},Y_{i}\) are affine and the \(p_{i}\) are flat. Choose dominant morphisms \(Y_{i}\to\mathbb{A}^{1}\) and let \(q:Z\to\mathbb{A}^{1}\) be the fiber product of all \(X_{i}\to Y_{i}\to\mathbb{A}^{1}\). Apply [10] to \(q:Z\to\mathbb{A}^{1}\). We get a dense set of closed points \(z\in Z\) such that \(k(z)=k(q(z))\). Let \(x_{i}\in X_{i}\) be the coordinate projections of \(z\). Then \(k(z)\supset k(x_{i})\supset k(y_{i})\supset k(q(z))\), so they are all equal. The claims (1-3) may be clearer from [10, 7.2.1].

A typical application is the following.

**Corollary 61**.: _Let \(k\) be a perfect field, \(X\) a \(k\)-scheme of finite type and \(Z_{1},Z_{2}\subset X\) positive dimensional, irreducible subsets. Let_ \[M\gets C_{M}\stackrel{{\pi}}{{\longrightarrow}}X\] _be an irreducible family of curves joining \(Z_{1}\) and \(Z_{2}\). That is, \(M\gets C_{M}\) is flat, and there are 2 sections \(\sigma_{i}:M\to C_{M}\) such that the \(\pi\circ\sigma_{i}\) give dominant morphisms from \(M\) to \(Z_{i}\). Then there is a dense set of closed points \(m\in M\) such that \(k(m)=k\bigl{(}\pi\circ\sigma_{1}(m)\bigr{)}=k\bigl{(}\pi\circ\sigma_{2}(m)\bigr{)}\)._
2308.12359
Moving Anchor Extragradient Methods For Smooth Structured Minimax Problems
This work introduces a moving anchor acceleration technique to extragradient algorithms for smooth structured minimax problems. The moving anchor is introduced as a generalization of the original algorithmic anchoring framework, i.e. the EAG method introduced in [32], in the hope of further acceleration. We show that the optimal order of convergence in terms of worst-case complexity on the squared gradient, O(1/k^2), is achieved by our new method (where k is the number of iterations). We have also extended our algorithm to a more general nonconvex-nonconcave class of saddle point problems using the framework of [14], which slightly generalizes [32]. We obtain similar order-optimal complexity results in this extended case. In both problem settings, numerical results illustrate the efficacy of our moving anchor algorithm variants, in particular by attaining the theoretical optimal convergence rate for first order methods, as well as suggesting a better optimized constant in the big O notation which surpasses the traditional fixed anchor methods in many cases. A proximal-point preconditioned version of our algorithms is also introduced and analyzed to match optimal theoretical convergence rates.
James K. Alcala, Yat Tin Chow, Mahesh Sunkula
2023-08-23T18:04:18Z
http://arxiv.org/abs/2308.12359v2
# Moving Anchor Extragradient Methods for Smooth Structured Minimax Problems

###### Abstract.

This work introduces a moving anchor acceleration technique to extragradient algorithms for smooth structured minimax problems. The moving anchor is introduced as a generalization of the original algorithmic anchoring framework, i.e. the EAG method introduced in [32], in the hope of further acceleration. We show that the optimal order of convergence in terms of worst-case complexity on the squared gradient, \(O(1/k^{2})\), is achieved by our new method (where \(k\) is the number of iterations). We have also extended our algorithm to a more general nonconvex-nonconcave class of saddle point problems using the framework of [14], which slightly generalizes [32]. We obtain similar order-optimal complexity results in this extended case. In both problem settings, numerical results illustrate the efficacy of our moving anchor algorithm variants, in particular by attaining the theoretical optimal convergence rate for first order methods, as well as suggesting a better optimized constant in the big O notation which surpasses the traditional fixed anchor methods in many cases. A proximal-point preconditioned version of our algorithms is also introduced and analyzed to match optimal theoretical convergence rates.

James K. Alcala is partially supported by a UCR Dissertation Year Program Award. Yat Tin Chow is partially supported by a Regents' Faculty Fellowship, University of California, Riverside.

The EAG method was introduced in [32] to tackle smooth-structured convex-concave minimax problems. With the primary assumptions being \(R\)-smoothness and convexity-concavity of (1.1), EAG achieved \(O(1/k^{2})=\Omega(1/k^{2})\) convergence rates on the squared gradient-norm; that is, the algorithm is order-optimal. This achievement has inspired a flurry of research activity in recent years [13], [28], [32]. To show optimality, the authors of [32] adapt arguments from [21], [22] to construct a worst-case analysis for a large class of algorithms that contains EAG. As anchoring is relatively new compared to extragradient, much of the literature written as a direct consequence of these results emphasizes anchoring and other Halpern-adjacent techniques [15], [30], [29]. However, the EAG class is not without limitations. The two sub-variants of EAG, EAG-V with varying step-size and EAG-C with constant step-size, have difficult convergence analyses and are both relegated to the convex-concave class of smooth functions. Addressing some of these issues, the authors of [14] introduced the Fast ExtraGradient Method, or FEG. This method generalizes the results of EAG and EG+ [6], bringing the order-optimal pairing of extragradient and anchoring to the setting of certain nonconvex-nonconcave problems (specifically, negative comonotone ones), and introduces an analysis dependent on terms that are less difficult to work with. Furthermore, their work improves upon the bounding constant attained in EAG in convex-concave problems while retaining optimal convergence rates for a broader class of problems that are of particular importance to machine learning practitioners, among many others. In the spirit of these previous works, our contributions are as follows. 1. We introduce a new technique, the 'moving anchor,' into the algorithmic settings of EAG-V and FEG under minimal assumptions.
We demonstrate that in both settings, introducing the moving anchor retains order-optimal \(O(1/k^{2})\) convergence rates across the range of parameter choices that the moving anchor makes available. One may recover the original fixed-anchor algorithms via parameter tuning, so our algorithms generalize much of the current anchoring literature. 2. For both the EAG-V moving anchor and the FEG moving anchor, we run a variety of numerical examples by comparing multiple versions of our moving anchor algorithms with their fixed anchor counterparts. These numerical examples demonstrate the efficacy of our algorithms: in every example, some moving anchor variant is either the fastest algorithm by a constant factor or is comparable to or better than its fixed anchor counterpart across all iterations. In addition, in many cases the fastest moving anchor algorithm appears to have a massive initial oscillation towards the fixed point that the fixed anchor algorithms seem to lack - this may be beneficial for reaching certain stopping criteria very quickly. 3. We develop a theoretical version of the moving anchor algorithms (in both the convex-concave EAG-V and nonconvex-nonconcave FEG) with a proximal anchoring step, with fruitful implications for future research.

## 2. Literature Review & Preliminaries

### Halpern iteration and anchoring

Introduced in 1967 and inspired by Browder's classical fixed point theorem, the Halpern iteration [11] is an algorithm built for approximating fixed point(s) of nonexpansive maps in a Hilbert space. Its convergence has been studied in [16], and it is extensively used in monotone inclusion-type problem settings [5], [30], [2]. A recent paper [29] draws an explicit connection between Halpern-inspired methods and Nesterov's AGM [23], linking two very active strains of acceleration literature. Directly inspired by Halpern, algorithmic anchoring was recently introduced in the literature [27] and has since been utilized to establish optimal \(O(1/k^{2})\) convergence rates for smooth-structured convex-concave minimax problems [32]. Since then, these methods have been extended to the nonconvex-nonconcave, negative comonotone problem setting [14] and analogous settings for composite problems in a multi-step framework [15]. Interestingly, this latter framework introduces 'semi'-anchoring, where only one part of the descent-ascent step is anchored, and a unique anchor occurs at each step of the multi-step. To our knowledge, this is the first instance of an anchoring method that goes beyond a single fixed anchor. In [30], the authors develop an anchored Popov's scheme and a splitting version of the EAG developed in [32], with a similar analysis.

### Extragradient methods

The extragradient method first appeared in [12] and has since been an important acceleration method extensively studied in the optimization literature [1], [31], [17], especially in the context of generative adversarial networks [9], [3] and adversarial training [18]. A classical result regarding these methods is that if \(X\subseteq\mathbb{R}^{n},Y\subseteq\mathbb{R}^{m}\) are compact domains, then for the duality gap \(\max_{y^{*}\in Y}L(x,y^{*})-\min_{x^{*}\in X}L(x^{*},y)\), the ergodic iterate of extragradient-type methods [19], [24] has an \(O(1/k)\) rate, which is order-optimal [25], [20]. Recently, it was shown that the last iterate convergence rate for extragradient also attains \(O(1/k)\) convergence [10], with _only_ monotonicity and Lipschitz assumptions.
This closes the gap between the last-iterate and ergodic-iterate convergence rates for extragradient discussed in [8]. Another recent interesting result was attained in [6], where the authors developed the Extragradient+ method, a variant of extragradient extended to various nonconvex-nonconcave problem settings. On the other hand, when the problem at hand has certain smoothness properties, the squared gradient norm \(\|\nabla L\|^{2}\) for extragradient-type algorithms recently achieved order-optimal convergence of \(O(1/k^{2})\) [32], [14], thanks in part to the synthesis with anchoring. This breaks the bound of the SCLI class of algorithms discussed in [8], which contains the unmodified extragradient, because EAG is _not_ SCLI, but specifically 2-CLI or in an extended class of 1-CLI algorithms. See Appendix D.2 of [32] for a best-iterate (NOT last iterate; at the time of writing, the last-iterate rate does not seem to be known) convergence analysis of extragradient, and Appendix E of [32] and [8] for more details on the relationships between these classes of algorithms. We conclude this discussion by remarking that for smooth problems, the bound on the squared gradient norm is meaningful in nonconvex-nonconcave problem settings, and as demonstrated in this and recent works, has room for numerical improvement.

### Preliminaries

A saddle function \(L:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\) is (non)convex-(non)concave if it is (non)convex in \(x\) for any fixed \(y\in\mathbb{R}^{m}\) and (non)concave in \(y\) for any fixed \(x\in\mathbb{R}^{n}\). A saddle point \((\hat{x},\hat{y})\in\mathbb{R}^{n}\times\mathbb{R}^{m}\) is any point such that the inequality \(L(\hat{x},y)\leq L(\hat{x},\hat{y})\leq L(x,\hat{y})\) holds for all \(x\in\mathbb{R}^{n}\) and \(y\in\mathbb{R}^{m}.\) Solutions to (1.1) are defined as saddle points. Throughout this paper, we assume the differentiability of \(L\), and we are especially interested in the so-called _saddle operator_ associated to \(L\), \[G_{L}(z)=\left[\begin{array}{c}\nabla_{x}L(x,y)\\ -\nabla_{y}L(x,y)\end{array}\right] \tag{2.1}\] where the \(L\) subscript is omitted when the underlying saddle function is known. When our problem is convex-concave, the operator (2.1) is known to be monotone [26], meaning \(\langle G_{L}(z_{1})-G_{L}(z_{2}),z_{1}-z_{2}\rangle\geq 0\ \forall z_{1},z_{2}\in\mathbb{R}^{n}\times\mathbb{R}^{m}.\) We assume that this operator \(G_{L}\) is \(R\)-Lipschitz, or has certain stronger Lipschitz properties we detail later; this is sometimes referred to as \(L\) being \(R\)-smooth. With these properties in mind, one may introduce an assumption that generalizes monotonicity: let \(\rho\in(-\frac{1}{2R},+\infty).\) In this paper, we assume that when \(G_{L}\) is _not_ monotone, it satisfies \[\langle G_{L}(z_{1})-G_{L}(z_{2}),z_{1}-z_{2}\rangle\geq\rho\|G_{L}(z_{1})-G_{L}(z_{2})\|^{2}\ \forall z_{1},z_{2}\in\mathbb{R}^{n}\times\mathbb{R}^{m}.\] When \(\rho>0\), this is called co-coercivity; when \(\rho=0\), this recovers monotonicity; when \(\rho<0\), this is called negative comonotonicity. This latter condition on (2.1) allows one to consider certain nonconvex-nonconcave problems \(L\), and is also going to be a central focus of this work. Note, however, that these assumptions need not cover all smooth nonconvex-nonconcave problems of interest. Figure 1, Table 1, and Example 1 of [14] illustrate broader problem classes than negative comonotonicity that retain smoothness while being nonconvex-nonconcave.
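To make the preliminaries concrete, the following minimal Python sketch (our illustration, not code from the paper) evaluates the saddle operator (2.1) for the bilinear toy problem \(L(x,y)=x^{\top}Ay\) and checks the monotonicity inequality numerically; for this \(L\) one has \(\langle G_{L}(z_{1})-G_{L}(z_{2}),z_{1}-z_{2}\rangle=0\) exactly, the boundary case \(\rho=0\).

```python
import numpy as np

def saddle_operator(A, z):
    """Saddle operator G_L(z) = (grad_x L, -grad_y L) for L(x, y) = x^T A y."""
    n = A.shape[0]
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

rng = np.random.default_rng(0)
n, m = 3, 3
A = rng.standard_normal((n, m))
z1, z2 = rng.standard_normal(n + m), rng.standard_normal(n + m)

G1, G2 = saddle_operator(A, z1), saddle_operator(A, z2)
# For bilinear L the inner product below is exactly zero (rho = 0, i.e. monotone):
print(np.dot(G1 - G2, z1 - z2))  # ~0 up to floating-point error
```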
Finally, we note that although \(\nabla L\neq G_{L}\), we have \(\|\nabla L\|=\|G_{L}\|\), so we may use these expressions interchangeably.

## 3. Original Algorithm, EAG-V

The Extragradient Anchored Algorithm, or EAG with varying step size (EAG-V), has a simple statement and a relatively simple proof of convergence: \[z^{k+1/2} =z^{k}+\beta_{k}(z^{0}-z^{k})-\alpha_{k}G(z^{k})\] \[z^{k+1} =z^{k}+\beta_{k}(z^{0}-z^{k})-\alpha_{k}G(z^{k+1/2})\] \[\alpha_{k+1} =\frac{\alpha_{k}}{1-\alpha_{k}^{2}R^{2}}\left(1-\frac{(k+2)^{2}}{(k+1)(k+3)}\alpha_{k}^{2}R^{2}\right)\] \[=\alpha_{k}\left(1-\frac{1}{(k+1)(k+3)}\frac{\alpha_{k}^{2}R^{2}}{1-\alpha_{k}^{2}R^{2}}\right)\] with \(\alpha_{0}\in(0,1/R)\), and \(R\) a predetermined constant. Here, \(G\) is the so-called saddle operator, \(G:=(\nabla_{x}L,-\nabla_{y}L)\), and \(L\) is a convex-concave saddle function in a minimax optimization problem. It is a nontrivial fact that \(G\) is monotone [32]. The structure of the \(\alpha_{k}\)'s and \(\beta_{k}\)'s is detailed below alongside auxiliary sequences \(A_{k}\) and \(B_{k}\). We state the convergence of this algorithm as a theorem and relay the details of its convergence via a specific Lyapunov functional as a lemma. For more details, including a version of EAG with a non-varying step size, see [32].

**Theorem 3.1** (EAG-V convergence rate [32]).: _Assume \(L:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) is an \(R\)-smooth convex-concave function with a saddle point \(z^{*}.\) Assume further that \(\alpha_{0}\in(0,\frac{3}{4R})\) and define \(\alpha_{\infty}=\lim_{k\rightarrow\infty}\alpha_{k}.\) Then EAG-V converges, with rate_ \[\|G(z^{k})\|^{2}\leq\frac{4(1+\alpha_{0}\alpha_{\infty}R^{2})}{\alpha_{\infty}^{2}}\frac{\|z^{0}-z^{*}\|^{2}}{(k+1)(k+2)}\] _where \(G=(\nabla_{x}L,-\nabla_{y}L).\)_

Since \(z^{*}\) is the saddle point, this theorem demonstrates \(O(1/k^{2})\) convergence of the algorithm. To derive this order of convergence, the following lemma is necessary.

**Lemma 3.2** (EAG Lyapunov Functional [32]).: _Let \(\{\beta_{k}\}_{k\geq 0}\subseteq(0,1)\) and \(\alpha_{0}\in(0,\frac{1}{R})\) be given. Consider the following sequences defined by the given recurrence relations for \(k\geq 0:\)_ \[A_{k} =\frac{\alpha_{k}}{2\beta_{k}}B_{k}\] \[B_{k+1} =\frac{B_{k}}{1-\beta_{k}}\] \[\alpha_{k+1} =\frac{\alpha_{k}\beta_{k+1}(1-\alpha_{k}^{2}R^{2}-\beta_{k}^{2})}{\beta_{k}(1-\beta_{k})(1-\alpha_{k}^{2}R^{2})} \tag{3.1}\] _where \(B_{0}=1.\) Assume that \(\alpha_{k}\in(0,\frac{1}{R})\) holds for all \(k\geq 0,\) and that \(L\) is \(R\)-smooth and convex-concave. Then the sequence \(\{V_{k}\}_{k\geq 0}\) defined as_ \[V_{k}:=A_{k}\|G(z^{k})\|^{2}+B_{k}\langle G(z^{k}),z^{k}-z^{0}\rangle \tag{3.2}\] _is non-increasing._

Within (3.2), choosing \(\beta_{k}=\frac{1}{k+2}\) yields \(B_{k}=k+1,A_{k}=\frac{\alpha_{k}(k+2)(k+1)}{2},\) and the construction of \(\alpha_{k+1}\) in (3.1).

## 4. EAG-V with moving anchor

In this section, we construct and analyze a new version of the EAG-V algorithm. Here, the anchoring point moves at each time step. We call this the moving anchor algorithm; it utilizes a similar extragradient step. Further down, we demonstrate comparable rates of convergence to the original EAG algorithm with varying step-size.
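Before stating the moving-anchor iteration, the following is a minimal Python sketch of the fixed-anchor EAG-V updates above (our own illustration on a hypothetical bilinear test problem, not code from the paper); the moving-anchor method defined next modifies only the anchor term.

```python
import numpy as np

def eag_v(G, z0, R, alpha0=None, iters=200):
    """Fixed-anchor EAG-V sketch; G is the saddle operator, R its Lipschitz constant."""
    alpha = alpha0 if alpha0 is not None else 0.5 / R  # alpha_0 in (0, 3/(4R)), cf. Theorem 3.1
    z = z0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        z_half = z + beta * (z0 - z) - alpha * G(z)   # extragradient half-step
        z = z + beta * (z0 - z) - alpha * G(z_half)   # full step, anchored at z0
        a2 = (alpha * R) ** 2                          # varying step-size update
        alpha = alpha * (1.0 - a2 / ((k + 1) * (k + 3) * (1.0 - a2)))
    return z

# Hypothetical test problem L(x, y) = x^T A y, whose unique saddle point is the origin.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
G = lambda z: np.concatenate([A @ z[3:], -A.T @ z[:3]])  # saddle operator (2.1)
R = np.linalg.norm(A, 2)                                 # Lipschitz constant of G
z_final = eag_v(G, rng.standard_normal(6), R)
print(np.linalg.norm(G(z_final)))  # ||G(z^k)|| decays like O(1/k)
```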
For the \(k\)-th iterate, starting from \(z^{0}\in\mathbb{R}^{n}\times\mathbb{R}^{m}\), the EAG-V with moving anchor is defined as \[z^{0} =\bar{z}^{0}\] \[z^{k+1/2} =z^{k}+\frac{1}{k+2}(\bar{z}^{k}-z^{k})-\alpha_{k}G(z^{k}) \tag{4.1}\] \[z^{k+1} =z^{k}+\frac{1}{k+2}(\bar{z}^{k}-z^{k})-\alpha_{k}G(z^{k+1/2}) \tag{4.2}\] \[\bar{z}^{k+1} =\bar{z}^{k}+\gamma_{k+1}G(z^{k+1}) \tag{4.3}\] The major structural difference here is the introduction of the regularly-updating \(\bar{z}^{k}\), analogous to the role of \(z^{0}\) in the EAG-V detailed in the previous section. (4.3) is the regular update for this anchor; it depends on the algorithm update (4.2) rather than exclusively on itself. All previously defined terms are the same as in the fixed anchor algorithm, now with \[c_{k+1} \leq\frac{c_{k}}{1+\delta_{k}}, \tag{4.4}\] \[\gamma_{k+1} \leq\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})}. \tag{4.5}\] We choose \(\delta_{k}\) so that \(\sum_{k=0}^{\infty}\log(1+\delta_{k})<\infty\). The \(c_{k}\) terms are part of the definition of the Lyapunov functional we use in our analysis; these come in handy when we use \(\gamma_{k}\) to absorb terms. Let \(c_{\infty}:=\lim_{k\to\infty}c_{k}=c_{0}\prod_{k=0}^{\infty}\frac{1}{1+\delta_{k}}\). As a general rule, one wishes to choose \(c_{0}\) so that \(c_{\infty}\) satisfies some specified convergence constraint; these constraints will appear throughout the major convergence theorems in this section and the next section. While the choice of \(c_{0}\) is therefore limited according to certain problem/algorithm constraints, in general there seems to be much freedom in choosing \(c_{0}\) and the sequence \(\{\delta_{k}\}.\) For the rest of this article, we take (4.4) and (4.5) to be given with equality signs instead of inequalities. Before we proceed with the analysis, we emphasize that the original EAG-V algorithm may be recovered simply by setting \(\gamma_{k+1}:=0\) for all \(k.\) Now, we give the definition of the Lyapunov functional and show that it is nonincreasing:

**Lemma 4.1**.: _The Lyapunov functional_ \[V_{k}:=A_{k}\|G(z^{k})\|^{2}+B_{k}\langle G(z^{k}),z^{k}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2},\] _where all constants have been previously defined, is nonincreasing._

Proof.: First we reorganize some of the algorithm statements and label them for use later. \[z^{k}-z^{k+1}=\beta_{k}(z^{k}-\bar{z}^{k})+\alpha_{k}G(z^{k+1/2}) \tag{4.6}\] \[z^{k+1/2}-z^{k+1}=\alpha_{k}(G(z^{k+1/2})-G(z^{k})) \tag{4.7}\] \[\bar{z}^{k}-z^{k+1}=(1-\beta_{k})(\bar{z}^{k}-z^{k})+\alpha_{k}G(z^{k+1/2}) \tag{4.8}\] \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1}) \tag{4.9}\] (4.6) comes from rearranging (4.2), (4.7) comes from taking the difference between (4.1) and (4.2), (4.8) is \(\bar{z}^{k}\) minus (4.2), and (4.9) is (4.3) rearranged. The overall goal of this proof is to show that the difference \(V_{k}-V_{k+1}\) is nonnegative. \[V_{k}-V_{k+1}\] \[\geq A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}\underbrace{+B_{k}\langle z^{k}-\bar{z}^{k},G(z^{k})\rangle}_{\mathrm{I}}\] \[\underbrace{-B_{k+1}\langle z^{k+1}-\bar{z}^{k+1},G(z^{k+1})\rangle}_{\mathrm{II}}+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}\] \[\underbrace{-\frac{B_{k}}{\beta_{k}}\langle z^{k}-z^{k+1},G(z^{k})-G(z^{k+1})\rangle}_{\mathrm{III}}\] Notice that the last term above, III, is not part of the definition of either \(V_{k}\) or \(V_{k+1}.\) It has been introduced to aid in the proof and is nonnegative by the monotonicity of \(G\).
We would like to absorb any terms containing \(\bar{z}^{k}\) and \(\bar{z}^{k+1}\). To accomplish this, our next goal is to focus on turning the labeled parts (I, II, III) of the above line into \[\underbrace{\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle+\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}-\frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle}_{\mathrm{IV}}.\] We now detail this process. The term I does not change. For II, on the other hand, we have \[\underbrace{-B_{k+1}\langle z^{k+1}-\bar{z}^{k+1},G(z^{k+1})\rangle}_{\mathrm{II}}\] \[=B_{k+1}\langle\bar{z}^{k}-z^{k+1},G(z^{k+1})\rangle-B_{k+1}\langle\bar{z}^{k}-\bar{z}^{k+1},G(z^{k+1})\rangle \tag{4.10}\] \[=B_{k+1}\langle(1-\beta_{k})(\bar{z}^{k}-z^{k})+\alpha_{k}G(z^{k+1/2}),G(z^{k+1})\rangle-B_{k+1}\langle-\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle \tag{4.11}\] where the first equality comes from recognizing \(z^{k+1}-\bar{z}^{k+1}=z^{k+1}-\bar{z}^{k}+\bar{z}^{k}-\bar{z}^{k+1}\) and the second comes from substituting in equalities (4.8) and (4.9). For III, \[\underbrace{-\frac{B_{k}}{\beta_{k}}\langle z^{k}-z^{k+1},G(z^{k})-G(z^{k+1})\rangle}_{\mathrm{III}}\] \[=-\frac{B_{k}}{\beta_{k}}\langle z^{k}-z^{k+1},G(z^{k})\rangle+\frac{B_{k}}{\beta_{k}}\langle z^{k}-z^{k+1},G(z^{k+1})\rangle \tag{4.12}\] \[=-\frac{B_{k}}{\beta_{k}}\langle\beta_{k}(z^{k}-\bar{z}^{k})+\alpha_{k}G(z^{k+1/2}),G(z^{k})\rangle+\frac{B_{k}}{\beta_{k}}\langle\beta_{k}(z^{k}-\bar{z}^{k})+\alpha_{k}G(z^{k+1/2}),G(z^{k+1})\rangle,\] where the last equality is a result of substituting (4.6) into each of the first arguments of the two terms in (4.12). Now, we can simplify everything we have done to obtain IV. \[\underbrace{B_{k}\langle z^{k}-\bar{z}^{k},G(z^{k})\rangle}_{\text{I}} \tag{4.13}\] \[\underbrace{+B_{k+1}\langle(1-\beta_{k})(\bar{z}^{k}-z^{k})+\alpha_{k}G(z^{k+1/2})+\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle}_{\text{II}} \tag{4.14}\] \[\underbrace{-\frac{B_{k}}{\beta_{k}}\langle\beta_{k}(z^{k}-\bar{z}^{k})+\alpha_{k}G(z^{k+1/2}),G(z^{k})\rangle}_{\text{III}} \tag{4.15}\] \[\underbrace{+\frac{B_{k}}{\beta_{k}}\langle\beta_{k}(z^{k}-\bar{z}^{k})+\alpha_{k}G(z^{k+1/2}),G(z^{k+1})\rangle}_{\text{III}} \tag{4.16}\] From here, we'll use two facts. First, \(B_{k+1}=\frac{B_{k}}{1-\beta_{k}}.\) This allows us to combine and cancel the very first component of (4.14) with the \(\beta_{k}(z^{k}-\bar{z}^{k})\) component of (4.16). Additionally, (4.13) cancels with the \(\beta_{k}(z^{k}-\bar{z}^{k})\) component of (4.15). This leaves us with \[= \underbrace{\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle+B_{k+1}\langle\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle}_{\text{II}}\] \[\underbrace{-\frac{B_{k}\alpha_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})\rangle+\frac{B_{k}\alpha_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k+1})\rangle}_{\text{III}}\] \[= \underbrace{\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle+\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}}_{\text{IV}}\] \[\underbrace{-\frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle}_{\text{IV}},\] where the last equality is a result of applying the anchor update to get the norm squared term, and combining the latter two terms while leaving \(G(z^{k+1/2})\) fixed.
Thus, we've shown \[A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}\] \[+ B_{k}\langle z^{k}-\bar{z}^{k},G(z^{k})\rangle-B_{k+1}\langle z^{k+1}-\bar{z}^{k+1},G(z^{k+1})\rangle\] \[+ c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}-\frac{B_{k}}{\beta_{k}}\langle z^{k}-z^{k+1},G(z^{k})-G(z^{k+1})\rangle\] \[= A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle \tag{4.17}\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle \tag{4.18}\] \[+ c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}+\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2} \tag{4.19}\] Now, we continue with our goal of absorbing terms. From the Cauchy-Schwarz inequality, we have that \[\|z^{*}-\bar{z}^{k+1}\|^{2}\leq(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}+(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2} \tag{4.20}\] and from the algorithm definition, \[c_{k+1}=\frac{c_{k}}{1+\delta_{k}},\quad\gamma_{k+1}=\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})}. \tag{4.21}\] We apply (4.20) to (4.19) to obtain \[\geq A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[- c_{k+1}\big{(}(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}+(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\big{)}+\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\] and now we apply (4.21): \[\geq A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[- c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}+\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\] \[= A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle+0.\] At this point, showing that the remaining terms are nonnegative is nontrivial, but directly follows the arguments made in the proof of Lemma 2 in [32]. Specifically, following (29) onwards in [32], one will find that \[A_{k}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\alpha_{k}B_{k+1}\langle G(z^{k+1/2}),G(z^{k+1})\rangle\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1/2}),G(z^{k})-G(z^{k+1})\rangle\] \[\geq 0,\] which completes the proof.

Now we have the primary result of this section.

**Theorem 4.2**.: _The EAG-V algorithm with moving anchor, described above, together with the Lyapunov functional described in Lemma 4.1, has convergence rate_ \[\|G(z^{k})\|^{2}\leq\frac{4(\alpha_{0}R^{2}+c_{0})\|z^{0}-z^{*}\|^{2}}{\alpha_{\infty}(k+1)(k+2)}\] _as long as we assume \(c_{\infty}\alpha_{\infty}\geq 1\)._

Proof.: For the most part, this argument parallels the analogous argument found in [32]. We use the Lyapunov functional to isolate and bound \(\|G(z^{k})\|^{2}\). \[V_{k} \leq V_{0}=\alpha_{0}\|G(z^{0})\|^{2}+c_{0}\|z^{0}-z^{*}\|^{2} \tag{4.22}\] \[\leq(\alpha_{0}R^{2}+c_{0})\|z^{0}-z^{*}\|^{2}\] by \(R\)-smoothness, that is, the \(R\)-Lipschitz continuity of \(G\) together with \(G(z^{*})=0\).
On the other hand, \[V_{k} =A_{k}\|G(z^{k})\|^{2}+B_{k}\langle G(z^{k}),z^{k}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[\geq A_{k}\|G(z^{k})\|^{2}+B_{k}\langle G(z^{k}),z^{*}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[\geq\frac{A_{k}}{2}\|G(z^{k})\|^{2}+(c_{k}-\frac{B_{k}^{2}}{2A_{k}})\|z^{*}-\bar{z}^{k}\|^{2}\] \[=\frac{\alpha_{k}(k+1)(k+2)}{4}\|G(z^{k})\|^{2}+(c_{k}-\frac{k+1}{\alpha_{k}(k+2)})\|z^{*}-\bar{z}^{k}\|^{2}\] \[\geq\frac{\alpha_{\infty}}{4}(k+1)(k+2)\|G(z^{k})\|^{2}+(c_{\infty}-\frac{1}{\alpha_{\infty}})\|z^{*}-\bar{z}^{k}\|^{2}\] \[\geq\frac{\alpha_{\infty}}{4}(k+1)(k+2)\|G(z^{k})\|^{2}\] As long as \(c_{\infty}\geq\frac{1}{\alpha_{\infty}}\), the second to last line above is nonnegative, and we may focus on the inequality given to us by the last line above: \[\frac{\alpha_{\infty}}{4}(k+1)(k+2)\|G(z^{k})\|^{2}\leq(\alpha_{0}R^{2}+c_{0})\|z^{0}-z^{*}\|^{2}.\] Dividing both sides by the constant \(\frac{\alpha_{\infty}}{4}(k+1)(k+2)\) gives the desired result.

### Proof of convergence for \(-\gamma_{k}\)

We next show that, for a slightly restricted choice of \(\gamma_{k}\), our proof works for \(-\gamma_{k}\) in place of \(\gamma_{k}\). This is of interest as numerical results indicate that certain problem settings favor \(-\gamma_{k}\) in terms of convergence speed by a constant, while \(+\gamma_{k}\) seems to be favored in other settings.

**Lemma 4.3**.: _In the setting of Lemma 4.1, replace \(\gamma_{k}\) with \(-\gamma_{k}\) in the definition of the EAG-V algorithm with moving anchor, and suppose \(\gamma_{k+1}=\min\Big{\{}\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})},\frac{e_{k+1}}{2B_{k+1}\|G(z^{k+1})\|^{2}}\Big{\}},\) where \(\sum e_{k}<\infty.\) Then our Lyapunov functional is nonincreasing, and we attain the same order of convergence in our algorithm._

Proof.: First, note that the anchor update (4.3) now moves in the direction \(-\gamma_{k+1}G(z^{k+1})\), with \[-\gamma_{k+1}\geq-\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})}, \tag{4.23}\] resulting in the following modification to (4.9): \[\bar{z}^{k}-\bar{z}^{k+1}=\gamma_{k+1}G(z^{k+1}). \tag{4.24}\] We see the first adjustment to the previous lemma in the transition from line (4.10) to (4.11); note that we focus only on the terms dependent on (4.24): \[-B_{k+1}\langle\bar{z}^{k}-\bar{z}^{k+1},G(z^{k+1})\rangle\] \[=-B_{k+1}\langle\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle\] \[=-B_{k+1}\langle(2\gamma_{k+1}-\gamma_{k+1})G(z^{k+1}),G(z^{k+1})\rangle\] \[=-B_{k+1}\langle 2\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle+B_{k+1}\langle\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle. \tag{4.25}\] The latter term in line (4.25) cancels within a quadratic form as in the proof of the original lemma. Continuing, one is left with the term \(-B_{k+1}\langle 2\gamma_{k+1}G(z^{k+1}),G(z^{k+1})\rangle.\) At this point, if we proceed as in Lemma 4.1, we end up with the inequality \[V_{k}-V_{k+1}\geq-2\gamma_{k+1}B_{k+1}\|G(z^{k+1})\|^{2}\] or, after rearranging, \[V_{k}-V_{k+1}+2\gamma_{k+1}B_{k+1}\|G(z^{k+1})\|^{2}\geq 0,\] so that \(V_{k+1}\) can exceed \(V_{k}\) by at most \(2\gamma_{k+1}B_{k+1}\|G(z^{k+1})\|^{2}\). Now, because \[\gamma_{k+1}\leq\frac{e_{k+1}}{2B_{k+1}\|G(z^{k+1})\|^{2}},\] when we proceed as in the proof of Theorem 4.2 to show convergence, getting to the line (4.22), we get the inequality \[V_{k}\leq V_{0}+\sum_{j=1}^{k-1}2\gamma_{j}B_{j}\|G(z^{j})\|^{2}\] \[\leq V_{0}+\sum_{j=1}^{k-1}e_{j}\] \[\leq V_{0}+\sum_{j=1}^{\infty}e_{j}\] \[\leq CV_{0},\] for some constant \(C\geq 1\).
This completes the proof that our algorithm has both a nonincreasing Lyapunov functional and the \(O(1/k^{2})\) convergence under the assumption of a (slightly restricted) negative \(\gamma_{k}\) term. It is worth noting that \(z^{k+1}\) is computed before \(\gamma_{k+1}\) within the algorithm, so the restriction in Lemma 4.3 and others like it may not be too restrictive to work with. Our toy numerical tests allowed us to simply put a negative sign in front of the \(\gamma_{k}\) terms to attain convergence matching the optimal rate, which is in some cases markedly faster. Unfortunately, these results do not give much of an indication as to how exactly the tuning of \(\gamma_{k}\) benefits numerical convergence rates. We leave the theoretical exploration of this phenomenon to future work.

## 5. Moving anchor in nonconvex/nonconcave minimax problems

In [14], the methods in [32] are expanded to a broader class of smooth structured nonconvex-nonconcave minimax problems at the same accelerated \(O(1/k^{2})\) convergence rate. This new algorithm is called the FEG, or Fast ExtraGradient method. We bring the idea of the moving anchor to this more general setting, and show that a moving anchor with more or less the same conditions as in the convex-concave setting is also a feasible approach in this class of problems. Below we give the explicit definition of this FEG modified via a moving anchor, and state its convergence results via a nonincreasing Lyapunov functional and a theorem bounding the squared gradient norm. The FEG with moving anchor, following [14], is given as \[z^{k+1/2} =z^{k}+\beta_{k}(\bar{z}^{k}-z^{k})-(1-\beta_{k})(\alpha_{k}+2\rho_{k})G(z^{k})\] \[z^{k+1} =z^{k}+\beta_{k}(\bar{z}^{k}-z^{k})-\alpha_{k}G(z^{k+1/2})-(1-\beta_{k})2\rho_{k}G(z^{k})\] \[\bar{z}^{k+1} =\bar{z}^{k}+\gamma_{k+1}G(z^{k+1})\] \[c_{k+1} =\frac{c_{k}}{1+\delta_{k}}\] \[\gamma_{k+1} =\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})}\] where \(\{\delta_{k}\}\) is chosen so that \(\sum_{i=0}^{\infty}\log(1+\delta_{i})<\infty,\) with \(\{\gamma_{k}\},\)\(\{c_{k}\},\) and \(c_{\infty}\) chosen in the same manner as in the EAG-V with moving anchor, and, as before, \(\bar{z}^{0}=z^{0}\). Before we state the results, two remarks are needed:

**Remark 5.1**.: For some \(\rho\in\big{(}-\frac{1}{2R},\infty\big{)}\), \(\langle G(z)-G(z^{\prime}),z-z^{\prime}\rangle\geq\rho\|G(z)-G(z^{\prime})\|^{2}\;\forall z,z^{\prime}\in\mathbb{R}^{m}\times\mathbb{R}^{n}\). (Note \(z,z^{\prime}\) are vectors, not matrices.) This is known as \(\rho\)-comonotonicity, and has three sub-conditions. For \(\rho>0,\) we have cocoercivity; for \(\rho=0,\) we have monotonicity; and with \(\rho<0\) we have (negative) comonotonicity. This condition will hold whenever any FEG variant is discussed throughout this work.

**Remark 5.2**.: As in the EAG with moving anchor, one may recover the original fixed anchor FEG by setting \(\gamma_{k}=0\) for all \(k.\) This allows us to state our algorithm while also offering an easy reference point for the original fixed anchor version.

**Lemma 5.3**.: _Suppose that the sequences \(\{\alpha_{k}\}_{k\geq 0},\)\(\{\beta_{k}\}_{k\geq 0},\) and \(\{R_{k}\}_{k\geq 0}\subset(0,\infty),\) and \(\{\rho_{k}\}_{k\geq 0}\subset\mathbb{R}\) satisfy \(\alpha_{0}\in(0,\infty),\alpha_{k}\in(0,\frac{1}{R_{k}}),\beta_{0}=1,\{\beta_{k}\}_{k\geq 1}\subseteq(0,1)\) for all \(k\).
Additionally, assume that the following bound, Lipschitz conditions, and comonotonicity conditions respectively hold for all \(k\geq 0:\)_ \[\frac{(1-\beta_{k+1})}{2\beta_{k+1}}(\alpha_{k+1}+2\rho_{k+1})-\rho_{k}\leq\frac{1}{2\beta_{k}}(\alpha_{k}+2\rho_{k})-\rho_{k}\] \[\|G(z^{1})-G(z^{0})\| \leq R_{0}\|z^{1}-z^{0}\|\] \[\|G(z^{k+1})-G(z^{k+1/2})\| \leq R_{k}\|z^{k+1}-z^{k+1/2}\|\] \[\langle G(z^{k+1})-G(z^{k}),z^{k+1}-z^{k}\rangle \geq\rho_{k}\|G(z^{k+1})-G(z^{k})\|^{2}.\] _If also \(A_{0}=\frac{\alpha_{0}(R_{0}^{2}\alpha_{0}^{2}-1)}{2},B_{0}=0,B_{1}=1,\) and_ \[A_{k}=\frac{B_{k}(1-\beta_{k})}{2\beta_{k}}(\alpha_{k}+2\rho_{k})-B_{k}\rho_{k},\;B_{k+1}=\frac{B_{k}}{1-\beta_{k}},\] _then the Lyapunov functional_ \[V_{k}:=A_{k}\|G(z^{k})\|^{2}-B_{k}\langle G(z^{k}),\bar{z}^{k}-z^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2},\] _where \(z^{*}\) is a saddle point, is nonincreasing._

Proof.: This proof proceeds similarly to that of the convex-concave, monotone case in the previous section. First, we write out some relations which will be used shortly: \[z^{k+1}-z^{k}=\frac{\beta_{k}}{1-\beta_{k}}(\bar{z}^{k}-z^{k+1})-\frac{\alpha_{k}}{1-\beta_{k}}G(z^{k+1/2})-2\rho_{k}G(z^{k}) \tag{5.1}\] \[z^{k+1}-z^{k}=\beta_{k}(\bar{z}^{k}-z^{k})-\alpha_{k}G(z^{k+1/2})-2\rho_{k}(1-\beta_{k})G(z^{k}) \tag{5.2}\] \[z^{k+1}-z^{k+1/2}=\alpha_{k}((1-\beta_{k})G(z^{k})-G(z^{k+1/2})) \tag{5.3}\] \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1}) \tag{5.4}\] As in the proof in the convex-concave case of EAG-V with moving anchor, we introduce a term into the difference of two arbitrary consecutive functionals in our sequence: \[V_{k}-V_{k+1} \tag{5.5}\] \[\geq A_{k}\|G(z^{k})\|^{2}-B_{k}\langle G(z^{k}),\bar{z}^{k}-z^{k}\rangle-A_{k+1}\|G(z^{k+1})\|^{2}+B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-z^{k+1}\rangle\] \[+ c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}\] \[- \frac{B_{k}}{\beta_{k}}\big{(}\langle G(z^{k+1})-G(z^{k}),z^{k+1}-z^{k}\rangle-\rho_{k}\|G(z^{k+1})-G(z^{k})\|^{2}\big{)}\] \[= A_{k}\|G(z^{k})\|^{2}-B_{k}\langle G(z^{k}),\bar{z}^{k}-z^{k}\rangle-A_{k+1}\|G(z^{k+1})\|^{2}+B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-z^{k+1}\rangle\] \[+ c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}\] \[- \frac{B_{k}}{\beta_{k}}\langle G(z^{k+1}),z^{k+1}-z^{k}\rangle+\frac{B_{k}}{\beta_{k}}\langle G(z^{k}),z^{k+1}-z^{k}\rangle+\frac{B_{k}\rho_{k}}{\beta_{k}}\|G(z^{k+1})-G(z^{k})\|^{2}\] From here, we first simplify the introduced term further, then substitute (5.1) into the inner product which has a \(B_{k}\) out front, and then substitute (5.2) into the inner product with a \(B_{k+1}\) out front; each of these is in line (5.5).
After some computation, this leads to \[V_{k}-V_{k+1}\] \[\geq \big{(}A_{k}-\frac{2B_{k}\rho_{k}(1-\beta_{k})}{\beta_{k}}\big{)}\|G(z^{k})\|^{2}-A_{k+1}\|G(z^{k+1})\|^{2}+\frac{\alpha_{k}B_{k}}{\beta_{k}(1-\beta_{k})}\langle G(z^{k+1}),G(z^{k+1/2})\rangle \tag{5.6}\] \[+ \frac{2\rho_{k}B_{k}}{\beta_{k}}\langle G(z^{k+1}),G(z^{k})\rangle-\frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k}),G(z^{k+1/2})\rangle+B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle\] \[+ \frac{B_{k}\rho_{k}}{\beta_{k}}\|G(z^{k+1})-G(z^{k})\|^{2}+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}\] \[= \big{(}A_{k}-\frac{B_{k}\rho_{k}(1-2\beta_{k})}{\beta_{k}}\big{)}\|G(z^{k})\|^{2}-\big{(}A_{k+1}-\frac{B_{k}\rho_{k}}{\beta_{k}}\big{)}\|G(z^{k+1})\|^{2}+\frac{\alpha_{k}B_{k}}{\beta_{k}(1-\beta_{k})}\langle G(z^{k+1}),G(z^{k+1/2})\rangle \tag{5.7}\] \[- \frac{\alpha_{k}B_{k}}{\beta_{k}}\langle G(z^{k}),G(z^{k+1/2})\rangle+B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}.\] Next, let's focus on the last three terms in (5.7): \(B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}.\) By Cauchy-Schwarz, \[\|z^{*}-\bar{z}^{k+1}\|^{2}\leq(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}+(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}.\] Second, by construction \[B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle=\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\] and \[c_{k+1}\leq\frac{c_{k}}{1+\delta_{k}},\;\gamma_{k+1}\leq\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})}.\] Applying these facts to the three terms we're considering, we get that \[B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\|z^{*}-\bar{z}^{k+1}\|^{2}\] \[\geq \frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k+1}-\bar{z}^{k}\|^{2}+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}\big{(}(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}+(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\big{)}\] \[\geq c_{k+1}(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[-c_{k+1}(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}(1+\frac{1}{\delta_{k}})\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\] \[\geq c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k+1}(1+\delta_{k})\|z^{*}-\bar{z}^{k}\|^{2}\geq c_{k}\|z^{*}-\bar{z}^{k}\|^{2}-c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\geq 0.\] While this takes care of the latter three terms in lines (5.6) to (5.7), showing that everything else is nonnegative is a nontrivial argument. However, it directly follows the proof of Lemma 7.1 in [14], so as before we refer to their proof; our Lyapunov functional is therefore also nonincreasing.
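To make the iteration concrete, here is a minimal Python sketch (ours, not from [14]) of the FEG with moving anchor. It uses the parameter choices \(\alpha_{k}=1/R\), \(\beta_{k}=1/(k+1)\), \(\rho_{k}=\rho\), and \(B_{k}=k\) that reappear in the proof of the next theorem, with (the \(c_{k}\) and \(\gamma_{k}\) recursions taken with equality); the operator `G`, the constants `R` and `rho`, and the summable choice of \(\delta_{k}\) are illustrative assumptions supplied by the user.

```python
import numpy as np

def feg_moving_anchor(G, z0, R, rho, iters=2000, c0=np.pi**2 / 6, sign=+1.0):
    """Sketch of FEG with moving anchor; G is assumed R-Lipschitz and
    rho-comonotone with rho > -1/(2R).  sign=-1.0 gives the -gamma_k variant
    (Lemma 5.5 additionally caps gamma_{k+1}; that cap is omitted here)."""
    z, z_bar, c = z0.astype(float), z0.astype(float), c0
    for k in range(iters):
        alpha, beta = 1.0 / R, 1.0 / (k + 1)       # beta_0 = 1, as required
        g = G(z)
        z_half = z + beta * (z_bar - z) - (1 - beta) * (alpha + 2 * rho) * g
        z = z + beta * (z_bar - z) - alpha * G(z_half) - (1 - beta) * 2 * rho * g
        delta = np.exp(1.0 / (k + 1) ** 2) - 1     # sum log(1+delta_k) < infinity
        c = c / (1 + delta)                        # c_{k+1} = c_k / (1 + delta_k)
        gamma = (k + 1) / (c * (1 + 1 / delta))    # gamma_{k+1} with B_{k+1} = k+1
        z_bar = z_bar + sign * gamma * G(z)        # moving anchor update
    return z
```

Setting `gamma = 0` in the sketch recovers the fixed anchor FEG, which provides a convenient correctness check.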
**Theorem 5.4** (\(O(1/k^{2})\) convergence rate for FEG with moving anchor).: _For the \(R\)-Lipschitz continuous and \(\rho\)-comonotone operator \(G\) where \(\rho>-\frac{1}{2R}\), \(z^{*}\in Z_{*}(G),Z_{*}(G):=\{z^{*}\in\mathbb{R}^{d}:G(z^{*})=0\},\) and \(c_{\infty}-\frac{1}{\frac{1}{R}+2\rho}\geq 0,\) the sequence \(\{z^{k}\}_{k\geq 0}\) generated by FEG with moving anchor satisfies_ \[\|G(z^{k})\|^{2}\leq\frac{4c_{0}\|z^{0}-z^{*}\|^{2}}{k^{2}(\frac{1}{R}+2\rho)}\] _for all \(k\geq 1.\)_

Proof.: Under the same assumptions as Lemma 5.3, we take \(\alpha_{k}=1/R,\beta_{k}=\frac{1}{k+1},R_{k}=R,\) which satisfy the conditions in the statement for all \(k\geq 0.\) These give us \(B_{k}=k,A_{k}=\frac{k^{2}}{2}(\frac{1}{R}+2\rho)-k\rho.\) From here, \[c_{0}\|z^{*}-z^{0}\|^{2}=V_{0}\geq V_{k}=\bigg{(}\frac{k^{2}}{2}(\frac{1}{R}+2\rho)-k\rho\bigg{)}\|G(z^{k})\|^{2}-k\langle G(z^{k}),\bar{z}^{k}-z^{k}\rangle+c_{k}\|z^{*}-\bar{z}^{k}\|^{2},\] so then \[\frac{k^{2}}{2}(\frac{1}{R}+2\rho)\|G(z^{k})\|^{2}+c_{k}\|z^{*}-\bar{z}^{k}\|^{2}\] \[\leq k\langle G(z^{k}),\bar{z}^{k}-z^{k}\rangle+k\rho\|G(z^{k})\|^{2}+c_{0}\|z^{*}-z^{0}\|^{2}\] \[\leq k\langle G(z^{k}),\bar{z}^{k}-z^{*}\rangle+c_{0}\|z^{*}-z^{0}\|^{2}\] (by comonotonicity condition) \[\leq k\|G(z^{k})\|\|\bar{z}^{k}-z^{*}\|+c_{0}\|z^{*}-z^{0}\|^{2}\] \[\leq \frac{k^{2}}{2\delta}\|G(z^{k})\|^{2}+\frac{\delta}{2}\|\bar{z}^{k}-z^{*}\|^{2}+c_{0}\|z^{*}-z^{0}\|^{2}.\] From here, define \(\frac{1}{\delta}=\frac{1}{2R}+\rho\). Then we have that \[\frac{k^{2}}{2}\bigg{(}\frac{1}{R}+2\rho-\frac{1}{2R}-\rho\bigg{)}\|G(z^{k})\|^{2}+\bigg{(}c_{\infty}-\frac{1}{\frac{1}{R}+2\rho}\bigg{)}\|\bar{z}^{k}-z^{*}\|^{2}\leq c_{0}\|z^{*}-z^{0}\|^{2},\] and as long as the constant \(c_{\infty}-\frac{1}{\frac{1}{R}+2\rho}\geq 0,\) we obtain the desired result by dividing both sides of the inequality \[\frac{k^{2}}{2}\bigg{(}\frac{1}{2R}+\rho\bigg{)}\|G(z^{k})\|^{2}\leq c_{0}\|z^{*}-z^{0}\|^{2}\] by \(\frac{k^{2}}{2}\bigg{(}\frac{1}{2R}+\rho\bigg{)}\). See [14]'s proof of Theorem 4.1 for the analogous result with a fixed anchor.

Next, we show that having \(-\gamma_{k+1}\) in place of \(\gamma_{k+1}\) may also, with some additional assumptions, provide a convergent algorithm.

**Lemma 5.5**.: _In the setting of Lemma 5.3, replace \(\gamma_{k}\) with \(-\gamma_{k}\) in the definition of the FEG algorithm with moving anchor, and suppose \(\gamma_{k+1}=\min\Big{\{}\frac{B_{k+1}}{c_{k+1}(1+\frac{1}{\delta_{k}})},\frac{e_{k+1}}{2B_{k+1}\|G(z^{k+1})\|^{2}}\Big{\}},\) where \(\sum e_{k}<\infty.\) Then the Lyapunov functional described in Lemma 5.3 is nonincreasing, and we attain the same order of convergence for the FEG with moving anchor and \(-\gamma_{k}.\)_

Proof.: The proof proceeds in exactly the same manner as that in Lemma 4.3.

As in the EAG-V with moving anchor case, numerical results lead us to suspect that this restriction is not too severe, and that there is a 'better' way to show that the \(-\gamma_{k}\) version of our algorithm converges.

## 6. Introducing a proximal term

### Modified EAG-V with moving anchor

Throughout this article, each of the algorithms developed thus far is explicit in nature. In this section we introduce and develop a version of these moving anchor algorithms that features a proximal term, and discuss this as a potential future avenue of exploration. We begin by developing the proximal version of the EAG-V with moving anchor.
**Definition 6.1** (Modified EAG-V with moving anchor).: In the setting of EAG-V with moving anchor, consider equation (4.9) from the proof of Lemma 4.1: \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1})\] and now let us consider the same equation with an additional term introduced: \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1})-t_{k}(H(\bar{z}^{k})-H(\bar{z}^{k+1})), \tag{6.1}\] where \(H\) is a monotone operator and \(t_{k}\) is nonnegative. This only modifies the anchor update within the algorithm itself, and it does so in the following way: \[\bar{z}^{k+1}=(I+t_{k}H)^{-1}(\bar{z}^{k}+\gamma_{k+1}G(z^{k+1})+t_{k}H(\bar{z}^{k})). \tag{6.2}\] This is the modified EAG-V with moving anchor.

**Lemma 6.2**.: _Under the same conditions as Lemma 4.1 and with \(H\) any monotone operator, \(t_{k}\) a nonnegative parameter, the Lyapunov functional for the modified EAG-V algorithm with moving anchor is nonincreasing. Specifically, replacing the previous \(\bar{z}^{k+1}\) update in the unmodified EAG-V moving anchor algorithm with equation (6.2) still results in a nonincreasing Lyapunov functional._

Proof.: Within the proof of Lemma 4.1 recall the following line: \[-B_{k+1}\langle z^{k+1} -\bar{z}^{k+1},G(z^{k+1})\rangle\] \[=B_{k+1}\langle\bar{z}^{k}-z^{k+1},G(z^{k+1})\rangle-B_{k+1}\langle\bar{z}^{k}-\bar{z}^{k+1},G(z^{k+1})\rangle.\] Within this proof that the functional is nonincreasing, the primary change is that we must use equation (6.1) when substituting for \(G(z^{k+1}).\) This results in \[-B_{k+1}\langle\bar{z}^{k} -\bar{z}^{k+1},G(z^{k+1})\rangle\] \[=-B_{k+1}\bigg{\langle}\bar{z}^{k}-\bar{z}^{k+1},\frac{\bar{z}^{k}-\bar{z}^{k+1}+t_{k}(H(\bar{z}^{k})-H(\bar{z}^{k+1}))}{-\gamma_{k+1}}\bigg{\rangle}\] \[=\frac{B_{k+1}}{\gamma_{k+1}}\bigg{(}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}+t_{k}\langle\bar{z}^{k}-\bar{z}^{k+1},H(\bar{z}^{k})-H(\bar{z}^{k+1})\rangle\bigg{)}.\] The term \(\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}\) will be utilized elsewhere (see Lemma 4.1) so we don't need to worry about it here, and the term \(\frac{B_{k+1}}{\gamma_{k+1}}t_{k}\langle\bar{z}^{k}-\bar{z}^{k+1},H(\bar{z}^{k})-H(\bar{z}^{k+1})\rangle\) is nonnegative by monotonicity and the fact that \(t_{k}\) is also nonnegative. This completes the proof.

**Theorem 6.3**.: _The modified EAG-V algorithm with moving anchor has convergence rate \(O(1/k^{2}).\)_

Proof.: With Lemma 6.2 in hand, the argument of Theorem 4.2 applies verbatim, since that proof only uses the monotonicity of the Lyapunov functional and bounds that do not involve the anchor update.

**Remark 6.4**.: While \(H\) may be any monotone operator, in practice one may wish to take \(H=G.\)

### Modified FEG with moving anchor

**Definition 6.5** (Proximal FEG with moving anchor).: In the setting of FEG with moving anchor, consider (5.4) from the proof of Lemma 5.3: \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1})\] and now consider the same equation with a proximal term introduced: \[\bar{z}^{k}-\bar{z}^{k+1}=-\gamma_{k+1}G(z^{k+1})-t_{k}(H(\bar{z}^{k})-H(\bar{z}^{k+1})),\] where \(H\) is a monotone operator just as before. This modification affects the anchor update in the same way as in the previous case: \[\bar{z}^{k+1}=(I+t_{k}H)^{-1}(\bar{z}^{k}+\gamma_{k+1}G(z^{k+1})+t_{k}H(\bar{z}^{k})) \tag{6.3}\]

**Lemma 6.6**.: _Under the same conditions as Lemma 5.3 and with \(H\) any monotone operator, \(t_{k}\) nonnegative for all \(k\), the Lyapunov functional for the modified FEG algorithm with moving anchor is nonincreasing.
Specifically, replacing the previous \(\bar{z}^{k+1}\) update in the unmodified FEG moving anchor algorithm with (6.3) still results in a nonincreasing Lyapunov functional._

Proof.: The proof proceeds in the same manner as that of Lemma 6.2. The only minor difference is that in this case, we begin with \(B_{k+1}\langle G(z^{k+1}),\bar{z}^{k+1}-\bar{z}^{k}\rangle\). We still obtain from this the terms \[\frac{B_{k+1}}{\gamma_{k+1}}\|\bar{z}^{k}-\bar{z}^{k+1}\|^{2}+\frac{B_{k+1}}{\gamma_{k+1}}t_{k}\langle\bar{z}^{k}-\bar{z}^{k+1},H(\bar{z}^{k})-H(\bar{z}^{k+1})\rangle,\] where the first term is utilized elsewhere in the larger proof of the functional being nonincreasing and the latter term is nonnegative by the monotonicity of \(H\).

**Theorem 6.7**.: _The modified FEG algorithm with moving anchor has convergence rate \(O(1/k^{2}).\)_

Proof.: As with Theorem 6.3, this follows from Lemma 6.6 together with the argument of Theorem 5.4.

## 7. Numerical experiments

In this section we detail several numerical experiments. First, we visualize two thousand iterations of EAG-V and FEG, each moving anchor versus the fixed anchor, on a toy 'almost bilinear' example. Next, we look at the log of the grad norm squared versus the log of iterations for the EAG examples. Note that this error graph is an example in the monotone convex-concave case. We then run a nonconvex-nonconcave negative comonotone example for FEG variants, where some interesting convergence behaviors among the moving anchor variants are exhibited. Finally, we study monotone FEG variants (moving and fixed anchor) on a nonlinear two player game. Throughout all of these examples, \(c_{1}=\pi^{2}/6\), \(c_{k}=\frac{c_{k-1}}{1+\delta_{k-1}}\) \((k=2,3,\ldots)\), and in all except for the last example, \(\delta_{k}\) is chosen to be \(\exp(k^{-2})-1\), which makes \(\sum_{k}\log(1+\delta_{k})\) finite.

Figure 1 compares the iterations of EAG-V with a fixed anchor to the iterations for the moving anchor EAG-V. Figure 1, Figure 2, and Figure 3 all display iterations where the function used is the 'almost bilinear' function \(f:\mathbb{R}^{2}\rightarrow\mathbb{R},f(x,y)=\epsilon\frac{\|x\|^{2}}{2}+\langle x,y\rangle-\epsilon\frac{\|y\|^{2}}{2}\). Here, \(\epsilon\) is small, for these experiments set to \(0.01\), and the straightforward nature of the example allows for ease of visualizing the iterations as well as their differences when it comes to comparing convergence rates. In particular, the unique saddle-point is \((0,0)\).

Figure 1. The first two thousand iterations of the EAG algorithm with varying step-size, or EAG-V, compared to the first two thousand iterations of the moving anchor EAG-V algorithm.

Figure 2 compares, via the same function as Figure 1, the two moving anchor variants of EAG-V. When the \(\gamma_{k}\) parameter is positive, the anchor iterations appear to drift away from the saddle and the algorithm updates very rapidly. When \(\gamma_{k}\) has only its sign changed to negative, the anchor (seen in green) seems to stay much closer to the iterations and the saddle-point. The iterations appear to converge at a markedly faster rate (by a constant) for this latter case over both the fixed anchor and the positive \(\gamma_{k}\) setting, an observation that is confirmed below.

Figure 2. The two moving anchor EAG-V variants compared in red, along with their anchors in green.

Figure 3 compares the two moving anchor versions of the FEG method, in the same manner as the comparison shown in Figure 2: red dots are the algorithm updates, green dots are the anchor updates, and the function is the 'almost bilinear' one previously described. Note that in both cases, the iterations seem to home in on and converge to the saddle point in a much faster manner.

Figure 3. The two moving anchor FEG variants compared in red, along with their anchors in green.
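For reproducibility, here is a minimal Python sketch of the experiment just described, assuming the saddle operator \(G=(\partial_{x}f,-\partial_{y}f)\) of the 'almost bilinear' function with \(\epsilon=0.01\); the constant step size \(\alpha_{k}\equiv 0.5\) is a simplifying assumption (EAG-V proper varies \(\alpha_{k}\)), and \(c_{0}\) may need to be enlarged so that \(c_{\infty}\alpha_{\infty}\geq 1\) if one wants the guarantee of Theorem 4.2 to apply verbatim.

```python
import numpy as np

eps = 0.01
G = lambda z: np.array([eps * z[0] + z[1],    # d f / d x
                        -z[0] + eps * z[1]])  # -(d f / d y)

def eag_v_moving_anchor(z0, alpha=0.5, iters=2000, c0=np.pi**2 / 6):
    z, z_bar, c = z0.copy(), z0.copy(), c0
    for k in range(iters):
        beta = 1.0 / (k + 2)
        z_half = z + beta * (z_bar - z) - alpha * G(z)       # (4.1)
        z = z + beta * (z_bar - z) - alpha * G(z_half)       # (4.2)
        delta = np.exp(1.0 / (k + 1) ** 2) - 1
        c /= 1 + delta                                       # (4.4), with equality
        gamma = (k + 2) / (c * (1 + 1 / delta))              # (4.5), B_{k+1} = k+2
        z_bar = z_bar + gamma * G(z)                         # (4.3): anchor update
    return z

z_final = eag_v_moving_anchor(np.array([1.0, 1.0]))
print(np.linalg.norm(G(z_final)) ** 2)   # squared gradient norm at the last iterate
```

Flipping the sign of `gamma` gives the \(-\gamma_{k}\) variant of Lemma 4.3, up to the extra cap on \(\gamma_{k+1}\) required there.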
In [14], the authors established that even on convex-concave problems, FEG performs at the same optimal order of convergence as EAG, but at a significantly faster rate. This behavior seems to have carried over to our algorithm where we introduce the moving anchor to these frameworks. Figure 4 captures the behavior of \(\|G(z^{k})\|^{2}\) across all three convex-concave algorithms of interest: EAG-V, moving anchor EAG-V with positive \(\gamma_{k}\), and moving anchor EAG-V with negative \(\gamma_{k}\). Each algorithm attains the optimal order of convergence, while the negative \(\gamma_{k}\) algorithm is markedly faster than the other two algorithms by a constant. Identical behavior occurs under the same problem setting with the FEG and FEG with moving anchors (positive and negative \(\gamma_{k}\)), with the negative \(\gamma_{k}\) algorithm again being the fastest, so we do not include this figure here.

Figure 4. Comparison of the grad-norm squared of three EAG-V variants of interest on a toy 'almost bilinear' problem.

Figure 5 captures the error of FEG across all three anchor variants in a numerical example that is explicitly comonotone and nonconvex-nonconcave: \[L(x,y)=\frac{\rho R^{2}}{2}x^{2}+R\sqrt{1-\rho^{2}R^{2}}xy-\frac{\rho R^{2}}{2}y^{2}\] with \(L:\mathbb{R}^{2}\rightarrow\mathbb{R}\), \(R=1\), and \(\rho=-1/3\); this \(L\) is \(1\)-smooth and \(-1/3\)-negatively comonotone. Interestingly, this is the only numerical example where the moving anchor with positive \(\gamma_{k}\) - of any variant - was the fastest of all three algorithms. The intuition is that the positive \(\gamma_{k}\) functions as negative \(\gamma_{k}\) does in the monotone, convex-concave problem settings, pulling the iterations closer to the saddle. More examples in this vein may verify that this behavior with positive \(\gamma_{k}\) occurs only in the negative comonotone problem setting. More theoretical work to verify this observed numerical behavior will be one of our goals in a future work.

Figure 5. Comparison of the errors of three FEG variants in a nonconvex-nonconcave setting. Note the positive \(\gamma\) converges fastest by a constant.

The final figure, Figure 6, compares three different monotone FEG variants on a particular nonlinear game that was studied extensively in [4]: \[\min_{x\in\Delta^{n}}\max_{y\in\Delta^{m}}\frac{1}{2}\langle Qx,x\rangle+\langle Kx,y\rangle\] where \(Q=A^{T}A\) is positive semidefinite for \(A\in\mathbb{R}^{k\times n}\) which has entries generated independently from the standard normal distribution, \(K\in\mathbb{R}^{m\times n}\) with entries generated uniformly and independently from the interval \([-1,1],\) and \(\Delta^{n},\Delta^{m}\) are the \(n\)- and \(m\)-simplices, respectively: \[\Delta^{n}:=\Big{\{}x\in\mathbb{R}^{n}_{+}:\sum_{i=1}^{n}x_{i}=1\Big{\}},\ \Delta^{m}:=\Big{\{}y\in\mathbb{R}^{m}_{+}:\sum_{j=1}^{m}y_{j}=1\Big{\}}.\] One may interpret this as a two person game where player one has \(n\) strategies to choose from, choosing strategy \(i\) with probability \(x_{i}\) \((i=1,...,n)\) to attempt to minimize a loss, while the second player attempts to maximize their gain among \(m\) strategies with strategy \(j\) chosen with probability \(y_{j}\) \((j=1,...,m)\). The payoff is a quadratic function that depends on the strategy of both players. For this example, we used FEG fixed and moving anchor variants in the monotone (that is, \(\rho=0\)) setting of the algorithm.
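As a quick sanity check on the problem class, the following sketch (an illustration of Remark 5.1, not code from the experiments) builds the linear operator \(G=(\partial_{x}L,-\partial_{y}L)\) of the Figure 5 example and verifies the \(\rho\)-comonotonicity condition at random pairs of points; for this particular linear \(L\) the condition in fact holds with equality.

```python
import numpy as np

R, rho = 1.0, -1.0 / 3.0
s = R * np.sqrt(1 - rho**2 * R**2)
M = np.array([[rho * R**2, s],        # G(z) = M z, with G = (dL/dx, -dL/dy)
              [-s, rho * R**2]])

rng = np.random.default_rng(0)
for _ in range(3):
    z, w = rng.standard_normal(2), rng.standard_normal(2)
    d = z - w
    Gd = M @ d                        # G(z) - G(w)
    # rho-comonotonicity: <G(z)-G(w), z-w> >= rho * ||G(z)-G(w)||^2
    print(d @ Gd >= rho * (Gd @ Gd) - 1e-12)   # True on every draw
```

The skew part of \(M\) drops out of \(\langle d,Md\rangle\), and \(M^{T}M=I\) for these parameters, which is why both sides of the inequality equal \(\rho\|d\|^{2}\) here.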
We compare \(20,000\) iterations of the log of the grad norm squared of the fixed anchor versus the same for a negative \(\gamma_{k}\) variant where the parameter \(\delta\) is scaled by \(1/10\) and a negative \(\gamma_{k}\) variant where \(\delta\) is scaled by \(1/100\). This is the first numerical example where we tune \(\delta\), a parameter used in controlling the step size. We chose \(m=500\), \(k=1000\), and \(n=2500\). We remark that, initially, the \(\frac{1}{10}\delta\) variant seems fastest by a constant; it then 'flattens out' rather quickly and is overtaken first by the \(\frac{1}{100}\delta\) variant and then by the fixed anchor. For much of this experiment, the \(\frac{1}{100}\delta\) variant and the fixed anchor very closely parallel one another - there seems to be a slight advantage to the \(\frac{1}{100}\delta\) moving anchor for the majority of the experiment after the first two to three dozen iterations, and then at the tail end of the experiment the fixed anchor may have a marginal lead.

Figure 6. Log of iterations versus log of the gradient norm-squared for monotone (that is, \(\rho=0\)) FEG variants studied on a particular nonlinear game.

## 8. Conclusion

The moving anchor acceleration methods retain optimal convergence rates and also demonstrate superior-to-comparable numerical performance with some parameter tuning. The optimal order of convergence is obtained across different problem settings, from convex-concave to negative comonotone problems. Interestingly, across numerous problem settings there exists a version of the moving anchor algorithm, parametrized by \(\gamma_{k}\), that demonstrates superior numerical performance compared to other state-of-the-art algorithms. The variety of numerical examples demonstrates a wide array of applications for our algorithms in both theoretical and applied settings. In addition, we develop a 'proximal' version of the moving anchor in both the convex-concave and negative comonotone problem settings and demonstrate its convergence. Of future interest, one may consider numerical and practical implementations of the proximal moving anchor, parallelized/asynchronous implementations of moving anchor saddle point algorithms, a tighter analysis of \(-\gamma_{k}\) convergence, a theoretical understanding of how \(\gamma_{k}\) and other parameters such as \(\delta\) affect convergence rates, and the identification of problem settings which our moving anchor may exploit effectively, among many other topics.

## Acknowledgements

We thank Donghwan Kim, Ernest K. Ryu, and Taeho Yoon for their invaluable suggestions that have greatly improved and inspired this work.
2303.08073
Generalized Heisenberg-Virasoro algebra and matrix models from quantum algebra
In this paper, we construct the Heisenberg-Virasoro algebra in the framework of the $\mathcal{R}(p,q)$-deformed quantum algebras. Moreover, the $\mathcal{R}(p,q)$-Heisenberg-Witt $n$-algebras are also investigated. Furthermore, we generalize the notion of the elliptic hermitian matrix models. We use the constraints to evaluate the $\mathcal{R}(p,q)$-differential operators of the Virasoro algebra and generalize it to higher order differential operators. Particular cases corresponding to quantum algebras existing in literature are deduced.
Fridolin Melong, Raimar Wulkenhaar
2023-02-27T21:15:40Z
http://arxiv.org/abs/2303.08073v1
# Generalized Heisenberg-Virasoro algebra and matrix models from quantum algebra

###### Abstract.

In this paper, we construct the Heisenberg-Virasoro algebra in the framework of the \(\mathcal{R}(p,q)\)-deformed quantum algebras. Moreover, the \(\mathcal{R}(p,q)\)-Heisenberg-Witt \(n\)-algebras are also investigated. Furthermore, we generalize the notion of the elliptic hermitian matrix models. We use the constraints to evaluate the \(\mathcal{R}(p,q)\)-differential operators of the Virasoro algebra and generalize it to higher order differential operators. Particular cases corresponding to quantum algebras existing in literature are deduced.

Key words and phrases: \(\mathcal{R}(p,q)\)-calculus, quantum algebra, Heisenberg-Virasoro algebra, toy model, matrix models. 2020 Mathematics Subject Classification: 17B37, 17B68, 81R10.

## 1. Introduction

Quantum algebras introduced by Drinfeld are used both by mathematicians and physicists [7]. They relate to the quantum Yang-Baxter equation, which plays an important role in many areas such as solvable lattice models, conformal field theory and quantum integrable systems [8]. From the mathematical point of view, quantum algebras are Hopf algebras and generalizations of Lie algebras [6, 5]. Hounkonnou et al. generalized the Virasoro algebra with respect to its left-symmetric structure and presented related algebraic and some hydrodynamic properties [12]. The \(q\)-deformed Heisenberg-Virasoro algebra, which is a Hom-Lie algebra, was constructed by Chen and Su; the central extensions and second cohomology group were also presented [4]. The super \(q\)-deformed Virasoro \(n\)-algebra for \(n\) even and a toy model for the \(q\)-deformed Virasoro constraints were investigated by Nedelin and Zabzine [23]. The \(\mathcal{R}(p,q)\)-deformed quantum algebras and particular cases corresponding to quantum algebras known in the literature were investigated in [14]. Furthermore, in [11], the \(\mathcal{R}(p,q)\)-deformed conformal Virasoro algebra was presented, the \(\mathcal{R}(p,q)\)-deformed Korteweg-de Vries equation for a conformal dimension \(\Delta=1\) was derived, and the energy-momentum tensor induced by the \(\mathcal{R}(p,q)\)-quantum algebras for the conformal dimension \(\Delta=2\) was characterized. The generalizations of Witt and Virasoro algebras, and of the Korteweg-de Vries equations, from known \(\mathcal{R}(p,q)\)-deformed quantum algebras were performed.

## 2. Preliminaries

Let \(p\) be a positive real number and \(q\) be a positive real number.
We consider a meromorphic function \(\mathcal{R}\), defined on \(\mathbb{C}\times\mathbb{C}\) by \[\mathcal{R}(x,y):=\sum_{u,v=-\eta}^{\infty}r_{uv}\,x^{u}\,y^{v},\] where \(r_{uv}\) are complex coefficients and \(\eta\in\mathbb{N}\). The \(\mathcal{R}(p,q)\)-numbers are then defined by \[[n]_{\mathcal{R}(p,q)}:=\mathcal{R}(p^{n},q^{n}),\quad n\in\mathbb{N}, \tag{2}\] the \(\mathcal{R}(p,q)\)-factorials by \[[n]!_{\mathcal{R}(p,q)}:=\left\{\begin{array}{ll}1\quad\mbox{for}\quad n=0\\ \\ \mathcal{R}(p,q)\cdots\mathcal{R}(p^{n},q^{n})\quad\mbox{for}\quad n\geq 1,\end{array}\right.\] and the \(\mathcal{R}(p,q)\)-binomial coefficients \[\left[\begin{array}{c}m\\ n\end{array}\right]_{\mathcal{R}(p,q)}:=\frac{[m]!_{\mathcal{R}(p,q)}}{[n]!_{\mathcal{R}(p,q)}[m-n]!_{\mathcal{R}(p,q)}},\quad m,n\in\mathbb{N}\cup\{0\},\quad m\geq n.\] We denote by \(\mathcal{O}(\mathbb{D}_{R})\) the set of holomorphic functions defined on the complex disc \(\mathbb{D}_{R}:=\{z\in\mathbb{C}:|z|<R\}\) and consider the following linear operators defined on \(\mathcal{O}(\mathbb{D}_{R})\) (see [14] for more details), \[P:\Psi\longmapsto P\Psi(z):=\Psi(pz),\] \[Q:\Psi\longmapsto Q\Psi(z):=\Psi(qz),\] and the \(\mathcal{R}(p,q)\)-derivative \[\mathcal{D}_{\mathcal{R}(p,q)}:=\mathcal{D}_{p,q}\frac{p-q}{P-Q}\mathcal{R}(P,Q)=\frac{p-q}{p^{P}-q^{Q}}\mathcal{R}(p^{P},q^{Q})\mathcal{D}_{p,q} \tag{3}\] where \(\mathcal{D}_{p,q}\) is the \((p,q)\)-derivative: \[\mathcal{D}_{p,q}\Psi(z):=\frac{\Psi(pz)-\Psi(qz)}{z(p-q)}.\] The algebra associated with the \(\mathcal{R}(p,q)\)-deformation is a quantum algebra, denoted \(\mathcal{A}_{\mathcal{R}(p,q)}\), generated by the set of operators \(\{1,A,A^{\dagger},N\}\) satisfying the following commutation relations: \[AA^{\dagger}=[N+1]_{\mathcal{R}(p,q)},\qquad\quad A^{\dagger}A=[N]_{\mathcal{R}(p,q)}.\] \[[N,\;A]=-A,\qquad\qquad\left[N,\;A^{\dagger}\right]=A^{\dagger}\] with the realization on \(\mathcal{O}(\mathbb{D}_{R})\) given by: \[A^{\dagger}:=z,\qquad A:=\partial_{\mathcal{R}(p,q)},\qquad N:=z\partial_{z},\] where \(\partial_{z}:=\frac{\partial}{\partial z}\) is the derivative on \(\mathbb{C}\). This algebra is the generalization of quantum algebras existing in the literature as follows: 1. Taking \(\mathcal{R}(x,1)=\frac{x-1}{q-1}\), we obtain the \(q\)-deformed number, derivative and the quantum algebra corresponding to the **Arik-Coon-Kuryshkin algebra** [1]: \[[n]_{q}=\frac{q^{n}-1}{q-1},\quad\mathcal{D}_{q}\Psi(z):=\frac{\Psi(qz)-\Psi(z)}{z(q-1)}\] and \[\begin{array}{cc}[N,\;A]=-A,\qquad\qquad\left[N,\;A^{\dagger}\right]=A^{\dagger}.\\ A\,A^{\dagger}-qA^{\dagger}\,A=1\quad\mbox{or}\quad A\,A^{\dagger}-A^{\dagger}\,A=q^{N}.\end{array}\] 2. The **Biedenharn-Macfarlane algebra** [2, 19], derivative and numbers can be obtained by putting \(\mathcal{R}(x)=\frac{x-x^{-1}}{q-q^{-1}}:\) \[[n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}},\quad\mathcal{D}_{q}\Psi(z):=\frac{\Psi(qz)-\Psi(q^{-1}z)}{z(q-q^{-1})}\] and \[\begin{array}{ccc}[N,\ A]=-A,&&\left[N,\ A^{\dagger}\right]=A^{\dagger}.\\ A\,A^{\dagger}-qA^{\dagger}\,A=q^{-N}&\text{or}&A\,A^{\dagger}-q^{-1}A^{\dagger}\,A=q^{N},\quad q^{2}\neq 1.\end{array}\] 3. Setting \(\mathcal{R}(x,y)=\frac{x-y}{p-q}\), we obtain the numbers, derivative and quantum algebra induced by the **Jagannathan-Srinivasa algebra** [18]: \[[n]_{p,q}=\frac{p^{n}-q^{n}}{p-q},\quad\mathcal{D}_{p,q}\Psi(z)=\frac{\Psi(pz)-\Psi(qz)}{z(p-q)}\] and \[\begin{array}{ccc}[N,\ A]=-A,&&\left[N,\ A^{\dagger}\right]=A^{\dagger}.\\ A\,A^{\dagger}-qA^{\dagger}\,A=p^{N}.\end{array}\] 4.
Putting \(\mathcal{R}(x,y)=\frac{1-x\,y}{(p^{-1}-q)x}\), we get the numbers, derivative, and quantum algebra from the **Chakrabarty-Jagannathan algebra** [3]: \[[n]_{p^{-1},q}=\frac{p^{-n}-q^{n}}{p^{-1}-q},\quad\mathcal{D}_{p^{-1},q}\Psi(z)=\frac{\Psi(p^{-1}z)-\Psi(qz)}{z(p^{-1}-q)}\] and \[\begin{array}{ccc}[N,\ A]=-A,&&\left[N,\ A^{\dagger}\right]=A^{\dagger}.\\ A\,A^{\dagger}-qA^{\dagger}\,A=p^{-N}&\text{or}&A\,A^{\dagger}-q^{-1}A^{\dagger}\,A=p^{N}.\end{array}\] 5. Given \(\mathcal{R}(x,y)=\frac{x\,y-1}{(q-p^{-1})y}\), we derive the numbers, derivative, and quantum algebra associated to the **Hounkonnou-Ngompe generalization of the \(q\)-Quesne algebra** [15]: \[[n]_{p,q}^{Q}=\frac{p^{n}-q^{-n}}{q-p^{-1}},\quad\mathcal{D}_{p,q}^{Q}\Psi(z)=\frac{\Psi(pz)-\Psi(q^{-1}z)}{z(q-p^{-1})},\] and \[\begin{array}{ccc}[N,\ A]=-A,&&\left[N,\ A^{\dagger}\right]=A^{\dagger}.\\ p^{-1}A\,A^{\dagger}-A^{\dagger}\,A=q^{-N-1}&\text{or}&qA\,A^{\dagger}-A^{\dagger}\,A=p^{N+1}.\end{array}\] Now, we recall some notions about matrix models. We use the notation for the Schur polynomials as polynomials of power sums \(p_{k}=\sum_{i}z_{i}^{k}\) [22]. The Hermitean Gaussian matrix model is defined by the partition function \[Z_{N}(p_{k}):=\frac{1}{V_{N}}\int_{H_{N}}\,dH\exp\bigg{(}-\frac{1}{2}Tr\,H^{2}+\sum_{k}\frac{p_{k}}{k}\,Tr\,H^{k}\bigg{)},\] where \(H_{N}\) is the space of Hermitean \(N\times N\) matrices, \(dH\) the Lebesgue measure and \(V_{N}\) the volume of the unitary group \(U(N)\). This partition function is the generating function of all gauge-invariant correlators, given by: \[\bigg{\langle}\prod_{i}Tr\,H^{k_{i}}\bigg{\rangle}:=\frac{1}{Z_{N}(0)}\int_{H_{N}}\,dH\prod_{i}Tr\,H^{k_{i}}\exp\big{(}-\frac{1}{2}Tr\,H^{2}\big{)}.\] Integrating over \(U(N)\) in the partition function gives [20] \[Z_{N}(p_{k}):=\frac{1}{N!}\int_{-\infty}^{\infty}\prod_{i}\,dz_{i}\prod_{j\neq i}\big{(}z_{i}-z_{j}\big{)}\exp\bigg{(}-\frac{1}{2}\sum_{i}z_{i}^{2}+\sum_{i,k}\frac{p_{k}}{k}\,z_{i}^{k}\bigg{)}\] and \[\bigg{\langle}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{\rangle}:=\frac{1}{Z_{N}(0)}\int_{-\infty}^{\infty}\prod_{i}\,dz_{i}\prod_{j\neq i}\big{(}z_{i}-z_{j}\big{)}\bigg{(}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{)}\exp\big{(}-\frac{1}{2}\sum_{i}z_{i}^{2}\big{)},\] where \(z_{i}\) are the eigenvalues of \(H.\)

## 3. \(\mathcal{R}(p,q)\)-Heisenberg Virasoro algebra

In this section, we construct the operators satisfying the generalized Heisenberg Witt algebra. Moreover, the central extensions are provided and the Heisenberg Virasoro algebra is deduced in the framework of the \(\mathcal{R}(p,q)\)-deformed quantum algebra. Particular cases are deduced.

**Definition 1**.: _The \(\mathcal{R}(p,q)\)-deformed operators \(L_{m}\) and \(I_{m}\) are given as follows:_ \[L_{m}\phi(z)=-z^{m}\,\mathcal{D}_{\mathcal{R}(p,q)}\phi(z),\quad\text{and}\quad I_{m}\phi(z)=-(\tau\,z)^{m}\phi(z), \tag{4}\] _where \(\mathcal{D}_{\mathcal{R}(p,q)}\) is given by the relation (3) and \(\tau:=\tau(p,q)\) is a parameter of deformation depending on \(p\) and \(q.\)_

Then, the \(\mathcal{R}(p,q)\)-Heisenberg-Witt algebra is denoted by \(\mathcal{H}_{\mathcal{R}(p,q)}:=span\{L_{m},I_{m}/m\in\mathbb{Z}\}.\) We introduce a family of deformations of the commutator: \[[A,B]_{a,b}=aAB-bBA,\] where \(a\) and \(b\) are referred to as the coefficients of commutation. They can be arbitrary complex or real numbers.
Then:

**Proposition 1**.: _The \(\mathcal{R}(p,q)\)-Heisenberg Witt algebra is generated by the operators (4) obeying the following commutation relations:_ \[\big{[}L_{m_{1}},L_{m_{2}}\big{]}_{x,y}\phi(z) = [m_{1}-m_{2}]_{\mathcal{R}(p,q)}\,L_{m_{1}+m_{2}}\phi(z),\] \[\big{[}L_{m_{1}},I_{m_{2}}\big{]}_{u,v}\phi(z) = -[m_{2}]_{\mathcal{R}(p,q)}\,I_{m_{1}+m_{2}}\phi(z),\] \[\big{[}I_{m_{1}},I_{m_{2}}\big{]}_{\mathcal{R}(p,q)} = 0,\] _where_ \[\left\{\begin{array}{l}x=q^{m_{1}-m_{2}}\,p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\quad y=p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\\ u=\tau^{m_{1}}\,p^{m_{2}},\quad v=\tau^{m_{1}}\,(pq)^{m_{2}},\\ \Theta_{m_{1}m_{2}}(p,q)=\frac{[m_{1}-m_{2}]_{\mathcal{R}(p,q)}}{[m_{1}]_{\mathcal{R}(p,q)}-(pq)^{m_{1}-m_{2}}\,[m_{2}]_{\mathcal{R}(p,q)}}.\end{array}\right. \tag{5}\]

Proof.: Using the \(\mathcal{R}(p,q)\)-Leibniz formula: \[\mathcal{D}_{\mathcal{R}(p,q)}\big{(}f(z)g(z)\big{)} =\mathcal{D}_{\mathcal{R}(p,q)}(f(z))(Pg(z))+(Qf(z))\mathcal{D}_{\mathcal{R}(p,q)}(g(z))\] \[=\mathcal{D}_{\mathcal{R}(p,q)}(f(z))(Qg(z))+(Pf(z))\mathcal{D}_{\mathcal{R}(p,q)}(g(z)),\] we have: \[xL_{m_{1}}L_{m_{2}}\phi(z) =x\,z^{m_{1}}\,\mathcal{D}_{\mathcal{R}(p,q)}\big{(}z^{m_{2}}\mathcal{D}_{\mathcal{R}(p,q)}\phi(z)\big{)}\] \[=-x\,[m_{2}]_{\mathcal{R}(p,q)}\,p^{-m_{2}}\,L_{m_{2}+m_{1}}\,\phi(z)-x\,q^{m_{2}}\,L_{m_{2}+m_{1}}\,\mathcal{D}_{\mathcal{R}(p,q)}\phi(z).\] By analogy, \[yL_{m_{2}}L_{m_{1}}\phi(z)=-y\,[m_{1}]_{\mathcal{R}(p,q)}\,p^{-m_{1}}\,L_{m_{2}+m_{1}}\,\phi(z)-y\,q^{m_{1}}\,L_{m_{2}+m_{1}}\,\mathcal{D}_{\mathcal{R}(p,q)}\phi(z).\] After computation, we get: \[\left\{\begin{array}{l}x=q^{m_{1}-m_{2}}\,p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\\ y=p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\\ \Theta_{m_{1}m_{2}}(p,q)=\frac{[m_{1}-m_{2}]_{\mathcal{R}(p,q)}}{[m_{1}]_{\mathcal{R}(p,q)}-(pq)^{m_{1}-m_{2}}\,[m_{2}]_{\mathcal{R}(p,q)}}.\end{array}\right.\] Moreover, we use the same technique to obtain \(u=\tau^{m_{1}}\,p^{m_{2}}\) and \(v=\tau^{m_{1}}\,(pq)^{m_{2}}\).

**Remark 1**.: _There exists another way to construct the \(\mathcal{R}(p,q)\)-Heisenberg Witt algebra. Here, we consider \(\mathcal{H}_{\mathcal{R}(p,q)}\) to be a non-associative algebra with basis \(\{z^{m}\,\mathcal{D}_{\mathcal{R}(p,q)}^{s}/m\in\mathbb{Z},s\in\mathbb{N}\}\) and define the following product:_ \[\big{(}z^{m_{1}}\mathcal{D}_{\mathcal{R}(p,q)}^{s_{1}}\big{)}\circ\big{(}z^{m_{2}}\mathcal{D}_{\mathcal{R}(p,q)}^{s_{2}}\big{)}:=z^{m_{1}+m_{2}}\sum_{i=0}^{s_{1}}\left(\begin{matrix}s_{1}\\ i\end{matrix}\right)[m_{2}]_{\mathcal{R}(p,q)}^{i}\,\mathcal{D}_{\mathcal{R}(p,q)}^{s_{1}+s_{2}-i},\] _with \((m_{1},m_{2})\in\mathbb{Z}\times\mathbb{Z}\) and \((s_{1},s_{2})\in\mathbb{N}\times\mathbb{N}\)._

_Therefore, the operators \(L_{m}\) and \(I_{m}\) satisfy the commutation relations presented by:_ \[\big{[}L_{m_{1}},L_{m_{2}}\big{]}_{\mathcal{R}(p,q)}\phi(z) = [m_{1}-m_{2}]_{\mathcal{R}(p,q)}\,L_{m_{1}+m_{2}}\phi(z), \tag{6}\] \[\big{[}L_{m_{1}},I_{m_{2}}\big{]}_{\mathcal{R}(p,q)}\phi(z) = -\tau^{-m_{1}}\,[m_{2}]_{\mathcal{R}(p,q)}\,I_{m_{1}+m_{2}}\phi(z), \tag{7}\] \[\big{[}I_{m_{1}},I_{m_{2}}\big{]}_{\mathcal{R}(p,q)}\phi(z) = 0.
\tag{8}\]

**Definition 2**.: _A Hom-Lie algebra is a vector space with a skew-symmetric bracket and the generalised Jacobi identity \([\alpha(x),[y,z]]+[\alpha(y),[z,x]]+[\alpha(z),[x,y]]=0\) for an endomorphism \(\alpha\)._

**Definition 3**.: _An \(\mathcal{R}(p,q)\)-deformed \(2\)-cocycle on \(\mathcal{H}_{\mathcal{R}(p,q)}\) is a bilinear function \(\Psi:\mathcal{H}_{\mathcal{R}(p,q)}\times\mathcal{H}_{\mathcal{R}(p,q)}\longrightarrow\mathbb{C}\) verifying the following conditions:_ \[\Psi(x,y) = -\Psi(y,x), \tag{9}\] \[\Psi([x,y]_{\mathcal{R}(p,q)},\alpha(z)) = \Psi(\alpha(x),[y,z]_{\mathcal{R}(p,q)})+\Psi([x,z]_{\mathcal{R}(p,q)},\alpha(y)), \tag{10}\] _where \(x,y,z\in\mathcal{H}_{\mathcal{R}(p,q)}\) and_ \[\alpha(L_{m})=\frac{[2\,m]_{\mathcal{R}(p,q)}}{[m]_{\mathcal{R}(p,q)}}L_{m}\quad\text{and}\quad\alpha(I_{m})=\frac{[2\,m]_{\mathcal{R}(p,q)}}{[m]_{\mathcal{R}(p,q)}}I_{m}.\]

Note that the \(\mathcal{R}(p,q)\)-numbers (2) can be rewritten in the form [10]: \[[n]_{\mathcal{R}(p,q)}=\frac{\epsilon_{1}^{n}-\epsilon_{2}^{n}}{\epsilon_{1}-\epsilon_{2}},\quad\epsilon_{1}\neq\epsilon_{2},\] where \(\epsilon_{i},i\in\{1,2\}\), are the structure functions depending on the deformation parameters \(p\) and \(q\).

**Lemma 1**.: _[10] The \(\mathcal{R}(p,q)\)-Jacobi identity is given by:_ \[\sum_{(i,j,l)\in\mathcal{C}(n,m,k)}(\frac{1}{\epsilon_{1}\epsilon_{2}})^{-l}\frac{[2i]_{\mathcal{R}(p,q)}}{[i]_{\mathcal{R}(p,q)}}\big{[}L_{i},\big{[}L_{j},L_{l}\big{]}_{\mathcal{R}(p,q)}\big{]}_{\mathcal{R}(p,q)}=0, \tag{11}\] _where \(n\), \(m\) and \(k\) are natural numbers, and \(\mathcal{C}(n,m,k)\) refers to the cyclic permutation of \((n,m,k)\)._

Let us now present the Heisenberg Virasoro algebra from the \(\mathcal{R}(p,q)\)-quantum algebra. It is an extension of the \(\mathcal{R}(p,q)\)-Heisenberg Witt algebra given by (6), (7), and (8). The central extension of the relation (6) was derived in our previous work [10] as follows: \[C_{\mathcal{R}(p,q)}(n)=C(p,q)\big{(}\frac{q}{p}\big{)}^{-n}\frac{[n]_{\mathcal{R}(p,q)}}{6[2\,n]_{\mathcal{R}(p,q)}}\,[n-1]_{\mathcal{R}(p,q)}\,[n]_{\mathcal{R}(p,q)}\,[n+1]_{\mathcal{R}(p,q)},\] where \(C(p,q)\) is an arbitrary function of \((p,q)\). From the relations (9), (10), and (11), we can obtain: \[C_{LI}(m_{1})=C_{LI}(p,q)\big{(}\frac{q}{p}\big{)}^{-m_{1}}\,\frac{2[m_{1}]_{\mathcal{R}(p,q)}}{[2\,m_{1}]_{\mathcal{R}(p,q)}}\,[m_{1}]_{\mathcal{R}(p,q)}\,[m_{1}+1]_{\mathcal{R}(p,q)}, \tag{12}\] and \[C_{I}(m_{1})=C_{I}(p,q)\big{(}\frac{q}{p}\big{)}^{-m_{1}}\,\frac{2[m_{1}]_{\mathcal{R}(p,q)}}{[2\,m_{1}]_{\mathcal{R}(p,q)}}\,[m_{1}]_{\mathcal{R}(p,q)}. \tag{13}\] Then, the \(\mathcal{R}(p,q)\)-deformed Heisenberg-Virasoro algebra is \(\bar{\mathcal{H}}_{\mathcal{R}(p,q)}:=span\{\bar{L}_{m},\bar{I}_{m}/m\in\mathbb{Z}\}\).
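For illustration, the following short Python sketch (ours, with arbitrary sample values of \(p\), \(q\), and \(n\) as assumptions) evaluates the \(\mathcal{R}(p,q)\)-numbers (2) for two of the particular deformations recalled above and checks the \((\epsilon_{1},\epsilon_{2})\) rewriting for the Jagannathan-Srinivasa case, where \(\epsilon_{1}=p\) and \(\epsilon_{2}=q\).

```python
def rpq_number(R, p, q, n):
    """[n]_{R(p,q)} := R(p^n, q^n), cf. (2)."""
    return R(p**n, q**n)

p, q, n = 0.9, 0.5, 4

# Jagannathan-Srinivasa: R(x, y) = (x - y) / (p - q)
js = lambda x, y: (x - y) / (p - q)
# Hounkonnou-Ngompe (q-Quesne): R(x, y) = (x*y - 1) / ((q - 1/p) * y)
quesne = lambda x, y: (x * y - 1) / ((q - 1 / p) * y)

print(rpq_number(js, p, q, n), (p**n - q**n) / (p - q))            # agree
print(rpq_number(quesne, p, q, n), (p**n - q**-n) / (q - 1 / p))   # agree

e1, e2 = p, q   # structure functions for the Jagannathan-Srinivasa case
print((e1**n - e2**n) / (e1 - e2))   # matches the first value above
```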
**Proposition 2**.: _The \(\mathcal{R}(p,q)\)-deformed Heisenberg Virasoro algebra is governed by the following commutation relations:_ \[\big{[}\bar{L}_{m_{1}},\bar{L}_{m_{2}}\big{]}_{x,y}\phi(z) = [m_{1}-m_{2}]_{\mathcal{R}(p,q)}\,\bar{L}_{m_{1}+m_{2}}\phi(z)+C_{L}(m_{1})\,\delta_{m_{1}+m_{2},0},\] \[\big{[}\bar{L}_{m_{1}},\bar{I}_{m_{2}}\big{]}_{u,v}\phi(z) = -[m_{2}]_{\mathcal{R}(p,q)}\,\bar{I}_{m_{1}+m_{2}}\phi(z)+C_{LI}(m_{1})\delta_{m_{1}+m_{2},0},\] \[\big{[}\bar{I}_{m_{1}},\bar{I}_{m_{2}}\big{]}_{\mathcal{R}(p,q)}\phi(z) = C_{I}(p,q)\big{(}\frac{q}{p}\big{)}^{m_{1}}\,\frac{2[m_{1}]_{\mathcal{R}(p,q)}}{[2\,m_{1}]_{\mathcal{R}(p,q)}}\,[m_{1}]_{\mathcal{R}(p,q)}\delta_{m_{1}+m_{2},0},\] _where_ \[C_{L}(m_{1})=C_{L}(p,q)\big{(}\frac{q}{p}\big{)}^{-m_{1}}\frac{[m_{1}]_{\mathcal{R}(p,q)}}{6[2\,m_{1}]_{\mathcal{R}(p,q)}}\,[m_{1}-1]_{\mathcal{R}(p,q)}\,[m_{1}]_{\mathcal{R}(p,q)}\,[m_{1}+1]_{\mathcal{R}(p,q)},\] \[C_{LI}(m_{1})=C_{LI}(p,q)\big{(}\frac{q}{p}\big{)}^{-m_{1}}\,\frac{2[m_{1}]_{\mathcal{R}(p,q)}}{[2\,m_{1}]_{\mathcal{R}(p,q)}}\,[m_{1}]_{\mathcal{R}(p,q)}\,[m_{1}+1]_{\mathcal{R}(p,q)},\] _and \(x,\,y,\,u,\) and \(v\) are given by the relation (5)._

**Remark 2**.: _It is worthwhile to derive the particular cases of the Heisenberg Virasoro algebra induced by the deformed quantum algebras known in the literature._

1. _The_ \(q\)_-operators_ \(\bar{L}_{m}=-z^{m}\,\mathcal{D}_{q}\) _and_ \(\bar{I}_{m}=-q^{m}\,z^{m}\) _satisfy the_ \(q\)_-Heisenberg Virasoro algebra with the commutation relations:_ \[\big{[}\bar{L}_{m_{1}},\bar{L}_{m_{2}}\big{]}_{x,y}\phi(z) =[m_{1}-m_{2}]_{q}\bar{L}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{C_{L}(q)q^{-m_{1}}}{12(1+q^{m_{1}})}[m_{1}-1]_{q}[m_{1}]_{q}[m_{1}+1]_{q}\delta_{m_{1}+m_{2},0}\] \[\big{[}\bar{L}_{m_{1}},\bar{I}_{m_{2}}\big{]}_{u,v}\phi(z) =-[m_{2}]_{q}\,\bar{I}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{2C_{LI}(q)\,q^{-m_{1}}}{1+q^{m_{1}}}[m_{1}]_{q}[m_{1}+1]_{q}\delta_{m_{1}+m_{2},0}\] \[\big{[}\bar{I}_{m_{1}},\bar{I}_{m_{2}}\big{]}_{q}\phi(z) =C_{I}(q)\,q^{m_{1}}\,\frac{2[m_{1}]_{q}}{[2\,m_{1}]_{q}}[m_{1}]_{q}\delta_{m_{1}+m_{2},0},\] _where_ \[\left\{\begin{array}{l}x=q^{m_{1}-m_{2}}\,\Theta_{m_{1}m_{2}}(q),\quad y=\Theta_{m_{1}m_{2}}(q),\\ u=q^{m_{1}},\quad v=q^{m_{1}-m_{2}},\\ \Theta_{m_{1}m_{2}}(q)=\frac{[m_{1}-m_{2}]_{q}}{[m_{1}]_{q}-q^{m_{1}-m_{2}}\,[m_{2}]_{q}}.\end{array}\right.\] 2. _The_ \(q\)_-Heisenberg Virasoro algebra generated by the_ \(q\)_-operators_ \(\bar{L}_{m}=-z^{m}\,\mathcal{D}_{q}\) _and_ \(\bar{I}_{m}=-q^{2m}\,z^{m}\) _obeys the following commutation relations:_ \[\left[\bar{L}_{m_{1}},\bar{L}_{m_{2}}\right]_{x,y}\phi(z) =[m_{1}-m_{2}]_{q}\bar{L}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{C_{L}(q)q^{-2\,m_{1}}}{12(q^{m_{1}}+q^{-m_{1}})}[m_{1}-1]_{q}[m_{1}]_{q}[m_{1}+1]_{q}\delta_{m_{1}+m_{2},0}\] \[\left[\bar{L}_{m_{1}},\bar{I}_{m_{2}}\right]_{u,v}\phi(z) =-[m_{2}]_{q}\bar{I}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{2C_{LI}(q)\,q^{-2\,m_{1}}}{q^{m_{1}}+q^{-m_{1}}}[m_{1}]_{q}[m_{1}+1]_{q}\delta_{m_{1}+m_{2},0}\] \[\left[\bar{I}_{m_{1}},\bar{I}_{m_{2}}\right]_{q}\phi(z) =C_{I}(q)\,q^{2\,m_{1}}\,\frac{2[m_{1}]_{q}}{[2\,m_{1}]_{q}}[m_{1}]_{q}\,\delta_{m_{1}+m_{2},0},\] _where_ \[\left\{\begin{array}{l}x=q^{2m_{1}-m_{2}}\,\Theta_{m_{1}m_{2}}(q),\quad y=q^{m_{1}}\,\Theta_{m_{1}m_{2}}(q),\\ u=q^{2m_{1}+m_{2}},\quad v=q^{2m_{1}},\\ \Theta_{m_{1}m_{2}}(q)=\frac{[m_{1}-m_{2}]_{q}}{[m_{1}]_{q}-[m_{2}]_{q}}.\end{array}\right.\] 3.
_The Heisenberg-Virasoro algebra induced by the_ **Chakrabarty-Jagannathan algebra** _is generated by the_ \((p,q)\)_-operators_ \(\bar{L}_{m}=-z^{m}\,\mathcal{D}_{p,q}\) _and_ \(\bar{I}_{m}=-\left(\frac{q}{p}\right)^{m}z^{m}\) _verifying the commutation relations:_ \[\left[\bar{L}_{m_{1}},\bar{L}_{m_{2}}\right]_{x,y}\phi(z) =[m_{1}-m_{2}]_{p,q}\bar{L}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{C_{L}(p,q)q^{-2\,m_{1}}}{12(p^{m_{1}}+q^{m_{1}})}[m_{1}-1]_{p,q}[m_{1}]_{p,q}[m_{1}+1]_{p,q}\delta_{m_{1}+m_{2},0}\] \[\left[\bar{L}_{m_{1}},\bar{I}_{m_{2}}\right]_{u,v}\phi(z) =-q^{m_{1}}\,[m_{2}]_{p,q}\bar{I}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{2C_{LI}(p,q)\,q^{-2\,m_{1}}}{p^{m_{1}}+q^{m_{1}}}[m_{1}]_{p,q}[m_{1}+1]_{p,q}\delta_{m_{1}+m_{2},0}\] \[\left[\bar{I}_{m_{1}},\bar{I}_{m_{2}}\right]_{p,q}\phi(z) =C_{I}(p,q)\,q^{2\,m_{1}}\,\frac{2[m_{1}]_{p,q}}{[2\,m_{1}]_{p,q}}[m_{1}]_{p,q}\,\delta_{m_{1}+m_{2},0},\] _where_ \[\left\{\begin{array}{l}x=q^{m_{1}-m_{2}}\,p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\quad y=p^{m_{1}}\,\Theta_{m_{1}m_{2}}(p,q),\\ u=q^{m_{1}}\,p^{m_{2}-m_{1}},\quad v=q^{m_{1}+m_{2}}\,p^{m_{2}-m_{1}},\\ \Theta_{m_{1}m_{2}}(p,q)=\frac{[m_{1}-m_{2}]_{p,q}}{[m_{1}]_{p,q}-(pq)^{m_{1}-m_{2}}\,[m_{2}]_{p,q}}.\end{array}\right.\] 4. _The Heisenberg Virasoro algebra associated to the_ **generalized \(q\)-Quesne algebra** _is governed by the operators_ \(\bar{L}_{m}=-z^{m}\,\mathcal{D}_{p,q}^{Q}\) _and_ \(\bar{I}_{m}=-\left(\frac{q}{p}\right)^{m}z^{m}\) _obeying the commutation relations:_ \[\big{[}\bar{L}_{m_{1}},\bar{L}_{m_{2}}\big{]}_{x,y}\phi(z) =[m_{1}-m_{2}]^{Q}_{p,q}\bar{L}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{C_{L}(p,q)q^{-2\,m_{1}}}{12(q^{m_{1}}+q^{-m_{1}})}[m_{1}-1]^{Q}_{p,q}[m_{1}]^{Q}_{p,q}[m_{1}+1]^{Q}_{p,q}\delta_{m_{1}+m_{2},0}\] \[\big{[}\bar{L}_{m_{1}},\bar{I}_{m_{2}}\big{]}_{u,v}\phi(z) =-q^{m_{1}}\,[m_{2}]^{Q}_{p,q}\bar{I}_{m_{1}+m_{2}}\phi(z)\] \[+\frac{2C_{LI}(p,q)\,q^{-2\,m_{1}}}{q^{m_{1}}+q^{-m_{1}}}[m_{1}]^{Q}_{p,q}[m_{1}+1]^{Q}_{p,q}\delta_{m_{1}+m_{2},0}\] \[\big{[}\bar{I}_{m_{1}},\bar{I}_{m_{2}}\big{]}^{Q}_{p,q}\phi(z) =C_{I}(p,q)\,q^{2\,m_{1}}\,\frac{2[m_{1}]^{Q}_{p,q}}{[2\,m_{1}]^{Q}_{p,q}}[m_{1}]^{Q}_{p,q}\,\delta_{m_{1}+m_{2},0},\] _where_ \[\left\{\begin{array}{l}x=q^{-m_{1}+m_{2}}\,p^{m_{1}}\,\Theta^{Q}_{m_{1}m_{2}}(p,q),\quad y=p^{m_{1}}\,\Theta^{Q}_{m_{1}m_{2}}(p,q),\\ u=q^{-m_{1}}\,p^{m_{2}-m_{1}},\quad v=q^{-m_{1}+m_{2}}\,p^{m_{2}+m_{1}},\\ \Theta^{Q}_{m_{1}m_{2}}(p,q)=\frac{[m_{1}-m_{2}]^{Q}_{p,q}}{[m_{1}]^{Q}_{p,q}-(pq)^{m_{1}-m_{2}}\,[m_{2}]^{Q}_{p,q}}.\end{array}\right.\]

## 4. Applications

This section is devoted to some applications of the generalized Heisenberg Virasoro algebra. Precisely, we study the generalized Heisenberg Witt \(n\)-algebras, a toy model for the Heisenberg Virasoro algebra, the \(\mathcal{R}(p,q)\)-deformed matrix models, and the elliptic generalized matrix models.

### \(\mathcal{R}(p,q)\)-Heisenberg Witt \(n\)-algebras

We construct the Heisenberg Witt \(n\)-algebras from the \(\mathcal{R}(p,q)\)-deformed quantum algebras [14]. Particular cases are deduced.
We consider the following relation for the \(\mathcal{R}(p,q)\)-deformed derivative: \[\mathcal{D}_{\mathcal{R}(p,q)}:=\frac{1}{z}\,[z\partial_{z}]_{\mathcal{R}(p,q)} \tag{14}\] and the operators given by: \[\mathbb{T}^{\mathcal{R}(p^{a},q^{a})}_{m}\phi(z):=-z^{m+1}\,\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\phi(z),\] where \(\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\) is the \(\mathcal{R}(p,q)\)-deformed derivative defined as: \[\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\big{(}\phi(z)\big{)}:=\frac{p^{a}-q^{a}}{p^{a\,P}-q^{a\,Q}}\mathcal{R}(p^{a\,P},q^{a\,Q})\,\frac{\phi(p^{a}z)-\phi(q^{a}z)}{p^{a}-q^{a}}.\] Then, from the relation (14), the \(\mathcal{R}(p,q)\)-deformed operators can be rewritten as follows: \[\mathbb{T}^{\mathcal{R}(p^{a},q^{a})}_{m}\phi(z)=-[z\partial_{z}-m]_{\mathcal{R}(p^{a},q^{a})}\,z^{m}\phi(z). \tag{15}\] Moreover, we define the second family of operators as follows: \[\mathbb{I}^{\mathcal{R}(p^{a},q^{a})}_{m}\phi(z):=-\tau^{a}\,z^{m}\,\phi(z). \tag{16}\]

**Proposition 3**.: _The deformed operators (15) and (16) satisfy the product relations:_ \[\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})}.\mathbb{T}_{m}^{\mathcal{R}(p^{b},q^{b})} =-\frac{\big{(}\epsilon_{1}^{a+b}-\epsilon_{2}^{a+b}\big{)}\epsilon_{1}^{m\,a}}{\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}\big{(}\epsilon_{1}^{b}-\epsilon_{2}^{b}\big{)}}\,\mathbb{T}_{m+n}^{\mathcal{R}(p^{a+b},q^{a+b})}+\frac{\epsilon_{2}^{(z\partial_{z}-n)a}}{\epsilon_{1}^{b}-\epsilon_{2}^{b}}\,\mathbb{T}_{m+n}^{\mathcal{R}(p^{a},q^{a})}\] \[+\,\frac{\epsilon_{1}^{m\,a}\,\epsilon_{2}^{(z\partial_{z}-m-n)b}}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\,\mathbb{T}_{m+n}^{\mathcal{R}(p^{b},q^{b})}\] _and_ \[\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})}.\mathbb{I}_{m}^{\mathcal{R}(p^{b},q^{b})} =\frac{1}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\bigg{\{}\tau^{-a}\epsilon_{2}^{a(z\partial_{z}-n)}\,\mathbb{I}_{n+m}^{\mathcal{R}(p^{a+b},q^{a+b})}-\big{(}\epsilon_{1}^{a(z\partial_{z}-n)}-1\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{b},q^{b})}\] \[-\,\tau^{b-a}\,\mathbb{I}_{n+m}^{\mathcal{R}(p^{a},q^{a})}\bigg{\}}.\] _Furthermore, the following commutation relations hold:_ \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m}^{\mathcal{R}(p^{b},q^{b})}\Big{]} = \frac{\big{(}\epsilon_{1}^{a+b}-\epsilon_{2}^{a+b}\big{)}\big{(}\epsilon_{1}^{n\,b}-\epsilon_{1}^{m\,a}\big{)}}{\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}\big{(}\epsilon_{1}^{b}-\epsilon_{2}^{b}\big{)}}\,\mathbb{T}_{m+n}^{\mathcal{R}(p^{a+b},q^{a+b})} \tag{17}\] \[- \frac{\epsilon_{2}^{(z\partial_{z}-m-n)a}\big{(}\epsilon_{1}^{n\,b}-\epsilon_{2}^{m\,a}\big{)}}{\epsilon_{1}^{b}-\epsilon_{2}^{b}}\mathbb{T}_{m+n}^{\mathcal{R}(p^{a},q^{a})}\] \[+ \frac{\epsilon_{2}^{(z\partial_{z}-m-n)b}\big{(}\epsilon_{1}^{m\,a}-\epsilon_{2}^{n\,b}\big{)}}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\mathbb{T}_{m+n}^{\mathcal{R}(p^{b},q^{b})}\] _and_ \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})},\mathbb{I}_{m}^{\mathcal{R}(p^{b},q^{b})}\Big{]} = \frac{1}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\bigg{\{}\tau^{-a}\epsilon_{2}^{a(z\partial_{z}-n)}\big{(}1-\epsilon_{2}^{-m\,a}\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{a+b},q^{a+b})} \tag{18}\] \[- \epsilon_{1}^{a(z\partial_{z}-n)}\big{(}\epsilon_{1}^{-m\,a}-1\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{b},q^{b})}\bigg{\}}.\]

Proof.: By simple computation.
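To see the operators at work, here is a small numerical sketch (ours, with the Jagannathan-Srinivasa choice \(\mathcal{R}(x,y)=(x-y)/(p-q)\) and sample values of \(p,q,a,m,j\) as assumptions). It confirms on a monomial \(z^{j}\) that the definition \(\mathbb{T}_{m}^{\mathcal{R}(p^{a},q^{a})}\phi=-z^{m+1}\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\phi\) and the rewriting (15) agree, namely \(\mathbb{T}_{m}^{\mathcal{R}(p^{a},q^{a})}z^{j}=-[j]_{\mathcal{R}(p^{a},q^{a})}z^{m+j}\).

```python
p, q, a = 0.9, 0.5, 2                 # illustrative parameters
num = lambda j: (p**(a * j) - q**(a * j)) / (p**a - q**a)   # [j]_{R(p^a,q^a)}

def D(phi, z):
    """R(p^a,q^a)-derivative for the Jagannathan-Srinivasa choice: it reduces
    to a (p^a,q^a)-difference quotient, so D z^j = [j] z^{j-1}."""
    return (phi(p**a * z) - phi(q**a * z)) / (z * (p**a - q**a))

m, j, z = 3, 5, 1.7
lhs = -z**(m + 1) * D(lambda w: w**j, z)   # definition of T_m acting on z^j
rhs = -num(j) * z**(m + j)                 # prediction from (15)
print(lhs, rhs)                            # the two values coincide
```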
Putting \(a=b=1\), we obtain, respectively, \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p,q)},\mathbb{T}_{m}^{\mathcal{R}(p,q)}\Big{]} =\frac{\big{(}\epsilon_{1}^{n}-\epsilon_{1}^{m}\big{)}}{\big{(}\epsilon_{1}-\epsilon_{2}\big{)}}[2]_{\mathcal{R}(p,q)}\mathbb{T}_{m+n}^{\mathcal{R}(p^{2},q^{2})}\] \[-\epsilon_{2}^{z\partial_{z}-m-n}\Big{(}[n]_{\mathcal{R}(p,q)}+[m]_{\mathcal{R}(p,q)}\Big{)}\mathbb{T}_{m+n}^{\mathcal{R}(p,q)}\] and \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p,q)},\mathbb{I}_{m}^{\mathcal{R}(p,q)}\Big{]} =\frac{1}{\epsilon_{1}-\epsilon_{2}}\bigg{\{}\tau^{-1}\epsilon_{2}^{(z\partial_{z}-n)}\big{(}1-\epsilon_{2}^{-m}\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{2},q^{2})}\] \[-\epsilon_{1}^{(z\partial_{z}-n)}\big{(}\epsilon_{1}^{-m}-1\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p,q)}\bigg{\}}.\] We consider the \(n\)-brackets defined by: \[\Big{[}\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a_{1}},q^{a_{1}})},\cdots,\mathbb{T}_{m_{n}}^{\mathcal{R}(p^{a_{n}},q^{a_{n}})}\Big{]}:=\Gamma_{1\cdots n}^{i_{1}\cdots i_{n}}\,\mathbb{T}_{m_{i_{1}}}^{\mathcal{R}(p^{a_{i_{1}}},q^{a_{i_{1}}})}\cdots\mathbb{T}_{m_{i_{n}}}^{\mathcal{R}(p^{a_{i_{n}}},q^{a_{i_{n}}})}, \tag{19}\] and \[\left[\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\cdots,\mathbb{I}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}\right] := \sum_{j=0}^{n-1}(-1)^{n-1+j}\Gamma_{12\cdots n-1}^{i_{1}\cdots i_{n-1}}\mathbb{T}_{m_{i_{1}}}^{\mathcal{R}(p^{a},q^{a})}\cdots\mathbb{T}_{m_{i_{j}}}^{\mathcal{R}(p^{a},q^{a})} \tag{20}\] \[\times \mathbb{I}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}\mathbb{T}_{m_{i_{j+1}}}^{\mathcal{R}(p^{a},q^{a})}\cdots\mathbb{T}_{m_{i_{n-1}}}^{\mathcal{R}(p^{a},q^{a})},\] where \(\Gamma_{1\cdots n}^{i_{1}\cdots i_{n}}\) is the Levi-Civita symbol given by: \[\Gamma_{i_{1}\cdots i_{p}}^{j_{1}\cdots j_{p}}=\det\left(\begin{array}{ccc}\delta_{i_{1}}^{j_{1}}&\cdots&\delta_{i_{p}}^{j_{1}}\\ \vdots&&\vdots\\ \delta_{i_{1}}^{j_{p}}&\cdots&\delta_{i_{p}}^{j_{p}}\end{array}\right).\] We are interested in the case with the same \(\mathcal{R}(p^{a},q^{a})\). 
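The determinant formula for the generalized Levi-Civita symbol \(\Gamma_{i_{1}\cdots i_{p}}^{j_{1}\cdots j_{p}}\) can be checked directly: the determinant of Kronecker deltas reproduces the sign of the permutation relating the two index strings and vanishes when an index repeats. A small illustrative sketch:

```python
import itertools
import numpy as np

def gamma(js, iis):
    """Generalized Levi-Civita symbol via det of Kronecker deltas delta_{i}^{j}."""
    M = np.array([[1.0 if i == j else 0.0 for i in iis] for j in js])
    return round(np.linalg.det(M))

# Vanishes on repeated indices; otherwise equals the permutation sign.
assert gamma((1, 1, 2), (1, 2, 3)) == 0
for perm in itertools.permutations((1, 2, 3)):
    # compare with the sign computed by counting inversions
    inv = sum(perm[a] > perm[b] for a in range(3) for b in range(a + 1, 3))
    assert gamma((1, 2, 3), perm) == (-1) ** inv
```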
Then, \[\Big{[}\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\cdots,\mathbb{T}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}\Big{]}=\Gamma_{1\cdots n}^{1\cdots n}\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})}\cdots\mathbb{T}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}.\] Putting \(a=b\) in the relations (17) and (18), we obtain: \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m}^{\mathcal{R}(p^{a},q^{a})}\Big{]} =\frac{\left(\epsilon_{1}^{n\,a}-\epsilon_{1}^{m\,a}\right)}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\,[2]_{\mathcal{R}(p^{a},q^{a})}\mathbb{T}_{m+n}^{\mathcal{R}(p^{2\,a},q^{2\,a})}\] \[+\frac{\epsilon_{2}^{(z\partial_{z}-m-n)a}}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\big{(}\epsilon_{1}^{m\,a}-\epsilon_{1}^{n\,a}+\epsilon_{2}^{m\,a}-\epsilon_{2}^{n\,a}\big{)}\mathbb{T}_{m+n}^{\mathcal{R}(p^{a},q^{a})}\] and \[\Big{[}\mathbb{T}_{n}^{\mathcal{R}(p^{a},q^{a})},\mathbb{I}_{m}^{\mathcal{R}(p^{a},q^{a})}\Big{]} =\frac{1}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\bigg{\{}\tau^{-a}\epsilon_{2}^{a(z\partial_{z}-n)}\big{(}1-\epsilon_{2}^{-m\,a}\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{2\,a},q^{2\,a})}\] \[-\epsilon_{1}^{a(z\partial_{z}-n)}\big{(}\epsilon_{1}^{-m\,a}-1\big{)}\mathbb{I}_{n+m}^{\mathcal{R}(p^{a},q^{a})}\bigg{\}}.\] After computation, the \(n\)-brackets (19) and (20) can be reduced to the following forms: \[\Big{[}\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\cdots,\mathbb{T}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}\Big{]} =\frac{1}{\left(\epsilon_{1}^{a}-\epsilon_{2}^{a}\right)^{n-1}}\Big{(}M_{a}^{n}[n]_{\mathcal{R}(p^{a},q^{a})}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{\mathcal{R}(p^{n\,a},q^{n\,a})}\] \[-\frac{[n-1]_{\mathcal{R}(p^{a},q^{a})}}{\epsilon_{2}^{-a\big{(}\sum_{l=1}^{n}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{n}+W_{a}^{n}\big{)}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{\mathcal{R}(p^{(n-1)a},q^{(n-1)a})}\Big{)},\] and \[\left[\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m_{2}}^{\mathcal{R}(p^{a},q^{a})},\cdots,\mathbb{I}_{m_{n}}^{\mathcal{R}(p^{a},q^{a})}\right] =\frac{1}{\left(\epsilon_{1}^{a}-\epsilon_{2}^{a}\right)^{n-1}}\bigg{\{}F_{n}^{a}\,\mathbb{I}_{m_{1}+\cdots+m_{n}}^{\mathcal{R}(p^{n\,a},q^{n\,a})}\] \[-R_{n}^{a}\,\mathbb{I}_{m_{1}+\cdots+m_{n}}^{\mathcal{R}(p^{(n-1)a},q^{(n-1)a})}\bigg{\}}, \tag{21}\] where \[M_{a}^{n} =\epsilon_{1}^{a(n-1)\sum_{s=1}^{n}m_{s}}\left(\left(\epsilon_{1}^{a}-\epsilon_{2}^{a}\right)^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\left([-m_{j}]_{\mathcal{R}(p^{a},q^{a})}-[-m_{k}]_{\mathcal{R}(p^{a},q^{a})}\right)\right.\] \[+\prod_{1\leq j<k\leq n}\left(\epsilon_{1}^{-a\,m_{j}}-\epsilon_{1}^{-a\,m_{k}}\right)\right),\] \[W_{a}^{n} =\epsilon_{2}^{a(n-1)\sum_{s=1}^{n}m_{s}}\bigg{(}\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[-m_{j}]_{\mathcal{R}(p^{a},q^{a})}-[-m_{k}]_{\mathcal{R}(p^{a},q^{a})}\Big{)}\] \[+(-1)^{n-1}\prod_{1\leq j<k\leq n}\Big{(}\epsilon_{1}^{-a\,m_{j}}-\epsilon_{1}^{-a\,m_{k}}\Big{)}\bigg{)},\] \[F_{n}^{a}=\tau^{-a}\epsilon_{2}^{a\sum_{s=1}^{n-1}(z\partial_{z}-m_{s})}\big{(}1-\epsilon_{2}^{-a\sum_{s=1}^{n-1}m_{s}}\big{)}\] and \[R_{n}^{a}=\epsilon_{1}^{a\sum_{s=1}^{n-1}(z\partial_{z}-m_{s})}\big{(}\epsilon_{1}^{-a\sum_{s=1}^{n-1}m_{s}}-1\big{)}.\] **Remark 3**.: _Taking \(n=3,\) we obtain the \(\mathcal{R}(p,q)\)-Heisenberg-Witt \(3\)-algebra:_ \[\Big{[}\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m_{2}}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m_{3}}^{\mathcal{R}(p^{a},q^{a})}\Big{]} =\frac{1}{\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}^{2}} 
\Big{(}M_{a}^{3}[3]_{\mathcal{R}(p^{a},q^{a})}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{\mathcal{R}(p^{3\,a},q^{3\,a})}\] \[-\frac{[2]_{\mathcal{R}(p^{a},q^{a})}}{\epsilon_{2}^{a\big{(}\sum_{l=1}^{3}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{3}+W_{a}^{3}\big{)}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{\mathcal{R}(p^{2\,a},q^{2\,a})}\Big{)},\] _where_ \[M_{a}^{3} =\epsilon_{1}^{2\,a\sum_{s=1}^{3}m_{s}}\left(\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{\mathcal{R}(p^{a},q^{a})}-[-m_{k}]_{\mathcal{R}(p^{a},q^{a})}\Big{)}\right.\] \[+\left.\prod_{1\leq j<k\leq 3}\Big{(}\epsilon_{2}^{-a\,m_{j}}-\epsilon_{2}^{-a\,m_{k}}\Big{)}\right)\] _and_ \[W_{a}^{3} =\epsilon_{2}^{2\,a\sum_{s=1}^{3}m_{s}}\Big{(}\big{(}\epsilon_{1}^{a}-\epsilon_{2}^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{\mathcal{R}(p^{a},q^{a})}-[-m_{k}]_{\mathcal{R}(p^{a},q^{a})}\Big{)}\] \[+\left.\prod_{1\leq j<k\leq 3}\Big{(}\epsilon_{1}^{-a\,m_{j}}-\epsilon_{1}^{-a\,m_{k}}\Big{)}\right).\] _Moreover,_ \[\left[\mathbb{T}_{m_{1}}^{\mathcal{R}(p^{a},q^{a})},\mathbb{T}_{m_{2}}^{\mathcal{R}(p^{a},q^{a})},\mathbb{I}_{m_{3}}^{\mathcal{R}(p^{a},q^{a})}\right]=\frac{1}{\left(\epsilon_{1}^{a}-\epsilon_{2}^{a}\right)^{2}}\bigg{\{}F_{3}^{a}\,\mathbb{I}_{m_{1}+m_{2}+m_{3}}^{\mathcal{R}(p^{3\,a},q^{3\,a})}-R_{3}^{a}\,\mathbb{I}_{m_{1}+m_{2}+m_{3}}^{\mathcal{R}(p^{2\,a},q^{2\,a})}\bigg{\}},\] _where_ \[F_{3}^{a}=\tau^{-a}\epsilon_{2}^{a\sum_{s=1}^{2}(z\partial_{z}-m_{s})}\big{(}1-\epsilon_{2}^{-a\sum_{s=1}^{2}m_{s}}\big{)}\] _and_ \[R_{3}^{a}=\epsilon_{1}^{a\sum_{s=1}^{2}(z\partial_{z}-m_{s})}\big{(}\epsilon_{1}^{-a\sum_{s=1}^{2}m_{s}}-1\big{)}.\] **Remark 4**.: _Interesting cases of Heisenberg-Witt \(n\)-algebras from quantum algebras existing in the literature are deduced as follows:_ 1. 
_Taking_ \(\mathcal{R}(x)=\frac{1-x}{1-q},\) _we obtain the_ \(q\)_-deformed Heisenberg-Witt_ \(n\)_-algebras:_ \[\Big{[}\mathbb{T}_{m_{1}}^{q^{a}},\cdots,\mathbb{T}_{m_{n}}^{q^{a}}\Big{]} =\frac{1}{\big{(}1-q^{a}\big{)}^{n-1}}\Big{(}M_{a}^{n}[n]_{q^{a}}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{q^{n\,a}}\] \[-\frac{[n-1]_{q^{a}}}{q^{-a\big{(}\sum_{l=1}^{n}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{n}+W_{a}^{n}\big{)}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{q^{(n-1)a}}\Big{)},\] _where_ \[M_{a}^{n}=\Big{(}\big{(}1-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[-m_{j}]_{q^{a}}-[-m_{k}]_{q^{a}}\Big{)}+\prod_{1\leq j<k\leq n}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)}\Big{)}\] _and_ \[W_{a}^{n}=q^{a(n-1)\sum_{s=1}^{n}m_{s}}\Big{(}\big{(}1-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[-m_{j}]_{q^{a}}-[-m_{k}]_{q^{a}}\Big{)}\Big{)}.\] _Moreover,_ \[\left[\mathbb{T}_{m_{1}}^{q^{a}},\mathbb{T}_{m_{2}}^{q^{a}},\cdots,\mathbb{I}_{m_{n}}^{q^{a}}\right]=\frac{1}{\big{(}1-q^{a}\big{)}^{n-1}}\,F_{n}^{a}\,\mathbb{I}_{m_{1}+\cdots+m_{n}}^{q^{n\,a}},\] _with_ \[F_{n}^{a}=q^{-a}\,q^{a\sum_{s=1}^{n-1}(z\partial_{z}-m_{s})}\big{(}1-q^{-a\sum_{s=1}^{n-1}m_{s}}\big{)}.\] _For_ \(n=3,\) _we deduce the_ \(q\)_-Heisenberg-Witt_ \(3\)_-algebra:_ \[\Big{[}\mathbb{T}_{m_{1}}^{q^{a}},\mathbb{T}_{m_{2}}^{q^{a}},\mathbb{T}_{m_{3}}^{q^{a}}\Big{]} =\frac{1}{\big{(}1-q^{a}\big{)}^{2}}\Big{(}M_{a}^{3}[3]_{q^{a}}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{q^{3\,a}}\] \[-\frac{[2]_{q^{a}}}{q^{a\big{(}\sum_{l=1}^{3}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{3}+W_{a}^{3}\big{)}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{q^{2\,a}}\Big{)},\] \[\left[\mathbb{T}_{m_{1}}^{q^{a}},\mathbb{T}_{m_{2}}^{q^{a}},\mathbb{I}_{m_{3}}^{q^{a}}\right]=\frac{1}{\big{(}1-q^{a}\big{)}^{2}}\,F_{3}^{a}\,\mathbb{I}_{m_{1}+m_{2}+m_{3}}^{q^{3\,a}},\] _where_ \[M_{a}^{3}=\Big{(}\big{(}1-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{q^{a}}-[-m_{k}]_{q^{a}}\Big{)}+\prod_{1\leq j<k\leq 3}\big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\big{)}\Big{)},\] \[W_{a}^{3}=q^{2\,a\sum_{s=1}^{3}m_{s}}\Big{(}\big{(}1-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{q^{a}}-[-m_{k}]_{q^{a}}\Big{)}\Big{)},\] _and_ \[F_{3}^{a}=q^{-a}\,q^{a\sum_{s=1}^{2}(z\partial_{z}-m_{s})}\big{(}1-q^{-a\sum_{s=1}^{2}m_{s}}\big{)}.\] 2. 
_Putting_ \(\mathcal{R}(x,y)=\frac{x-y}{p-q},\) _we obtain the_ \((p,q)\)_-deformed Heisenberg-Witt_ \(n\)_-algebras:_ \[\Big{[}\mathbb{T}_{m_{1}}^{p^{a},q^{a}},\cdots,\mathbb{T}_{m_{n}}^{p^{a},q^{a}}\Big{]} =\frac{1}{\big{(}p^{a}-q^{a}\big{)}^{n-1}}\Big{(}M_{a}^{n}[n]_{p^{a},q^{a}}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{p^{n\,a},q^{n\,a}}\] \[-\frac{[n-1]_{p^{a},q^{a}}}{q^{-a\big{(}\sum_{l=1}^{n}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{n}+W_{a}^{n}\big{)}\mathbb{T}_{m_{1}+\cdots+m_{n}}^{p^{(n-1)a},q^{(n-1)a}}\Big{)},\] _where_ \[M_{a}^{n} =p^{a(n-1)\sum_{s=1}^{n}m_{s}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[-m_{j}]_{p^{a},q^{a}}-[-m_{k}]_{p^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq n}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)}\Big{)}\] _and_ \[W_{a}^{n} =q^{a(n-1)\sum_{s=1}^{n}m_{s}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{n}{2}}\prod_{1\leq j<k\leq n}\Big{(}[-m_{j}]_{p^{a},q^{a}}-[-m_{k}]_{p^{a},q^{a}}\Big{)}\] \[+(-1)^{n-1}\prod_{1\leq j<k\leq n}\Big{(}p^{-a\,m_{j}}-p^{-a\,m_{k}}\Big{)}\Big{)}.\] _Moreover,_ \[\bigg{[}\mathbb{T}_{m_{1}}^{p^{a},q^{a}},\mathbb{T}_{m_{2}}^{p^{a},q^{a}},\cdots,\mathbb{I}_{m_{n}}^{p^{a},q^{a}}\bigg{]}=\frac{1}{\big{(}p^{a}-q^{a}\big{)}^{n-1}}\bigg{\{}F_{n}^{a}\,\mathbb{I}_{m_{1}+\cdots+m_{n}}^{p^{n\,a},q^{n\,a}}-R_{n}^{a}\,\mathbb{I}_{m_{1}+\cdots+m_{n}}^{p^{(n-1)a},q^{(n-1)a}}\bigg{\}},\] _with_ \[F_{n}^{a}=(pq)^{-a}\,q^{a\sum_{s=1}^{n-1}(z\partial_{z}-m_{s})}\big{(}1-q^{-a\sum_{s=1}^{n-1}m_{s}}\big{)}\] _and_ \[R_{n}^{a}=p^{a\sum_{s=1}^{n-1}(z\partial_{z}-m_{s})}\big{(}p^{-a\sum_{s=1}^{n-1}m_{s}}-1\big{)}.\] _Taking \(n=3,\) we obtain the \((p,q)\)-Heisenberg-Witt \(3\)-algebra:_ \[\Big{[}\mathbb{T}_{m_{1}}^{p^{a},q^{a}},\mathbb{T}_{m_{2}}^{p^{a},q^{a}},\mathbb{T}_{m_{3}}^{p^{a},q^{a}}\Big{]} =\frac{1}{\big{(}p^{a}-q^{a}\big{)}^{2}}\Big{(}M_{a}^{3}[3]_{p^{a},q^{a}}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{p^{3\,a},q^{3\,a}} \tag{22}\] \[-\frac{[2]_{p^{a},q^{a}}}{q^{a\big{(}\sum_{l=1}^{3}z\partial_{z}-m_{l}\big{)}}}\big{(}M_{a}^{3}+W_{a}^{3}\big{)}\mathbb{T}_{m_{1}+m_{2}+m_{3}}^{p^{2\,a},q^{2\,a}}\Big{)},\] (23) \[\Bigg{[}\mathbb{T}_{m_{1}}^{p^{a},q^{a}},\mathbb{T}_{m_{2}}^{p^{a},q^{a}},\mathbb{I}_{m_{3}}^{p^{a},q^{a}}\Bigg{]} =\frac{1}{\big{(}p^{a}-q^{a}\big{)}^{2}}\Bigg{\{}F_{3}^{a}\,\mathbb{I}_{m_{1}+m_{2}+m_{3}}^{p^{3\,a},q^{3\,a}}-R_{3}^{a}\,\mathbb{I}_{m_{1}+m_{2}+m_{3}}^{p^{2\,a},q^{2\,a}}\Bigg{\}},\] _where_ \[M_{a}^{3} =p^{2\,a\sum_{s=1}^{3}m_{s}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{p^{a},q^{a}}-[-m_{k}]_{p^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}q^{-a\,m_{j}}-q^{-a\,m_{k}}\Big{)}\Big{)},\] \[W_{a}^{3} =q^{2\,a\sum_{s=1}^{3}m_{s}}\Big{(}\big{(}p^{a}-q^{a}\big{)}^{\binom{3}{2}}\prod_{1\leq j<k\leq 3}\Big{(}[-m_{j}]_{p^{a},q^{a}}-[-m_{k}]_{p^{a},q^{a}}\Big{)}\] \[+\prod_{1\leq j<k\leq 3}\Big{(}p^{-a\,m_{j}}-p^{-a\,m_{k}}\Big{)}\Big{)},\] \[F_{3}^{a}=(pq)^{-a}\,q^{a\sum_{s=1}^{2}(z\partial_{z}-m_{s})}\big{(}1-q^{-a\sum_{s=1}^{2}m_{s}}\big{)}\] _and_ \[R_{3}^{a}=p^{a\sum_{s=1}^{2}(z\partial_{z}-m_{s})}\big{(}p^{-a\sum_{s=1}^{2}m_{s}}-1\big{)}.\]

### Heisenberg-Virasoro constraints and a toy model

Here, we use the generalized Heisenberg-Virasoro constraints to study a toy model. Particular cases are derived. They play an important role in the study of matrix models. 
We consider the generating function with infinitely many parameters given by [23]: \[Z^{toy}(t)=\int x^{\gamma}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)\,dx,\] which encodes many different integrals. We consider the following expansion: \[\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)=\sum_{n=0}^{\infty}B_{n}(t_{1},\cdots,t_{n})\frac{x^{n}}{n!}, \tag{24}\] where \(B_{n}\) are the complete Bell polynomials. The following property holds for the \(\mathcal{R}(p,q)\)-deformed derivative \[\int_{\mathbb{R}}\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}f(x)dx=\frac{K(p^{a},q^{a})}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\Big{(}\int_{-\infty}^{+\infty}\frac{f(\epsilon_{1}^{a}\,x)}{x}dx-\int_{-\infty}^{+\infty}\frac{f(\epsilon_{2}^{a}\,x)}{x}dx\Big{)}=0,\] where \[K(p^{a},q^{a})=\frac{p^{a}-q^{a}}{p^{a\,P}-q^{a\,Q}}\mathcal{R}\big{(}p^{a\,P},q^{a\,Q}\big{)}.\] For \(f(x)=x^{m+\gamma+1}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right),\) we have: \[\int_{-\infty}^{+\infty}\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\left(x^{m+\gamma+1}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)\right)dx=0.\] Thus, \[\mathcal{D}_{\mathcal{R}(p^{a},q^{a})}\bigg{(}x^{m+\gamma+1}\,\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)\bigg{)}= \frac{[z\partial_{z}]_{\mathcal{R}(p^{a},q^{a})}\,x^{m+\gamma}}{\epsilon_{1}^{a\,m}}\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right)\] \[+\frac{K(p^{a},q^{a})\epsilon_{2}^{a(m+1+\gamma)}}{(\epsilon_{1}^{a}-\epsilon_{2}^{a})x^{-k-m}}\sum_{k=1}^{\infty}\frac{B_{k}(t_{1}^{a},\cdots,t_{k}^{a})}{k!}x^{\gamma}\exp\left(\sum_{s=0}^{\infty}\frac{t_{s}}{s!}x^{s}\right),\] where \(t_{k}^{a}=(\epsilon_{1}^{a\,k}-\epsilon_{2}^{a\,k})t_{k}.\) Then, from the constraints on the partition function, \[\mathbb{T}_{m}^{\mathcal{R}(p^{a},q^{a})}\,Z^{toy}(t)=0,\quad m\geq 0\] and \[\mathbb{I}_{m}^{\mathcal{R}(p^{a},q^{a})}\,Z^{toy}(t)=0,\quad m\geq 0,\] we have: \[\mathbb{T}_{m}^{\mathcal{R}(p^{a},q^{a})} =[z\partial_{z}]_{\mathcal{R}(p^{a},q^{a})}\,m!\,\epsilon_{1}^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+K(p^{a},q^{a})\frac{\epsilon_{2}^{a(m+1+\gamma)}}{\epsilon_{1}^{a}-\epsilon_{2}^{a}}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}}. \tag{25}\] Similarly, we obtain: \[\mathbb{I}_{m}^{\mathcal{R}(p^{a},q^{a})}=\tau^{a(m+1+\gamma)}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}}. \tag{26}\] **Remark 5**.: _The Heisenberg-Virasoro operators (25) and (26) corresponding to quantum algebras in the literature are deduced as:_ 1. _The_ \(q\)_-Heisenberg-Virasoro operators:_ \[\mathbb{T}_{m}^{q^{a}}=[z\partial_{z}]_{q^{a}}\,m!\,q^{-a\,m}\,\frac{\partial}{\partial t_{m}}+K(q^{a})\frac{q^{-a(m+1+\gamma)}}{q^{a}-q^{-a}}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}}\] _and_ \[\mathbb{I}_{m}^{q^{a}}=q^{a(m+1+\gamma)}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}},\] _where_ \[[x]_{q}=\frac{q^{x}-q^{-x}}{q-q^{-1}}.\] 2. 
_The_ \((p,q)\)_-Heisenberg-Virasoro operators:_ \[\mathbb{T}_{m}^{p^{a},q^{a}} =[z\partial_{z}]_{p^{a},q^{a}}\,m!\,p^{-a\,m}\,\frac{\partial}{\partial t_{m}}\] \[+K(p^{a},q^{a})\frac{q^{a(m+1+\gamma)}}{p^{a}-q^{a}}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}}\] _and_ \[\mathbb{I}_{m}^{p^{a},q^{a}}=(pq)^{a(m+1+\gamma)}\sum_{k=1}^{\infty}\frac{(k+m)!}{k!}B_{k}(t_{1}^{a},\cdots,t_{k}^{a})\frac{\partial}{\partial t_{k+m}}.\]

### Generalized matrix model

In this section, we generalize the matrix model from the quantum algebra. Moreover, we present the Pochhammer symbol, theta function, Gaussian density, elliptic gamma function, and the integral from the \(\mathcal{R}(p,q)\)-deformed quantum algebra. We focus only on the notions used in the sequel. More information can be found in [22] and references therein. We consider now the following relation: \[\left\{\begin{array}{l}F(z)=z,\\ \\ G(P,Q)=\frac{q^{Q}-p^{P}}{q^{Q}\mathcal{R}(p^{P},q^{Q})},\quad\text{if}\quad\eta>0,\end{array}\right.\] where \(\eta\) is given in the relation (1). Then, **Definition 4**.: _The \(\mathcal{R}(p,q)\)-Pochhammer symbol is given by:_ \[\big{(}u,z;\mathcal{R}(p,q)\big{)}_{n}:=\prod_{j=0}^{n-1}\left(u-F\big{(}\frac{q^{j}}{p^{j}}\,z\big{)}G(P,Q)\right), \tag{27}\] _and_ \[\big{(}u,z;\mathcal{R}(p,q)\big{)}_{\infty}:=\prod_{j=0}^{\infty}\left(u-F\big{(}\frac{q^{j}}{p^{j}}\,z\big{)}G(P,Q)\right),\] _with the following relation:_ \[\big{(}u,z;\mathcal{R}(p,q)\big{)}_{n}=\frac{\big{(}u,z;\mathcal{R}(p,q)\big{)}_{\infty}}{\big{(}u,z\frac{q^{n}}{p^{n}};\mathcal{R}(p,q)\big{)}_{\infty}}.\] Furthermore, the generalized Gaussian density is given as follows: \[\rho(z):=\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}.\] **Definition 5**.: _The \(\mathcal{R}(p,q)\)-deformed matrix model in terms of eigenvalue integrals is given by the following relations:_ \[Z_{N}^{\mathcal{R}(p,q)}(p_{k}) :=\int_{-\xi}^{\xi}\bigg{(}\prod_{i}\,z_{i}^{\beta(N-1)}\,\rho(z_{i})d_{\mathcal{R}(p,q)}z_{i}\bigg{)}\prod_{j\neq i}\bigg{(}u,\frac{z_{i}}{z_{j}};\mathcal{R}(p,q)\bigg{)}_{\beta}\] \[\times\exp\bigg{(}\sum_{i,k}\frac{p_{k}}{k}\,z_{i}^{k}\bigg{)}\] _and_ \[\bigg{\langle}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{\rangle}_{\mathcal{R}(p,q)} :=\frac{1}{Z_{N}^{\mathcal{R}(p,q)}(0)}\int_{-\xi}^{\xi}\bigg{(}\prod_{i}\,z_{i}^{\beta(N-1)}\,\rho(z_{i})d_{\mathcal{R}(p,q)}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\bigg{(}u,\frac{z_{i}}{z_{j}};\mathcal{R}(p,q)\bigg{)}_{\beta}\bigg{(}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{)},\] _where \(\xi\) is a parameter._ **Remark 6**.: _A particular case of matrix models is deduced from the formalism as follows: The \((p,q)\)-Pochhammer symbol is given by:_ \[\big{(}u,z;p,q\big{)}_{\infty}:=\prod_{j=0}^{\infty}\bigg{(}u-\frac{q^{j}}{p^{j}}\,z\bigg{)}\,,\] _with the following relation:_ \[\big{(}u,z;p,q\big{)}_{n}=\frac{\big{(}u,z;p,q\big{)}_{\infty}}{\big{(}u,z\frac{q^{n}}{p^{n}};p,q\big{)}_{\infty}}.\] _and the \((p,q)\)-deformed Gaussian density by:_ \[\rho(z):=\big{(}u,q^{2}z^{2}/\xi^{2};p^{2},q^{2}\big{)}_{\infty}. 
\tag{28}\] _Furthermore, the \((p,q)\)-deformed matrix model is derived by the relations:_ \[Z_{N}^{p,q}(p_{k}):=\int_{-\xi}^{\xi}\bigg{(}\prod_{i}\,z_{i}^{\beta(N-1)}\,\rho(z_{i})d_{p,q}z_{i}\bigg{)}\prod_{j\neq i}\bigg{(}u,\frac{z_{i}}{z_{j}};p,q\bigg{)}_{\beta}\exp\bigg{(}\sum_{i,k}\frac{p_{k}}{k}\,z_{i}^{k}\bigg{)}\] _and_ \[\bigg{\langle}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{\rangle}_{p,q} :=\frac{1}{Z_{N}^{p,q}(0)}\int_{-\xi}^{\xi}\bigg{(}\prod_{i}\,z_{i}^{\beta(N-1)}\,\rho(z_{i})d_{p,q}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\bigg{(}u,\frac{z_{i}}{z_{j}};p,q\bigg{)}_{\beta}\bigg{(}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{)},\] _where \(\xi\) is a parameter._ Now, we investigate the elliptic generalized matrix models. **Definition 6**.: _The elliptic \(\mathcal{R}(p,q)\)-Pochhammer symbol is defined as follows:_ \[\big{(}u,z;\mathcal{R}(p,q),w\big{)}_{\infty}:=\prod_{j,k=0}^{\infty}\left(u-\gamma_{j,k}(z,w)\right), \tag{29}\] _where_ \(\gamma_{j,k}(z,w)=F\big{(}\frac{q^{j}}{p^{j}}\,w^{k}\,z\big{)}G(P,Q)\)_._ _Moreover, the \(\mathcal{R}(p,q)\)-theta function \(\theta_{w}(u,z)\) is given by:_ \[\theta_{w}(u,z)=\big{(}u,z;w\big{)}_{\infty}\big{(}u,w/z;w\big{)}_{\infty}. \tag{30}\] _Furthermore, the generalized elliptic gamma function is defined by:_ \[\Gamma\big{(}u,z;w,\mathcal{R}(p,q)\big{)}:=\frac{\big{(}u,qw/z;w,\mathcal{R}(p,q)\big{)}_{\infty}}{\big{(}u,z;w,\mathcal{R}(p,q)\big{)}_{\infty}}.\] In the particular case, we have: \[\Gamma\big{(}u,q^{n};w,\mathcal{R}(p,q)\big{)}=\prod_{k=1}^{\infty}\frac{[k]_{\mathcal{R}(s,w)}}{[k]_{\mathcal{R}(p,q)}}\,\prod_{i=1}^{n-1}\theta_{w}(u,q^{i}).\] We consider the relation \[\langle f(z)\rangle:=\frac{\int_{-\xi}^{\xi}\rho(z)\,f(z)d_{\mathcal{R}(p,q)}z}{\int_{-\xi}^{\xi}\rho(z)\,d_{\mathcal{R}(p,q)}z}. 
\tag{31}\] Then, from the generalized Andrews-Askey formula [9]: \[\int_{-\xi}^{\xi}\frac{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}{\big{(}u,-\alpha_{1}z/\xi;\mathcal{R}(p,q)\big{)}_{\infty}\big{(}u,\alpha_{2}z/\xi;\mathcal{R}(p,q)\big{)}_{\infty}}d_{\mathcal{R}(p,q)}z=\xi(p-q)\frac{(u,q^{2};\mathcal{R}(p^{2},q^{2}))_{\infty}}{(u,\alpha_{1}^{2};\mathcal{R}(p^{2},q^{2}))_{\infty}}\] \[\times\frac{(u,-1;\mathcal{R}(p,q))_{\infty}(u,\alpha_{1}\alpha_{2};\mathcal{R}(p,q))_{\infty}}{(u,\alpha_{2}^{2};\mathcal{R}(p^{2},q^{2}))_{\infty}}.\] For \(\alpha_{1}=\alpha_{2}=\alpha\), the above relation takes the following form: \[\int_{-\xi}^{\xi}\frac{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}{\big{(}u,\alpha^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}d_{\mathcal{R}(p,q)}z =\xi(p-q)\frac{\big{(}u,q^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}\big{(}u,-1;\mathcal{R}(p,q)\big{)}_{\infty}}{\big{(}u,\alpha^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}\] \[\times\frac{\big{(}u,\alpha^{2};\mathcal{R}(p,q)\big{)}_{\infty}}{\big{(}u,\alpha^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}\] and can be rewritten as: \[\int_{-\xi}^{\xi}\frac{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}{\big{(}u,\alpha^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}d_{\mathcal{R}(p,q)}z=\xi(p-q)\prod_{n=0}^{\infty}\frac{\big{(}u-F\big{(}\frac{q^{2n+2}}{p^{2n+2}}\big{)}G(P,Q)\big{)}}{\big{(}u-F\big{(}\frac{q^{2n}}{p^{2n}}\big{)}G(P,Q)\alpha^{2}\big{)}}\] \[\times\prod_{n=0}^{\infty}\frac{\big{(}u+F\big{(}\frac{q^{n}}{p^{n}}\big{)}G(P,Q)\big{)}\big{(}u-F\big{(}\frac{q^{n}}{p^{n}}\big{)}G(P,Q)\alpha^{2}\big{)}}{\big{(}u-F\big{(}\frac{q^{2n}}{p^{2n}}\big{)}G(P,Q)\alpha^{2}\big{)}}.\] Taking \(\alpha=0\), we have \[\int_{-\xi}^{\xi}\,\rho(z)d_{\mathcal{R}(p,q)}z=\xi(p-q)\prod_{n=0}^{\infty}\big{(}u-F\big{(}\frac{q^{2n+2}}{p^{2n+2}}\big{)}G(P,Q)\big{)}\big{(}u+F\big{(}\frac{q^{n}}{p^{n}}\big{)}G(P,Q)\big{)}.\] Then, from the relation (31), we obtain \[\bigg{\langle}\frac{1}{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}\bigg{\rangle}=\prod_{n=0}^{\infty}\frac{\big{(}u-F\big{(}\frac{q^{n}}{p^{n}}\big{)}G(P,Q)\alpha^{2}\big{)}}{\big{(}u-F\big{(}\frac{q^{2n}}{p^{2n}}\big{)}G(P,Q)\alpha^{2}\big{)}^{2}}. \tag{32}\] Using the relations \[\frac{1}{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}=\sum_{i=0}^{\infty}\,\frac{1}{\big{(}u,q^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{i}}\bigg{(}\frac{\alpha\,z}{\xi}\bigg{)}^{2i}\] and \[\big{(}u,z;\mathcal{R}(p,q)\big{)}_{\infty}=\exp\bigg{(}-\sum_{i}\frac{z^{i}}{i[i]_{\mathcal{R}(p,q)}}\bigg{)},\] the relation (32) reduces to: \[\bigg{\langle}\frac{1}{\big{(}u,q^{2}z^{2}/\xi^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}}\bigg{\rangle} = \sum_{i=0}^{\infty}\,\frac{1}{\big{(}u,q^{2};\mathcal{R}(p^{2},q^{2})\big{)}_{i}}\bigg{(}\frac{\alpha}{\xi}\bigg{)}^{2i}\langle z^{2i}\rangle\] \[= \exp\bigg{\{}\sum_{i}\frac{\alpha^{2i}}{i}\bigg{(}\frac{2}{[2i]_{\mathcal{R}(p,q)}}-\frac{1}{[i]_{\mathcal{R}(p,q)}}\bigg{)}\bigg{\}}.\] Then, the following relation holds: \[\langle z^{k}\rangle=\frac{1}{2}\,\xi^{k}\,\delta_{k}^{(2)}\prod_{i=1}^{k/2}\big{(}u-F\big{(}\frac{q^{2i-1}}{p^{2i-1}}\big{)}G(P,Q)\big{)}. \tag{33}\] Note that, to define the generalized elliptic matrix model, we need to define the elliptic generalization of the Vandermonde factor and measure from the relation (33). 
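In the \((p,q)\) specialization of Remark 6, the defining ratio relation for the finite Pochhammer symbol can be tested numerically: truncating the infinite products at a large cutoff \(J\) (valid for \(|q/p|<1\)), the quotient \((u,z;p,q)_{\infty}/(u,zq^{n}/p^{n};p,q)_{\infty}\) collapses to the finite product \(\prod_{j=0}^{n-1}(u-(q/p)^{j}z)\). A rough sketch; all parameter values are chosen arbitrarily, and \(u=1\) is taken so the truncated infinite products stay bounded:

```python
import math

p, q, u, z = 1.5, 0.4, 1.0, 0.7      # arbitrary test values with |q/p| < 1

def poch_inf(u, z, p, q, J=200):
    """Truncated (u, z; p, q)_infty = prod_{j >= 0} (u - (q/p)^j z)."""
    return math.prod(u - (q / p) ** j * z for j in range(J))

def poch_n(u, z, p, q, n):
    """Finite symbol prod_{j=0}^{n-1} (u - (q/p)^j z)."""
    return math.prod(u - (q / p) ** j * z for j in range(n))

for n in range(1, 6):
    ratio = poch_inf(u, z, p, q) / poch_inf(u, z * (q / p) ** n, p, q)
    assert abs(ratio - poch_n(u, z, p, q, n)) < 1e-9
```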
Then, the elliptic analogues of the relation (33) can be deduced as follows: \[\langle z^{k}\rangle_{\text{(ell)}}=\xi^{k}.\delta_{k}^{(2)}.\prod_{i=1}^{k/2}\theta_{w}(u,q^{2i-1})\] and the elliptic Vandermonde factor is provided by the elliptic gamma function. Moreover, the elliptic Gaussian density is given by \[\rho^{\text{(ell)}}(z,w)=\big{(}u,q^{2}z^{2}/\xi^{2};w,\mathcal{R}(p^{2},q^{2})\big{)}_{\infty}.\] Then, the definition follows: **Definition 7**.: _The generalized elliptic matrix model is defined as:_ \[Z_{N}^{\rm ell}(\{p_{k}\}) =\int\bigg{(}\prod_{i}z_{i}^{\beta(N-1)}\rho^{\rm(ell)}(z_{i})d_{\rm ell}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\tfrac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}{\Gamma\big{(}u,\tfrac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}\,\exp\bigg{(}\sum_{i,k}\frac{p_{k}}{k}z_{i}^{k}\bigg{)}\] _and_ \[\bigg{\langle}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{\rangle}_{\rm(ell)} =\frac{1}{Z_{N}^{\rm(ell)}(0)}\int\bigg{(}\prod_{i}z_{i}^{\beta(N-1)}\rho^{\rm(ell)}(z_{i})d_{\rm ell}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\tfrac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}{\Gamma\big{(}u,\tfrac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}\,\bigg{(}\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{)}.\] **Remark 7**.: _A particular case of elliptic matrix models is recovered as follows: The elliptic \((p,q)\)-Pochhammer symbol is defined as follows:_ \[\big{(}u,z,w;p,q\big{)}_{\infty}:=\prod_{j,k=0}^{\infty}\left(u-\gamma_{j,k}(z,w)\right),\] _where_ \(\gamma_{j,k}(z,w)=F\big{(}\tfrac{q^{j}}{p^{j}}\,w^{k}\,z\big{)}G(P,Q).\) _Moreover, the \((p,q)\)-theta function \(\theta_{w}(u,z)\) is given by:_ \[\theta_{w}(u,z)=\big{(}u,z;w\big{)}_{\infty}\big{(}u,w/z;w\big{)}_{\infty}\] _and the \((p,q)\)-deformed elliptic gamma function as:_ \[\Gamma\big{(}u,z;w,p,q\big{)}:=\frac{\big{(}u,qw/z;w,p,q\big{)}_{\infty}}{\big{(}u,z;w,p,q\big{)}_{\infty}}.\] _Moreover, the \((p,q)\)-elliptic Gaussian density is given by_ \[\rho^{\rm(ell)}(z,w)=\big{(}u,q^{2}z^{2}/\xi^{2};w,p^{2},q^{2}\big{)}_{\infty}\] _and the \((p,q)\)-elliptic matrix model by:_ \[Z_{N}^{\rm ell}(\{p_{k}\}) =\int\bigg{(}\prod_{i}z_{i}^{\beta(N-1)}\rho^{\rm(ell)}(z_{i})d_{\rm ell}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\tfrac{z_{i}}{z_{j}};w,p,q\big{)}}{\Gamma\big{(}u,\tfrac{z_{i}}{z_{j}};w,p,q\big{)}}\exp\bigg{(}\sum_{i,k}\frac{p_{k}}{k}z_{i}^{k}\bigg{)}\] _and_ \[\left\langle\,\prod_{i}\sum_{m}z_{m}^{k_{i}}\right\rangle_{\rm(ell)} =\frac{1}{Z_{N}^{(ell)}(0)}\int\bigg{(}\,\prod_{i}z_{i}^{\beta(N-1)}\rho^{\rm(ell)}(z_{i})d_{\rm ell}z_{i}\bigg{)}\] \[\times\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\frac{z_{i}}{z_{j}};w,p,q\big{)}}{\Gamma\big{(}u,\frac{z_{i}}{z_{j}};w,p,q\big{)}}\,\bigg{(}\,\prod_{i}\sum_{m}z_{m}^{k_{i}}\bigg{)}.\] **Definition 8**.: _The \(\mathcal{R}(p,q)\)-differential operator is defined as follows:_ \[T_{n}^{\mathcal{R}(p,q)}\phi(z):=-\sum_{l=1}^{N}\mathcal{D}_{\mathcal{R}(p,q)}^{z_{l}}\,z_{l}^{n+1}\phi(z), \tag{34}\] _which acts on the functions of \(N\) variables, where \(\mathcal{D}_{\mathcal{R}(p,q)}^{z_{l}}\) is the \(\mathcal{R}(p,q)\)-derivative with respect to the \(z_{l}\)-variable._ **Proposition 4**.: _The operators (34) verify the \(\mathcal{R}(p,q)\)-deformed commutation relation:_ \[\big{[}T_{n}^{\mathcal{R}(p,q)},T_{m}^{\mathcal{R}(p,q)}\big{]}_{x_{n},x_{m}}=\big{(}[n]_{\mathcal{R}(p,q)}-[m]_{\mathcal{R}(p,q)}\big{)}T_{n+m}^{\mathcal{R}(p,q)},\] _where_ \[x_{n}=q^{n-m}p^{n}\,\chi_{nm}(p,q),\quad 
x_{m}=p^{n}\chi_{nm}(p,q)\] _and_ \[\chi_{nm}(p,q)=\frac{[n]_{\mathcal{R}(p,q)}-[m]_{\mathcal{R}(p,q)}}{[n+1]_{\mathcal{R}(p,q)}-(pq)^{n-m}[m+1]_{\mathcal{R}(p,q)}}.\] We can rewrite the above operators as follows: **Proposition 5**.: _The \(\mathcal{R}(p,q)\)-operator (34) can be given as follows:_ \[T_{n}^{\mathcal{R}(p,q)} =\frac{K(P,Q)}{p-q}\bigg{[}(\frac{q}{p})^{n+1+\beta(N-1)}\sum_{l=0}^{\infty}\frac{(l+n-2N)!}{l!}B_{l}(\tilde{t}_{1},...,\tilde{t}_{l})\] \[\times D_{N}\frac{\partial}{\partial t_{l+n-2N}}-p^{n+1+\beta(N-1)}n!\frac{\partial}{\partial t_{n}}\bigg{]}, \tag{35}\] _where \(D_{N}\) is the differential operator (40)._ Proof.: The elliptic generalized matrix model can be rewritten as: \[Z_{N}^{\rm ell}(\{p_{k}\}) =\int\prod_{i}d_{\rm ell}z_{i}\,\prod_{i}z_{i}^{\beta(N-1)}\rho^{\rm(ell)}(z_{i})\] \[\times\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}{\Gamma\big{(}u,\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}\,\exp\bigg{(}\,\sum_{i,k}\frac{p_{k}}{k}z_{i}^{k}\bigg{)}. \tag{36}\] Inserting the \(\mathcal{R}(p,q)\)-differential operators (34) under the integral (36), we naturally obtain zero. Now we have to evaluate how these differential operators act on the integrand. Setting \[g(z)=\prod_{i}z_{i}^{\beta(N-1)}\rho^{(\text{ell})}(z_{i})\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}{\Gamma\big{(}u,\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}\] and \[f(z)=z^{n+1},\] we have: \[T_{n}^{\mathcal{R}(p,q)}g(z) = \sum_{l=1}^{N}\frac{K(P,Q)}{p-q}\bigg{(}\big{(}\frac{q}{p}\big{)}^{n+1+\beta(N-1)}\prod_{j\neq l}\frac{p}{q}\frac{z_{j}^{2}}{z_{l}^{2}}-1\bigg{)}p^{n+1+\beta(N-1)}z_{l}^{n} \tag{37}\] \[\times \prod_{i}(zp)_{i}^{\beta(N-1)}\rho^{(\text{ell})}(pz_{i})\prod_{j\neq i}\frac{\Gamma\big{(}u,q^{\beta},\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}}{\Gamma\big{(}u,\frac{z_{i}}{z_{j}};w,\mathcal{R}(p,q)\big{)}},\] where \[K(P,Q)=\frac{p-q}{p^{P}-q^{Q}}\,\mathcal{R}(p^{P},q^{Q}).\] The \(n\)th complete Bell polynomial \(B_{n}\) given by (24) satisfies the following relations: \[B_{l}(\tilde{t}_{1},...,\tilde{t}_{l})=\sum_{\nu=0}^{l}q^{\nu}\binom{l}{\nu}B_{\nu}(t_{1},...,t_{\nu})B_{l-\nu}(-t_{1},...,-t_{l-\nu}), \tag{38}\] where \(\tilde{t}_{k}=(q^{k}-1)t_{k}\), and \[\exp\left(\sum_{k=1}^{\infty}\frac{t_{k}}{k!}q^{k}z_{i}^{k}\right) =\sum_{k=0}^{\infty}\sum_{\nu=0}^{\infty}\frac{1}{k!\nu!}B_{k}(t_{1},\ldots,t_{k})\] \[\times B_{\nu}(-t_{1},\ldots,-t_{\nu})q^{k}z_{i}^{k+\nu}\exp\bigg{(}\sum_{l=1}^{\infty}\frac{t_{l}}{l!}z_{i}^{l}\bigg{)}\] \[=\sum_{k=0}^{\infty}\frac{1}{k!}B_{k}\left(\tilde{t}_{1},\ldots,\tilde{t}_{k}\right)z_{i}^{k}\exp\left(\sum_{l=1}^{\infty}\frac{t_{l}}{l!}z_{i}^{l}\right). \tag{39}\] Applying the formulas (37) and (39), we find the insertion of the \(\mathcal{R}(p,q)\)-operator (34) under the integral (36). 
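The splitting identity (38) (with the index \(l-\nu\)) follows from multiplying the exponential generating functions of \(B_{\nu}(t)\) at argument \(qx\) and of \(B_{\mu}(-t)\) at argument \(x\); it can be confirmed symbolically. A sketch using the standard recurrence for complete Bell polynomials (helper names are illustrative):

```python
import sympy as sp

qq = sp.symbols('q')
t = sp.symbols('t1:7')                    # t_1, ..., t_6

def complete_bell(n, ts):
    """Complete Bell polynomials via B_{m+1} = sum_k C(m,k) B_{m-k} t_{k+1}."""
    B = [sp.Integer(1)]
    for m in range(n):
        B.append(sp.expand(sum(sp.binomial(m, k) * B[m - k] * ts[k]
                               for k in range(m + 1))))
    return B[n]

# B_l(ttilde) = sum_nu C(l,nu) q^nu B_nu(t) B_{l-nu}(-t), with ttilde_k = (q^k - 1) t_k
for l in range(5):
    ttilde = [(qq ** (k + 1) - 1) * t[k] for k in range(l)]
    lhs = complete_bell(l, ttilde)
    rhs = sum(sp.binomial(l, nu) * qq ** nu * complete_bell(nu, t)
              * complete_bell(l - nu, [-x for x in t]) for nu in range(l + 1))
    assert sp.expand(lhs - rhs) == 0
```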
Then, the relation (37) can be rewritten in the simpler form: \[T_{n}^{\mathcal{R}(p,q)}g(z) =\frac{K(P,Q)}{p-q}\bigg{[}\prod_{j=1}^{N}z_{j}^{2}\sum_{l=1}^{N}\sum_{k,\nu=0}^{\infty}\big{(}\frac{q}{p}\big{)}^{n+1+\beta(N-1)}q^{k}\frac{1}{k!\nu!}B_{k}(t_{1},\ldots,t_{k})\] \[\times B_{\nu}(-t_{1},\ldots,-t_{\nu})z_{l}^{k+\nu+n-2N}-p^{n+1+\beta(N-1)}\,\sum_{l=1}^{N}z_{l}^{n}\bigg{]}\.\] Using Newton's identities, \[\prod_{i=1}^{N}z_{i}=\frac{1}{N!}\left|\begin{array}{cccccc}\nu_{1}&1&0&\ldots&&\\ \nu_{2}&\nu_{1}&2&0&\ldots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ \nu_{N-1}&\nu_{N-2}&\ldots&\cdots&\nu_{1}&N-1\\ \nu_{N}&\nu_{N-1}&\ldots&\cdots&\nu_{2}&\nu_{1}\end{array}\right|\,\] where \(\nu_{k}\equiv\sum\limits_{i=1}^{N}z_{i}^{k}\), the terms \(\sum\limits_{i=1}^{N}z_{i}^{k}\) may be generated by taking the derivatives with respect to \(t\), and thus we can consider the following differential operator \[D_{N}=\frac{1}{N!}\left|\begin{array}{cccccc}2!\frac{\partial}{\partial t_{2}}&1&0&\ldots&&\\ 4!\frac{\partial}{\partial t_{4}}&2!\frac{\partial}{\partial t_{2}}&2&0&\ldots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ (2N-2)!\frac{\partial}{\partial t_{2N-2}}&(2N-4)!\frac{\partial}{\partial t_{2N-4}}&\cdots&\cdots&2!\frac{\partial}{\partial t_{2}}&N-1\\ (2N)!\frac{\partial}{\partial t_{2N}}&(2N-2)!\frac{\partial}{\partial t_{2N-2}}&\cdots&\cdots&4!\frac{\partial}{\partial t_{4}}&2!\frac{\partial}{\partial t_{2}}\end{array}\right|\, \tag{40}\] with the property that \[\prod_{j=1}^{N}z_{j}^{2}\;e^{\sum_{k=0}^{\infty}\frac{t_{k}}{k!}\sum_{i=1}^{N}z_{i}^{k}}=D_{N}\left(e^{\sum\limits_{k=0}^{\infty}\frac{t_{k}}{k!}\sum\limits_{i=1}^{N}z_{i}^{k}}\right)\.\] Combining everything, we obtain the following \(\mathcal{R}(p,q)\)-Virasoro operator: \[T_{n}^{\mathcal{R}(p,q)} = \frac{K(P,Q)}{p-q}\bigg{[}\sum_{k,\nu=0}^{\infty}\big{(}\frac{q}{p}\big{)}^{n+1+\beta(N-1)}q^{k}\frac{(k+\nu+n-2N)!}{k!\nu!}B_{k}(t_{1},\ldots,t_{k})\] \[\times B_{\nu}(-t_{1},\ldots,-t_{\nu})D_{N}\frac{\partial}{\partial t_{k+\nu+n-2N}}-p^{n+1+\beta(N-1)}\,n!\frac{\partial}{\partial t_{n}}\bigg{]}\,\] which annihilates the generating function \(Z_{N}^{\rm ell}(\{t\})\). Using the property (38), the result follows. 
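The determinant in Newton's identities above (and hence the shape of the operator \(D_{N}\)) is easy to verify symbolically for small \(N\): the matrix built from the power sums \(\nu_{k}\) with the superdiagonal \(1,2,\ldots,N-1\) has determinant \(N!\prod_{i}z_{i}\). A small sketch:

```python
import sympy as sp

zs = sp.symbols('z1:6')

def product_from_power_sums(vars_):
    """prod z_i = (1/N!) * det of the Newton matrix in the power sums nu_k."""
    N = len(vars_)
    nu = [sum(z ** k for z in vars_) for k in range(N + 1)]
    M = sp.zeros(N, N)
    for i in range(N):
        for j in range(N):
            if j == i + 1:
                M[i, j] = i + 1            # superdiagonal 1, 2, ..., N-1
            elif j <= i:
                M[i, j] = nu[i - j + 1]    # nu_1 on the diagonal, nu_k below
    return M.det() / sp.factorial(N)

for N in (2, 3, 4):
    assert sp.expand(product_from_power_sums(zs[:N]) - sp.Mul(*zs[:N])) == 0
```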
Now, we can show that the \(\mathcal{R}(p,q)\)-operators (35) obey the following commutation relation: \[[T_{n}^{\mathcal{R}(p,q)},T_{m}^{\mathcal{R}(p,q)}]=f_{nm}(p,q)([n]_{\mathcal{R}(p,q)}-[m]_{\mathcal{R}(p,q)})\bigg{(}[2]_{\mathcal{R}(p,q)}T_{n+m}^{\mathcal{R}(p^{2},q^{2})}-T_{n+m}^{\mathcal{R}(p,q)}\bigg{)},\] where \(f_{nm}(p,q)\) is a function depending on \(p\), \(q\), \(n\), and \(m\), and \(T_{n}^{\mathcal{R}(p^{2},q^{2})}\) is the \(\mathcal{R}(p,q)\)-difference operator defined by: \[T_{n}^{\mathcal{R}(p^{2},q^{2})}\phi(z)=-\sum_{l=1}^{N}\mathcal{D}_{\mathcal{R}(p^{2},q^{2})}^{z_{l}}\,z_{l}^{n+1}\phi(z).\] From the above procedure, we can deduce that the operators \(T_{n}^{\mathcal{R}(p^{2},q^{2})}\) also annihilate the \(\mathcal{R}(p,q)\)-generating function \(Z_{N}^{\rm ell}(\{t\}).\) Then, we have: \[T_{n}^{\mathcal{R}(p^{2},q^{2})}Z_{N}^{\rm ell}(\{t\})=0,\] where the \(\mathcal{R}(p,q)\)-differential operator \(T_{n}^{\mathcal{R}(p^{2},q^{2})}\) is given by the following relation using (35): \[T_{n}^{\mathcal{R}(p^{2},q^{2})} = \frac{K(P,Q)}{p^{2}-q^{2}}\bigg{[}\big{(}\frac{q}{p}\big{)}^{2n+4-4\beta(N-1)}\sum_{l=0}^{\infty}\frac{(l+n-4N)!}{l!}B_{l}(\hat{t}_{1},...,\hat{t}_{l}) \tag{41}\] \[\times \tilde{D}_{N}\frac{\partial}{\partial t_{l+n-4N}}-p^{2n+4+4\beta(N-1)}n!\frac{\partial}{\partial t_{n}}\bigg{]},\] with \(\tilde{D}_{N}\) the differential operator defined by: \[\tilde{D}_{N}=\frac{1}{N!}\left|\begin{array}{cccccc}4!\frac{\partial}{\partial t_{4}}&1&0&\ldots&&\\ 8!\frac{\partial}{\partial t_{8}}&4!\frac{\partial}{\partial t_{4}}&2&0&\ldots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ (4N-4)!\frac{\partial}{\partial t_{4N-4}}&(4N-8)!\frac{\partial}{\partial t_{4N-8}}&\cdots&\cdots&4!\frac{\partial}{\partial t_{4}}&N-1\\ (4N)!\frac{\partial}{\partial t_{4N}}&(4N-4)!\frac{\partial}{\partial t_{4N-4}}&\cdots&\cdots&8!\frac{\partial}{\partial t_{8}}&4!\frac{\partial}{\partial t_{4}}\end{array}\right|.\] From the relation (41), we see that the operators \(T_{n}^{\mathcal{R}(p^{2},q^{2})}\) are higher-order differential operators. Similarly, using the same procedure, we can define the operators \(T_{n}^{\mathcal{R}(p^{j},q^{j})}\) as follows: \[T_{n}^{\mathcal{R}(p^{j},q^{j})} = \frac{K(P,Q)}{p^{j}-q^{j}}\bigg{[}q^{jn+j^{2}-j^{2}\beta(N-1)}\sum_{l=0}^{\infty}\frac{(l+n-2jN)!}{l!}B_{l}(\hat{t}_{1},...,\hat{t}_{l})\] \[\times \hat{D}_{N}\frac{\partial}{\partial t_{l+n-2jN}}-p^{jn+j^{2}\beta(N-1)}n!\frac{\partial}{\partial t_{n}}\bigg{]},\] with \(\hat{D}_{N}\) given by: \[\hat{D}_{N}=\frac{1}{N!}\left|\begin{array}{cccccc}2j!\frac{\partial}{\partial t_{2j}}&1&0&\ldots&&\\ 4j!\frac{\partial}{\partial t_{4j}}&2j!\frac{\partial}{\partial t_{2j}}&2&0&\ldots&\\ \cdots&\cdots&\cdots&\cdots&\cdots&\\ (2N-2)j!\frac{\partial}{\partial t_{(2N-2)j}}&(2N-4)j!\frac{\partial}{\partial t_{(2N-4)j}}&\cdots&\cdots&2j!\frac{\partial}{\partial t_{2j}}&N-1\\ (2jN)!\frac{\partial}{\partial t_{2jN}}&(2N-2)j!\frac{\partial}{\partial t_{(2N-2)j}}&\cdots&\cdots&4j!\frac{\partial}{\partial t_{4j}}&2j!\frac{\partial}{\partial t_{2j}}\end{array}\right|.\]

## 5. Concluding remarks

We have constructed the \(\mathcal{R}(p,q)\)-deformed Heisenberg-Virasoro algebra and the \(\mathcal{R}(p,q)\)-Heisenberg-Witt \(n\)-algebra. Moreover, we have generalized the matrix models and the elliptic Hermitian matrix models, and presented the \(\mathcal{R}(p,q)\)-differential operators of the Virasoro algebra. Related particular cases have been deduced. 
## Acknowledgements This work is supported by a DAAD research stay, reference 91819215. The research of RW is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 427320536 - SFB 1442, as well as under Germany's Excellence Strategy EXC 2044 390685587, Mathematics Münster: Dynamics-Geometry-Structure.
2303.07340
Doubly optimal parallel wire cutting without ancilla qubits
A restriction in the quality and quantity of available qubits presents a substantial obstacle to the application of near-term and early fault-tolerant quantum computers in practical tasks. To confront this challenge, some techniques for effectively augmenting the system size through classical processing have been proposed; one promising approach is quantum circuit cutting. The main idea of quantum circuit cutting is to decompose an original circuit into smaller sub-circuits and combine outputs from these sub-circuits to recover the original output. Although this approach enables us to simulate larger quantum circuits beyond physically available circuits, it needs classical overheads quantified by the two metrics: the sampling overhead in the number of measurements to reconstruct the original output, and the number of channels in the decomposition. Thus, it is crucial to devise a decomposition method that minimizes both of these metrics, thereby reducing the overall execution time. This paper studies the problem of decomposing the parallel $n$-qubit identity channel, i.e., $n$-parallel wire cutting, into a set of local operations and classical communication; then we give an optimal wire-cutting method comprised of channels based on mutually unbiased bases, that achieves minimal overheads in both the sampling overhead and the number of channels, without ancilla qubits. This is in stark contrast to the existing method that achieves the optimal sampling overhead yet with ancilla qubits. Moreover, we derive a tight lower bound of the number of channels in parallel wire cutting without ancilla systems and show that only our method achieves this lower bound among the existing methods. Notably, our method shows an exponential improvement in the number of channels, compared to the aforementioned ancilla-assisted method that achieves optimal sampling overhead.
Hiroyuki Harada, Kaito Wada, Naoki Yamamoto
2023-03-13T17:59:18Z
http://arxiv.org/abs/2303.07340v2
# Optimal parallel wire cutting without ancilla qubits

###### Abstract

The circuit cutting techniques are widely used to execute quantum algorithms that require more qubits than physically available; the idea is to decompose the original circuit into smaller sub-circuits that are combined to simulate the original one by sampling. However, they suffer from an inevitable issue that the sampling cost grows exponentially with the number of cuts, and thus it is practically important to develop decomposition methods working with smaller sampling cost. This paper focuses on the parallel wire-cutting problem, meaning that the target to cut is the identity channel acting on multiple qubits in parallel. There are two previous studies. The first one is a teleportation-based method that achieves this task with the use of ancilla qubits, whose sampling cost was proven to be optimal. The second one uses the technique of classical shadow to solve the problem without any ancilla qubits, which however does not achieve the optimal sampling cost. This paper gives a definitive solution to this problem. That is, we develop the optimal parallel wire cutting without ancilla qubits, in the same setup as that taken in the above previous studies. Moreover, the developed ancilla-free method can be applied to the non-parallel case, which also improves the sampling cost obtained in the previous study.

## I Introduction

Quantum computers are expected to have a significant advantage in certain tasks over classical ones [1; 2]. In recent years, with the advent of Noisy Intermediate-Scale Quantum (NISQ) devices [3], extensive research has been conducted to explore algorithms that may demonstrate the usefulness of these near-term devices, e.g., quantum machine learning [4; 5; 6; 7], quantum simulation [8; 9; 10] and quantum chemical computation [11; 12; 13]. However, current NISQ devices have fundamental limitations in both the quality and quantity of available qubits, which are major obstacles to their practical use. To mitigate this problem, a wide range of approaches have been proposed, which effectively expand the size of quantum systems with the help of classical post-processing [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. In particular, the methods of partitioning a quantum circuit [15; 16; 17; 18; 19; 20; 21; 22; 23] are useful. The idea is to decompose a quantum circuit into several subsystems and recover the original output by appropriately combining the output results from all the subsystems. The circuit partitioning method is categorized into two main techniques, depending on the type of target quantum circuits to be decomposed. One is the method of decomposing the identity channel (or the _wires_) into linear combinations of _measure-and-prepare channels_, i.e., a measurement followed by the preparation of a quantum state; this is called _wire cutting_ (or circuit cutting or time-like cut) [15; 16; 17; 18; 19]. The other one is called the _gate cutting_ (or space-like cut), which decomposes a global channel into linear combinations of tensor products of local channels [20; 21; 22; 23], based on the quasiprobability decomposition methods widely used in error mitigation techniques [27; 28; 29]. However, they have an inevitable problem that the variance of target quantities calculated using the subsystems increases exponentially with the number of cut locations. Clearly, this sampling issue will become serious in practice, and several studies have been conducted to pursue a less-costly decomposition [18; 19; 21; 22]. 
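For a single wire, the basic fact underlying such measure-and-prepare decompositions of the identity channel is the Pauli operator expansion \(\rho=\frac{1}{2}\sum_{O\in\{I,X,Y,Z\}}\mathrm{tr}(O\rho)\,O\): each term can be realized by measuring \(O\) upstream and re-preparing its eigenstates downstream. A minimal numerical sketch of this expansion (illustrative only; not the optimized decomposition developed in this paper):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

rho = 0.5 * (I + 0.3 * X - 0.2 * Y + 0.4 * Z)   # arbitrary single-qubit state

# rho = (1/2) * sum_O tr(O rho) O : the cut wire is simulated by estimating the
# Pauli expectation values before the cut and re-preparing states after it.
recon = 0.5 * sum(np.trace(O @ rho) * O for O in (I, X, Y, Z))
assert np.allclose(recon, rho)
```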
In this paper, we consider the problem of wire cutting, for which the methods proposed in [18; 19] realize a smaller sampling cost than the original scheme [15]. Here we briefly describe these methods. The idea of decomposing the \(n\)-parallel identity channel (i.e., \(\text{Id}^{\otimes n}\) with the single-qubit identity channel Id) is illustrated in Fig. 1. That is, \(\text{Id}^{\otimes n}\), depicted with the blue rectangle, is replaced by a channel \(\Phi\) which functions as an identity channel from the top \(n\) wires to the bottom \(n\) wires; additionally, to realize the wire cutting, the top and bottom wires are not connected via any quantum resource, but local operations and classical communication (LOCC) are allowed. Hence, the problem boils down to how to realize such \(\Phi\). Ref. [18] employed the idea of classical shadow [30] for this purpose, as illustrated in Fig. 1(b1). Specifically, a random unitary \(V^{\dagger}\) followed by the computational basis measurement on the top wire yields the output \(\mathbf{y}\), from which we input the state \(V\left|\mathbf{y}\right\rangle\left\langle\mathbf{y}\right|V^{\dagger}\) (called the classical snapshot in [30]) to the bottom wire; if \(V\) forms a unitary \(t\)-design (\(t\geq 2\)) such as the Clifford random gate, then their ensemble average over \(V\) and \(\mathbf{y}\) together with an additional input \(\mathbf{y}^{\prime}\) (which is a uniformly random vector in \(\{0,1\}^{n}\)) recovers the input state at the bottom wire. Clearly, the top and bottom wires of \(\Phi\) are separated in this case, and thus the entire circuit depicted in Fig. 1(a) is decomposed into two smaller sub-circuits. Note that this scheme does not use any ancilla qubit other than the top and bottom wires. The sampling cost (a formal defini
2308.05625
The Coble-Mukai lattice from $\mathbb Q$-Gorenstein deformations
We show some geometric properties of Enriques surfaces via $\mathbb Q$-Gorenstein smoothings of Coble surfaces. In particular, we explicitly identify the Enriques lattice of the general fiber with the Coble-Mukai lattice. At the end, we discuss applications to Gorenstein $\mathbb Q$-Homology projective planes with trivial canonical class.
Giancarlo Urzúa
2023-08-10T15:13:50Z
http://arxiv.org/abs/2308.05625v1
# The Coble-Mukai lattice from \(\mathbb{Q}\)-Gorenstein deformations ###### Abstract. We show some geometric properties of Enriques surfaces via \(\mathbb{Q}\)-Gorenstein smoothings of Coble surfaces. In particular, we explicitly identify the Enriques lattice of the general fiber with the Coble-Mukai lattice. At the end, we discuss applications to Gorenstein \(\mathbb{Q}\)-Homology projective planes with trivial canonical class. ###### Contents * 1 Introduction * 2 Preliminaries * 3 Class and Picard groups * 4 Coble-Mukai lattice and Enriques lattice * 5 Explicit computations for the first degeneration * 6 Further degenerations and \(\mathbb{Q}\)-Homology projective planes ## 1. Introduction Our ground field is \(\mathbb{C}\). An _Enriques surface_ is a nonsingular projective surface \(Z\) with \(q(Z)=h^{1}(\mathcal{O}_{Z})=0\), \(p_{g}(Z)=h^{2}(\mathcal{O}_{Z})=0\), and \(2K_{Z}\sim 0\). (An excellent survey on Enriques surfaces is [1]; our main reference will be the two-volume book [1, 2].) Degenerations of Enriques surfaces have been studied in e.g. [10], [11], [12], [13]. We are interested in \(\mathbb{Q}\)-Gorenstein degenerations (called moderate degenerations in [1]). An _Enriques W-surface_ is a normal projective surface \(X\) together with a proper deformation \((X\subset\mathcal{X})\to(0\in\mathbb{D})\), where \(\mathbb{D}\) is a smooth curve germ, such that 1. \(X\) has at most _Wahl singularities_, that is, cyclic quotient singularities of type \(\frac{1}{n^{2}}(1,na-1)\) with \(\gcd(a,n)=1\), 2. \(\mathcal{X}\) is a normal \(3\)-fold with \(K_{\mathcal{X}}\)\(\mathbb{Q}\)-Cartier, 3. the fiber \(X_{0}\) is reduced and isomorphic to \(X\), 4. the fiber \(X_{t}\) is an Enriques surface for \(t\neq 0\). (For more general W-surfaces and their birational geometry see [10].) Kawamata proves that the monodromy for this type of degeneration is trivial [14, Section 2], and that an Enriques W-surface \(X\) with \(K_{X}\) nef can have only singularities of type \(\frac{1}{4}(1,1)\)[14, Theorem 4.1] (i.e. these are flower pot degenerations [13]). In addition, if \(X\) is indeed singular, then it must be rational. Let \(\phi\colon V\to X\) be the minimal resolution, and let \(\{C_{1},\ldots,C_{s}\}\) be the disjoint exceptional \((-4)\)-curves. Then \[-2K_{V}\sim C_{1}+\ldots+C_{s},\] and so \(V\) is a _Coble surface of K3 type_[1, Section 9.1]. By [1, Corollary 9.1.5], we have that \(s\leq 10\). For any given Enriques W-surface \(X\), the purpose of this note is to directly compute some geometric properties of the Enriques surfaces \(X_{t}\). For example, this point of view gives a direct way to identify the Coble-Mukai lattice of \(V\) with the Picard group of \(X_{t}\) modulo canonical class (see Theorem 4.2). The Coble-Mukai lattice was introduced by Mukai with no reference to a degeneration of Enriques surfaces. In [1, Theorem 9.2.15] it is proved that the Coble-Mukai lattice is isomorphic to the Enriques lattice, but there are no degenerations of Enriques surfaces involved either. We also explicitly describe \(\operatorname{Pic}(X_{t})\) from classes of curves in the singular surface \(X\). As an application, we discuss nodal Enriques surfaces from Coble surfaces, and in particular some open questions on \(\mathbb{Q}\)-homology projective planes via Enriques W-surfaces. _Acknowledgments._ The author would like to thank Igor Dolgachev for the motivation to write this note, and for various useful comments and references. 
The author also thanks Matthias Schütt for his interest and useful comments. The author is grateful to Phillip Griffiths and Carlos Simpson for the invitation to talk at the "Moduli and Hodge theory" online conference at the Institute of the Mathematical Sciences of the Americas (IMSA) during February 1-5 2021, and for the invitation to write a paper for the corresponding proceedings. The author would like to thank the anonymous referee for interesting and useful observations. The author was supported by the FONDECYT regular grant 1230065. ## 2. Preliminaries Let \(X\) be an Enriques W-surface with \(K_{X}\) nef. Let \[\phi\colon V\to X\] be the minimal resolution of \(X\). As it was said in the introduction, the surface \(V\) must be a _Coble surface of K3 type_. In this way, we have two options for \(V\) (see [1, Prop. 9.1.3]): it is the blow-up of either: 1. a Halphen surface of index two 1 over the singularities of one reduced fiber, which is of type \(II\), \(III\), \(IV\), or \(I_{n}\), or 2. a Jacobian rational minimal elliptic fibration over the singularities of two reduced fibers of type \(II\), \(III\), \(IV\), or \(I_{n}\). Footnote 1: A Halphen surface of index two is a rational elliptic surface with a multiplicity-two fiber, and no \((-1)\)-curves in the fibers. The blow-ups are as follows. Over a type \(II\) fiber we only blow up the cusp to obtain one \((-4)\)-curve. For type \(III\) we blow up twice over the tangent point, and so we obtain two \((-4)\)-curves over the fiber. In the case \(IV\) we blow up four times over the triple point, so that we obtain four \((-4)\)-curves. Finally, for any \(I_{n}\) fiber we blow up the \(n\) nodes, so that we obtain \(n\) \((-4)\)-curves. Let \(\pi\colon V\to S\) be the composition of the blow-ups. We obtain the boundary curves \(\{C_{1},\dots,C_{s}\}\) in \(V\) from the distinguished singular fiber(s). We recall that \(C_{i}\simeq\mathbb{P}^{1}\) and \(C_{i}^{2}=-4\), i.e. they are the \((-4)\)-curves over the fibers. The contraction of the boundary curves gives the surface \(X\). We will only consider the case when the singular fibers are of type \(I_{n}\), the other cases are degenerations. Hence for one singular fiber we have \(I_{s}\), and for two we have \(I_{s_{1}}\) and \(I_{s_{2}}\) with \(s_{1}+s_{2}=s\). We have that \(1\leq s\leq 10\) by [10, Corollary 9.1.4]. One can actually prove that such an \(X\) has no local-to-global obstructions to deform, and that any \(\mathbb{Q}\)-Gorenstein smoothing of it is an Enriques surface (see [14, Theorem 4.2(0)], where there is a result for more general elliptic fibrations). The \(\mathbb{Q}\)-Gorenstein deformation space of \(X\) has dimension \(10\) (see [13, Section 3]). _Example 2.1_ (Blow-ups of Enriques W-surfaces).: As above, consider an elliptic fibration \(S\to\mathbb{P}^{1}\) with sections and at least two \(I_{1}\) fibers \(F_{1}\) and \(F_{2}\). Let \(V\to S\) be the blow-up of both nodes. We now blow up over a general point in, say, \(F_{2}\) to obtain the chain \([6,2,2]\) with a \((-1)\)-curve intersecting the middle \((-2)\)-curve, where the \((-6)\)-curve is the proper transform of \(F_{2}\) (see Figure 1). We contract \([4]\) and \([6,2,2]\) to obtain a surface \(Z\). One can prove that \(Z\) has no local-to-global obstructions to deform. We consider a \(\mathbb{Q}\)-Gorenstein smoothing of \(Z\). One can show that the general fiber is the blow-up of an Enriques surface at one point. 
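The Wahl chains named here are quick to verify: the Hirzebruch-Jung continued fraction of \(n^{2}/(na-1)\) with \(a=1\), i.e. of \(\frac{1}{n^{2}}(1,n-1)\), gives exactly \([4]\), \([6,2,2]\), and (the next chain in this family) \([8,2,2,2,2]\). A small computational sketch:

```python
from fractions import Fraction

def hj(n, q):
    """Hirzebruch-Jung continued fraction [a1,...,ak] of n/q."""
    out, x = [], Fraction(n, q)
    while True:
        a = -(-x.numerator // x.denominator)     # ceiling of x
        out.append(a)
        if a == x:
            return out
        x = 1 / (a - x)

# Wahl singularities 1/n^2 (1, n-1), i.e. a = 1:
assert hj(4, 1) == [4]                   # 1/4 (1,1)
assert hj(16, 3) == [6, 2, 2]            # 1/16 (1,3)
assert hj(36, 5) == [8, 2, 2, 2, 2]      # 1/36 (1,5)
```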
We note that \(K_{Z}\) is not nef because the image \(A\) of the \((-1)\)-curve intersecting the middle \((-2)\)-curve is a negative curve for \(K_{Z}\). There is a divisorial contraction of the family which contracts the \((-1)\)-curves in the general fibers and \(A\) in \(Z\), and the resulting surface is an Enriques W-surface. These blow-ups of Enriques W-surfaces come from the divisorial contraction universal family [12] defined by \(\frac{1}{4}(1,1)\). This is explained in [14, Section 2]; precisely, this particular divisorial contraction family is [14, Example 2.13], where the combinatorial data with all the infinitely many possible Wahl singularities is given by \[[4]-[2,\bar{2},6]-[2,2,2,\bar{2},8]-[2,2,2,2,2,\bar{2},10]-\cdots.\] For example, in Figure 2 we have a way to produce a similar situation but with three Wahl singularities, where the \((-1)\)-curve in the general fiber (blow-up of an Enriques surface at one point) degenerates into the image of the \((-1)\)-curve between \([8,2,2,2,2]\) and \([6,2,2]\).

Figure 1. The Wahl chains \([4]\) and \([6,2,2]\)

Figure 2. The Wahl chains \([4]\), \([6,2,2]\), and \([8,2,2,2,2]\)

_Example 2.2_.: In [10, Example 4.5], Kawamata shows the local picture of a \(\mathbb{Q}\)-Gorenstein smoothing of an elliptic fibration with a singular \(I_{d}\) fiber with Wahl singularities \(\frac{1}{n^{2}}(1,na-1)\) at the nodes. Kawamata explains how these local elliptic surfaces \(\mathbb{Q}\)-Gorenstein smooth into other elliptic surfaces, and the degeneration is explained as the concurrence of a smooth fiber of multiplicity \(n\) with \(d\) \(I_{1}\) fibers. Let \(\beta_{i}\) be the class of \(C_{i}\) in \(\operatorname{Pic}(V)\). As in [1, Chapter 9], we define the _Coble-Mukai lattice_ of \(V\) as \[\operatorname{CM}(V):=\{x\in\widetilde{\operatorname{Pic}}(V)\colon x\cdot\beta_{i}=0\text{ for all }i\}\] where \(\widetilde{\operatorname{Pic}}(V)\) is the lattice in \(\operatorname{Pic}(V)_{\mathbb{Q}}\) generated by \(\operatorname{Pic}(V)\) and the rational classes \(\frac{1}{2}\beta_{i}\). ## 3. Class and Picard groups Let \(X\) be an Enriques W-surface with \(K_{X}\) nef. We only consider Coble surfaces \(V\) of K3 type from a rational minimal elliptic fibration \(S\to\mathbb{P}^{1}\) with sections. Hence we have blow-ups at the nodes of two singular fibers \(I_{s_{1}}\), \(I_{s_{2}}\) with \(s_{1}+s_{2}=s\). We have a commutative diagram in which the morphism \(\phi\) is the minimal resolution of \(X\), and \(\pi\) is a composition of blow-ups from \(S\). The \((-4)\)-curves \(\{C_{1},\ldots,C_{s}\}\) are the exceptional curves of \(\phi\). We choose in \(V\) a chain of smooth rational curves \[C_{1}-(-1)-C_{2}-(-1)-\ldots-(-1)-C_{s}\] using one section from \(S\to\mathbb{P}^{1}\), and \((-1)\)-curves from the blow-up \(\pi\colon V\to S\). The contraction of that chain defines a surface \(W\) with one singularity of type \(\frac{1}{4s}(1,2s-1)\), whose Hirzebruch-Jung continued fraction is \([3,2,\ldots,2,3]\) (with \(s-2\) \(2\)'s). Hence we have a contraction \(\sigma\colon X\to W\), and \(W\) has no local-to-global obstructions to deform (same proof as for \(X\)). We now look at \(\mathbb{Q}\)-Gorenstein deformations of \(W\), and then we will do it for \(X\). All deformations below happen over \(\mathbb{D}\), and \(\mathbb{Q}\)-Gorenstein deformation means that the canonical class for the corresponding \(3\)-fold is \(\mathbb{Q}\)-Cartier. 
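The continued fraction \([3,2,\ldots,2,3]\) of \(\frac{1}{4s}(1,2s-1)\) can be checked in the same way as the Wahl chains above (reusing the same small continued-fraction helper so the sketch stays self-contained):

```python
from fractions import Fraction

def hj(n, q):
    """Hirzebruch-Jung continued fraction [a1,...,ak] of n/q."""
    out, x = [], Fraction(n, q)
    while True:
        a = -(-x.numerator // x.denominator)     # ceiling of x
        out.append(a)
        if a == x:
            return out
        x = 1 / (a - x)

# 1/(4s) (1, 2s-1) has expansion [3, 2, ..., 2, 3] with s-2 twos (s >= 2);
# for s = 1 the singularity is just 1/4 (1,1) = [4].
assert hj(4, 1) == [4]
for s in range(2, 11):
    assert hj(4 * s, 2 * s - 1) == [3] + [2] * (s - 2) + [3]
```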
**Proposition 3.1**.: The surface \(W\) can be \(\mathbb{Q}\)-Gorenstein deformed into a surface \(W_{t}\) with singularities of types either \(A_{e_{1}-1}\), \(\ldots\), \(A_{e_{r}-1}\) or \(\frac{1}{4e_{1}}(1,2e_{1}-1),A_{e_{2}-1}\), \(\ldots\), \(A_{e_{r}-1}\), where \(e_{1}\), \(\ldots\), \(e_{r}\) is a partition of \(s\). In the case when we have only singularities of type \(A_{e_{i}-1}\), the minimal resolution of \(W_{t}\) is an Enriques surface. Proof.: We know that there are no local-to-global obstructions to deform \(W\). The rest follows from [10, Proposition 2.3]. The partial resolution \(\sigma\colon X\to W\) is locally an M-resolution of the singularity in \(W\) (cf. [1]). Any \(\mathbb{Q}\)-Gorenstein deformation of the singularity in \(W\) is the blowing-down deformation (see [11, 12]) of a \(\mathbb{Q}\)-Gorenstein deformation of the exceptional divisor of \(\sigma\). The deformations in Proposition 3.1 into singular surfaces happen over a proper analytic closed set of the deformation space of the singularity [10], and so for almost all \(\mathbb{Q}\)-Gorenstein smoothings of \(X\), the blowing-down deformation gives an isomorphism between general fibers. **Lemma 3.2**.: Let \((W\subset\mathcal{W})\to(0\in\mathbb{D})\) be a \(\mathbb{Q}\)-Gorenstein smoothing. Let \(P\) be the singularity \(\frac{1}{4s}(1,2s-1)\) in \(W\). Let \(M_{P}\) be the Milnor fiber corresponding to the induced local smoothing of \(P\). Then, there is an exact sequence \[0\to H_{2}(M_{P})\simeq\mathbb{Z}^{s-1}\to H_{2}(W_{t})\to H_{2}(W)\to 0,\] where \(W_{t}\) is a smooth fiber (and so an Enriques surface). Proof.: Let \(W^{o}=W\setminus\{P\}\). By [12, Prop.4.2(3)], we have that \(H_{1}(W^{o})\) is torsion, so \(H^{1}(W^{o})=0\). By [12, Prop.4.2(1)], we have \(H^{1}(W^{o})\simeq H_{3}(W)\) and so \(H_{3}(W)=0\). By [10, Lemma 2.2.3], we have then the exact sequence \[0\to H_{2}(M_{P})\to H_{2}(W_{t})\to H_{2}(W)\to H_{1}(M_{P})\to H_{1}(W_{t})\to 0,\] since \(H_{1}(W)=0\). On the other hand, we know that \(H_{2}(M_{P})=\mathbb{Z}^{s-1}\), \(H_{1}(M_{P})=\mathbb{Z}/2\), and \(H_{1}(W_{t})=\mathbb{Z}/2\), and so we have the wanted exact sequence. We recall that locally at \(P\), a \(\mathbb{Q}\)-Gorenstein smoothing of \(\frac{1}{4s}(1,2s-1)\) is a \(\mathbb{Z}/2\)-quotient of a smoothing of the canonical cover \(A_{2s-1}\to\frac{1}{4s}(1,2s-1)\), which on the smooth fibers is a topological covering (see e.g. [10, Section 2.2]). **Theorem 3.3**.: Consider the partial resolution \(\sigma\colon X\to W\) of \(P\) (hence \(X\) has \(s\) singularities of type \(\frac{1}{4}(1,1)\) in a chain of \(\mathbb{P}^{1}\)'s), and let us consider an Enriques W-surface \(X\). Let \((W\subset\mathcal{W})\to(0\in\mathbb{D})\) be the induced blowing-down \(\mathbb{Q}\)-Gorenstein deformation, and assume that it is an isomorphism on general fibers (i.e. \(X_{t}\simeq W_{t}\)). Then, there is an exact sequence \[0\to H_{2}(X_{t})\to H_{2}(X)\to(\mathbb{Z}/2)^{s-1}\to 0.\] The group \((\mathbb{Z}/2)^{s-1}\) has as generators the images of the \((-1)\)-curves between the \(\frac{1}{4}(1,1)\) singularities. We have \(H_{2}(X_{t})=\operatorname{Pic}(X_{t})\), and \(H_{2}(W)=\operatorname{Cl}(W)\) (the class group of \(W\)). Proof.: Let \(\Gamma_{i}\) be the images in \(X\) of the \((-1)\)-curves between the \(C_{i}\). 
This short exact sequence is obtained from Lemma 3.2, starting with the restriction morphism \[H_{2}(X_{t})\to H_{2}(X)\simeq H_{2}(W)\oplus_{i=1}^{s-1}\mathbb{Z}\Gamma_{i}.\] For more details see [10, Section 8], where we work out a more general picture for \(\mathbb{Q}\)-Gorenstein smoothings. For the last part see [10, p.1196]. _Remark 3.4_.: Since \(q(X)=p_{g}(X)=0\), we have identifications \(\operatorname{Pic}(X_{t})=H^{2}(X_{t})=H_{2}(X_{t})\), \(\operatorname{Pic}(X)=H^{2}(X)\subset H_{2}(X)=\operatorname{Cl}(X)\) (see [12]), and Mumford's intersection theory on \(X\) which respects the topological intersection theory when restricted to \(H^{2}(X)\). Hence the restriction map \(H_{2}(X_{t})\to H_{2}(X)\) of the sequence in Theorem 3.3 respects intersections. _Remark 3.5_.: Let \(\Gamma_{i}\) be the images in \(X\) of the \((-1)\)-curves from the chain \[C_{1}-(-1)-C_{2}-(-1)-\ldots-(-1)-C_{s}.\] Any divisor \(D\) in \(\operatorname{Cl}(X)\) maps to \(\sum_{i=1}^{s-1}d_{i}\Gamma_{i}\) in \((\mathbb{Z}/2)^{s-1}\), where \(d_{i}\) is the local multiplicity of \(D\) at any of the singularities in \(\Gamma_{i}\). Note that numerically \[\phi^{*}(D)\equiv D^{\prime}+\sum_{i=1}^{s}\mu_{i}C_{i},\] where \(D^{\prime}\) is the strict transform, and so \(D^{\prime}\cdot C_{i}=4\mu_{i}=d_{i}\) for \(i=1,\ldots,s-1\) as \(H_{1}=\mathbb{Z}/4\) for the link at each singularity.

## 4. Coble-Mukai lattice and Enriques lattice

Let \(X\) be an Enriques W-surface as in Theorem 3.3, and let \(V\) be the associated Coble surface of K3 type. Let \(\phi\colon V\to X\) be the contraction of the boundary \(\{C_{1},\ldots,C_{s}\}\). We have the short exact sequence \[0\to\operatorname{Pic}(X_{t})\to\operatorname{Cl}(X)\to(\mathbb{Z}/2)^{s-1}\to 0.\] (Note that for \(s=1\) it also works, and so \(\operatorname{Pic}(X_{t})\simeq\operatorname{Cl}(X)\) in that case.) **Lemma 4.1**.: The image of \(\operatorname{Pic}(X_{t})\) in \(\operatorname{Cl}(X)\) is the set of classes whose proper transforms have even intersection number with each \(C_{i}\). Proof.: In Remark 3.5 we have \[\phi^{*}(D)\equiv D^{\prime}+\sum_{i=1}^{s}\mu_{i}C_{i},\] where \(D^{\prime}\cdot C_{i}=4\mu_{i}=d_{i}\) and \((d_{1},\ldots d_{s-1})\) is the image of \(D\) in \((\mathbb{Z}/2)^{s-1}\). But \(\operatorname{Pic}(X_{t})\) is exactly the kernel. **Theorem 4.2**.: The image of \(\operatorname{Pic}(X_{t})\) in \(\operatorname{Cl}(X)\), quotiented by \(\langle\mathcal{O}_{X_{t}}(K_{X_{t}})\rangle\), is isomorphic to \(\operatorname{CM}(V)\). Proof.: We prove this through the pull-back morphism \[\phi^{*}\colon\operatorname{Cl}(X)\to\widetilde{\operatorname{Pic}}(V).\] First, the pull-back of any class in \(\operatorname{Cl}(X)\) is orthogonal to all \(\beta_{i}\). So we now restrict to \(\phi^{*}\colon\operatorname{Pic}(X_{t})\to\operatorname{CM}(V)\). Let \(D=D^{\prime}+\sum_{i=1}^{s}\frac{a_{i}}{2}\beta_{i}\) in \(\operatorname{CM}(V)\) with \(D^{\prime}\) not supported at the \(\beta_{i}\), and \(a_{i}\in\mathbb{Z}\). Then by definition \(D\cdot\beta_{i}=0\) and so \(D^{\prime}\cdot C_{i}\) is even for all \(i\), and so \(\phi^{*}\) is onto, by Lemma 4.1. If \(\phi^{*}(D)=D^{\prime}+\sum_{i=1}^{s}\frac{a_{i}}{2}\beta_{i}=0\), then \(\phi^{*}(2D)=0\) in \(\operatorname{Pic}(V)\). Suppose that \(D\neq 0\) in \(\operatorname{Pic}(X_{t})\); then we have a numerical 2-torsion class, and this implies \(D\sim K_{X_{t}}\) by Riemann-Roch on \(D\).
In [10, Theorem 9.2.15], it is proved by a different method that the Coble-Mukai lattice of a Coble surface is isomorphic over \(\mathbb{C}\) to the Enriques lattice.

## 5. Explicit computations for the first degeneration

In this section, we explicitly compute the Picard group of an Enriques surface from the class group of \(X\), which has two \(\frac{1}{4}(1,1)\) singularities and is constructed as follows. Consider a pencil of cubic curves generated by two nodal cubics \(C_{1},C_{2}\) which intersect at nine distinct points. Let \(S\to\mathbb{P}^{2}\) be the blow-up at those points, and so \(S\) has an elliptic fibration with two \(I_{1}\) fibers \(C_{1}\), \(C_{2}\) (proper transforms of \(C_{1},C_{2}\)), and at least nine sections \(R_{1},\ldots,R_{9}\) from the nine points blown up. Let \(\pi\colon V\to S\) be the blow-up at the nodes of \(C_{1}\) and \(C_{2}\), and let \(E_{1},E_{2}\) be the exceptional curves. Let \(H\) be the pull-back in \(V\) of a general line in \(\mathbb{P}^{2}\). As before, we contract the \((-4)\)-curves \(C_{1}\) and \(C_{2}\) (proper transforms of \(C_{1}\) and \(C_{2}\)) to obtain \(X\), which has no local-to-global obstructions to deform. Let \(H\), \(E_{1}\), \(E_{2}\), \(R_{1}\), \(\ldots\), \(R_{9}\) be the images of the corresponding curves in \(X\). We see them as classes in \(\operatorname{Cl}(X)\). In this way, we can easily compute \[\operatorname{Cl}(X)=\frac{\langle H,E_{1},E_{2},R_{1},\ldots,R_{9}\rangle}{ \langle 2E_{1}-2E_{2},2E_{1}-3H+R_{1}+R_{2}+\ldots+R_{9}\rangle}.\] Indeed, in \(\operatorname{Cl}(V)\) we have that \(C_{i}=3H-2E_{i}-R_{1}-R_{2}-\ldots-R_{9}\), \(C_{1}-C_{2}=-2E_{1}+2E_{2}\), and \(C_{1},C_{2}\) are contracted by \(V\to X\). We have \(H^{2}=11/2\), \(H\cdot R_{i}=3/2\), \(R_{i}^{2}=-1/2\), \(E_{i}^{2}=0\), and \(R_{i}\cdot R_{j}=1/2\) for \(i\neq j\). Consider an Enriques W-surface \(X\) as in Theorem 3.3, so that we have the short exact sequence \[0\to\operatorname{Pic}(X_{t})\to\operatorname{Cl}(X)\to\mathbb{Z}/2\to 0. \tag{5.1}\] We can think of \(\mathbb{Z}/2\) as generated by the image (say) of \(R_{9}\) (any \(R_{i}\) works in this case, since they are locally toric boundaries for both singularities). Note that \[K_{X}=E_{1}-E_{2},\] and it represents the canonical class of \(X_{t}\).
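The rational intersection numbers just listed can be double-checked numerically. The sketch below (ours, not from the paper) encodes Mumford's intersection form on \(\operatorname{Cl}(X)\) as the pairing on \(\operatorname{Pic}(V)\) corrected by projecting away from the contracted \((-4)\)-curves \(C_{1},C_{2}\) (which are orthogonal to each other on \(V\)); the same pairings confirm the hyperbolic plane and \(E_{8}(-1)\) roots entering the computation of \(\operatorname{Pic}(X_{t})\) that follows.

```python
import numpy as np

# Basis of Pic(V): H, E1, E2, R1, ..., R9 (mutually orthogonal on V,
# with H^2 = 1 and all other self-intersections -1).
names = ['H', 'E1', 'E2'] + [f'R{i}' for i in range(1, 10)]
gV = np.diag([1.0] + [-1.0] * 11)

def vec(**coeffs):
    v = np.zeros(12)
    for name, c in coeffs.items():
        v[names.index(name)] = c
    return v

# The contracted (-4)-curves: C_i = 3H - 2E_i - R1 - ... - R9.
Rall = {f'R{i}': -1 for i in range(1, 10)}
C1 = vec(H=3, E1=-2, **Rall)
C2 = vec(H=3, E2=-2, **Rall)

def dotX(a, b):
    """Mumford intersection on X: the V-pairing corrected by projecting
    away from the contracted curves (C_i^2 = -4, and C1.C2 = 0)."""
    s = a @ gV @ b
    for C in (C1, C2):
        s += (a @ gV @ C) * (b @ gV @ C) / 4.0
    return s

H, E1 = vec(H=1), vec(E1=1)
R = {i: vec(**{f'R{i}': 1}) for i in range(1, 10)}
print(dotX(H, H), dotX(H, R[1]), dotX(R[1], R[1]), dotX(R[1], R[2]),
      dotX(E1, E1))                          # -> 5.5 1.5 -0.5 0.5 0.0

# Hyperbolic plane and E8(-1) roots used for Pic(X_t) below:
f, g = E1, E1 + 2 * R[9]
print(dotX(f, f), dotX(g, g), dotX(f, g))    # -> 0.0 0.0 1.0
roots = [R[i] - R[i + 1] for i in range(1, 8)] + [H - R[1] - R[2] - R[3]]
print([dotX(r, r) for r in roots])           # -> all -2.0
```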
Using that \(\operatorname{Pic}(X_{t})\) is the kernel in the short exact sequence (5.1) and numerical independence of classes, we can compute \[\operatorname{Pic}(X_{t})=\langle E_{1},E_{1}+2R_{9},R_{1}-R_{2},R_{5}-R_{6},R _{6}-R_{7},\] \[R_{2}-R_{3},R_{3}-R_{4},R_{4}-R_{5},R_{7}-R_{8},H-R_{1}-R_{2}-R_{3}\rangle+ \langle K_{X}\rangle,\] and these generators give a basis for the Enriques lattice \(\mathbb{H}\oplus E_{8}(-1)\), where \[\mathbb{H}=\langle E_{1},E_{1}+2R_{9}\rangle\] and \[E_{8}(-1)=\langle R_{1}-R_{2},R_{2}-R_{3},R_{3}-R_{4},\] \[R_{4}-R_{5},R_{5}-R_{6},R_{6}-R_{7},R_{7}-R_{8},H-R_{1}-R_{2}-R_{3}\rangle.\] Following the recipe and notation in [1, Section 1.5], we have the root basis for the Enriques lattice \[\alpha_{0}:=H-R_{1}-R_{2}-R_{3},\ \ \alpha_{1}:=R_{1}-R_{2},\] \[\alpha_{2}:=R_{2}-R_{3},\ \ \alpha_{3}:=R_{3}-R_{4},\] \[\alpha_{4}:=R_{4}-R_{5},\ \ \alpha_{5}:=R_{5}-R_{6},\] \[\alpha_{6}:=R_{6}-R_{7},\ \ \alpha_{7}:=R_{7}-R_{8},\] \[\alpha_{8}:=-3H+E_{1}+\sum_{j=1}^{7}R_{j}+2R_{8},\ \ \alpha_{9}:=2R_{9}.\] We also write down the following isotropic sequence ([1, Section 1.5]): \[f_{10}=E_{1},\ \ \ f_{9}=E_{1}+2R_{9},\ \ \ f_{i}=-3H+2E_{1}+\sum_{j=1}^{8}R_{j}+R_ {i}+2R_{9}\] for \(i=1,\ldots,8\), and so \(\Delta=\frac{1}{3}(f_{1}+\ldots+f_{10})=-8H+6E_{1}+3\sum_{j=1}^{8}R_{j}+6R_{9}\).

## 6. Further degenerations and \(\mathbb{Q}\)-Homology projective planes

As explained in [1, 5.9], the Baily-Borel compactification of the moduli space of Enriques surfaces is formed by a Coble divisor, which is the moduli space of Coble surfaces, and two smooth rational curves [1, Theorem 5.9.8]. Hence the Baily-Borel compactification suggests that we should see a big part "closer to the boundary" of the moduli space of Enriques surfaces via Enriques W-surfaces (see [1, 5.10]). In this section we show what sort of things one can actually see. We start with the most degenerate Coble surface (see [1, Example 9.2.7]). It has ten \(\frac{1}{4}(1,1)\) singularities, and it is unique up to isomorphism (see [1, Chapter 9]). This surface corresponds to one of the two most algebraic K3 surfaces [20] under the canonical double cover. (In fact, the K3 surface is \(X_{4}\) in Vinberg's notation, which is birational to the \(4\)-th cyclic cover of \(\mathbb{P}^{2}\) branched at \(xyz(x+y+z)=0\).) We take as a model the following construction. We start with the rational elliptic fibration \(S^{\prime}\to\mathbb{P}^{1}\) which has exactly \(4\) singular fibers: one \(I_{9}\) and three \(I_{1}\). This surface has Mordell-Weil group isomorphic to \(\mathbb{Z}/3\) (see e.g. [2]). Figure 3 gives the notation for all relevant curves. Let \(\pi^{\prime}\colon V^{\prime}\to S^{\prime}\) be the blow-up at all \(9\) nodes in \(I_{9}\), and at the node in \(I_{1}=C^{\prime}_{10}\). As before, we keep the notation for proper transforms of curves. Let \(E^{\prime}_{i}\) be the \((-1)\)-curves from the blow-up. Hence we consider in \(V^{\prime}\) the chain \[C^{\prime}_{10}-A-C^{\prime}_{1}-E^{\prime}_{1}-C^{\prime}_{2}-E^{\prime}_{2}- \ldots-E^{\prime}_{8}-C^{\prime}_{9}\] where \(C_{i}^{\prime 2}=-4\) and \(A^{2}=-1\). Here \(E_{9}^{\prime}\) lies between \(C_{1}^{\prime}\) and \(C_{9}^{\prime}\), and \(E_{10}^{\prime}\) intersects \(C_{10}^{\prime}\) at two distinct points. We now contract all the \(C_{i}^{\prime}\) via \(\phi^{\prime}\colon V^{\prime}\to X^{\prime}\), so \(X^{\prime}\) has ten \(\frac{1}{4}(1,1)\) singularities, and \(K_{X^{\prime}}\equiv 0\).
Indeed, we have that \[C_{10}^{\prime}+2E_{10}^{\prime}\sim F\sim\sum_{i=1}^{9}C_{i}^{\prime}+2\sum_{ i=1}^{9}E_{i}^{\prime},\] where \(F\) is a general fiber of the elliptic fibration \(V^{\prime}\to\mathbb{P}^{1}\), and \[\phi^{\prime*}(K_{X^{\prime}})-\frac{1}{2}\sum_{i=1}^{10}C_{i}^{\prime}\equiv K _{V^{\prime}}\sim-F+\sum_{i=1}^{10}E_{i}^{\prime}.\] The surface \(X^{\prime}\) has no local-to-global obstructions to deform, because we are contracting curves over two fibers of \(S^{\prime}\to\mathbb{P}^{1}\) (see e.g. [4, Section 4]). Our notation for relevant curves in \(X^{\prime}\) is \(\Gamma_{0}:=E_{1}^{\prime}\), \(\Gamma_{1}:=E_{2}^{\prime}\), \(\Gamma_{2}:=E_{3}^{\prime}\), \(\Gamma_{3}:=E_{4}^{\prime}\), \(\Gamma_{4}:=E_{5}^{\prime}\), \(\Gamma_{5}:=E_{6}^{\prime}\), \(\Gamma_{6}:=E_{7}^{\prime}\), \(\Gamma_{7}:=E_{8}^{\prime}\), \(\Gamma_{8}:=E_{9}^{\prime}\), \(\Gamma_{9}:=A\), \(\Gamma_{10}:=B\), \(\Gamma_{11}:=C\), and \(\Gamma_{12}:=E_{10}^{\prime}\), which is represented in Figure 4.

Figure 3. Relevant curves in \(S^{\prime}\)

Figure 4. Relevant curves in \(X^{\prime}\)

In this case we have that \[\mathrm{Cl}(X^{\prime})=H_{2}(X^{\prime})=\langle\Gamma_{0},\Gamma_{1},\ldots, \Gamma_{12}\rangle/\langle\mathcal{R}_{1},\mathcal{R}_{2},\mathcal{R}_{3}\rangle\] where the relations are: \[\mathcal{R}_{1} :=-2\Gamma_{0}-2\Gamma_{1}-2\Gamma_{2}-\Gamma_{3}+\Gamma_{4}+3 \Gamma_{5}+3\Gamma_{6}+\Gamma_{7}-\Gamma_{8}-\Gamma_{9}-\Gamma_{10}+2\Gamma_{11}\] \[\mathcal{R}_{2} :=-\Gamma_{0}-3\Gamma_{1}-5\Gamma_{2}-4\Gamma_{3}+4\Gamma_{5}+5 \Gamma_{6}+3\Gamma_{7}+\Gamma_{8}-3\Gamma_{10}+3\Gamma_{11}\] \[\mathcal{R}_{3} :=-\Gamma_{0}-3\Gamma_{1}-5\Gamma_{2}-3\Gamma_{3}+3\Gamma_{4}+9 \Gamma_{5}+10\Gamma_{6}+6\Gamma_{7}+2\Gamma_{8}-\Gamma_{9}-4\Gamma_{10}+5 \Gamma_{11}-2\Gamma_{12}\] Let \(X\) be the surface constructed in Section 5. We have the curves \(H,E_{1},E_{2},R_{1},\ldots,R_{9}\). One can see \(X\) as a \(\mathbb{Q}\)-Gorenstein smoothing of \(X^{\prime}\). Indeed, we could choose a general \(\mathbb{Q}\)-Gorenstein deformation of \(X^{\prime}\) which smooths all singularities except those at \(\Gamma_{0}\cap\Gamma_{8}\) and at the node of \(\Gamma_{12}\). We can recover all curves \(H,E_{1},E_{2},R_{1},\ldots,R_{9}\) from linear combinations of the curves \(\Gamma_{0}\), \(\ldots\), \(\Gamma_{12}\), and in particular we can read \(\mathrm{Pic}(X_{t})\) from \(\mathrm{Cl}(X^{\prime})\). Details in a more general setup will be given in [12]. We note that in this case \[0\to\mathrm{Pic}(X_{t})\to\mathrm{Cl}(X^{\prime})\to(\mathbb{Z}/2)^{9}\to 0.\] On the other hand, we note that we have many subchains in \(\Gamma_{0}\), \(\ldots,\Gamma_{11}\) which can be contracted to singularities of type \([3,2,\ldots,2,3]\). The relevance of this is that Proposition 3.1 applies to particular deformations of \(W^{\prime}\), the contraction of some disjoint chains of \(\Gamma_{i}\)'s. Let us take the maximal chain \(\Gamma_{1}+\cdots+\Gamma_{9}\). Let \(X^{\prime}\to W^{\prime}\) be the contraction of that chain. In this way, the surface \(W^{\prime}\) has one T-singularity, whose minimal resolution corresponds to the Hirzebruch-Jung continued fraction \[\frac{40}{19}=[3,2,2,2,2,2,2,2,2,3].\] We recall that \(W^{\prime}\) has no local-to-global obstructions to deform (same reason as for \(X^{\prime}\)), and so we can choose from Proposition 3.1 a global \(\mathbb{Q}\)-Gorenstein deformation which deforms \(\frac{1}{40}(1,19)\) into an \(A_{9}\) rational double point.
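The presentation of \(\operatorname{Cl}(X^{\prime})\) above can also be probed by machine via the Smith normal form of the relation matrix, which gives the free rank and the torsion of the presented group (note that the \((\mathbb{Z}/2)^{9}\) in the exact sequence above is a quotient of \(\operatorname{Cl}(X^{\prime})\), not its torsion subgroup). A minimal sketch (ours), assuming a recent version of sympy is available:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Rows: the relations R1, R2, R3 in the free abelian group on Gamma_0..Gamma_12.
M = Matrix([
    [-2, -2, -2, -1, 1, 3,  3, 1, -1, -1, -1, 2,  0],
    [-1, -3, -5, -4, 0, 4,  5, 3,  1,  0, -3, 3,  0],
    [-1, -3, -5, -3, 3, 9, 10, 6,  2, -1, -4, 5, -2],
])
# Diagonal entries d_i of the Smith normal form give torsion factors Z/d_i:
print(smith_normal_form(M, domain=ZZ))
print("free rank of Cl(X') =", 13 - M.rank())
```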
This deformation gives a one-dimensional family. In this way, the general fiber \(W^{\prime}_{t}\) is a Gorenstein \(\mathbb{Q}\)-Homology projective plane with \(K\equiv 0\). We recall that a normal projective surface with only quotient singularities whose second Betti number is \(1\) is called a \(\mathbb{Q}\)-Homology projective plane, and it is Gorenstein if all singularities are rational double points. In [15], Hwang, Keum and Ohashi classified all possible configurations of singularities for Gorenstein \(\mathbb{Q}\)-Homology projective planes. In the case of \(K\equiv 0\) there are \(31\) possible configurations, and they were able to produce examples for \(29\) of them. There can be at most \(5\) singularities in these configurations. Of course, the minimal resolutions of all of them are Enriques surfaces with particular ADE configurations of nine \((-2)\)-curves. In [10], Schütt classifies all Gorenstein \(\mathbb{Q}\)-Homology projective planes. It turns out that for each of the \(31\) types we have a one-dimensional moduli space, and the number of components for each type varies from \(1\) to \(3\): * \(3\) components for \(A_{7}+2A_{1}\), and \(3A_{3}\); * \(2\) components for \(A_{8}+A_{1}\), \(A_{7}+A_{2}\), \(A_{5}+A_{3}+A_{1}\), \(A_{5}+2A_{2}\), \(D_{8}+A_{1}\), \(D_{6}+A_{3}\), \(D_{4}+A_{3}+2A_{1}\), \(E_{8}+A_{1}\), \(E_{7}+A_{2}\), \(E_{6}+A_{3}\), \(D_{9}\), and \(D_{7}+2A_{1}\); * all the other \(15\) root types. **Theorem 6.1**.: The one-dimensional family from \(W^{\prime}\) whose general fiber has an \(A_{9}\) singularity is in the closure of the Schütt irreducible moduli space corresponding to \(A_{9}\). We can actually obtain almost all of the \(31\) types from Enriques W-surfaces just as explained for \(A_{9}\), with a small variant when we meet type \(D\) or \(E\) singularities. We only give two examples below, but it is possible to realize at least \(27\) types. _Example 6.2_.: Consider the rational elliptic fibration with sections \(S\to\mathbb{P}^{1}\) whose singular fibers are \(I_{4}^{*}+2I_{1}\) (see e.g. [12, p.7]). It has exactly two sections. Let us fix one of these sections and denote it by \(A\). Let \(B\) and \(C\) be the \(I_{1}\) fibers, and let \(D\) be \(I_{4}^{*}\) minus the component intersecting \(A\). We blow up the node of \(B\) and the node of \(C\), and then we contract the section \(A\). We obtain a surface \(V\) with \(B^{2}=C^{2}=-3\), and \(D\) is a \(D_{8}\) configuration, i.e., the minimal resolution of a \(D_{8}\) singularity. The contraction of \(B+C\) produces a surface \(W\) with a singularity \([3,3]\), and it has no local-to-global obstructions. Moreover, since \(D\) is disjoint from \(B+C\), the contraction \(W^{\prime}\) of \(D\) in \(W\) also has no local-to-global obstructions [13, Section 4]. Therefore, by Proposition 3.1, we can choose to \(\mathbb{Q}\)-Gorenstein deform \(W^{\prime}\) so that we keep the \(D_{8}\) singularity and deform \([3,3]\) into \(A_{1}\). In this way, we obtain a \(\mathbb{Q}\)-Homology projective plane with \(K\equiv 0\) and root type \(D_{8}+A_{1}\). According to Schütt's result, this root type corresponds to a moduli curve with two components. Similarly, we can construct a surface for the root type \(D_{5}+A_{4}\) from the rational elliptic surface with sections whose singular fibers are \(I_{1}^{*}+I_{4}+I_{1}\). Here the Mordell-Weil group is \(\mathbb{Z}/4\)[13].
Just choose one of the four sections, blow up a suitable node in \(I_{4}\) and the node in \(I_{1}\), and then consider \(D_{5}\) and \([3,2,2,2,3]\) to construct what we want through Proposition 3.1. This root type has an irreducible moduli space. To get other root types, we need to consider more involved situations. For example, we may start with a special Halphen surface of index \(2\) and some suitable double sections. Compactifications of Schütt moduli curves should correspond to degenerations of Enriques surfaces, since there are no non-isotrivial smooth families of Enriques surfaces over proper bases by [13, Corollary 5.9.13]. Hence, a natural problem is: _Find compactifications of Schütt moduli curves and the corresponding limit singular surfaces. Which root types hit Enriques W-surfaces? What else?_
2306.05389
Towards distinguishing Dirac from Majorana neutrino mass with gravitational waves
We propose a new method towards distinguishing the Dirac versus Majorana nature of neutrino masses from the spectrum of gravitational waves (GWs) associated with neutrino mass genesis. Motivated by the principle of generating small neutrino masses without tiny Yukawa couplings, we assume generic seesaw mechanisms for both Majorana and Dirac neutrino masses. For Majorana neutrinos, we further assume a spontaneously broken gauged $U(1)_{B-L}$ symmetry, independently of the type of Majorana seesaw mechanism, which gives a cosmic string induced GW signal flat over a wide range of frequencies. For Dirac neutrinos, we assume the spontaneous breaking of a $Z_2$ symmetry, the minimal symmetry choice associated with all Dirac seesaw mechanisms, which is softly broken, generating a peaked GW spectrum from the annihilation of the resulting domain walls. In fact, the GW spectra for all types of Dirac seesaws with such a broken $Z_2$ symmetry are identical, subject to a mild caveat. As an illustrative example, we study the simplest respective type-I seesaw mechanisms, and show that the striking difference in the shapes of the GW spectra can help differentiate between these Dirac and Majorana seesaws, complementing results of neutrinoless double beta decay experiments. We also discuss detailed implications of the recent NANOGrav data for Majorana and Dirac seesaw models.
Stephen F. King, Danny Marfatia, Moinul Hossain Rahat
2023-06-08T17:42:07Z
http://arxiv.org/abs/2306.05389v4
# Towards distinguishing Dirac from Majorana neutrino mass with gravitational waves

###### Abstract

We propose to distinguish the nature of neutrino masses, Dirac vs Majorana, from the spectrum of gravitational waves produced in neutrino mass genesis. We study two simple models of Majorana and Dirac mass genesis motivated by generating small neutrino masses without assuming tiny Yukawa couplings. For Majorana neutrinos, spontaneous breaking of the gauged \(B-L\) symmetry gives a cosmic string induced gravitational wave signal flat over a wide range of frequencies, whereas for Dirac neutrinos, spontaneous and soft breaking of a \(Z_{2}\) symmetry generates a peaked gravitational wave spectrum from annihilation of domain walls. The striking difference in the shapes of the spectra in the two cases can help differentiate between Dirac and Majorana neutrino masses in the two classes of models considered, complementing results of neutrinoless double beta decay experiments.

## 1 Introduction

Charged fermion masses in the standard model (SM) are necessarily of the Dirac type because of electric charge conservation. Neutrino mass, on the other hand, may be of two types: Dirac or Majorana, where the latter possibility arises due to the fact that neutrinos are electrically neutral. If neutrinos were massless particles, as originally envisioned in the SM, their nature, i.e. Dirac vs Majorana, would not be distinguishable in weak interactions. However, as oscillation experiments have confirmed, they possess nonzero, albeit tiny, mass [1; 2; 3]. Dirac and Majorana neutrino mass are traditionally distinguished experimentally by neutrinoless double beta decay [4; 5]. This process is allowed only in the latter case, and its rate depends on the first entry \(m_{\beta\beta}\) of the Majorana neutrino mass matrix, which is model dependent. Extensive experimental efforts are currently underway for detecting neutrinoless double beta decay, achieving upper bounds \(|m_{\beta\beta}|\lesssim\mathcal{O}(10-100)\) meV [6], and are expected to gain another order of magnitude in sensitivity in the next decade [7]. However, the non-observation of neutrinoless double beta decay will not be decisive about the Majorana or Dirac nature of neutrinos. At this point, it remains interesting to seek astrophysical or cosmological probes for distinguishing the nature of the neutrino mass (see [8; 9; 10; 11] for recent studies), and this work is motivated by such considerations. We begin by recalling that neutrino mass is associated with the breaking of separate lepton numbers \(L_{e},L_{\mu},L_{\tau}\). Dirac neutrino mass preserves total lepton number \(L=L_{e}+L_{\mu}+L_{\tau}\), while Majorana mass breaks it. In the latter case, the small mass of the neutrinos may originate from the dimension-five Weinberg operator \(\bar{\ell}\ell HH\), where \(\ell\) represents the lepton doublets and \(H\) is a Higgs doublet, breaking the lepton number by two units. This could be associated with the breaking of an Abelian \(U(1)_{L}\) symmetry which may be global, or, when combined with baryon number, gauged \(U(1)_{B-L}\). The occurrence of cosmic strings in the early universe is a consequence in both scenarios, and their subsequent decay can produce detectable gravitational wave (GW) signatures [12; 13; 14]. This offers a potential observable indication of Majorana neutrino mass.
A convincing ultraviolet completion of the Weinberg operator is achieved by introducing right-handed neutrinos that get large Majorana masses after spontaneous breaking of a \(U(1)_{L}\) or \(U(1)_{B-L}\) symmetry, the latter case opening up the possibility of a gauged Abelian symmetry which may be anomaly free if there are precisely three right-handed neutrinos. The type I seesaw mechanism [15; 16; 17; 18; 19] then provides an elegant explanation for the generation of light neutrino masses, avoiding the need for extremely small Yukawa couplings. If neutrinos are Dirac particles, the most minimal extension of the SM would be to add two or three right-handed SM singlet neutrinos \(\nu_{R}\) with tiny tree-level Yukawa couplings \(y_{D}\), defined by \(y_{D}\bar{\ell}\nu_{R}H\), together with a preserved \(U(1)_{L}\) or \(U(1)_{B-L}\) symmetry. However, such an approach involves tiny Yukawa couplings \(y_{D}\) a million times smaller than that of the electron. There have been many attempts in the literature which yield Dirac neutrinos without relying on such tiny Yukawa couplings [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. Each of these mechanisms has its own experimental implications, which have been studied in the literature, but for most of them the GW signatures have not yet been considered. Since our idea is to use GW signatures to distinguish Dirac from Majorana mass mechanisms, it is reasonable to study a mechanism for Dirac neutrino masses that is as closely related to the type I seesaw mechanism as possible, so that we are comparing two closely related mechanisms which can lead to the two different types of neutrino mass. Motivated by the above considerations, we consider GW signatures associated with the following mechanism for explaining tiny Dirac neutrino masses. The Yukawa couplings which give rise to the heaviest Dirac neutrino mass (around \(0.1\) eV) will be required to be of similar magnitude to the third family SM Yukawa couplings, and thus are required to be in the approximate range \(0.01-1\), up to a factor of a few. The seesaw-like mechanism will also be required to be a self-consistent effective field theory valid up to the highest explicit mass scale appearing in the model, which could be as high as the Planck scale. The above guidelines inspire us to consider a rather plausible explanation of Dirac neutrinos in which the renormalizable tree-level Yukawa term \(\bar{\ell}\nu_{R}H\) is forbidden by a symmetry, but instead emerges as an effective non-renormalizable operator, as in the type I seesaw mechanism. For example, small Dirac masses may appear from a dimension-five operator \(\frac{1}{M}\bar{\ell}\nu_{R}H\sigma\), where \(M\) represents the mass scale of some new heavy fermion [21; 41]. After the new scalar \(\sigma\) and the Higgs doublet \(H\) get nonzero vacuum expectation values (VEVs), this may result in naturally small Dirac neutrino masses, analogous to the type I seesaw mechanism. A simple way to forbid the tree-level Yukawa term would be to make both \(\nu_{R}\) and \(\sigma\) odd under some new discrete \(Z_{2}\) symmetry, which is spontaneously broken when \(\sigma\) gets a nonzero VEV. In this case the breaking of a discrete symmetry leads to the creation of domain walls associated with the degenerate vacua [41]. Domain walls are typically problematic for cosmology, but they may annihilate to emit GWs if the discrete symmetry is softly broken.
Such GW signals are very distinct from cosmic string signals generated in the case of Majorana neutrinos, thus offering an intriguing way to discriminate between the two types of neutrino mass. In this work, we explore two motivated scenarios of tiny neutrino mass generation giving rise to distinct GW signals that can be differentiated based on whether the neutrino mass is of the Dirac or Majorana type. We study two simple models of Majorana and Dirac mass genesis inspired by generating small neutrino masses without assuming tiny Yukawa couplings. For Majorana neutrinos, spontaneous breaking of the gauged \(B-L\) symmetry gives a cosmic string induced gravitational wave signal flat over a large range of frequencies, whereas for Dirac neutrinos, spontaneous and soft breaking of a \(Z_{2}\) symmetry generates a peaked gravitational wave spectrum from annihilation of domain walls. Furthermore, the domain wall signal generated in this case may even explain the NANOGrav 15 yr result [42]. The striking difference between the shapes of the spectra in the case of domain walls and cosmic strings can help differentiate between Dirac vs Majorana neutrino masses in the two classes of models considered, complementing neutrinoless double beta decay experiments. The organization of the paper is as follows. In section 2 we discuss models of neutrino mass generation in the Majorana and Dirac cases. Sections 3 and 4 focus on the production of gravitational waves, specifically from cosmic strings in the context of Majorana mass generation, and from domain walls in relation to Dirac mass generation. The resulting signals are examined in section 5, followed by concluding remarks in section 6.

## 2 Neutrino mass models

In this section we explore two simple models for generating Majorana and Dirac neutrino masses. Both models are designed to produce small neutrino masses without assuming tiny Yukawa couplings.

### Majorana neutrinos

We consider a type I seesaw scenario in which the SM is extended with three right-handed neutrinos \(\bar{N}_{i}\) and a scalar \(\phi\), both singlets under the SM gauge group. In addition, the model has an anomaly-free gauged \(B-L\) symmetry, under which \(\phi\) has two units of charge and \(\bar{N}_{i}\) have a single unit of charge. The Lagrangian of the model is given by \[-\mathcal{L}_{M}\supset\mathcal{Y}\ \bar{\ell}H\bar{N}+\bar{N}\bar{N}^{T}\phi\,, \tag{1}\] which yields the usual tree-level seesaw diagram. The right-handed neutrinos gain heavy Majorana masses after the \(B-L\) symmetry is spontaneously broken when \(\phi\) gets a nonzero VEV. Light neutrino masses are generated by integrating out the heavy right-handed neutrinos, and their mass matrix is given by \[\mathcal{M}_{M}=\frac{1}{\sqrt{2}}v^{2}\ \mathcal{Y}\ \mathcal{M}_{N}^{-1}\ \mathcal{Y}^{T}\,, \tag{2}\] where \(\mathcal{M}_{N}\) is the mass matrix of the right-handed neutrinos and \(v\) is the VEV of the SM Higgs doublet. The main characteristic of this scenario is the breaking of the \(U(1)_{B-L}\) symmetry, which creates a cosmic string network that eventually decays and produces a stochastic gravitational wave background. As we will discuss in section 3, such GW signals are nearly flat over a vast range of frequencies observable with gravitational wave interferometers. We note that there could be a secondary contribution to the GW spectrum if the scalar \(\phi\) undergoes a first order phase transition (FOPT) when it spontaneously breaks the \(U(1)_{B-L}\) symmetry.
However, it is well known that the signal from FOPT of a single scalar is typically suppressed compared to the cosmic string signal, particularly when the \(U(1)\) symmetry is broken at a sufficiently high scale. Hence, we will not consider the FOPT signal for this model.

### Dirac neutrinos

We consider a minimal model in which SM neutrinos are Dirac and the smallness of their mass originates from an ultraviolet scale, analogous to the type I seesaw mechanism for Majorana neutrinos. To facilitate this, we add three copies of a heavy Dirac fermion \(\Delta\), right-handed counterparts \(\nu_{R}\) of the SM neutrinos, and a scalar \(\sigma\), all assumed to be singlets under the SM gauge group. The model has a global \(U(1)_{L}\) symmetry, which remains unbroken to ensure the Dirac nature of the SM neutrinos, and prohibits a Majorana mass term for the right-handed neutrinos. We further ensure that no Dirac mass term for the SM neutrinos is allowed by imposing a \(Z_{2}\) symmetry under which only \(\nu_{R}\) and \(\sigma\) are odd. The Lagrangian of the model is given by \[-\mathcal{L}_{D}\supset\mathcal{Y}_{L}\bar{\ell}H\Delta_{R}+\mathcal{Y}_{R} \overline{\Delta}_{L}\sigma\nu_{R}+\mathcal{M}_{\Delta}\overline{\Delta}\Delta\,, \tag{3}\] which yields the tree-level diagram in which the heavy Dirac fermions \(\Delta\) are exchanged between \(\bar{\ell}H\) and \(\sigma\nu_{R}\). After the heavy mediators are integrated out and the SM Higgs and the new scalar get VEVs \(v\) and \(u\), respectively, an effective Dirac mass term \(\mathcal{M}_{D}\bar{\ell}\nu_{R}\) for the light neutrinos is generated, where \[\mathcal{M}_{D}=\frac{1}{\sqrt{2}}\ v\ u\ \mathcal{Y}_{L}\mathcal{M}_{ \Delta}^{-1}\mathcal{Y}_{R} \tag{4}\] is the Dirac mass matrix suppressed by the large eigenvalues of the mass matrix \(\mathcal{M}_{\Delta}\) of the heavy fermions \(\Delta\). The scalar \(\sigma\) spontaneously breaks the \(Z_{2}\) symmetry when it acquires a nonzero VEV, necessary for Dirac mass generation. This leads to the creation of domain walls. Long-lived domain walls are dangerous for cosmology if they dominate the energy density of the Universe. However, they can be made to annihilate into gravitational waves by softly breaking the \(Z_{2}\) symmetry, which lifts the degeneracy between the two \(Z_{2}\) vacua. This leads to characteristic GW signals peaked at a single frequency. Since the global lepton number symmetry remains unbroken due to the Dirac nature of the neutrinos, this setup does not lead to the generation of cosmic strings and an associated flat GW spectrum as in the Majorana case. In the following two sections we discuss the generation of gravitational waves in both scenarios of Majorana and Dirac mass genesis.

## 3 GWs from cosmic strings in Majorana neutrino mass genesis

Spontaneous breaking of the \(U(1)_{B-L}\) symmetry leads to the creation of a horizon-length string network [43]. Here we specifically focus on Nambu-Goto cosmic strings that lose energy primarily through loop formation and emission of gravitational radiation. The energy density in the string network is diluted by producing closed string loops [44], about 10% of which are large loops and the remainder are highly boosted smaller loops [45; 46; 47; 48]. The formation of the loops from long string networks can be described using the velocity-dependent one-scale model [49; 50].
The loop formation rate is assumed to be equal to the rate of energy loss of the evolving long string network in a cosmological background, and is given by \[\frac{dn_{\alpha}}{dt}=\mathcal{F}_{\alpha}\frac{C_{\rm eff}}{\alpha} \frac{1}{t^{4}}\,. \tag{5}\] The parameter values \(\alpha\simeq\mathcal{F}_{\alpha}\simeq 0.1\), and \(C_{\rm eff}\simeq 0.5\) and \(5.7\) during matter and radiation domination, respectively, are found from lattice simulations [51]. While the kinetic energy of the smaller loops is diluted by simple redshifting, the larger loops oscillate and emit energy in the form of gravitational waves at a constant rate, \[\frac{dE}{dt}=-\Gamma G\mu^{2}\,, \tag{6}\] where \(\Gamma\simeq 50\) is a dimensionless constant [52], \(G\) is Newton's constant and \(\mu\) is the tension in the strings. Typically \(\mu\sim\mathcal{O}(\Lambda^{2})\), where \(\Lambda\) is the scale of the \(U(1)\) symmetry breaking. As a consequence of emitting gravitational radiation, the initial length of a large loop created by the network at time \(t_{i}\), given by \(l_{i}=\alpha t_{i}\), decreases as \[\ell(t)=\alpha t_{i}-\Gamma G\mu(t-t_{i})\,, \tag{7}\] until the loop completely disappears. The total energy loss from a loop can be decomposed into normal modes with frequency \(\tilde{f}_{k}=2k/\ell\) at a time \(\tilde{t}\), where \(k=1,2,3,\ldots\) is the mode number. Accounting for redshift evolution, the frequency today becomes \(f=[a(\tilde{t})/a(t_{0})]\,\tilde{f}_{k}\), where \(t_{0}\) is the current time. The relative emission rate per mode is given by \[\Gamma^{(k)}=\frac{\Gamma k^{-4/3}}{\sum_{j=1}^{\infty}j^{-4/3}}\simeq\frac{ \Gamma k^{-4/3}}{3.60}\,. \tag{8}\] Combining Eqs. (5), (7) and (8), and integrating over the emission time, the gravitational wave amplitude of the \(k\)-th mode is given by \[\Omega^{(k)}_{\rm GW}(f)=\frac{1}{\rho_{c}}\frac{2k}{f}\frac{\mathcal{F}_{ \alpha}\Gamma^{(k)}G\mu^{2}}{\alpha(\alpha+\Gamma G\mu)}\int_{t_{F}}^{t_{0}}d \tilde{t}\frac{C_{\rm eff}(t_{i}^{(k)})}{{t_{i}^{(k)}}^{4}}\left[\frac{a( \tilde{t})}{a(t_{0})}\right]^{5}\left[\frac{a(t_{i}^{(k)})}{a(\tilde{t})} \right]^{3}\,\Theta(t_{i}^{(k)}-t_{F})\,, \tag{9}\] where \(\rho_{c}=3H_{0}^{2}/(8\pi G)\) is the critical energy density, \(t_{i}^{(k)}\) is the formation time of loops contributing to the \(k\)-th mode and is given by \[t_{i}^{(k)}(\tilde{t},f)=\frac{1}{\alpha+\Gamma G\mu}\left[\frac{2k}{f}\frac{ a(\tilde{t})}{a(t_{0})}+\Gamma G\mu\tilde{t}\right]. \tag{10}\] Summing over all modes, we get the total amplitude of the gravitational waves \[\Omega_{\rm GW}(f)=\sum_{k}\Omega^{(k)}_{\rm GW}(f)\,, \tag{11}\] where the sum can be easily evaluated using \[\Omega^{(k)}_{\rm GW}(f)=\frac{\Gamma^{(k)}}{\Gamma^{(1)}}\Omega^{(1)}_{\rm GW }(f/k)=k^{-4/3}\ \Omega^{(1)}_{\rm GW}(f/k)\,. \tag{12}\]

## 4 GWs from domain walls in Dirac neutrino mass genesis

We assume a simple potential for the scalar \(\sigma\): \[V(\sigma)=\frac{\lambda}{4}(\sigma^{2}-u^{2})^{2}\,. \tag{13}\] This potential has two degenerate minima at \(\sigma=\pm u\) and is symmetric under the \(Z_{2}\) transformation \(\sigma\rightarrow-\sigma\). The symmetry is spontaneously broken when the scalar chooses one of the two vacua. This choice depends on random fluctuations of the field and is made independently at spatially distant regions, creating the so-called 'domains'. Domain walls are formed around the boundaries of these domains.
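As a toy illustration (ours, not from the paper) of this domain formation: if each cell of a lattice independently picks the \(+u\) or \(-u\) vacuum with probability \(1/2\), a wall forms on about half of the links between neighbouring cells.

```python
import numpy as np

rng = np.random.default_rng(0)
# Each cell of a toy 2D lattice independently picks the +u or -u vacuum:
grid = rng.choice([-1, 1], size=(200, 200))
# A wall sits on every link between neighbouring cells with different vacua:
walls = (np.count_nonzero(grid[:, 1:] != grid[:, :-1])
         + np.count_nonzero(grid[1:, :] != grid[:-1, :]))
print(walls / (2 * 200 * 199))   # ~0.5: half of all links host a wall
```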
We assume that the domain walls have a static planar configuration perpendicular to the \(z\) direction. Introducing a kinetic term \(\frac{1}{2}(\partial_{\mu}\sigma)^{2}\), the field equation for \(\sigma(z)\) is given by \[\frac{d^{2}\sigma}{dz^{2}}-\frac{dV}{d\sigma}=0\,, \tag{14}\] which yields the solution \[\sigma(z)=u\,\tanh\!\left(\sqrt{\frac{\lambda}{2}}uz\right) \tag{15}\] for the boundary condition \(\sigma(z\rightarrow\pm\infty)\rightarrow\pm u\). The surface energy density (also called tension) of the wall can be derived from integrating the \(00\) component of the stress-energy tensor \({\cal T}_{\mu\nu}=(d\sigma/dz)^{2}\) diag\((+1,-1,-1,0)\), and is given by \[{\cal E}=\frac{2}{3}\sqrt{2\lambda}\ u^{3}\,. \tag{16}\] Domain walls can be very long-lived and may dominate the energy density of the Universe, alter its equation of state and lead to rapid expansion inconsistent with standard cosmology. Even if their energy density is subdominant today, domain walls may produce excessive density perturbations observable in the CMB epoch if their surface energy density is above \({\cal O}({\rm MeV}^{3})\)[53]. An interesting solution to the domain wall problem is to softly break the discrete symmetry, which lifts the degeneracy between the vacua. We introduce an explicit breaking term in the potential, \[\Delta V(\sigma)=\epsilon u\sigma\left(\frac{\sigma^{2}}{3}-u^{2}\right)\,, \tag{17}\] where \(\epsilon\) is a dimensionless constant. The overall potential \(V(\sigma)+\Delta V(\sigma)\) still has two minima at \(\sigma=\pm u\), but with a difference in the potential at these points: \[V_{\rm bias}=V(-u)-V(+u)=\frac{4}{3}\epsilon u^{4}\,. \tag{18}\] The probability \(p_{-}\) of a domain ending up in the \(-u\) vacuum ('false' vacuum) is smaller than the probability \(p_{+}\) of it being in the \(+u\) vacuum ('true' vacuum), their ratio being related to the potential difference by \[\frac{p_{-}}{p_{+}}\simeq\exp\left(-\frac{V_{\rm bias}}{V_{0}}\right), \tag{19}\] where \[V_{0}=\frac{u^{4}}{12\lambda^{3}}(3\lambda-\epsilon)(\lambda+\epsilon)^{3} \tag{20}\] is the potential difference between the maximum and the \(+u\) minimum. Treating the system as a three-dimensional lattice, percolation theory predicts that an infinite cluster of the false vacuum appears in space if the corresponding probability is above the threshold \(p_{c}=0.311\)[54]. This yields an upper bound on the bias potential for the generation of domain walls, \(V_{\rm bias}<V_{0}\log\frac{1-p_{c}}{p_{c}}=0.795V_{0}\), which can be written as \[V_{\rm bias}<0.38\lambda u^{4}. \tag{21}\] As long as the bias potential is below this limit, domain walls are created and their dynamics is mostly controlled by the surface energy density. The energy density of the wall in this regime is given by a scaling solution [55], \[\rho_{\rm wall}(t)=\mathcal{A}\frac{\mathcal{E}}{t}\,, \tag{22}\] where \(\mathcal{A}\simeq 0.8\pm 0.1\) is the so-called area parameter [56]. A volume pressure \(p_{v}\sim V_{\rm bias}\) tends to shrink the false vacuum region. Domain walls collapse when the volume pressure overcomes the pressure from surface energy density, which happens at \[t_{\rm ann}=\mathcal{C}_{\rm ann}\mathcal{A}\frac{\mathcal{E}}{V_{\rm bias}}\,, \tag{23}\] where \(\mathcal{C}_{\rm ann}=5\) for \(Z_{2}\) breaking [57].
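A quick numerical check (ours) of the kink profile and tension above: on the solution, \((d\sigma/dz)^{2}\) equals \(\frac{1}{2}(d\sigma/dz)^{2}+V(\sigma)\), and integrating it over the wall reproduces \({\cal E}=\frac{2}{3}\sqrt{2\lambda}\,u^{3}\).

```python
import numpy as np
from scipy.integrate import quad

lam, u = 0.5, 1.0                      # illustrative values, arbitrary units
k = np.sqrt(lam / 2) * u               # inverse wall width
dsig = lambda z: u * k / np.cosh(k * z) ** 2   # d/dz of u*tanh(k*z)

# T_00 = (dsigma/dz)^2 for the kink, integrated across the wall:
E_num, _ = quad(lambda z: dsig(z) ** 2, -60, 60)
E_formula = (2 / 3) * np.sqrt(2 * lam) * u ** 3
print(E_num, E_formula)                # both ~0.6667
```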
Assuming that the domain walls annihilate during the radiation-dominated era and that annihilation happens instantaneously at \(t=t_{\rm ann}\), the peak amplitude of the generated gravitational waves at present time \(t_{0}\) can be expressed as [58] \[\Omega_{\rm GW}^{\rm peak}h^{2}(t_{0})\simeq 1.49\times 10^{-10}\times \left(\frac{\tilde{\epsilon}_{\rm GW}}{0.7}\right)\left(\frac{\mathcal{A}}{0. 8}\right)^{4}\left(\frac{10.75}{g_{\star}}\right)^{1/3}\left(\frac{\mathcal{E} ^{1/3}}{10^{7}~{}{\rm GeV}}\right)^{12}\left(\frac{10^{7}~{}{\rm GeV}^{4}}{V_ {\rm bias}}\right)^{2}\,, \tag{24}\] and the peak frequency is given by \[f_{\rm peak}\simeq 1.4\times 10^{-5}~{}{\rm Hz}\times\left(\frac{0.8}{\mathcal{ A}}\right)^{1/2}\left(\frac{10^{7}~{}{\rm GeV}}{\mathcal{E}^{1/3}}\right)^{3/2} \left(\frac{V_{\rm bias}}{10^{7}~{}{\rm GeV}^{4}}\right)^{1/2}\,, \tag{25}\] where the parameter \(\tilde{\epsilon}_{\rm GW}\) is estimated to be \(\tilde{\epsilon}_{\rm GW}\simeq 0.7\pm 0.4\)[56]. Here, \(g_{\star}\) is the effective number of relativistic degrees of freedom for the entropy density at the annihilation temperature. Numerical simulations suggest that the gravitational wave amplitude rises as \(\Omega_{\rm GW}\propto f^{3}\) for \(f<f_{\rm peak}\) and falls off as \(\Omega_{\rm GW}\propto f^{-1}\) for \(f>f_{\rm peak}\). It should be noted that another possibility of obtaining gravitational wave signatures in this model is through a first order phase transition induced by the scalar \(\sigma\). The bias potential in Eq. (17) contains a cubic term which creates a barrier between the true and false vacua at zero temperature. However, we have explicitly checked for \(10^{-6}<\lambda<1\), \(10^{3}<u<10^{15}\) GeV and \(10^{-9}<\epsilon<1\) that a FOPT either does not occur or is extremely weak. This is because the linear term in the potential, also controlled by the parameter \(\epsilon\), tends to remove the barrier between the two vacua. It is conceivable that in a variation of the model, a FOPT would occur, which would produce a peaked GW spectrum, more akin to that from domain walls than cosmic strings.

## 5 Results

In this section we discuss the gravitational wave signatures from cosmic strings and domain walls in the context of Majorana and Dirac neutrinos, respectively. Existing and planned interferometers probe frequencies in the \(10^{-9}\) to \(10^{4}\) Hz range. In the nanoHz range (\(10^{-9}-10^{-7}\) Hz), currently operating pulsar timing arrays (PTAs) EPTA [59] and NANOGrav [60] have set upper bounds on the stochastic GW background, and the upcoming SKA [61] and IPTA [62] will have much greater sensitivity. \(\mu\)Ares [63] will be sensitive to the \(\mu\)-Hz to Hz band. The mHz to Hz band will be further probed by future laser interferometers LISA [64], BBO [65] and DECIGO [66, 67], as well as by atomic interferometers AION [68] and AEDGE [69]. Around 100 Hz, Advanced LIGO and Virgo [70] have set an upper limit [71, 72], and their future upgrades will improve the sensitivity by at least an order of magnitude [71]. Einstein Telescope (ET) [73] and Cosmic Explorer (CE) [74] are planned to operate in the same band with three orders of magnitude greater sensitivity. For the Nambu-Goto cosmic string network, the only free parameter in Eq. (11) is the string tension \(\mu\), of \(\mathcal{O}(\Lambda^{2})\), where \(\Lambda\) is the scale of the \(U(1)_{B-L}\) symmetry breaking that generates the Majorana masses of the right-handed neutrinos.
In Fig. 1 we show the GW spectrum for \(\Lambda=10^{14},10^{13}\), \(10^{12}\), \(10^{11}\) and \(10^{10}\) GeV, corresponding to a high scale of the right-handed neutrino masses. For comparison, we also show the sensitivity and upper bounds of various interferometers spanning a large range of frequencies from nano-Hz to kilo-Hz. The characteristic shape of the cosmic string induced GW signal is a rising spectrum at low frequencies which plateaus at higher frequencies. The height of this plateau is proportional to the symmetry breaking scale. The signals for \(\Lambda\gtrsim 10^{14}\) GeV are ruled out by EPTA, whereas signals for smaller scales are within the sensitivity of several interferometers. We note that the signals for \(\Lambda\simeq 10^{13-14}\) GeV may be candidates for the NANOGrav 15 yr result, albeit with a small Bayes factor [42].

Figure 1: Gravitational wave spectrum induced by cosmic strings generated via the spontaneous breaking of the gauged \(U(1)_{B-L}\) symmetry responsible for Majorana mass of the neutrinos. \(\Lambda\) denotes the scale of symmetry breaking.

For the Dirac mass model, the parameter space is subject to various physical constraints that impact the formation and stability of the domain walls [58, 75]. If the bias potential is sufficiently small, domain walls collapse too late and may dominate the energy density of the Universe. The time at which domain walls become dominant is given by \[t_{\rm dom}=\frac{3}{4}\frac{M_{\rm Pl}^{2}}{\mathcal{A}\mathcal{E}}\,. \tag{26}\] Requiring \(t_{\rm ann}<t_{\rm dom}\) yields a lower bound on the bias potential, \(V_{\rm bias}>4C_{\rm ann}{\cal A}^{2}{\cal E}^{2}/(3M_{\rm Pl}^{2})\), which can be written as \[V_{\rm bias}^{1/4}>8.95\times 10^{-10}\ {\rm GeV}\ \lambda^{1/4}\ \left(\frac{{\cal C }_{\rm ann}}{5}\right)^{1/4}\left(\frac{{\cal A}}{0.8}\right)^{1/2}\left(\frac {u}{{\rm GeV}}\right)^{3/2}\,. \tag{27}\] Even if the domain walls decay before they overclose the Universe, their decay products may destroy the light element abundances created by Big Bang nucleosynthesis (BBN). Assuming that a significant fraction of the energy density of the domain walls is converted into energetic particles, constraints on energy injection at the epoch of BBN require \(t_{\rm ann}\lesssim t_{\rm BBN}\simeq 0.01\) sec [76; 77], which can be written as \[V_{\rm bias}^{1/4}>3.97\times 10^{-6}\ {\rm GeV}\ \lambda^{1/8}\ \left(\frac{{\cal C }_{\rm ann}}{5}\right)^{1/4}\left(\frac{{\cal A}}{0.8}\right)^{1/4}\left( \frac{u}{{\rm GeV}}\right)^{3/4}\,. \tag{28}\] Equations (27) and (28), together with Eq. (21), constrain the parameter space for annihilation of domain walls and subsequent gravitational wave production. In terms of the scalar VEV \(u\) and bias potential \(V_{\rm bias}\), and choosing \({\cal A}=0.8,\ {\cal C}_{\rm ann}=5\), these constraints can be expressed as \[t_{\rm ann}<t_{\rm dom}:\quad\frac{V_{\rm bias}}{{\rm GeV}^{4}}>6.42\times 10^ {-37}\lambda\left(\frac{u}{{\rm GeV}}\right)^{6}\,, \tag{29}\] \[t_{\rm ann}<t_{\rm BBN}:\quad\frac{V_{\rm bias}}{\rm GeV^{4}}>2.49\times 10^{-22} \sqrt{\lambda}\left(\frac{u}{\rm GeV}\right)^{3}\,, \tag{30}\] \[V_{\rm bias}<0.795V_{0}:\quad\frac{V_{\rm bias}}{\rm GeV^{4}}<0.38\lambda\left( \frac{u}{\rm GeV}\right)^{4}\,. \tag{31}\]
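The constraints (29)-(31) and the peak formulas (24)-(25) are straightforward to evaluate together. The sketch below (ours; all parameter choices are illustrative, with the wall tension taken from Eq. (16)) returns the peak amplitude, peak frequency, and whether a given \((u,V_{\rm bias})\) point survives the three constraints.

```python
import numpy as np

A, C_ann, eps_gw, g_star, lam = 0.8, 5.0, 0.7, 10.75, 1.0

def wall_signal(u, V_bias):
    """Peak GW amplitude and frequency (Eqs. (24)-(25)) plus the
    constraint checks (Eqs. (29)-(31)); u in GeV, V_bias in GeV^4."""
    E = (2 / 3) * np.sqrt(2 * lam) * u**3                  # tension, Eq. (16)
    omega = (1.49e-10 * (eps_gw / 0.7) * (A / 0.8)**4
             * (10.75 / g_star)**(1 / 3)
             * (E**(1 / 3) / 1e7)**12 * (1e7 / V_bias)**2)
    f = (1.4e-5 * (0.8 / A)**0.5
         * (1e7 / E**(1 / 3))**1.5 * (V_bias / 1e7)**0.5)   # Hz
    allowed = (V_bias > 6.42e-37 * lam * u**6 and           # t_ann < t_dom
               V_bias > 2.49e-22 * np.sqrt(lam) * u**3 and  # t_ann < t_BBN
               V_bias < 0.38 * lam * u**4)                  # percolation
    return omega, f, allowed

# u = 1e5 GeV, V_bias = 1.5e-5 GeV^4 (benchmark point 1 of Table 1 below)
# is allowed and peaks in the nHz band (~2e-8 Hz):
print(wall_signal(1e5, 1.5e-5))
```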
Finally, from Eq. (4), if we assume that the mediator fermion mass is below the Planck scale \(\mathcal{O}(10^{19})\) GeV, and the heaviest light neutrino mass is around \(\mathcal{O}(0.1)\) eV, we find \[u\lesssim\frac{\mathcal{O}(10^{7})\ \text{GeV}}{y^{2}}\,. \tag{32}\] Here we have assumed a single mediator responsible for the \(\mathcal{O}(0.1)\) eV neutrino mass, and a single coupling \(y=y_{L}=y_{R}\) associated with it. For Yukawa couplings \(y\gtrsim\mathcal{O}(10^{-2})\), this implies an upper bound on the scale of \(Z_{2}\) symmetry breaking, \(u\lesssim\mathcal{O}(10^{11})\) GeV. The constraints of Eqs. (29)-(31) are depicted by the gray shaded regions in Fig. 2 for (a) \(\lambda=1\) and (b) \(\lambda=10^{-3}\). For smaller \(\lambda\), the upper left region expands while the other two regions shrink, as expected from Eqs. (29)-(31). The peak frequencies of the gravitational waves, \(f_{\rm peak}=10^{-8},10^{-6},\ldots,10^{2}\) Hz, are marked by dots. Their colors represent the amplitude of the GW signal at the corresponding peak frequency. We find that amplitudes above \(\Omega_{\rm GW}^{\rm peak}h^{2}\sim 10^{-9}\) are ruled out by Eq. (29), while peak frequencies below \(10^{-8}\) Hz are ruled out by Eq. (30). Interestingly, the allowed region can still generate GW signals within the sensitivity of various interferometers.

Figure 2: Parameter space of the Dirac neutrino mass model for creation and subsequent annihilation of the domain wall network. Gray shaded regions show the parameter space ruled out by physical constraints. Dotted lines in the allowed parameter space represent contours of peak frequency, and the colors of the dots indicate the amplitude of the gravitational wave signal at that frequency.

The four benchmark points listed in Table 1 are chosen from the allowed parameter space. The last column of the table gives the upper bound on the Yukawa coupling, assuming that the mediator mass lies below the Planck scale. We note that the values of such Yukawa couplings cover the range of the third family charged fermion Yukawa couplings in the SM, and only exceed this range by less than an order of magnitude. The gravitational wave spectra for these benchmark points are shown in Fig. 3. Benchmark point 1 may explain the NANOGrav 15 yr result\({}^{1}\) and can also be probed by SKA, while points 2 and 3 can be probed by \(\mu\)Ares, LISA, AEDGE, DECIGO, BBO and AION, and point 4 by Advanced LIGO+Virgo, ET and CE, among others. Footnote 1: Note that GW spectra from domain walls annihilating into SM particles have a Bayes factor \(>10\) and are one of the most promising candidates for explaining the NANOGrav 15 yr signal [42]. The main difference between the signals for the Majorana mass model and the Dirac mass model is their shape. While cosmic string signals for the former are mostly flat over observable frequencies, domain wall signals for the latter are peaked. We expect that cosmic string signals should be detected at multiple interferometers in different frequency bands, whereas domain wall signals are likely to be detected in only a narrow frequency range. Such a detection will provide valuable information about the nature of neutrinos and will complement results from neutrinoless double beta decay experiments.

Figure 3: Gravitational wave spectrum from annihilation of domain walls created by soft breaking of the \(Z_{2}\) symmetry in the Dirac neutrino mass model, for the benchmark points of Table 1.
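A back-of-envelope version of the bound in Eq. (32) can be checked directly from Eq. (4); this sketch is ours, and the precise values in Table 1 differ from it by \(\mathcal{O}(1)\) convention factors.

```python
# From M_D ~ v*u*y^2/(sqrt(2)*M_Delta), Eq. (4):
# u ~ sqrt(2) * m_nu * M_Delta / (v * y^2).
v = 246.0        # GeV, electroweak VEV
m_nu = 0.1e-9    # GeV, heaviest light neutrino mass ~ 0.1 eV
M_Delta = 1e19   # GeV, mediator mass at the Planck scale

for y in (1.0, 1e-1, 1e-2):
    u_max = 2**0.5 * m_nu * M_Delta / (v * y**2)
    print(f'y = {y:g}:  u <~ {u_max:.1e} GeV')
# y ~ 1 gives u <~ 6e6 GeV ~ O(10^7) GeV, and y ~ 1e-2 gives
# u <~ 6e10 GeV, consistent with the bounds quoted above up to O(1) factors.
```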
## 6 Conclusion

We have proposed a novel way to detect whether standard model neutrinos have Dirac or Majorana mass through the difference in the gravitational wave spectrum generated in each case. We have considered two simple models, both inspired by the requirement that small neutrino masses be generated without tiny Yukawa couplings. The key difference between Majorana and Dirac neutrino masses is whether or not lepton number is broken. In the case of Majorana masses, it is broken by two units. In a well-motivated scenario, the lepton number symmetry, or equivalently, the \(B-L\) symmetry, is exact at ultraviolet scales, but is spontaneously broken when a scalar charged under the gauged \(U(1)_{B-L}\) symmetry gets a nonzero VEV. This gives Majorana mass to the right-handed neutrinos, which further generates small masses for the SM neutrinos via the type I seesaw mechanism. The breaking of the \(U(1)_{B-L}\) symmetry triggers the creation of cosmic strings in the early Universe. The string network loses energy via the production of string loops, some of which emit gravitational waves. The GWs have a flat spectrum over a wide range of frequencies, and their amplitude is related to the scale of symmetry breaking. Hence, the detection of a flat stochastic gravitational wave background spectrum may imply a Majorana nature of the neutrinos and shed light on the scale at which such masses are generated. On the other hand, a seesaw-like mechanism to generate a small Dirac mass for the SM neutrinos is via the effective operator \(\bar{\ell}\nu_{R}H\sigma\), when both the Higgs and the scalar \(\sigma\) get nonzero VEVs. Keeping the lepton number symmetry unbroken while prohibiting a tree-level Dirac mass term for the SM neutrinos requires \(\nu_{R}\) and \(\sigma\) to be non-trivially charged under a \(Z_{2}\) symmetry, which is spontaneously broken by the VEV of \(\sigma\). The breaking of a discrete symmetry creates a domain wall network, which poses a threat to the standard cosmology if it is long-lived and/or dominates the energy density of the Universe. Domain walls can be made to annihilate by softly breaking the discrete symmetry, thereby lifting the degeneracy between the \(Z_{2}\) symmetric vacua and creating a bias potential that tends to collapse the walls. This leads to a characteristic gravitational wave signal peaked at a frequency determined by the scale of spontaneous and soft symmetry breaking. We have shown that, depending on the parameter space, such signals may be probed by various terrestrial and satellite-based interferometers.

\begin{table} \begin{tabular}{c c c c} \hline \hline Benchmark Point & \(u\) [GeV] & \(V_{\rm bias}\) [GeV\({}^{4}\)] & \(y_{\rm max}(M_{\Delta}<M_{\rm Pl})\) \\ \hline \hline 1 & \(10^{5}\) & \(1.5\times 10^{-5}\) & 4.93 \\ 2 & \(5.2\times 10^{7}\) & \(7.14\times 10^{10}\) & 0.216 \\ 3 & \(1.2\times 10^{9}\) & \(10^{19}\) & 0.045 \\ 4 & \(2\times 10^{11}\) & \(2.5\times 10^{32}\) & 0.0035 \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark points for gravitational wave signals from domain walls with \(\lambda=1\).

In summary, the very different shapes of the gravitational wave spectra, flat in the Majorana case and peaked in the Dirac case, may help distinguish the nature of neutrinos, and complement the results of neutrinoless double beta decay experiments.
In particular, given that non-observation of neutrinoless double beta decay does not necessarily imply that neutrinos are Dirac particles, the peaked domain wall signal may indicate the Dirac nature of neutrinos. S.F.K. would like to thank IFIC, University of Valencia for hospitality. S.F.K. and M.H.R. acknowledge support from the STFC Consolidated Grant ST/T000775/1, and from the European Union's Horizon 2020 Research and Innovation Programme under Marie Sklodowska-Curie grant agreement HIDDeN European ITN project (H2020-MSCA-ITN-2019//860881-HIDDeN). D.M. is supported in part by the U.S. Department of Energy under Grant No. DE-SC0010504.
2306.03920
Outflow densities and ionisation mechanisms in the NLRs of the prototypical Seyfert galaxies NGC 1068 and NGC 4151
Despite being thought to play an important role in galaxy evolution, the true impact of outflows driven by active galactic nuclei (AGN) on their host galaxies is unclear. In part, this may be because electron densities of outflowing gas are often underestimated: recent studies that use alternative diagnostics have measured much higher densities than those from commonly used techniques, and consequently find modest outflow masses and kinetic powers. Furthermore, outflow ionisation mechanisms - which are often used to probe acceleration mechanisms - are also uncertain. To address these issues, we have analysed archival HST/STIS spectra of the inner regions (r<160pc) of the nearby prototypical Seyfert galaxies NGC 1068 and NGC 4151, which show evidence of warm-ionised outflows driven by the central AGN. We derive high electron densities ($10^{3.6}$<$n_e$<$10^{4.8}$cm$^{-3}$) using the transauroral [OII] and [SII] emission lines ratios for the first time with spatially-resolved observations. Moreover, we find evidence that the gas along the radio axis in NGC 1068 has a significant AGN-photoionised matter-bounded component, and there is evidence for shock-ionisation and/or radiation-bounded AGN-photoionisation along the radio axis in NGC 4151. We also note that the outflow extents are similar to those of the radio structures, consistent with acceleration by jet-induced shocks. Taken together, our investigation demonstrates the diversity of physical and ionisation conditions in the narrow line regions of Seyfert galaxies, and hence reinforces the need for robust diagnostics of outflowing gas densities and ionisation mechanisms.
Luke R. Holden, Clive N. Tadhunter
2023-06-06T18:00:02Z
http://arxiv.org/abs/2306.03920v1
Outflow densities and ionisation mechanisms in the NLRs of the prototypical Seyfert galaxies NGC 1068 and NGC 4151

###### Abstract

Despite being thought to play an important role in galaxy evolution, the true impact of outflows driven by active galactic nuclei (AGN) on their host galaxies is unclear. In part, this may be because electron densities of outflowing gas are often underestimated: recent studies that use alternative diagnostics have measured much higher densities than those from commonly used techniques, and consequently find modest outflow masses and kinetic powers. Furthermore, outflow ionisation mechanisms -- which are often used to probe acceleration mechanisms -- are also uncertain. To address these issues, we have analysed archival HST/STIS spectra of the inner regions (r < 160 pc) of the nearby prototypical Seyfert galaxies NGC 1068 and NGC 4151, which show evidence of warm-ionised outflows driven by the central AGN. We derive high electron densities (\(10^{3.6}<n_{e}<10^{4.8}\) cm\({}^{-3}\)) using the transauroral [OII] and [SII] emission lines ratios for the first time with spatially-resolved observations. Moreover, we find evidence that the gas along the radio axis in NGC 1068 has a significant AGN-photoionised matter-bounded component, and there is evidence for shock-ionisation and/or radiation-bounded AGN-photoionisation along the radio axis in NGC 4151. We also note that the outflow extents are similar to those of the radio structures, consistent with acceleration by jet-induced shocks. Taken together, our investigation demonstrates the diversity of physical and ionisation conditions in the narrow line regions of Seyfert galaxies, and hence reinforces the need for robust diagnostics of outflowing gas densities and ionisation mechanisms.

keywords: galaxies: active - galaxies: evolution - galaxies: individual: NGC 1068 - galaxies: individual: NGC 4151 - galaxies: Seyfert - ISM: jets and outflows

## 1 Introduction

Active galactic nuclei (AGN) can drive gas outflows through radiation-pressure driven winds from their accretion disks (Di Matteo et al., 2005; Hopkins & Elvis, 2010) and/or radio jets (Axon et al., 1998; Wagner & Bicknell, 2011; Mukherjee et al., 2018). These outflows, as well as the heating and ionising of near-nuclear gas, may constitute an important part of 'AGN feedback', which now routinely plays a crucial role in theoretical models of galaxy evolution. AGN feedback is required to explain observed galaxy properties (e.g. Di Matteo et al., 2005; Somerville et al., 2008; Schaye et al., 2015; Dubois et al., 2016; Davé et al., 2019) and empirical scaling relations between supermassive black holes and host galaxy properties (e.g. Magorrian et al., 1998; Silk & Rees, 1998; Fabian, 1999; Gebhardt et al., 2000; Ferrarese & Merritt, 2000). Models often require that the kinetic power (\(\dot{\rm E}_{\rm kin}\)) of the outflowing gas is above a certain fraction of the AGN bolometric luminosity (L\({}_{\rm bol}\)): this is characterised by a ratio known as the 'coupling factor' (\(\epsilon_{f}=\dot{\rm E}_{\rm kin}/L_{\rm bol}\)), and is typically required to be in the range 0.5 < \(\epsilon_{f}\) < 10 per cent (Di Matteo et al., 2005; Springel et al., 2005; Hopkins & Elvis, 2010). Observational studies commonly attempt to quantify the impact of outflows on their host galaxies by comparing measured coupling efficiencies to those required by models (e.g. Liu et al., 2013; Cicone et al., 2014; Harrison et al., 2014; Rose et al., 2018; Riffel, 2021).
However, many key outflow properties are highly uncertain, leading to a wide range of observationally-derived coupling efficiencies (Harrison et al., 2018). For the warm ionised outflow phase (i.e. traced by [OIII] and H\(\beta\); 10,000 \(<T_{e}<\) 25,000 K), the largest source of uncertainty is likely to be the electron density of the outflowing gas, which is often estimated or assumed to be in the range \(n_{e}\sim\)100-1000 cm\({}^{-3}\) (e.g. Kraemer & Crenshaw, 2000a; Nesvadba et al., 2006; Fiore et al., 2017). This is because the commonly used 'traditional' density diagnostics -- the [SII](6717/6731) and [OII](3726/3729) emission-line doublet ratios -- are only sensitive up to \(n_{\rm e}\sim 10^{3.5}\) cm\({}^{-3}\), and are often blended in the case of complex outflow kinematics. However, in recent years, alternative density diagnostics have been developed and used, such as detailed photoionisation modelling that makes use of a wide range of emission lines (Collins et al., 2009; Crenshaw et al., 2015; Revalski et al., 2021, 2022), and a technique involving ionisation parameter measurements with infrared estimates of outflow radii (Baron & Netzer, 2019). Such methods have measured higher electron densities for the warm ionised phase than commonly-used traditional techniques, up to \(n_{e}\sim 10^{5.5}\) cm\({}^{-3}\). Studies which make use of the higher critical density 'transauroral' ('TR'; Boyce et al., 1933) [OII](3726+3729)/(7319+7331) and [SII](4068+4076)/(6717+6731) diagnostic ratios have similarly found densities in the range of \(10^{3}<n_{e}<10^{5.5}\) cm\({}^{-3}\) (Holt et al., 2011; Rose et al., 2018; Santoro et al., 2018; Spence et al., 2018; Ramos Almeida et al., 2019; Santoro et al., 2020; Davies et al., 2020; Speranza et al., 2022; Holden et al., 2023). Considering that the derived outflow kinetic power is inversely proportional to the electron density, if electron densities are truly orders of magnitude higher than are commonly assumed or estimated, the resulting kinetic powers and coupling factors for the warm ionised phase will be orders of magnitude lower. This could significantly change our understanding of the importance of AGN feedback in galaxy evolution. Moreover, where possible, it is important to use spatially-resolved observations when deriving electron densities, since global electron densities may significantly underestimate the values at small radial distances from the nucleus, where the outflows are the most extreme (Revalski et al., 2022; but see also Kakkad et al., 2018). Thus, detailed spatially-resolved observations are needed to robustly assess electron densities in different types of AGN, as well as to compare and verify different density diagnostics. Investigations into the impact of outflows on their host galaxies are further complicated by the fact that the dominant acceleration and ionisation mechanisms are unclear: while it is thought that outflows may be accelerated by radiation pressure from the AGN (either 'in situ': e.g. Crenshaw et al., 2015; Fischer et al., 2017; Revalski et al., 2018; Meena et al., 2023, or from the nucleus: e.g. Hopkins & Elvis, 2010; Meena et al., 2021), a study of a large sample of local AGN found a link between intermediate radio power AGN (L\({}_{\rm 1.4~{}GHz}=10^{23-25}\) W Hz\({}^{-1}\)) and outflow kinematics (Mullaney et al., 2013), suggesting that feedback from jets is also important in AGN that are classified as radio-quiet.
Indeed, hydrodynamic simulations have shown that jets interacting with the ISM on kpc-scales can explain observed gas kinematics in some objects (e.g. Mukherjee et al., 2018; Audibert et al., 2023), and may have both a positive and negative impact on local star formation rates (Mandal et al., 2021). Therefore, determining dominant acceleration mechanisms is crucial for facilitating proper comparisons between observations and predictions from theoretical modelling, which are needed to interpret the role of outflows in AGN feedback. The ionisation and excitation mechanisms of the outflowing gas may provide clues as to the acceleration mechanism(s) present. For example, shock-ionised gas must have passed through (and been accelerated by) a shock. However, AGN-photoionised gas may have been previously accelerated by another mechanism, and reionised by photons from the AGN after cooling (Holden et al., 2023). Hence, the true nature of the relationship between outflow acceleration and ionisation mechanisms is complex, and requires further careful analysis. Regardless of how outflows are accelerated, understanding the dominant ionisation mechanisms impacts our ability to extract key diagnostic information for the warm outflowing gas. Specifically, the techniques presented by Holt et al. (2011) and Revalski et al. (2021) (see also Collins et al., 2009) both rely on photoionisation models, and the transauroral lines (in the case of the Holt et al. 2011 method) cannot be emitted by a matter-bounded component. If, in reality, a gas outflow is shock-ionised or has a large contribution from a matter-bounded component, this may have a significant impact on the validity of these methods. Thus, it is important to investigate the ionisation mechanisms present in active galaxies for which these techniques have been applied in the past, as well as the potential impact of matter-bounded components or shock ionisation on derived densities. In order to address these issues, we are undertaking detailed spatially-resolved studies of nearby AGN that show clear evidence of outflows on pc to kpc scales. In Holden et al. (2023), we presented a detailed study of the central regions of the nearby Seyfert 2 IC 5063 using Very Large Telescope (VLT) / Xshooter ultraviolet (UV), optical and near-infrared (NIR) spectroscopy: we found electron densities just above the critical density of the traditional [SII] ratio, and evidence for a post-shock cooling sequence and reionisation via AGN photoionisation. There is a clear need to determine whether the conditions found in the narrow line region (NLR) of IC 5063 are similar in other Seyfert galaxies, specifically to further investigate the true outflow gas density, kinetic powers, and ionisation mechanisms present on different spatial scales. Therefore, here we analyse archival Hubble Space Telescope (HST) / Space Telescope Imaging Spectrograph (STIS) spectra of the inner NLRs (\(r<160\) pc) of the prototypical Seyfert galaxies NGC 1068 and NGC 4151, and apply and expand upon many of the techniques presented in Holden et al. (2023). We take the distances to NGC 1068 and NGC 4151 to be \(D=13.0\) Mpc (Revalski et al., 2021) and \(D=15.8\) Mpc (Yuan et al., 2020), respectively, which correspond to spatial scales of 0.07 kpc/arcsecond for NGC 1068 and 0.078 kpc/arcsecond for NGC 4151.
The structure of the paper is as follows: in Section 2, we introduce the prototypical Seyfert galaxies NGC 1068 and NGC 4151; in Section 3, we detail the archival HST/STIS observations and our data reduction and handling processes; in Section 4, we present our analysis of the STIS data; in Section 5, we discuss the implications of our findings, and in Section 6 we give our conclusions.

## 2 Two prototypical Seyferts: NGC 1068 & NGC 4151

NGC 1068 and NGC 4151 appeared in Carl Seyfert's original paper that established the Seyfert class (Seyfert, 1943), and are respectively the prototypical Seyfert 2 (Sey2) and Seyfert 1 (Sey1) galaxies. In consequence, they are perhaps the most well-studied AGN of their respective types. Their close proximity to Earth, and the previous, extensive multi-wavelength studies of their properties, make them ideal objects for our project: the outflows in their central regions can be spatially resolved, and we can compare our results to those obtained using other methods. Principally, this allows us to assess the validity of the different density diagnostic techniques, as well as investigate the ionisation of the gas.

### NGC 1068

NGC 1068 is one of the closest and brightest (in terms of observed flux) Seyfert 2 galaxies, allowing detailed spatially-resolved observations, and thus making it the target for extensive studies that cover a range of spatial scales in the optical (e.g. Cecil et al. 1990; Evans et al. 1991; Asou et al. 1998; Crenshaw & Kraemer 2000; Kraemer & Crenshaw 2000b; Das et al. 2006), NIR (e.g. Raban et al. 2009; Muller Sanchez et al. 2009; May & Steiner 2017) and radio (e.g. Wilson & Ulvestad 1983; Gallimore et al. 1996; Garcia-Burillo et al. 2014, 2019). NGC 1068 has a radio luminosity of L\({}_{\rm 1.4~{}GHz}=2.3\times 10^{23}\) W Hz\({}^{-1}\) (Ulvestad & Wilson, 1984), placing it in the upper end of the radio luminosity range for Seyfert galaxies, and its high bolometric luminosity (\(0.4<L_{\rm bol}<4.7\times 10^{38}\) W: Woo & Urry 2002; Alonso-Herrero et al. 2011; Lopez-Rodriguez et al. 2018; Gravity Collaboration et al. 2020) is close to the lower boundary of the luminosity range for quasars (L\({}_{\rm bol}>10^{38}\) W). The galaxy also has an important historical role, as it was the first object used to verify the orientation-based unified scheme for AGN (Antonucci & Miller, 1985). The NLR of NGC 1068 presents as an 'hourglass'-shaped bicone (Riffel et al., 2014; Barbosa et al., 2014; May & Steiner, 2017) with an opening angle of \(\theta\sim 40^{\circ}\) along PA \(=30\pm 2^{\circ}\) at an inclination of \(i=5^{\circ}\), placing the bicone axis close to the plane of the sky and inclined \(\sim 45^{\circ}\) out of the galaxy's disk (Das et al., 2006; but see also Crenshaw & Kraemer, 2000b). Outflows of warm-ionised gas with velocities up to \(\sim\)1500 km s\({}^{-1}\) have been detected in the bicone (Crenshaw & Kraemer, 2000c; Das et al., 2006). In the NE cone, the radio axis is closely aligned with the bicone axis -- interpreted as a radio jet propagating within the hollowed-out cone -- with a radio lobe that extends just beyond the maximum extent of the cone (Wilson & Ulvestad, 1987; shown in Figure 1). Lower velocity cold molecular CO(3-2) outflows have been detected at this position, indicating that the lobe may represent the termination of the AGN-driven outflows (Garcia-Burillo et al., 2014).
The outflows in the NLR of NGC 1068 have been argued to be radiatively accelerated by some authors (Kraemer & Crenshaw, 2000; Das et al., 2006; Revalski et al., 2021; Meena et al., 2023), while others have proposed they are driven by jet-induced shocks (Capetti et al., 1997; Axon et al., 1998). May & Steiner (2017) propose a scenario in which the radio jet impacts molecular clouds on small radial scales near the central AGN, accelerating high-velocity 'bullets' of gas that propagate within the bicone but constitute only a small fraction of the total outflowing mass.

### NGC 4151

NGC 4151 is the prototypical Seyfert 1 (Sey1) galaxy\(^{1}\) and is also one of the closest and brightest (in terms of observed flux) of its class, leading to its NLR outflows being the target of extensive studies of the coronal (e.g. Storchi-Bergmann et al., 2009, 2010), warm ionised (e.g. Winge et al., 1997; Hutchings et al., 1999; Crenshaw et al., 2000; Das et al., 2005; May et al., 2020) and warm molecular (H\({}_{2}\); \(T\sim\)2000 K, e.g. May et al., 2020) gas phases, which have distinct flux distributions (Storchi-Bergmann et al., 2009). Similar to NGC 1068, the bicone-shaped NLR also has an hourglass morphology (May et al., 2020), with PA=\(22^{\circ}\) at an inclination of \(i=21^{\circ}\) (Pedlar et al., 1992; \(36^{\circ}\) to the galactic disk) and an opening angle of \(33^{\circ}\) (Das et al., 2005). However, the bolometric luminosity of the AGN in NGC 4151 (\(L_{\rm bol}=1.4\times 10^{37}\) W) is approximately an order of magnitude below that of NGC 1068.

Footnote 1: NGC 4151 was later classified as an intermediate 'Seyfert 1.5' (Osterbrock & Koski, 1976; Robinson et al., 1994).

The radio source (of luminosity L\({}_{\rm 1.4~{}GHz}\)=1.6\(\times 10^{22}\) W Hz\({}^{-1}\); Ulvestad & Wilson, 1984) consists of a double sided jet (PA \(\sim 77^{\circ}\)) originating from the nucleus. High-resolution radio imaging (Carral et al., 1990; Pedlar et al., 1993; Williams et al., 2017) shows several radio knots along this structure within the central few arcseconds, whereas lower-resolution radio observations (Johnston et al., 1982; Pedlar et al., 1993) reveal a larger-scale lower surface brightness structure with a radio lobe in the NE cone extending to 6.3 arcseconds from the nucleus along the radio axis. It has been argued that the radio jet has little connection to the NLR outflow kinematics in NGC 4151 (Hutchings et al., 1999; Crenshaw et al., 2000; Das et al., 2005). However, enhanced line fluxes from the warm ionised gas, high electron temperatures (T\({}_{\rm e}\) >16,000 K) and high [FeII]/[PII] ratios have been spatially associated with the radio structure (Mundell et al., 2003; Storchi-Bergmann et al., 2009, 2010), indicating that jet-ISM interactions may still drive shocks into the gas at certain locations within the bicone (see also Wang et al., 2011a,b; Williams et al., 2017). May et al. (2020) propose a model similar to the one they proposed for NGC 1068 (May & Steiner, 2017) -- albeit on smaller spatial scales with less extreme kinematics -- to explain the NLR and outflow structure in NGC 4151: the radio jet impacts a molecular cloud near the nucleus (potentially due to misalignment between the jet and torus/disk: Storchi-Bergmann et al., 2010; May et al., 2020), driving fragmented, shock-accelerated gas into the cones and contributing to the NLR morphology.

### Previous photoionisation modelling of NGC 1068 and NGC 4151

Crenshaw et al. (2015) and Revalski et al.
(2021, 2022) performed detailed, multi-ionisation component photoionisation modelling of the warm ionised outflows in NGC 1068 and NGC 4151, finding densities in the range \(10^{3.0}\) cm\({}^{-3}<n_{e}<10^{7.2}\) cm\({}^{-3}\) for the NLR gas in both objects, and coupling efficiencies above the lower limit required by galaxy evolution models (0.5 per cent: Hopkins & Elvis, 2010) in the case of NGC 1068. In order to further investigate the electron densities of the outflowing gas in the NLR of these two important objects, and to attempt to clarify the uncertainties regarding the acceleration and ionisation mechanisms of the gas, we require high spatial resolution, wide wavelength-coverage long-slit spectroscopy with the slit aligned along the radio axes (which is approximately along the bicone axes).

## 3 Observations and data reduction

### Archival HST/STIS observations

To achieve our science goals, suitable archival HST/STIS long-slit spectra were downloaded from the Hubble Legacy Archive ([https://hla.stsci.edu/hlaview.html](https://hla.stsci.edu/hlaview.html)). We required data taken using both the G430L and G750L gratings in order to ensure sufficient wavelength coverage, namely that the spectra contained the blue [SII]\(\lambda\lambda\)4068, 4076 and red [OII]\(\lambda\lambda\)7319, 7331 transauroral doublets. Both gratings have a spatial pixel scale of 0.051 arcseconds per pixel, and the dispersions of the two gratings are 2.72 \(\AA\)/pixel (G430L; 2900-5700 \(\AA\)) and 4.92 \(\AA\)/pixel (G750L; 5240-10270 \(\AA\)). We also required that these data were taken along (or close to) the PA of the radio/bicone structures to ensure we are tracing the gas that is impacted most by the jet. The data for NGC 1068 were taken as part of the Cycle 7 HST Proposal GTO:7573 (PI Kraemer), with a 52\(\times\)0.1 arcsecond slit along PA=202\({}^{\circ}\), centred on a bright emission-line knot close (\(<\)0.4\({}^{\prime\prime}\)) to the nucleus (see Crenshaw & Kraemer, 2000a and Kraemer & Crenshaw, 2000a). Data for NGC 4151 were taken with a 52\(\times\)0.1 arcsecond slit along PA=70\({}^{\circ}\), offset to the south by 0.1 arcsecond to reduce contamination from the bright Sey1 nucleus, and were taken in Cycle 7 as part of HST Proposal GTO:7569 (PI Hutchings) -- a full description of the NGC 4151 observations is given by Nelson et al. (2000). We show the positions of the STIS slits over the central regions of the two Seyferts in Figure 1.

### Reduction and handling of STIS data

#### 3.2.1 Data reduction

The first step in the data reduction was performed with the standard CALSTIS pipeline. For NGC 1068, only a single exposure for each grating was available, while for NGC 4151 we took the average of two exposures for each grating using Python scripts which made use of the Numpy (Harris et al., 2020) and AstroPy (Astropy Collaboration et al., 2013, 2018) modules. In order to ensure that the individual exposures for each grating were aligned, we first extracted spatial slices along the slit direction in a line-free region of the continuum covering the wavelength range 5480-5600 Å for the G430L grating and 6795-6890 Å for the G750L grating. The centroids of the spatial peaks -- determined with Gaussian profile fits -- were consistent within better than 0.4 pixels, confirming that each exposure was taken with the same telescope pointing within 0.02 arcseconds.
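To illustrate the kind of alignment check described above, here is a minimal Python sketch using Numpy, AstroPy and SciPy. The file names, FITS header keywords, and array orientation are assumptions for illustration, not the actual pipeline products:

```python
import numpy as np
from astropy.io import fits
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, sigma, offset):
    """Simple Gaussian plus constant background."""
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2) + offset

def spatial_centroid(filename, wmin, wmax):
    """Collapse a line-free wavelength window and fit the spatial profile."""
    with fits.open(filename) as hdul:
        flux = hdul[0].data              # assumed shape: (spatial, dispersion)
        hdr = hdul[0].header
        # assumed linear wavelength solution from the header
        wave = hdr['CRVAL1'] + hdr['CD1_1'] * np.arange(flux.shape[1])
    window = (wave > wmin) & (wave < wmax)
    profile = flux[:, window].sum(axis=1)  # spatial slice along the slit
    rows = np.arange(profile.size)
    p0 = [profile.max(), rows[np.argmax(profile)], 2.0, np.median(profile)]
    popt, _ = curve_fit(gaussian, rows, profile, p0=p0)
    return popt[1]                         # centroid in pixels

# compare centroids of two exposures; <0.4 pixel offset ~ <0.02 arcsec
c1 = spatial_centroid('exposure1_g430l.fits', 5480., 5600.)
c2 = spatial_centroid('exposure2_g430l.fits', 5480., 5600.)
print(f'Relative spatial offset: {abs(c1 - c2):.2f} pixels')
```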
We also checked that the spectra taken with the G430L and G750L gratings for each object were aligned, using the same method of Gaussian fits to the spatial flux profiles. Again, the spatial positions of the peak flux between gratings were consistent to within better than 0.4 pixels, indicating that the observations with different gratings were closely spatially aligned. Residual hot pixels and cosmic rays were removed from the spectra using the CLEAN command from the STARLINK FIGARO software package (Currie et al., 2014). We then corrected for extinction due to dust in the Milky Way using the Galactic extinction maps presented by Schlegel et al. (1998) and recalibrated by Schlafly & Finkbeiner (2011). Using the NASA/IPAC Infrared Science Archive reddening lookup tool ([https://irsa.ipac.caltech.edu/applications/DUST/](https://irsa.ipac.caltech.edu/applications/DUST/)) with these maps, we find that there are mean colour excesses in the directions of NGC 1068 and NGC 4151 of E(B-V)\({}_{\rm mean}=0.0289\pm 0.0004\) and E(B-V)\({}_{\rm mean}=0.0237\pm 0.0011\) respectively. The \(R_{\nu}=3.1\) extinction law presented by Cardelli et al. (1989) (hereafter CCM89) was then used to correct for Galactic extinction.

#### 3.2.2 Aperture selection and extraction

The STIS long-slit spectra of NGC 1068 and NGC 4151 show disturbed kinematics (indicating outflows) and several bright emission-line knots in the central few hundred parsecs, as noted by previous studies (Crenshaw & Kraemer, 2000b; Kraemer & Crenshaw, 2000a; Das et al., 2005, 2006; Meena et al., 2023). We extracted several apertures (integrated groupings of pixel rows) from the two-dimensional G430L and G750L spectra, with each aperture forming an integrated one-dimensional spectrum that corresponds to a certain spatial position along the slit. We selected the apertures to cover the locations of the bright emission knots seen in our two-dimensional spectra (Figure 2). The widths of the apertures (6-15 pixels; 0.3-0.8 arcseconds) were set to contain sufficient signal in the fainter emission lines that are used for diagnostics in our analysis, namely the fainter transauroral [OII]\(\lambda\lambda\)7319,7331 and [SII]\(\lambda\lambda\)4068,4076 doublets. We extracted the same apertures from the G430L and G750L spectra for each object, as we previously determined that the spectra were closely spatially aligned (Section 3.2.1). Flux errors were determined by adding the flux errors from individual pixel rows (which constitute a given aperture) in quadrature. As an example, we present part of the spectrum of Aperture 2 for NGC 1068 in Figure 3. The chosen apertures extended out to a maximum radial distance of 139 pc for NGC 1068, and 151 pc in the case of NGC 4151. Aperture 3 for NGC 1068 was placed over a bright emission knot that corresponds to a previously detected radio source at the likely position of the galaxy's nucleus (see discussion in Kraemer & Crenshaw, 2000a), while Aperture 4 for NGC 4151 corresponds to the location along the slit that is closest to the nucleus. We note that the spectra for NGC 4151 do not directly cover the nucleus, due to the 0.1 arcsecond slit offset to the south to avoid nuclear contamination.
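As a concrete sketch of the Galactic extinction correction step of Section 3.2.1, the following assumes the `extinction` Python package (which provides an implementation of the CCM89 law); the package choice is our assumption, and any implementation of the Cardelli et al. (1989) curve would serve equally well:

```python
import numpy as np
import extinction  # assumed available; provides the CCM89 law

def deredden(wave_aa, flux, ebv, r_v=3.1):
    """Correct an observed spectrum for foreground Galactic extinction.

    wave_aa : wavelengths in Angstroms
    flux    : observed flux densities
    ebv     : colour excess E(B-V) from the Galactic dust maps
    """
    a_v = r_v * ebv
    a_lambda = extinction.ccm89(wave_aa.astype(np.float64), a_v, r_v)
    return flux * 10.0 ** (0.4 * a_lambda)    # remove the attenuation

wave = np.linspace(2900.0, 5700.0, 1024)      # G430L coverage
flux = np.ones_like(wave)                     # placeholder spectrum
flux_corr = deredden(wave, flux, ebv=0.0289)  # E(B-V) for NGC 1068
```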
Figure 1: The STIS slits of our archival observations (red) shown plotted over archival HST/WFPC2 [OIII] emission-line images of the inner regions of NGC 1068 and NGC 4151, taken with the F502N filter (NGC 1068: GTO:5754, PI Ford; NGC 4151: GTO:5124; PI Ford). The extents of our apertures (Section 3.2.2) along the slits are shown in blue. **Left:** the STIS slit shown over the [OIII] emission-line image of the near-nuclear regions of NGC 1068. VLA 22 GHz contours from Gallimore et al. (1996) are presented in green, showing the radio structure near the core and an extended lobe to the NE. **Right:** the STIS slit shown over the [OIII] emission-line image of the near-nuclear regions of NGC 4151; the green contours are from high-resolution eMERLIN 1.5 GHz imaging presented by Williams et al. (2017), and show a string of radio knots near the nucleus. We note that, while the narrow-band images are not continuum-subtracted, the brighter parts of the NLR emission are dominated by [OIII] emission in the filter bandpass, and so the images provide a good representation of the main NLR structures.

Unfortunately, the south-west part of the slit for NGC 1068 (seen above Aperture 4 in Figure 2) did not contain enough signal for the measurement of the faint [OII]\(\lambda\lambda\)7319, 7331 transauroral doublet, even when integrated as a single aperture. Therefore, we omit this region from our analysis. Following aperture extraction, we ensured that the flux calibration was consistent between the two gratings for each aperture by overplotting the spectra in the region where the wavelength ranges of the gratings overlap (5275-5705 Å). We found that all apertures for NGC 1068 are closely matched in flux. However, for apertures 2 and 4 of NGC 4151, the flux in the overlap region was \(>\)8 per cent higher in the G430L grating than the G750L grating, potentially due to internal reflections within the instrument caused by the bright Type 1 nucleus (see Nelson et al. 2000). Therefore, we do not use these apertures in further analysis.

#### 3.2.3 The contribution of stellar continua to the spectra

We did not model and subtract the underlying stellar continuum in detail using a template-fitting approach (as was done for similar analyses of other objects by Rose et al. 2018 and Holden et al. 2023) for various reasons. First, our archival STIS G430L and G750L spectra did not have sufficient spectral resolution to clearly resolve absorption features that could be used to verify the robustness of the continuum fits. Second, there may be substantial contamination by direct and scattered AGN continuum (Antonucci & Miller 1985) and nebular continuum (Tadhunter 2016) that precludes accurate stellar continuum modelling. Finally, the emission lines in our spectra have relatively high equivalent widths, which fill in various stellar absorption features. In order to verify whether stellar continuum modelling was needed in this study, we measured the equivalent widths (EWs) for the H\(\beta\) recombination line. We find 36 \(<\) EW \(<\) 148 Å in our NGC 1068 apertures and 30 \(<\) EW \(<\) 151 Å in the NGC 4151 apertures. The lowest emission-line equivalent width we measure (EW \(=\) 30 Å for Aperture 3 in NGC 4151) is a factor of three higher than that of the H\(\beta\) absorption feature as modelled for a \(\sim\)400 Myr old stellar population (which gives the highest EWs in modelling by Gonzalez Delgado et al. 1999). Thus, underlying stellar absorption features may affect our measured H\(\beta\) luminosities by a maximum factor of 1.3 (for a stellar EW \(=\) 10 Å). However, this is very much an upper limit since we do not detect a Balmer break in the continuum in any of our apertures, as would be expected for intermediate age stellar populations that have strong Balmer absorption lines.
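The equivalent-width check described in Section 3.2.3 can be sketched as follows; the line and continuum windows are placeholders, and a first-order polynomial continuum is assumed, matching the simple continua used elsewhere in our fits:

```python
import numpy as np

def equivalent_width(wave, flux, line_win, cont_wins):
    """EW = integral of (F - F_cont) / F_cont over the line window.

    line_win  : (wmin, wmax) tuple bracketing the emission line
    cont_wins : list of (wmin, wmax) line-free windows for the continuum fit
    """
    # fit a first-order polynomial continuum through the line-free windows
    mask = np.zeros_like(wave, dtype=bool)
    for wmin, wmax in cont_wins:
        mask |= (wave > wmin) & (wave < wmax)
    coeffs = np.polyfit(wave[mask], flux[mask], 1)
    cont = np.polyval(coeffs, wave)

    in_line = (wave > line_win[0]) & (wave < line_win[1])
    dw = np.gradient(wave)
    # emission-line EW in the same wavelength units as `wave`
    return np.sum(((flux - cont) / cont)[in_line] * dw[in_line])

# e.g. H-beta at the redshift of NGC 1068 (z = 0.00381): ~4880 A observed
# ew = equivalent_width(wave, flux, (4860., 4905.),
#                       [(4800., 4840.), (4920., 4960.)])
```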
#### 3.2.4 Fits to key emission lines

The NLR kinematics in NGC 1068 and NGC 4151 are complex, and have been previously modelled in detail as biconical outflows based on higher resolution STIS spectra than those used here (Das et al. 2006 and Das et al. 2005 respectively; but see also Crenshaw & Kraemer 2000c and Crenshaw et al. 2000). In those studies, the [OIII]\(\lambda\lambda\)4959,5007 doublet line profiles were fit with multiple Gaussians for each pixel row of the 2D spectra. Here, we perform a similar procedure for our extracted apertures by simultaneously fitting a 1st or 2nd order polynomial to the continuum surrounding the [OIII]\(\lambda\lambda\)4959,5007 doublet, and one or two Gaussian profiles to each of the lines in the doublet itself. We set the wavelength separation of the lines in the doublet, as well as the intensity ratio of the lines (1:2.99), to those defined by atomic physics (Osterbrock & Ferland 2006). Furthermore, we constrained the widths of a given Gaussian component to be the same for each line in the doublet. We present the model parameters for each aperture in Table 1.

Figure 2: Selected apertures for NGC 1068 (left) and NGC 4151 (right), positioned over the [OIII]\(\lambda\lambda\)4959,5007 doublet in the two-dimensional STIS G430L spectra. The spectral direction is horizontal (left = bluewards; right = redwards) and the vertical direction is spatial along the slit (with the direction shown by the labelled arrows); velocity scale bars are shown in green, and the spatial extents in arcseconds and parsecs are shown to the right of each spectrum. The apertures are shown as regions bounded by dashed lines, and are labelled on the left of each image -- they were chosen to contain enough signal for the measurement of faint lines in distinct kinematic regions within the central few hundred parsecs of each galaxy.

Once we had established [OIII]\(\lambda\lambda\)4959,5007 doublet fits in each aperture, we calculated the difference between the mean wavelength of each Gaussian component and the rest [OIII] wavelength in the reference frame of the galaxy, using redshifts\(^{2}\) of \(z=0.00381\) for NGC 1068 and \(z=0.003262\) for NGC 4151. We also determined the intrinsic width of each component by subtracting the instrumental width of the STIS G430L grating in quadrature from the measured widths. According to the STIS manual, for a slit of width 0.1 arcseconds, the instrumental broadening in the spectral direction is in the range \(2<{\rm FWHM}<3\) pixels, corresponding to \(5.5<{\rm FWHM}<8.2\) Å for the G430L grating and \(9.8<{\rm FWHM}<14.8\) Å for the G750L grating. By fitting single Gaussians to the [OIII]\(\lambda\lambda\)4959,5007 emission-line doublet at a radial distance of 4 arcseconds from the nucleus of NGC 4151 in the G430L spectra (where the lowest line widths are measured), we measure a line width of \({\rm FWHM_{instr}}=6.0\pm 0.4\) Å; similarly, measuring the [SIII]\(\lambda\)9531 line in the G750L spectra with this method resulted in a line width of \({\rm FWHM_{instr}}=12.3\pm 2.4\) Å. Thus, we adopt instrumental widths of \({\rm FWHM_{instr}}=6.0\) Å (360 km s\({}^{-1}\) at 5007 Å) and \({\rm FWHM_{instr}}=12.3\) Å (560 km s\({}^{-1}\) at 6575 Å) for the G430L and G750L gratings, respectively.

Footnote 2: 21 cm redshifts from the NASA/IPAC Extragalactic Database ([https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)).
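A minimal sketch of the constrained doublet fitting described in this subsection is given below, with the doublet separation, the 1:2.99 intensity ratio, and a shared line width hard-wired into the model; for brevity it assumes a single kinematic component and a linear continuum:

```python
import numpy as np
from scipy.optimize import curve_fit

# rest wavelengths of the [OIII] doublet in Angstroms
W4959, W5007 = 4958.91, 5006.84

def oiii_doublet(wave, flux5007, cen5007, sigma, c0, c1):
    """[OIII]4959,5007: fixed separation, 1:2.99 ratio, shared width."""
    cen4959 = cen5007 * (W4959 / W5007)  # tie the doublet separation
    line5007 = flux5007 * np.exp(-0.5 * ((wave - cen5007) / sigma) ** 2)
    line4959 = (flux5007 / 2.99) * np.exp(-0.5 * ((wave - cen4959) / sigma) ** 2)
    return line5007 + line4959 + c0 + c1 * wave

def fit_doublet(wave, flux, z):
    """Fit the doublet, starting the centroid at the galaxy rest frame."""
    p0 = [flux.max(), W5007 * (1 + z), 3.0, np.median(flux), 0.0]
    popt, pcov = curve_fit(oiii_doublet, wave, flux, p0=p0)
    return popt, np.sqrt(np.diag(pcov))

def intrinsic_fwhm(fwhm_obs, fwhm_instr=6.0):
    """Subtract the instrumental FWHM (6.0 A for G430L) in quadrature."""
    return np.sqrt(max(fwhm_obs ** 2 - fwhm_instr ** 2, 0.0))
```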
In subsequent analysis, we only consider _total_ line fluxes -- including all Gaussian components used -- rather than fluxes from individual components (i.e. potentially representing outflowing and quiescent gas). This was done because of the low spectral resolutions of the G430L and G750L gratings, which made it challenging to separate different kinematic components in cases where lines are heavily blended. Nonetheless, in order to improve the accuracy of the fits to the weaker emission lines and blends in the spectra, we used the kinematics (velocity shifts and widths) derived from fits to the [OIII] doublet in each aperture to constrain the fits to the other key diagnostic lines used in our analysis, such as H\(\beta\), H\(\gamma\), [OIII]\(\lambda\)4363, [OII]\(\lambda\lambda\)3726,3729, [OII]\(\lambda\lambda\)7319,7331, [SII]\(\lambda\lambda\)4068,4076, [SII]\(\lambda\lambda\)6717,6731, [ArIV]\(\lambda\lambda\)4711,4740 and HeII\(\lambda\)4686. We found that this procedure produced acceptable fits to these lines, including the transauroral [SII]\(\lambda\lambda\)4068,4076 and [OII]\(\lambda\lambda\)7319,7331 doublets. However, for closely spaced doublets such as [OII]\(\lambda\lambda\)3726,3729, the low spectral resolution meant that we did not resolve individual lines, and so we modelled the _total_ doublet profile as a single emission line during the fitting process.

Figure 3: The G430L grating spectrum for Aperture 2 of NGC 1068 (Figure 2). Key emission lines that are used in our analysis are labelled with dotted lines. Note that, for presentation reasons, the limit on the flux axis has been chosen so that fainter lines can be clearly seen; as a result, the peak of the [OIII]\(\lambda\)5007 line is not visible.

## 4 Analysis of the STIS spectra

### Transauroral line diagnostics

In order to provide estimates of the electron densities of the warm ionised gas in NGC 1068 and NGC 4151, we make use of a technique first described by Holt et al. (2011) which requires measurement of the transauroral [SII] and [OII] ratios:

\[TR([OII])=F(3726+3729)/F(7319+7331),\]
\[TR([SII])=F(4068+4076)/F(6717+6731).\]

In this technique, measured TR([OII]) and TR([SII]) ratios are compared to those expected from photoionisation modelling in order to simultaneously derive electron densities and reddenings. This has several important advantages as a density diagnostic over commonly-used, traditional methods. First, these lines have higher critical densities (\(n_{\rm e}\sim 10^{5.5}\) cm\({}^{-3}\)) than the traditional [OII](3726/3729) and [SII](6717/6731) density diagnostics, which are only sensitive up to \(n_{\rm e}\sim 10^{3.5}\) cm\({}^{-3}\). Furthermore, the TR method uses the ratios of the _total_ line fluxes of widely-separated emission-line doublets, unlike the traditional [SII] and [OII] techniques, which rely on the flux ratios of lines _within_ the doublets. This means that the TR ratios are less susceptible to uncertainties from fit degeneracy resulting from the larger velocity widths (as often seen for outflowing gas) and low spectral resolutions (as for our STIS spectra) that lead to blending of line profiles within the doublets. We used the [OIII]-constrained model fits to the TR lines (Section 3.2.4) to measure line fluxes, which were then used to calculate measured TR ratios. The CLOUDY code (version C17.02: Ferland et al. 2017) was then used to generate plane-parallel, single-slab, radiation-bounded models of solar-composition gas with no dust depletion, photoionised by a central source.
We set the ionising continuum of this source to follow a power-law of shape \(F_{\nu}\propto\nu^{-\alpha}\) between 10 \(\mu\)m and 50 keV, with a spectral index of \(\alpha=1.5\). This is close to the average optical to X-ray spectral index measured in radio-quiet AGN (Zamorani et al. 1981; Miller et al. 2011), and is consistent with photoionisation modelling of the emission-line ratios of the extended and nuclear NLRs in various samples of AGN (e.g. Ferland & Netzer 1983, Robinson et al. 1987). We note, however, that the TR ratios are relatively insensitive to the shape of the ionising continuum (see Appendix B in Santoro et al. 2020). We selected an ionisation parameter of log \(U=-3\) (the highest value that reproduced the measured TR ratios) and varied the electron density of the modelled gas in 0.01 dex steps between \(2.00<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<5.00\). We then reddened the modelled TR ratios produced for each electron density value with the \(R_{\nu}=3.1\) CCM89 law, producing a grid of values that we compared to our measured ratios in order to provide simultaneous values of electron density and reddening. The resulting TR grid is shown in Figure 4, and the derived values are given in Table 1. The electron densities measured in this way for NGC 1068 have values in the range \(4.00<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<4.75\), while those for NGC 4151 are approximately an order of magnitude lower (\(3.60<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<4.10\)). This is the first time that densities above \(n_{e}=10^{3.5}\) cm\({}^{-3}\) have been found using the transauroral lines with _spatially-resolved_ observations, and they agree with similarly high electron densities derived using this technique for non-spatially resolved observations of other AGN (e.g. Holt et al. 2011; Rose et al. 2018; Santoro et al. 2018; Spence et al. 2018; Davies et al. 2020; Speranza et al. 2022). Importantly, the densities we find here are above the critical densities of the traditional [OII](3726/3729) and [SII](6717/6731) line ratios (Appendix A), and since we do not separate broad (outflowing) and narrow (quiescent; non-outflowing) components, are likely to be underestimates for the outflowing gas (which is expected to be denser than the quiescent gas: e.g. Villar-Martin et al. 1999; Holden et al. 2023). The reddenings that we measure are relatively modest and in the range 0.05 \(<\) E(B-V)\({}_{\rm TR}<0.25\) for both objects -- these values were used to deredden our spectra for all further analysis.
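The grid comparison described above can be sketched as follows; the intrinsic model ratios stand in for the CLOUDY output, the mean doublet wavelengths are approximations, and the `extinction` package call is an assumption about available tooling:

```python
import numpy as np
import extinction  # assumed available; CCM89 law

# approximate mean doublet wavelengths (A) for the four TR line groups
W_OII_BLUE, W_OII_RED = 3727.5, 7325.0
W_SII_BLUE, W_SII_RED = 4072.0, 6724.0

def redden_ratio(ratio_intrinsic, w_num, w_den, ebv, r_v=3.1):
    """Apply CCM89 reddening to an intrinsic line ratio F(num)/F(den)."""
    a = extinction.ccm89(np.array([w_num, w_den], dtype=np.float64),
                         r_v * ebv, r_v)
    return ratio_intrinsic * 10.0 ** (-0.4 * (a[0] - a[1]))

def fit_tr_grid(tr_oii_obs, tr_sii_obs, log_ne_grid, tr_oii_mod, tr_sii_mod,
                ebv_grid=np.arange(0.0, 1.0, 0.01)):
    """Find the (log n_e, E(B-V)) grid point closest to the measured ratios.

    tr_*_mod are intrinsic model ratios tabulated against log_ne_grid;
    distances are computed in log space, as in the plotted grid.
    """
    best, best_d2 = None, np.inf
    for ebv in ebv_grid:
        oii = redden_ratio(tr_oii_mod, W_OII_BLUE, W_OII_RED, ebv)
        sii = redden_ratio(tr_sii_mod, W_SII_BLUE, W_SII_RED, ebv)
        d2 = np.log10(oii / tr_oii_obs) ** 2 + np.log10(sii / tr_sii_obs) ** 2
        i = int(np.argmin(d2))
        if d2[i] < best_d2:
            best, best_d2 = (log_ne_grid[i], ebv), d2[i]
    return best  # (log10 n_e, E(B-V))
```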
| Aperture | Distance (arcsec) | Distance (pc) | \(v_{\rm c,a}\) (km s\(^{-1}\)) | FWHM\(_{\rm c,a}\) (km s\(^{-1}\)) | \(v_{\rm c,b}\) (km s\(^{-1}\)) | FWHM\(_{\rm c,b}\) (km s\(^{-1}\)) | \(\log_{10}(n_{e}\,[{\rm cm}^{-3}])\) | E(B-V)\(_{\rm TR}\) | \(T_{e}\) (K) |
|---|---|---|---|---|---|---|---|---|---|
| **NGC 1068** | | | | | | | | | |
| 1 | \(-1.45\) | \(-97\) | \(-828\pm 4\) | \(572\pm 25\) | \(295\pm 40\) | \(1078\pm 96\) | \(4.06^{+0.05}_{-0.06}\) | \(0.16^{+0.04}_{-0.05}\) | \(14300^{+210}_{-1300}\) |
| 2 | \(-0.74\) | \(-50\) | \(-184\pm 7\) | \(1017\pm 31\) | -- | -- | \(4.65^{+0.05}_{-0.04}\) | \(0.05^{+0.04}_{-0.05}\) | \(14400^{+150}_{-1100}\) |
| 3 | \(-0.23\) | \(-15\) | \(-306\pm 3\) | \(662\pm 26\) | \(-5\pm 20\) | \(1770\pm 43\) | \(4.74^{+0.05}_{-0.05}\) | \(0.16^{+0.05}_{-0.04}\) | \(16100^{+1400}_{-1600}\) |
| 4 | 0.10 | 7 | \(95\pm 3\) | \(367\pm 26\) | \(235\pm 8\) | \(1684\pm 34\) | \(4.45^{+0.09}_{-0.09}\) | \(0.17^{+0.08}_{-0.08}\) | \(16000^{+600}_{-1100}\) |
| **NGC 4151** | | | | | | | | | |
| 1 | \(-1.58\) | \(-123\) | \(-172\pm 2\) | \(420\pm 25\) | \(-263\pm 29\) | \(1261\pm 102\) | \(3.68^{+0.08}_{-0.10}\) | \(0.11^{+0.05}_{-0.07}\) | \(16300^{+3400}_{-1800}\) |
| 3 | \(-0.38\) | \(-30\) | \(-392\pm 6\) | \(04\pm 28\)\(^{a}\) | \(-356\pm 3\) | \(1065\pm 26\) | \(4.04^{+0.07}_{-0.15}\) | \(0.13^{+0.05}_{-0.06}\) | \(21000^{+3800}_{-2100}\) |
| 5 | 0.48 | 37 | \(34\pm 1\) | \(234\pm 24\) | \(121\pm 11\) | \(1013\pm 44\) | \(3.94^{+0.10}_{-0.10}\) | \(0.15^{+0.08}_{-0.08}\) | \(17300^{+5200}_{-2400}\) |
| 6 | 1.02 | 80 | \(40\pm 1\) | \(307\pm 25\) | \(126\pm 8\) | \(768\pm 35\) | \(3.75^{+0.08}_{-0.10}\) | \(0.23^{+0.07}_{-0.07}\) | \(15200^{+3400}_{-1800}\) |

\(^{a}\) The measured width of the _component_ is consistent with the instrumental width, and hence unresolved.

Table 1: [OIII] model parameters (galaxy rest-frame component velocity shift: \(v_{\rm c}\); instrumentally-corrected component velocity width: FWHM\(_{\rm c}\)), distances from the nucleus (in arcseconds and pc), electron densities, reddening values and electron temperatures for each of our apertures for NGC 1068 and NGC 4151. In apertures where there are multiple Gaussian components for the [OIII] models, we label the kinematic parameters for the two components with the subscripts 'a' and 'b'. The densities and reddenings were determined simultaneously using the transauroral line technique (Section 4.1; Figure 4), and the temperatures were determined using the [OIII](5007+4959)/4363 ratios (Section 4.2.1).

### Ionisation states and mechanisms of the warm gas

The relatively low-ionisation transauroral lines must be emitted by radiation-bounded clouds. Therefore it is uncertain how well densities derived from the transauroral ratios would represent the densities of clouds or cloud complexes that have been shock-ionised or have significant matter-bounded components. Furthermore, the model used in the transauroral ratio method assumes radiation-bounded AGN-photoionised clouds, with no contribution from a matter-bounded component or shock-ionisation. Similarly, the multi-component ionisation modelling by Revalski et al.
(2021) -- which has previously been applied to NGC 1068 and NGC 4151 -- uses AGN photoionisation models. Therefore, it is important to investigate the ionisation mechanisms for the gas detected in our STIS slits, which potentially can also give information regarding the outflow acceleration mechanism(s) present.

#### 4.2.1 Electron temperatures

Electron temperatures of the warm ionised phase are expected to be higher for shocked gas than AGN-photoionised gas (e.g. Fosbury et al., 1978; Villar-Martin et al., 1999). Therefore, to provide a first indication of the ionisation mechanisms of the warm ionised gas observed in our apertures, we measured electron temperatures using the (dereddened) [OIII](5007+4959)/4363 emission-line ratio and the PyNeb Python module (Luridiana et al., 2015), taking the electron densities for the apertures to be those derived using the transauroral line technique for both objects (\(3.75<\log_{10}(n_{\rm e}\,[{\rm cm}^{-3}])<4.75\): see Table 1 and Section 4.1). We present the measured electron temperatures in Table 1, which are found to be high (14,300 \(<\) T\({}_{\rm e}\) \(<\) 21,000 K) for every aperture in both objects, with particularly high temperatures (up to T\({}_{\rm e}=21{,}000\) K) being found in the central apertures of NGC 4151. The high electron temperatures that we find in our apertures for both objects may not be fully explainable as being due to AGN-photoionisation of radiation-bounded gas (Fosbury et al., 1978; Binette et al., 1996; Villar-Martin et al., 1999; Holden et al., 2023).
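A minimal PyNeb sketch of the [OIII] temperature calculation just described is shown below; the ratio value is a placeholder, and the `getTemDen` usage follows the PyNeb documentation:

```python
import pyneb as pn

O3 = pn.Atom('O', 3)  # the [OIII] ion

# dereddened (5007 + 4959) / 4363 intensity ratio for one aperture
ratio_obs = 60.0  # placeholder value

# solve for T_e at the TR-derived density of the aperture
t_e = O3.getTemDen(int_ratio=ratio_obs,
                   den=1.0e4,  # cm^-3, from the transauroral technique
                   to_eval='(L(5007) + L(4959)) / L(4363)')
print(f'T_e = {t_e:.0f} K')
```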
#### 4.2.2 Shock-ionisation vs matter-bounded AGN photoionisation

In order to investigate the cause of the high electron temperatures further, we produced the [OIII](5007/4363) vs HeII/H\(\beta\) diagnostic diagram developed by Villar-Martin et al. (1999), as shown in Figure 5. The radiation-bounded photoionisation models shown here are the same as those used for the TR ratio grid in Section 4.1 (Figure 4), albeit for an electron density of \(n_{\rm e}=10^{4}\) cm\({}^{-3}\), varying ionisation parameters (between \(-3.5<\log_{10}U<-2.0\)), and two values of spectral index (\(\alpha=1.0,1.5\)). The pure shock and precursor (pre-shock) models are taken from the MAPPINGS III library presented by Allen et al. (2008), with varying shock velocities in the range \(100<v_{\rm shock}<1000\) km s\({}^{-1}\) and magnetic parameters of \(B/\sqrt{n}=2.4\) \(\mu\)G cm\({}^{3/2}\) for a solar-composition pre-shock gas with a density of \(n=10^{2}\) cm\({}^{-3}\). The magnetic parameters were chosen to cover a reasonable range of values expected in the ISM (Dopita & Sutherland, 1995), in addition to being close to the magnetic parameters near equipartition (\(B/\sqrt{n}\sim 3.23\) \(\mu\)G cm\({}^{3/2}\): Allen et al., 2008). Note that we do not use the standard 'BPT' diagrams (Baldwin et al., 1981) to investigate the ionisation of the gas, because some of the lines involved in those diagrams (such as H\(\alpha\) and [NII]\(\lambda\lambda\)6548,6583) are strongly blended in our apertures due to the outflow kinematics and relatively low spectral resolution, and therefore are affected by major fit degeneracies. In Figure 5, we also plot [OIII](5007/4363) and HeII/H\(\beta\) as functions of A\({}_{\rm M/I}\), the ratio of the solid angles subtended by matter-bounded clouds and radiation-bounded clouds, from modelling by Binette et al. (1996). This ratio allows us to estimate the relative contribution of matter-bounded clouds and radiation-bounded clouds in our apertures. The modelling by Binette et al. (1996) assumes solar-metallicity gas, with an ionising source spectral index of \(\alpha=-1.3\), an ionisation parameter of log \(U=-1.4\), and a density of \(n_{\rm MB}=50\) cm\({}^{-3}\). The radiation-bounded clouds are ionised by UV photons which have passed through the matter-bounded component, thus the shape of the ionising spectrum reaching the radiation-bounded clouds has changed relative to that from the source -- the parameters of the radiation-bounded clouds are determined using the resulting ionising spectrum and by assuming that the clouds have fixed pressures. Due to the continuum underlying the H\(\beta\), HeII\(\lambda\)4686, and [OIII]\(\lambda\)4363 lines being more complex than that which underlies the [OIII]\(\lambda\lambda\)4959,5007 doublet and transauroral lines, we used an MCMC (Markov Chain Monte Carlo) fitting routine to fit the lines involved in the HeII\(\lambda\)4686/H\(\beta\) and [OIII](5007/4363) ratios in each aperture for both objects -- this was done to ensure that we were not significantly overestimating the flux uncertainties due to blending of spectral lines and the continuum. We used the results of the Gaussian fits described in Section 3.2.4 (determined using least squares optimisation) to these lines as initial starting points for the MCMC routine, which fit the same models (namely one or two Gaussians and a low order polynomial) to the spectra -- taking into account the observational flux uncertainty of the HST data -- with priors chosen to ensure the resulting models were physical (i.e. the line fluxes, mean wavelengths, and line widths must be positive). For each fit, we initialised 500 walkers in a Gaussian distribution around the starting parameters, and used a total of 5000 iterations (including a 1000 iteration 'burn-in' phase). The MCMC fits themselves were run using the emcee Python module (Foreman-Mackey et al. 2013).
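A minimal sketch of this MCMC line-fitting step is given below, using emcee with a Gaussian likelihood and the flat physical priors mentioned in the text; the single-Gaussian-plus-constant model is a simplification of the one- or two-Gaussian-plus-polynomial models actually used:

```python
import numpy as np
import emcee

def model(theta, wave):
    """Single Gaussian emission line on a constant continuum."""
    flux, cen, sigma, cont = theta
    return flux * np.exp(-0.5 * ((wave - cen) / sigma) ** 2) + cont

def log_prior(theta):
    flux, cen, sigma, cont = theta
    # physical priors: positive flux and width (see text)
    return 0.0 if (flux > 0 and sigma > 0) else -np.inf

def log_prob(theta, wave, obs, err):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    resid = (obs - model(theta, wave)) / err
    return lp - 0.5 * np.sum(resid ** 2)

def fit_line(wave, obs, err, p0_ls):
    """p0_ls: least-squares solution used to initialise the walkers."""
    p0_ls = np.asarray(p0_ls, dtype=float)
    nwalkers, ndim = 500, p0_ls.size
    p0 = p0_ls + 1e-3 * (np.abs(p0_ls) + 1e-6) * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                    args=(wave, obs, err))
    sampler.run_mcmc(p0, 5000, progress=False)
    chain = sampler.get_chain(discard=1000, flat=True)  # drop burn-in
    return np.median(chain, axis=0), np.std(chain, axis=0)
```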
Figure 4: Grid of modelled transauroral (TR) [SII] and [OII] line ratios for radiation-bounded gas at different electron densities and reddenings (black joined squares; as modelled with the Cloudy code and the CCM89 extinction curve), and measured line ratios for NGC 1068 (orange circles) and NGC 4151 (blue circles).

From Figure 5, we find clear evidence for significant matter-bounded emission in Apertures 1, 2, and 3 in NGC 1068, implied by high electron temperatures and HeII/H\(\beta>0.4\) (similar ratio values were also measured by Kraemer & Crenshaw 2000b); the approximate ratio of matter-bounded to radiation-bounded clouds is \(\rm A_{M/I}\sim 2\). The difference between the [OIII](5007/4363) ratios measured in the NGC 1068 apertures and those predicted from the Binette et al. (1996) modelling can be explained as due to the models only representing one combination of parameters: it is possible for matter-bounded clouds with different parameters to have similar [OIII](5007/4363) ratios to those found for NGC 1068. Specifically, this ratio would be smaller for higher electron densities than the low density assumed by Binette et al. (1996). Moreover, the presence of matter-bounded emission in these apertures is supported by the strength of high-ionisation emission lines (E\({}_{\rm ion}>\) 100 eV), such as [NeV]\(\lambda\)3426, [FeVII]\(\lambda\)3759, and [FeVII]\(\lambda\)6087, relative to lower-ionisation lines (such as [OIII]) in our STIS spectra. These and other high-ionisation lines were previously identified in the same dataset by Kraemer & Crenshaw (2000a). For Aperture 4 of NGC 1068 (centered slightly above the nucleus: Figure 2), we measure HeII/H\(\beta\) ratios consistent with both matter-bounded AGN-photoionisation and shock-ionisation. To further probe the ionisation mechanism of the gas, we also measured the [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 ratio -- which is sensitive to higher ionisation gas -- using the same MCMC fitting routine described earlier. We produced a diagnostic diagram of [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 vs HeII/H\(\beta\) using the same radiation-bounded photoionisation, matter-bounded photoionisation, and shock-ionisation models as used for the [OIII](5007/4363) and HeII/H\(\beta\) diagram (Figure 5), and present this in Figure 6. We find that the values for all of the NGC 1068 apertures are consistent with matter-bounded AGN-photoionisation with \(1<\rm A_{M/I}<2\). This further indicates that the gas in these apertures is matter-bounded and AGN-photoionised, including Aperture 4. With the exception of Aperture 3, the [OIII](5007/4363) vs HeII/H\(\beta\) ratios measured in our NGC 4151 apertures (Figure 5) are consistent with both shock ionisation and radiation-bounded AGN photoionisation (assuming a relatively flat spectral index of \(\alpha=1.0\) and log \(U\sim-2.0\)). However, from the [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 vs HeII/H\(\beta\) diagram (Figure 6), it can be seen that the measured ratios for NGC 4151 are not consistent with pure shock-ionisation alone: if the gas is shock ionised, then a contribution from the precursor component is required. Alternatively, the gas in these apertures may have pure radiation-bounded AGN photoionisation, however we highlight that this requires a relatively flat spectral index (\(\alpha=1.0\)), and/or higher ionisation parameters (\(-3.0<\) log \(U<-2.0\)) and densities (\(n_{e}>10^{5}\) cm\({}^{-3}\)) than can explain our transauroral line ratios (Section 4.1). Ultimately, it is not possible to determine unambiguously the true, dominant ionisation mechanism of the gas in our NGC 4151 apertures with the diagnostic features that are available in our data.

Figure 5: [OIII](5007/4363) vs HeII/H\(\beta\) diagnostic diagram (Villar-Martin et al. 1999), used to distinguish between radiation-bounded AGN-photoionisation, matter-bounded AGN-photoionisation and shock-ionisation. The black markers show the predicted line ratios from radiation-bounded Cloudy modelling (see Section 4.1) for solar-composition gas with a density of \(n_{e}=10^{4}\) cm\({}^{-3}\) and varying ionisation parameters (log \(U\); labelled) and spectral indices (squares: \(\alpha=1.5\); diamonds: \(\alpha=1.0\)). The solid red grid shows the line ratios predicted from shock modelling (Allen et al. 2008) for solar-composition gas with a pre-shock density of \(n=10^{2}\) cm\({}^{-3}\) and magnetic parameters of \(B/\sqrt{n}=2.4\) \(\mu\)G cm\({}^{3/2}\), with lighter regions on the plot corresponding to lower shock velocities.
The purple dashed lines show the predicted emission from the precursor gas, which has not yet passed through (but is photoionised by) the shock, and the dotted green line shows the line ratios expected for different ratios of matter-bounded and radiation-bounded clouds (\(\rm A_{M/I}\), labelled and marked with green circles) from modelling by Binette et al. (1996). Observed line ratios for each aperture are shown in orange for NGC 1068 and blue for NGC 4151, with the aperture number annotated.

Figure 6: [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 vs HeII/H\(\beta\) diagnostic diagram -- both ratios are sensitive to the presence of significant matter-bounded components. The line and marker scheme is the same as Figure 5. The line ratios measured in our NGC 1068 apertures are located in the matter-bounded photoionisation region of the diagram (corresponding to \(1<\rm A_{M/I}<2\); consistent with Figure 5), whereas the NGC 4151 line ratios fall in the shock/precursor/radiation-bounded AGN photoionisation region.

#### 4.2.3 The viability of shock-ionisation

In order to further investigate the viability of shocks as the dominant ionisation mechanism along our slits for NGC 1068 and NGC 4151, we compared our measured H\(\beta\) fluxes to those expected from shock models -- a technique presented by Baron et al. (2017). First, we converted our measured (and dereddened) H\(\beta\) fluxes (\(F_{\rm H\beta}\)) into H\(\beta\) luminosities using the luminosity distances (\(D_{L}\)) for each galaxy. The resulting luminosities were then converted into luminosities per surface area using the aperture sizes in arcseconds (i.e. the aperture width multiplied by the slit width) and the spatial scales for each object (0.067 kpc/arcsecond and 0.078 kpc/arcsecond, respectively). We then compared the measured luminosities per surface area to those expected from the MAPPINGS III shock models of pre-shock density \(n=10^{2}\) cm\({}^{-3}\) (corresponding to the densities measured in our apertures, assuming a compression factor of 10: Sutherland & Dopita 2017) and magnetic parameters \(B/\sqrt{n}=2.4\) \(\mu\)G cm\({}^{3/2}\). From this comparison, we find that the H\(\beta\) luminosities per surface area, as measured in each aperture for NGC 1068 (\(4.9\times 10^{-3}<L_{\rm H\beta}<2.2\times 10^{-2}\) erg s\({}^{-1}\) cm\({}^{-2}\)) and NGC 4151 (\(2.2\times 10^{-3}<L_{\rm H\beta}<4.6\times 10^{-3}\) erg s\({}^{-1}\) cm\({}^{-2}\)), can be accounted for by shocks with velocities \(v_{\rm shock}>425\) km s\({}^{-1}\) and \(v_{\rm shock}>225\) km s\({}^{-1}\) respectively. In both cases, the outflow velocities for our apertures (Section 4.4.1; Table 2) are above these required velocities. This demonstrates that shock-ionisation _could_ feasibly produce the recombination line fluxes measured in both objects, however this alone does not necessarily confirm the ionisation mechanism. Note that here we assumed a gas covering factor of unity relative to the shock (i.e. that the emitting-gas covers the entire area of the shock within each aperture), which may not be the case in reality. If this covering factor is in fact much lower than unity, then a larger shock area or higher shock velocities would be needed to produce the same H\(\beta\) luminosity.
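The surface-brightness comparison of Section 4.2.3 can be sketched as follows; the input flux in the example call and the final comparison against the tabulated MAPPINGS III values are placeholders:

```python
import numpy as np

KPC_CM = 3.0857e21  # cm per kpc

def hbeta_lum_per_area(f_hbeta, d_l_mpc, ap_width_as, slit_width_as,
                       scale_kpc_as):
    """H-beta luminosity per unit emitting area (erg s^-1 cm^-2).

    f_hbeta     : dereddened H-beta flux (erg s^-1 cm^-2)
    d_l_mpc     : luminosity distance in Mpc
    ap_width_as : aperture width along the slit (arcsec)
    """
    d_l_cm = d_l_mpc * 1e3 * KPC_CM
    lum = 4.0 * np.pi * d_l_cm ** 2 * f_hbeta            # erg s^-1
    area_cm2 = (ap_width_as * scale_kpc_as * KPC_CM) * \
               (slit_width_as * scale_kpc_as * KPC_CM)   # aperture area
    return lum / area_cm2

# e.g. an NGC 1068 aperture: 0.3 arcsec wide, 0.1 arcsec slit,
# 0.067 kpc/arcsec scale; the flux value here is illustrative only
l_area = hbeta_lum_per_area(f_hbeta=1e-14, d_l_mpc=13.0,
                            ap_width_as=0.3, slit_width_as=0.1,
                            scale_kpc_as=0.067)
# compare l_area to the tabulated MAPPINGS III H-beta values vs v_shock
```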
### The high-ionisation gas in NGC 1068

The relative strengths of the high ionisation (E\({}_{\rm ion}>100\) eV) lines detected in several of our apertures for NGC 1068 indicate the presence of matter-bounded clouds, and therefore may play an important role in the structure of the cloud complexes present in the NLR. Determining the physical conditions of this high-ionisation component is therefore necessary. To this end, we measured the [FeVII](6087/3759) and [NeV]\(\lambda\)3426/[FeVII]\(\lambda\)6087 emission-line ratios, which are sensitive to the density and ionisation parameter of the high-ionisation gas. These ratios were calculated using the measured line fluxes of the lines in the ratios, which were themselves determined using the same MCMC fitting method described in Section 4.2. We present the [FeVII](6087/3759) vs [NeV]\(\lambda\)3426/[FeVII]\(\lambda\)6087 diagnostic diagram (see Rose et al. 2011) with our measured line ratios for the NGC 1068 apertures in Figure 7; a CLOUDY radiation-bounded photoionisation grid for a solar metallicity, plane-parallel, single-slab cloud of varying ionisation parameters (\(-3.5<\log U<-2.0\)) and electron densities (\(5.0<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<8.0\)), and a central ionising source with spectral index \(\alpha=1.5\) (see Appendix B), is shown. From this grid, we determine the densities of the high-ionisation gas to be in the range \(6.45<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<8.00\): several orders of magnitude higher than the gas traced by the lower critical-density [OII] and [SII] lines. We discuss the implications of this for the gas structures within our apertures in Section 5.2.1.

### Energetics of the outflowing gas

#### 4.4.1 Outflow kinematics

In order to determine the mass outflow rates, kinetic powers and coupling efficiencies of the gas outflows detected in our STIS spectra, we required measurements of the kinematics of the outflowing gas\(^{3}\). For this purpose, we used the results from detailed kinematic modelling (based on the same HST/STIS spectra used here) of NGC 1068 and NGC 4151 presented by Crenshaw & Kraemer (2000c) and Crenshaw et al. (2000) (hereafter CKN1068 and CKN4151), respectively. We note that, due to the different PAs used and the fact that the outflow geometry likely depends greatly on PA, we do not use the updated kinematic models from Das et al. (2005) and Das et al. (2006).

Footnote 3: We do not use kinematics derived from our [OIII] models due to the relatively low spectral resolution and high instrumental widths of our spectra.

To calculate deprojected velocities, we first derived a universal 'deprojection factor' by dividing the maximum observed velocities (located at the velocity 'turnover' position -- see Crenshaw & Kraemer 2000c and Crenshaw et al. 2000) by the maximum model-deprojected velocities from the CKN1068 and CKN4151 bicone models.

Figure 7: [FeVII](6087/3759) vs [NeV]\(\lambda\)3426/[FeVII]\(\lambda\)6087 diagnostic diagram, sensitive to the spectral index, ionisation parameter, and electron density of the gas. The grey grid was generated using CLOUDY for radiation-bounded AGN-photoionisation with a spectral index of \(\alpha=1.5\) (see Appendix B) and varying electron densities and ionisation parameters: dashed lines connect points of constant density (labelled), and dotted lines connect points of constant ionisation parameter (also labelled). The measured ratio values in our NGC 1068 apertures are shown in orange and labelled. From this diagram, the high-ionisation gas is found to have densities in the range \(6.45<\log_{10}(n_{e}\,[{\rm cm}^{-3}])<8.00\).
We then took the highest observed (projected) velocity at the position of each aperture, and divided these velocities by our determined deprojection factor to give the maximum deprojected outflow velocity in each aperture. We label the deprojected outflow velocities as \(v_{\rm out}\), and give their values in Table 2.

#### 4.4.2 Mass outflow rates, kinetic powers and coupling efficiencies

We used the H\(\beta\) luminosities to determine masses for the warm ionised gas in each aperture with

\[M_{\rm ion}=\frac{L({\rm H}\beta)\,m_{\rm p}}{\alpha^{\rm eff}_{{\rm H}\beta}\,h\nu_{{\rm H}\beta}\,n_{\rm e}}, \tag{1}\]

where \(M_{\rm ion}\) is the total mass of the warm ionised gas, \(m_{\rm p}\) is the proton mass, \(\alpha^{\rm eff}_{{\rm H}\beta}\) is the Case B recombination coefficient for H\(\beta\) (taken to be \(1.61\times 10^{-14}\) cm\({}^{3}\) s\({}^{-1}\) for a gas of density \(n_{\rm e}=10^{4}\) cm\({}^{-3}\) and temperature \(T_{\rm e}=20{,}000\) K; Osterbrock & Ferland 2006) and \(\nu_{{\rm H}\beta}\) is the frequency of the H\(\beta\) line. Assuming that the derived masses (estimated using the total line fluxes) are dominated by outflowing gas, we combined them with the aperture crossing time to calculate mass outflow rates

\[\dot{M}_{\rm out}=\frac{M_{\rm ion}v_{\rm out}}{\Delta R}, \tag{2}\]

where \(v_{\rm out}\) is the outflow velocity from the CKN1068 and CKN4151 models, and \(\Delta R\) is the aperture width. Kinetic powers were estimated from the mass outflow rates using

\[\dot{E}_{\rm kin}=\frac{1}{2}\dot{M}_{\rm out}v_{\rm out}^{2}. \tag{3}\]

Finally, the ratio of the kinetic power to the bolometric AGN luminosity (\(L_{\rm bol}\)) was taken to estimate coupling efficiencies for each aperture:

\[\epsilon_{\rm f}=\frac{\dot{E}_{\rm kin}}{L_{\rm bol}}. \tag{4}\]

NGC 1068 is estimated to have a bolometric luminosity in the range \(0.4<L_{\rm bol}<4.7\times 10^{38}\) W (Woo & Urry, 2002; Alonso-Herrero et al., 2011; Lopez-Rodriguez et al., 2018; Gravity Collaboration et al., 2020), of which we take the lowest value to ensure higher estimates of coupling efficiencies and thus determine the maximum potential impact of the outflowing gas on the host galaxy. For NGC 4151, we took the bolometric luminosity to be \(L_{\rm bol}=1.4\times 10^{37}\) W (Kraemer et al., 2020). We present our derived mass outflow rates, kinetic powers and coupling efficiencies for both cases in Table 2. For NGC 1068, our estimates are less than the maximum values determined from photoionisation modelling by Revalski et al. (2021) (\(\dot{M}_{\rm out}=9.0\pm 1.13\) M\({}_{\odot}\) yr\({}^{-1}\); \(\dot{E}_{\rm kin}=(5.4\pm 0.5)\times 10^{35}\) W, \(\epsilon_{\rm f}=0.54\pm 0.05\) per cent)\(^{4}\). For NGC 4151, our derived values are similar to the results of photoionisation modelling by Crenshaw et al. (2015) (\(\dot{M}_{\rm out}\sim 3.01\pm 0.45\) M\({}_{\odot}\) yr\({}^{-1}\); \(\dot{E}_{\rm kin}=(4.3\pm 1.0)\times 10^{34}\) W, \(\epsilon_{\rm f}=0.54\pm 0.11\) per cent). Our calculated mass outflow rates for NGC 4151 are also consistent with previous values derived for the warm ionised phase by Storchi-Bergmann et al. (2010) (\(\dot{M}_{\rm out}\approx 2.4\) M\({}_{\odot}\) yr\({}^{-1}\)) and the X-ray emitting gas (\(\dot{M}_{\rm out}\approx 2\) M\({}_{\odot}\) yr\({}^{-1}\): Wang et al., 2011b and Kraemer et al., 2020).

Footnote 4: Crenshaw et al. (2015) and Revalski et al. (2021) assume bolometric luminosities of \(L_{\rm bol}=1\times 10^{38}\) W for NGC 1068 and \(L_{\rm bol}=7.9\times 10^{36}\) W for NGC 4151 when calculating coupling efficiencies.
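For reference, a minimal sketch of the energetics chain in equations (1)-(4) is given below, working in SI units; the input values in the example call are illustrative only, and the recombination coefficient and H\(\beta\) frequency follow the values quoted above:

```python
import numpy as np

# physical constants (SI)
M_P = 1.6726e-27        # proton mass (kg)
H_PLANCK = 6.626e-34    # Planck constant (J s)
NU_HBETA = 6.17e14      # H-beta frequency (Hz), c / 4861 A
ALPHA_HBETA = 1.61e-20  # 1.61e-14 cm^3 s^-1 converted to m^3 s^-1
M_SUN = 1.989e30        # kg
YEAR = 3.156e7          # s

def outflow_energetics(l_hbeta_w, n_e_m3, v_out_ms, delta_r_m, l_bol_w):
    """Equations (1)-(4): ionised gas mass, outflow rate, power, coupling."""
    m_ion = (l_hbeta_w * M_P) / (ALPHA_HBETA * H_PLANCK * NU_HBETA * n_e_m3)
    mdot = m_ion * v_out_ms / delta_r_m          # kg s^-1
    e_kin_dot = 0.5 * mdot * v_out_ms ** 2       # W
    eps_f = e_kin_dot / l_bol_w
    return (m_ion / M_SUN,                       # M_sun
            mdot / M_SUN * YEAR,                 # M_sun per yr
            e_kin_dot, eps_f)

# illustrative inputs: L(Hbeta) = 1e33 W, n_e = 1e4 cm^-3 = 1e10 m^-3,
# v_out = 1000 km/s, aperture width 30 pc, L_bol = 4e37 W
vals = outflow_energetics(1e33, 1e10, 1e6, 30 * 3.086e16, 4e37)
```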
For NGC 1068, our estimates are less than the maximum values determined from photoionisation modelling by Revalski et al. (2021) (\(\dot{M}_{\rm out}=9.0\pm 1.13\) M\({}_{\odot}\)yr\({}^{-1}\); \(\dot{E}_{\rm kin}=(5.4\pm 0.5)\times 10^{35}\) W, \(\epsilon_{\rm f}=0.54\pm 0.05\) per cent)4. For NGC 4151, our derived values are similar to the results of photoionisation modelling by Crenshaw et al. (2015) (\(\dot{M}_{\rm out}\sim 3.01\pm 0.45\) M\({}_{\odot}\)yr\({}^{-1}\); \(\dot{E}_{\rm kin}=(4.3\pm 1.0)\times 10^{34}\) W, \(\epsilon_{\rm f}=0.54\pm 0.11\) per cent). Our calculated mass outflow rates for NGC 4151 are also consistent with previous values derived for the warm ionised phase by Storchi-Bergmann et al. (2010) (\(\dot{M}_{\rm out}\approx 2.4\) M\({}_{\odot}\)yr\({}^{-1}\)) and the X-ray emitting gas (\(\dot{M}_{\rm out}\approx 2\) M\({}_{\odot}\)yr\({}^{-1}\): Wang et al., 2011b and Kraemer et al., 2020). Footnote 4: Crenshaw et al. (2015) and Revalski et al. (2021) assume bolometric luminosities of \(L_{\rm bol}=1\times 10^{38}\) W for NGC 1068 and \(L_{\rm bol}=7.9\times 10^{36}\) W for NGC 4151 when calculating coupling efficiencies. For NGC 1068, the mass outflow rates for the warm-ionised phase are much below those of the cold molecular gas at a similar distance from the nucleus (i.e. traced by CO, HCN; \(T\sim\)100 K): Garcia-Burillo et al. (2014) derive a mass outflow rate of \(\dot{M}_{\rm out}=63^{+21}_{-37}\) M\({}_{\odot}\)yr\({}^{-1}\) within the \(r\sim\)200 pc circumnuclear disk (CND) of NGC 1068. This indicates that most of the outflowing mass may be present in the colder gas phases, as has been found for other objects (see Ramos Almeida et al., 2019 and Holden et al., 2023). ## 5 Discussion From our analysis of archival STIS spectra of the central regions (\(r<160\) pc) of NGC 1068 and NGC 4151, we find evidence for dense (\(10^{3.6}\) cm\({}^{-3}\)\(<n_{\rm e}\)\(\leq 10^{4.8}\) cm\({}^{-3}\)) gas that shows line ratios consistent with matter-bounded AGN-photoionisation in the case of NGC 1068, and shock-ionisation (with precursor gas ionisation) or radiation-bounded AGN-photoionisation in the case of NGC 4151. Furthermore, we find that the measured H\(\beta\) luminosities could be explained as being due to shock-ionisation for both objects, assuming a shock covering factor of unity. In both objects, we find coupling efficiencies that are close to the lowest value required by models of galaxy evolution; however, these are likely underestimates. In this section, we discuss the implications of these results for the dominant ionisation and acceleration mechanisms of the gas seen in our slits, compare our results to past work on these two well-studied objects, and investigate the impact on the density diagnostic techniques used. Finally, we place our results in a broader context by comparing with those from a similar study of the nearby Seyfert 2 galaxy IC 5063. ### The outflow ionisation and acceleration mechanisms in the NLRs of NGC 1068 and NGC 4151 To determine the true impact of the outflowing gas on the host galaxies, quantitative comparison of observations to theoretical modelling is needed. However, modelling of both jet-ISM interactions (e.g. Mukherjee et al., 2018; Audibert et al., 2023) and AGN radiation-pressure-driven outflows (e.g. Crenshaw and Kraemer, 2000c; Meena et al., 2023) is able to explain outflow kinematics in different objects. In order to enable accurate future comparisons to theoretical models and therefore accurately quantify the impact of the outflows in NGC 1068 and NGC 4151 -- which have variously been argued to be radiatively-accelerated (Crenshaw et al., 2000; Crenshaw and Kraemer, 2000c; Das et al., 2005, 2006; Revalski et al., 2021; Meena et al., 2023) and jet-accelerated (Capetti et al., 1997; Axon et al., 1998; May and Steiner, 2017; May et al., 2020) -- the dominant outflow acceleration mechanisms in these objects need to be robustly identified. #### 5.1.1 Matter-bounded ionisation and the acceleration mechanism in NGC 1068 It has previously been proposed that the outflows in NGC 1068 are driven via radiation pressure (Kraemer and Crenshaw, 2000a; Das et al., 2006; Revalski et al., 2021; Meena et al., 2023), instead of via shocks induced by the radio jet colliding with the ISM within the bicone.
While we do not separate the outflowing gas from the quiescent gas in this work, our results are consistent with this mechanism: we find evidence for matter-bounded AGN-photoionisation of the warm-ionised gas in the form of simultaneously high [OIII] temperatures (Table 1; \(T_{\rm e}\sim\)15,000 K; corresponding to [OIII](5007/4363) \(<60\)) and line ratios (HeII\(\lambda\)4686/H\(\beta\)\(>\) 0.4: Figure 5; [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 \(>\) 1.0: Figure 6) within a 134 pc radius from the nucleus in the NE cone along the radio axis, consistent with radiative acceleration. However, it is _possible_ that the outflowing gas has been shock-ionised and accelerated by the jet, but has subsequently cooled and then been reionised by the AGN (e.g. as in Holden et al. 2023). Spatially-resolved, high spectral-resolution observations are needed to further investigate this situation by separating the emission from the outflowing and quiescent gas, and then determining the ionisation and excitation mechanisms of each kinematic component. In addition, comparing the electron densities of the outflowing and non-outflowing gas may reveal signs of shock compression, which is expected to be a factor of \(\sim\)4-100 (Sutherland & Dopita 2017). We note that the outflowing gas appears to be spatially confined to the extent of the radio structure: the broad (FWHM\({}_{\rm v}\)\(>\) 250 km s\({}^{-1}\)) [OIII]\(\lambda\lambda\)4959,5007 emission in our spectra is seen to a maximum radius of \(\sim\) 4.8 arcseconds from the nucleus in the NE cone (as measured from the line profiles of the [OIII] emission that extends beyond the regions covered by our apertures), similar to the maximum radial extent of the NE radio lobe (6.18 arcseconds; 420 pc) measured from radio imaging (e.g. 15 GHz: Wilson & Ulvestad 1987; 5 GHz: Wilson & Ulvestad 1983, Gallimore et al. 1996; 22 GHz: Gallimore et al. 1996; 1.4 GHz: Gallimore et al. 1996, Garcia-Burillo et al. 2014). This is also in agreement with ground-based Fabry-Perot integral field spectroscopy by Cecil et al. (1990) -- which finds no significant velocity deviation from the systemic velocity beyond the radio lobe -- and kinematic modelling by Crenshaw & Kraemer (2000c), Das et al. (2006) and Meena et al. (2023), which find outflows extended up to \(\sim\)5.1 arcseconds from the nucleus5. Furthermore, VLT/MUSE spectroscopy presented by Venturi et al. (2021) shows that the measured [OIII] W70 velocity parameter6 has high values (400 km s\({}^{-1}\) \(<\) [OIII] W70 \(<\) 1200 km s\({}^{-1}\)) between the nucleus and the lobe, out to a radius of 3.6 arcseconds along the bicone axis. Moreover, the NLR molecular CO(3-2) outflows (as seen in ALMA imaging by Garcia-Burillo et al. 2014) decelerate within the radio lobe, at a distance of \(\sim\) 400 pc (\(\sim\)5.7 arcseconds) from the nucleus. Taken together, this shows that the NE cone outflows have a similar extent to the NE radio lobe. This is evidence for the outflows being accelerated by the radio jet, although it does not entirely rule out radiative acceleration. Footnote 5: An [OIII] emission knot in the NLR of NGC 1068, labelled ‘A’ by Meena et al. (2023) and located 7.3 arcseconds from the nucleus (i.e. beyond the radio source), has outflow-like kinematics (200 \(<\) FWHM \(<\) 1000 km s\({}^{-1}\); \(v_{\rm out}\) = 863 km s\({}^{-1}\)). As noted by Meena et al. (2023), this knot lies beyond the expected extent of radiatively-driven outflows.
Regardless, we highlight that the vast majority of the outflows along the radio axis are located at lower radii than the maximum extent of the NE radio lobe. Footnote 6: W70 is defined as the difference between the velocities that contain 85 per cent and 15 per cent of the total flux of the fits to the line profile (see Venturi et al. 2021). #### 5.1.2 Shock-ionisation and acceleration in NGC 4151 Our results for NGC 4151 indicate that the near-nuclear gas along the radio axis may be shock-ionised, since the measured [OIII](5007/4363), HeII/H\(\beta\), and [NeV]\(\lambda\)3426/[NeIII]\(\lambda\)3869 line ratios and H\(\beta\) luminosities are consistent with those expected from a mixture of shock and shock-precursor ionisation (Figures 5 and 6; Section 4.2.3). The radio structure in the NLR of NGC 4151, as seen in low-resolution 1.5-5 GHz VLA radio imaging by Johnston et al. (1982), has a lobe-like component with a centroid 6.43 arcseconds from the nucleus along the radio axis in the NE cone. This structure lies beyond the maximum \(\sim\)4 arcsecond extent of the warm-ionised outflows (Meena et al. 2023; see also Das et al. 2005), and -- as we have argued for the situation in NGC 1068 -- is consistent with the outflows being launched by the radio jet. From HST/PC + HST/WFPC2 imaging, Williams et al. (2017) found higher [OIII]/H\(\alpha\) ratios close to the string of radio knots that are seen in their higher-resolution 1.51 GHz observations (shown here in Figure 1), with the values of this ratio decreasing beyond \(\sim\)4 arcseconds from the nucleus along the radio axis. The authors interpreted this as the radio jet having a contribution to the ionisation of the gas close to the nucleus, but AGN-photoionisation being dominant further out. This is also in agreement with the results from X-ray and optical imaging by Wang et al. (2011a), who propose a mixture of shock-ionisation and AGN-photoionisation in the NLR of NGC 4151. Taken together with the findings of these previous investigations, the results presented here may indicate that the outflows in NGC 4151 have been shock-accelerated and then re-ionised by photons from the AGN, with AGN-photoionisation being dominant further from the nucleus.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Aperture & \(v_{\rm out}\) (km s\({}^{-1}\)) & M\({}_{\rm out}\) (M\({}_{\odot}\)) & \(\dot{M}_{\rm out}\) (M\({}_{\odot}\)yr\({}^{-1}\)) & \(\dot{E}_{\rm kin}\) (W) & \(\epsilon_{\rm f}\) (per cent) \\ \hline \multicolumn{6}{c}{NGC 1068} \\ \hline 1 & -1300 & \((1.7\pm 0.3)\times 10^{5}\) & \(3.7\pm 0.6\) & \((2.0\pm 0.3)\times 10^{35}\) & \((4.9\pm 0.7)\times 10^{-1}\) \\ 2 & -1100 & \((6.3\pm 0.8)\times 10^{4}\) & \(1.6\pm 0.2\) & \((6.1\pm 0.7)\times 10^{34}\) & \((1.5\pm 0.2)\times 10^{-1}\) \\ 3 & -450 & \((7.3\pm 0.9)\times 10^{4}\) & \(1.2\pm 0.1\) & \((7.6\pm 0.9)\times 10^{33}\) & \((1.9\pm 0.2)\times 10^{-2}\) \\ 4 & -150 & \((9.4\pm 2.2)\times 10^{4}\) & \((6.0\pm 1.4)\times 10^{-1}\) & \((4.2\pm 1.0)\times 10^{32}\) & \((1.1\pm 0.2)\times 10^{-3}\) \\ \hline \multicolumn{6}{c}{NGC 4151} \\ \hline 1 & -700 & \((2.5\pm 0.6)\times 10^{5}\) & \(3.7\pm 1.0\) & \((5.8\pm 0.2)\times 10^{34}\) & \((4.1\pm 1.1)\times 10^{-1}\) \\ 3 & -800 & \((7.2\pm 3.0)\times 10^{4}\) & \(3.4\pm 1.4\) & \((6.9\pm 2.9)\times 10^{34}\) & \((4.9\pm 2.0)\times 10^{-1}\) \\ 5 & 800 & \((1.7\pm 0.4)\times 10^{5}\) & \(4.5\pm 1.2\) & \((9.2\pm 2.4)\times 10^{34}\) & \((6.5\pm 1.7)\times 10^{-1}\) \\ 6 & 800 & \((2.9\pm 0.7)\times 10^{5}\) & \(6.9\pm 1.8\) & \((1.4\pm 0.4)\times 10^{35}\) & \((9.9\pm 2.6)\times 10^{-1}\) \\ \hline \end{tabular} \end{table} Table 2: Outflow velocities, outflow masses, mass outflow rates, kinetic powers and coupling efficiencies for the apertures of our STIS spectra of NGC 1068 and NGC 4151. The outflow velocities used to calculate the mass outflow rates and kinetic powers presented here are from the CKN1068 and CKN4151 models (see Section 4.4.1). ### The effect of ionisation mechanisms on density diagnostics The ionisation mechanisms (Section 4.2), electron temperatures (Section 4.2.1), and densities (Sections 4.1 and 4.3) of the warm gas detected in our STIS slits allow us to investigate the structures and conditions of the line-emitting clouds, and therefore to verify the origin of different emission lines and thus the precision of diagnostics which make use of them. For example, the TR density diagnostic (Section 4.1) relies on AGN-photoionisation being dominant, with no significant contribution from a matter-bounded component or shock-ionisation. Since we find evidence for matter-bounded emission in NGC 1068 and potential shock-ionisation in NGC 4151, it is important to investigate the effect of this on derived densities. #### 5.2.1 The impact of matter-bounded photoionisation If the higher ionisation lines are indeed emitted by matter-bounded gas structures in the outflow (as shown by the [OIII] temperatures, HeII/H\(\beta\) ratios, and [FeVII](6087/3759) vs [NeV]\(\lambda\)3426/[FeVII]\(\lambda\)6087 diagram: Sections 4.2 and 4.3), then the transauroral lines cannot be emitted by the same structures. However, it is possible that they are emitted by different clouds within the same cloud complexes, considering that we see these lines with similar profiles in each of our apertures. Alternatively, or perhaps in addition, it is possible that the outer layers of a single cloud are matter-bounded, while the denser core is radiation-bounded (one of the scenarios presented by Binette et al., 1996). In this scenario, the matter-bounded layers may represent lower density gas that was driven away from the ionisation front by the increase in pressure that occurred when the gas structure was first photoionised by the AGN.
However, this is not consistent with our findings: in Section 4.3, we use the [FeVII](6087/3759) and [NeV]\(\lambda\)3426/[FeVII]\(\lambda\)6087 emission-line ratios to determine high-ionisation gas densities of \(6.45<\log_{10}(n_{e}[\mathrm{cm}^{-3}])<8.00\) in our NGC 1068 apertures: significantly higher than that of the lower-ionisation gas. A potential explanation is that the gas that is emitting the high-ionisation [FeVII] and [NeV] lines represents dense fragments of the expanding matter-bounded component: since these lines have high critical densities (\(7.1<\log_{10}(n_{\mathrm{crit}}[\mathrm{cm}^{-3}])<8.5\); Appendix A), they would only be emitted strongly by such dense cloud components. Therefore, given the ionisation energies of the lines (Appendix A), we propose that the [FeVII] and [NeV] lines trace matter-bounded, higher ionisation clouds within the complexes (or edges of individual clouds), and the [OII] and [SII] lines are emitted from radiation-bounded clouds (or cores of individual clouds). In this scenario, much of the [OIII] emission must arise from the matter-bounded regime in order to explain the high electron temperatures that we measure in our NGC 1068 apertures (Section 4.2.1). Hence, given the high density of the high-ionisation gas, it is likely that the gas emitting the [OIII] lines is denser than the gas that is emitting the transauroral lines. This reinforces the need for outflow diagnostics that are sensitive to high (\(>\)10\({}^{3.5}\) cm\({}^{-3}\)) densities. #### 5.2.2 The impact of shock-ionisation Since the gas in our NGC 4151 apertures may be shock-ionised, it is essential to quantify the effect of this on the transauroral ratio density diagnostic. In Appendix C, we plot the TR ratios from shock models over the TR photoionisation diagnostic grid used in Section 4.1, and quantify the impact of shock-ionisation on the TR electron density and reddening values derived from the photoionisation grid. We find that, overall, the effect on the derived density is \(\pm\)0.38 orders of magnitude, and the effect on derived reddenings is E(B-V)\(\pm\)0.13. Crucially, we note that this is much less than the impact of using lower-critical-density techniques (such as the [SII](6717/6731) ratio), and is similar to the effect of varying the parameters of the photoionisation model (Santoro et al., 2020: \(\log(n_{e}[\mathrm{cm}^{-3}])\)\(\pm\)(0.1-0.7); E(B-V)\(\pm\)(0.1-0.2)). In summary, while using the transauroral line method presented by Holt et al. (2011) as a density and reddening diagnostic for shock-ionised gas does incur some uncertainty, the resulting densities are still likely more accurate than those derived from commonly used, traditional methods. Gas that has been shock-ionised by jet-ISM interactions presents a problem for the photoionisation modelling method used by Revalski et al. (2021), as the technique relies on assuming that the material at a given distance from the nucleus is being photoionised by the central AGN engine. In the case of shock-ionisation, the outflows are instead being shock-ionised _locally_ by the jet within the bicone at any given distance from the nucleus, and so any electron densities derived using an assumed ionisation parameter and distance will be incorrect. Revalski et al. (2021) used the standard BPT diagrams (Baldwin et al., 1981) in an attempt to ensure all of the measured line ratios were consistent with AGN-photoionisation.
However, the regions of shock- and AGN-photoionisation in these diagrams overlap considerably; thus, further diagnostics should also be used in order to disentangle the contributions from shocks and photoionisation, such as the [OIII](5007/4363) vs HeII\(\lambda\)4686/H\(\beta\) (Villar-Martin et al., 1999) and [FeII]\(\lambda\)12570/Pa\(\beta\) vs H\({}_{2}\lambda\)21218/Br\(\gamma\) (Rodriguez-Ardila et al., 2005; Riffel et al., 2013; Colina et al., 2015; Riffel et al., 2021; Holden et al., 2023) diagnostic diagrams, and/or the three-dimensional diagram (which makes use of line ratios and velocity dispersion) presented by D'Agostino et al. (2019). Overall, despite the challenges that shock-ionisation and significant matter-bounded photoionisation components present to the transauroral line technique and the Revalski et al. (2021) photoionisation modelling, we argue that these methods are nonetheless more robust density diagnostics than the commonly used [SII](6717/6731) and [OII](3726/3729) ratios. In the case of matter-bounded photoionisation, the [SII]\(\lambda\lambda\)6717,6731 and [OII]\(\lambda\lambda\)3726,3729 lines arise from the same part of the ionisation structure of the cloud as the transauroral lines, meaning they face the same issues as the TR method, while the Revalski et al. (2021) modelling allows for higher-ionisation components, and is therefore a more accurate diagnostic of the overall cloud density. Furthermore, we have established here that using radiation-bounded photoionisation grids to measure the TR densities of shock-ionised gas incurs an uncertainty on the overall density that is much less than that from using lower-critical-density line ratios for high-density (\(n_{e}>10^{3}\) cm\({}^{-3}\)) gas: in the case of NGC 4151 (where there may be some contribution from shock-ionisation), the TR-derived densities are similar to those reported by Revalski et al. (2022) (see also Crenshaw et al., 2015), indicating that both methods still give more precise density determinations than traditional methods, despite some of their underlying assumptions potentially being incorrect. ### Comparison of the TR electron densities to other techniques Using the TR method, we find high electron densities in both objects: 4.00 \(<\log(n_{e}[\mathrm{cm}^{-3}])<\) 4.75 in NGC 1068 and 3.50 \(<\log(n_{e}[\mathrm{cm}^{-3}])<\) 4.10 in NGC 4151 (Section 4.1; Table 1). This agrees with the similarly high densities (\(>10^{3}\) cm\({}^{-3}\)) derived from multi-component photoionisation modelling of both objects presented in Crenshaw et al. (2015) and Revalski et al. (2021) (see also Collins et al. 2009 and Revalski et al. 2022). Crucially, the derived densities from both techniques lie above the sensitivity ranges of the traditional [SII](6717/6731) and [OII](3726/3729) techniques, which are commonly used (either directly or as a basis for assumption) to derive electron densities in studies of the warm-ionised phase (e.g. Nesvadba et al., 2006; Liu et al., 2013; Harrison et al., 2014; Fiore et al., 2017), thus further supporting the need for robust warm-ionised gas electron density diagnostics such as the transauroral line technique and multi-component photoionisation modelling. Considering the traditional [SII](6717/6731) ratio, Kraemer & Crenshaw (2000a) (using the same STIS dataset as this work), Kakkad et al. (2018) and Mingozzi et al. (2019) (using IFU data) derived electron densities of \(n_{e}\sim 10^{3}\) cm\({}^{-3}\) for the outflows in the NLR of NGC 1068.
These [SII]-derived densities are 1-1.5 orders of magnitude lower than those we find using the TR method, and are close to the upper limit of the density range for the [SII] ratio technique (Appendix A: \(n_{\rm crit}\sim 10^{3.5}\) cm\({}^{-3}\)). This provides further evidence that, for gas of electron density \(n_{e}>10^{3.5}\) cm\({}^{-3}\), the [SII](6717/6731) ratio may underestimate the true electron density by more than an order of magnitude. ### The impact of the outflowing gas on the host galaxies Using densities derived from the transauroral line ratios, reddening-corrected recombination line fluxes and kinematics taken from previous modelling, we find mass outflow rates in the range 0.6 \(<\dot{M}_{\rm out}<\) 6.9 M\({}_{\odot}\)yr\({}^{-1}\), and coupling efficiencies in the range 1.1\(\times\)10\({}^{-3}\)\(<\epsilon_{\rm f}<\) 0.99 per cent (Table 2). In many cases, our calculated coupling efficiencies are just above the lower limit required by models of the co-evolution of galaxies and their supermassive black holes (e.g. \(\sim\)0.5-10 per cent: Di Matteo et al., 2005; Springel et al., 2005; Hopkins & Elvis, 2010). It is important to note that there is likely more outflowing material within the bicones that is not covered by our slits (which are only 0.1 arcseconds wide), and that comparisons between coupling efficiencies from models and observations are not straightforward (see Harrison et al., 2018 for further discussion). To properly account for the impact of the warm ionised outflows, detailed studies that make use of robust density diagnostics, separate emission from the outflowing and quiescent gas and, importantly, cover the entire NLRs of both objects, are needed. Moreover, we highlight that assessments of _all_ gas phases -- not just the warm ionised phase -- are needed to robustly assess the _total_ impact of the AGN-driven outflows (Cicone et al., 2018), as the warm ionised gas may represent just a fraction of the total outflowing gas mass at a given radius (e.g. Ramos Almeida et al., 2019; Holden et al., 2023). Therefore, it is likely that the true coupling efficiencies of the total NLR outflows in NGC 1068 and NGC 4151 are higher than we calculate here. ### A tale of three Seyferts: NGC 1068, NGC 4151 and IC 5063 Finally, using the results for the nearby Seyfert 2 IC 5063 presented in Holden et al. (2023) along with the results for NGC 1068 and NGC 4151 that we present here, we can begin to construct a sample of nearby Seyferts with spatially-resolved, detailed studies of their NLR outflows. IC 5063 is a nearby (\(z=0.01131\)) early-type Seyfert 2 galaxy that is seen close to edge-on, with a radio jet propagating almost in the plane of the disk which drives fast (\(v_{\rm out}\) \(>\) 700 km s\({}^{-1}\)) outflows (Morganti et al., 1998; Oosterloo et al., 2000; Morganti et al., 2015; Mukherjee et al., 2018; Holden et al., 2023). These outflows are seen in multiple gas phases, including warm ionised (Morganti et al., 2007; Sharp & Bland-Hawthorn, 2010; Congiu et al., 2017; Venturi et al., 2021; Holden et al., 2023); neutral (Morganti et al., 1998; Oosterloo et al., 2000); warm molecular (Tadhunter et al., 2014; Holden et al., 2023) and cold molecular (Morganti et al., 2013, 2015; Dasyra et al., 2016; Oosterloo et al., 2017). In Holden et al.
(2023), we presented evidence that both the outflowing and quiescent warm ionised gas in IC 5063 has dominant AGN-photoionisation -- even though the outflows show clear signatures of shock acceleration -- and that the different outflow phases may represent a post-shock cooling sequence. We interpreted this situation as the pre-shock gas being AGN-photoionised, with the post-shock gas closest to the AGN being kept in an ionised state by photoionisation. In Figure 8, we add the [OIII](5007/4363) and HeII\(\lambda\)4686/H\(\beta\) ratios for IC 5063 from Holden et al. (2023) to the diagnostic diagram presented in this work (Figure 5). Furthermore, we present [OII](7319+7331)/[OIII]\(\lambda\)5007 and [SII](4068+4076)/H\(\beta\) ratios for IC 5063 (alongside NGC 1068 and NGC 4151) in Appendix D (Figure 10) -- determined using the dataset from Holden et al. (2023) -- and find that they are consistent with radiation-bounded AGN-photoionisation with gas densities \(10^{3}\) cm\({}^{-3}\)\(<n_{e}<10^{4}\) cm\({}^{-3}\) and ionisation parameters in the range \(-3<\) log \(U<-2\), in agreement with the values determined in Holden et al. (2023). It is interesting that the overall differences in ionisation conditions _between_ the three galaxies are significantly larger than the range of ionisation conditions _within_ the galaxies. Our small sample thus shows three distinct cases in the three objects: radiation-bounded AGN-photoionisation in IC 5063, matter-bounded AGN-photoionisation in NGC 1068, and shock-ionisation or radiation-bounded AGN-photoionisation with a relatively flat spectral index and higher ionisation parameters in NGC 4151 -- despite all being classified as Seyferts, the details of the ionisation mechanisms in the objects vary greatly. Figure 8: [OIII](5007/4363) vs HeII\(\lambda\)4686/H\(\beta\) diagnostic diagram (as in Figure 5) with line ratios measured from the STIS spectra of NGC 1068 and NGC 4151 (presented in this work) and from Xshooter spectra of IC 5063 (presented in Holden et al., 2023). The AGN, shock and precursor models are the same as those described in Section 4.2. The three objects each show distinct ionisation conditions, hinting at the complex nature of the NLRs of Seyfert galaxies. This is particularly interesting considering that in all three objects, the outflows detected along the radio axes appear to be spatially-confined to the radio structures (i.e. the outflows do not extend beyond the radio lobes in the NLRs). As argued in Section 5.1, this is consistent with shock-acceleration, although it does not rule out radiative-acceleration. If the outflows in IC 5063, NGC 1068 and NGC 4151 are shock-accelerated, then this would highlight the importance of not deriving information regarding the outflow acceleration mechanisms based solely on the ionisation/excitation mechanisms or kinematics of the gas in NLRs: a full account, involving detailed multi-wavelength observations with multiple diagnostics, is required to properly evaluate the relative contributions of different mechanisms. Despite evidence that the outflows in all three objects are being driven by the radio jet, the densities of the outflowing gas differ by more than an order of magnitude: for IC 5063, we found that the outflowing gas has densities in the range 3.17 \(<\) log(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 3.43, while for NGC 1068 and NGC 4151 we find densities in the ranges 4.00 \(<\) log(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 4.75 and 3.50 \(<\) log(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 4.10 respectively (Table 1).
The reason for this may simply be different pre-shock gas densities in the different objects (assuming the outflows in all three are shock-accelerated): for IC 5063 the pre-shock density is 2.1 \(<\) log(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 2.7; however, without higher velocity-resolution spectra, we are unable to determine the quiescent gas densities in NGC 1068 and NGC 4151. In addition, the differing post-shock densities in the three Seyferts may be due to different cooling conditions behind the shock front. Standard shock-jump conditions predict a compression factor of \(\sim\)4; however, this may be much higher (\(\sim\)100) if the post-shock gas has cooled in pressure equilibrium (Sutherland & Dopita, 2017; Santoro et al., 2018). Moreover, all three objects have low-to-intermediate radio luminosities (1.6\(\times\)10\({}^{22}\)\(<\) L\({}_{1.4\ GHz}\)\(<\) 3.0\(\times\)10\({}^{23}\) W Hz\({}^{-1}\); Table 3) -- again, if the outflows in these Seyferts are shock-accelerated, then this would reinforce the importance of jet-driven shocks as a feedback mechanism in the inner regions of galaxies, even at lower radio luminosities, in agreement with a statistical study of nearby AGN presented by Mullaney et al. (2013). Furthermore, the radio jets in NGC 1068 and NGC 4151 are oriented out of the planes of their galactic disks by \(\sim 45^{\circ}\) and \(\sim 36^{\circ}\) respectively, unlike IC 5063, in which the jet propagates almost directly into the plane of the disk. Therefore, at least within the central few hundred parsecs of the AGN, this would show that inclined jets can still have an impact on the kinematics and ionisation of the NLR, as predicted by recent relativistic hydrodynamic simulations (Mukherjee et al., 2018; Meenakshi et al., 2022), which show that a jet inclined \(\theta_{\rm jet}\)\(\sim 45^{\circ}\) to the galaxy's disk may have a significant effect on the kinematics, density and temperature of the gas within the central few kpc (albeit less so than a jet inclined in the plane of the disk, as is the case in IC 5063). Similar hydrodynamic simulations, specifically tailored to the situations in NGC 1068 and NGC 4151, could thus be used to quantify the impact of their radio jets on the star-forming gas in their NLRs, as well as the impact of inclined kpc-scale jets in general. Ultimately, further observations of NGC 1068 and NGC 4151 are required to decisively determine the outflow acceleration mechanism(s): namely, wide wavelength-coverage spectroscopy (to make available a range of diagnostics) with sufficient velocity resolution to kinematically discriminate between outflowing (post-shock?) and quiescent (pre-shock?) gas. ## 6 Conclusions By analysing archival HST/STIS spectra taken along the radio axes of the inner few hundred parsecs of the NLRs of the prototypical Seyfert galaxies NGC 1068 and NGC 4151, we have found the following. * Using the transauroral line ratio technique, we derive spatially-resolved electron densities of 4.00 \(<\) log\({}_{10}\)(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 4.75 for NGC 1068 and 3.60 \(<\) log\({}_{10}\)(\(n_{e}\)[cm\({}^{-3}\)]) \(<\) 4.10 for NGC 4151. These values are an order of magnitude above those commonly reported and assumed based on traditional density estimates, but are in agreement with the results from alternative diagnostics such as multi-component photoionisation modelling. Overall, our results provide further motivation for the use of the transauroral lines in deriving electron densities of AGN-driven outflows.
* The measured emission-line ratios for the warm ionised gas are consistent with the dominant ionisation mechanisms being matter-bounded AGN-photoionisation in NGC 1068, and shock-ionisation and/or radiation-bounded AGN-photoionisation with a relatively flat spectral index (and/or higher ionisation parameters and lower metallicities) in NGC 4151. * Along the radio axes, the outflows in the northeastern cones of both objects have similar spatial extents to the radio structures -- this is consistent with the outflows in their NLRs being shock-accelerated by the radio jets and reionised by radiation from the AGN, although it does not rule out radiative acceleration. * Applying the transauroral line technique to gas that has dominant shock-ionisation may incur an uncertainty of up to \(\pm\)0.38 orders of magnitude on the derived electron densities, which is still far below the potential order-of-magnitude error incurred when using techniques which are not sensitive to higher density gas. However, care must still be taken when using detailed density diagnostic techniques, as the ionisation mechanism of the gas may alter the results. Therefore, robust ionisation-mechanism diagnostics should be used to verify the validity of the density measurements. * Finally, by combining our findings with those for the nearby Seyfert 2 galaxy IC 5063, we find that the ionisation mechanisms and outflow conditions along the radio axes in the central few hundred parsecs vary significantly between the different objects. Thus overall, our study highlights the necessity of care when deriving information about outflow acceleration mechanisms from the ionisation of the gas, and the need for robust ionisation-mechanism diagnostics with detailed observations. \begin{table} \begin{tabular}{l c c c c c} \hline Object & L\({}_{\rm bol}\) (W) & L\({}_{1.4\ GHz}\) (W Hz\({}^{-1}\)) & P\({}_{\rm jet}\) (W) & \(\theta_{\rm jet}\) & Ionisation mechanism\({}^{a}\) \\ \hline NGC 1068 & 0.4–4.7\(\times\)10\({}^{38}\) & 2.3\(\times\)10\({}^{23}\) & 1.8\(\times\)10\({}^{36}\) (Garcia-Burillo et al., 2014) & \(\sim\) 45\({}^{\circ}\) & Matter-bounded photoionisation \\ NGC 4151 & 1.4\(\times\)10\({}^{37}\) & 1.6\(\times\)10\({}^{22}\) & \(\sim\) 10\({}^{15}\) (Wang et al., 2011) & \(\sim\) 36\({}^{\circ}\) & Photo- and/or shock-ionisation \\ IC 5063 & 7.6\(\times\)10\({}^{37}\) & 3\(\times\)10\({}^{23}\) & 10\({}^{37-38}\) (Mukherjee et al., 2018) & \(\sim\) 5\({}^{\circ}\) & Radiation-bounded photoionisation \\ \hline \end{tabular} \({}^{a}\) Determined with line ratios detected in slits along PA=202\({}^{\circ}\) (NGC 1068), PA=70\({}^{\circ}\) (NGC 4151) and PA=115\({}^{\circ}\) (IC 5063). \end{table} Table 3: Bolometric luminosities, 1.4 GHz radio luminosities, jet powers (P\({}_{\rm jet}\)), jet orientations with respect to the disk (\(\theta_{\rm jet}\)) and ionisation mechanisms detected along the radio axis for NGC 1068, NGC 4151 and IC 5063. ## Acknowledgements We thank the anonymous referee for their helpful comments and suggestions, which improved the clarity of this manuscript. LRH and CNT acknowledge support from STFC. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This work makes use of the Starlink software (Currie et al., 2014), which is currently supported by the East Asian Observatory. For the purposes of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. ## Data Availability The data used in this report are available from the Hubble Legacy Archive (HLA) ([https://hla.stsci.edu/hlaview.html](https://hla.stsci.edu/hlaview.html)) with proposal IDs GTO:5754 (PI Ford) and GTO:5124 (PI Ford) for the HST/WFPC2 [OIII] imaging, and proposal IDs GTO:7573 (PI Kraemer) and GTO:7569 (PI Hutchings) for the HST/STIS spectra.
2303.17153
Understanding the limit sets generated by general iterated function systems on unbounded spaces
In this paper, we reformulate the definition of the iterated function systems (denoted by general IFSs in this paper) and show the existence and uniqueness (in some sense) of the limit sets generated by the general IFSs, to unify the definitions of the limit sets introduced before. Note that the general IFSs are defined on (possibly unbounded) complete metric spaces and we instead assume a ``natural" condition of general IFSs to show the main result. To obtain the main result, we apply techniques in the Banach fixed point theorem to the general IFSs with the ``natural" condition. Besides, we consider an example of general IFSs.
Kanji Inui
2023-03-30T05:20:40Z
http://arxiv.org/abs/2303.17153v1
# Understanding the limit sets generated by general iterated function systems on unbounded spaces ###### Abstract In this paper, we reformulate the definition of the iterated function systems (denoted by general IFSs in this paper) and show the existence and uniqueness (in some sense) of the limit sets generated by the general IFSs, to unify the definitions of the limit sets introduced before. Note that the general IFSs are defined on (possibly unbounded) complete metric spaces and we instead assume a "natural" condition of general IFSs to show the main result. To obtain the main result, we apply techniques in the Banach fixed point theorem to the general IFSs with the "natural" condition. Besides, we consider an example of general IFSs. ## 1 Introduction Since many researchers recognized the importance of fractals and began to study iterated function systems (IFSs for short) in the 1970s, IFSs have been among the most powerful mathematical tools for constructing fractals (often called limit sets), and there are indeed many mathematical papers on the limit sets generated by IFSs. In particular, the limit sets generated by iterated function systems with finitely many mappings (henceforth denoted by autonomous IFSs) have been well studied, and there exist many results not only on properties of the limit sets but also on their dimensions and measures ([13], [2], [30], [9], [4], [17], [18]). Note that the limit sets generated by autonomous IFSs automatically have a nice property (self-similarity), which yields rich results on the limit sets. On the other hand, there exist results on limit sets generated by IFSs that are not autonomous (henceforth denoted by generalized IFSs), and there are at least three general lines of study of the limit sets generated by generalized IFSs. The first line establishes estimates on the dimensions of the limit sets under the assumption that the limit sets exist (Moran sets; see [25], [12], [11], [20], [10]), and these papers also include examples and applications of the results. The second line ensures the existence of the limit sets generated by non-autonomous (1-variable) IFSs (and some generalizations; see [28], [1], [19], [8], [21], [26]), and the papers [28], [1] and [26] also give theorems on the estimation of the dimensions (and measures) of the limit sets. The third line shows the existence of the limit sets generated by generalized IFSs (V-variable fractals) and gives theorems on the estimation of the Hausdorff dimension of the limit sets by using probabilistic techniques ([5], [6], [29]). These studies indicate that there is room to analyze not only the limit sets generated by autonomous IFSs but also the ones generated by generalized IFSs. However, these theories have proposed different definitions of and assumptions on generalized IFSs, from which the results on the estimation of dimensions (and measures) of the limit set generated by each generalized IFS are derived. In addition, it is worth mentioning that the above papers consider generalized IFSs defined on bounded sets (in some sense) or compact sets. Indeed, the papers in the first line consider generalized IFSs under assumptions which allow us to restrict the domains of the IFSs to a (common) bounded set. The papers in the second line consider non-autonomous (1-variable) IFSs on compact sets or bounded sets (in some sense) to obtain some results (in particular, the existence of the limit sets).
The papers in the third line consider generalized IFSs under an assumption which implicitly allows us to restrict the domains of the IFSs to a (common) bounded set (see Remark 4.7). To address this issue, it is important to understand the connection between the definitions of the limit sets generated by the generalized IFSs, and it is natural to reformulate generalized IFSs (henceforth, these reformulated IFSs are denoted by general IFSs) to unify the definitions of the limit sets introduced before. Therefore, the aim of this paper is to present the reformulation of the definition of generalized IFSs, to show the existence and uniqueness (in some sense) of the limit sets generated by the general IFSs, and to unify the definitions of the limit sets introduced before. More precisely, we first introduce IFSs which consist of a family of uniformly contractive mappings and a set of all "infinite words" (called a tree; see Definition 3.1). We next define the projection map for general IFSs by using the "compatible" sequence for a non-autonomous (recursive) iteration generated by "infinite words" and the family of the contractive mappings of the general IFS, under the "natural" condition (see Lemma 3.6 and Definition 3.7). Note that the notion of the projection map is already introduced in the second and third lines (called the address map in the third line), and we show that the projection map in this paper indeed coincides with the projection map introduced in the second and third lines (see Proposition 4.3 and Proposition 4.4). We then construct the family of limit sets for general IFSs by using the projection map for the general IFS and show the uniqueness (in some sense) of the family, under the "natural" condition (see Theorem 4.5, the main result). Note that the idea of the family of the limit sets is already introduced in the first line (where the members are called the basic sets with the Moran structure), and also note that the definition of the family of limit sets is derived from the definition of limit sets in the second line. In addition, we show that the family of the limit sets in this paper is compatible with the definition of basic sets with the Moran structure, and each limit set in the family is expressed as the limit point of the iterations in the sense of the Hausdorff distance. It follows that the family of the limit sets in this paper is a generalization of the limit sets in the third line. We remark that, under the "natural" condition, we do not assume that general IFSs are defined on bounded sets (in some sense) or compact sets. Moreover, to indicate the importance of general IFSs, we give an example of general IFSs which has a connection to the theory of continued fractions, and we obtain a result on the dimension of the limit set generated by the IFS in this example (see Proposition 5.2). Note that, while the theory of continued fractions is often discussed in the theory of autonomous IFSs (see [22], [23]), it is not often discussed in the theory of generalized IFSs (see the recent paper [27] in the setting of non-autonomous IFSs). In addition, while the existence of the limit set generated by the IFS in this example already follows from results in the third line, this limit set has not received much attention since the example does not satisfy the central condition there (the \(V\)-variability).
Before we present the strategy for the main result of this paper, we recall the Hutchinson technique for constructing the limit sets generated by autonomous IFSs (for details, see [17]). Indeed, we first consider a complete metric space \(X\) (which is possibly unbounded) and the set of all non-empty compact subsets of \(X\) with the Hausdorff distance (denoted by \((\mathcal{K}(X),d_{H})\)). Note that \((\mathcal{K}(X),d_{H})\) is complete since \(X\) is complete. In the Hutchinson technique, for an autonomous IFS, we introduce an operator on \((\mathcal{K}(X),d_{H})\) associated with the autonomous IFS (called the Barnsley operator) and show the contractivity of the Barnsley operator on \((\mathcal{K}(X),d_{H})\). By the Banach fixed point theorem, we deduce that there exists a unique fixed point \(K\in\mathcal{K}(X)\) (the unique non-empty compact subset) of the Barnsley operator on \((\mathcal{K}(X),d_{H})\), and this unique fixed point \(K\) is called the limit set for the autonomous IFS (or the self-similar set in this context). In addition, there is a connection between the limit set generated by the autonomous IFS and the set of all infinite sequences of symbols (called the symbolic space). Indeed, recall that each point of the limit set is expressed through some (recursive) iteration generated by the autonomous IFS. Since each iteration is expressed as an infinite sequence of symbols, there is a "nice" map on the symbolic space such that the image of the "nice" map equals the limit set, and the "nice" map is called the code map. Note that the projection map is a generalization of the code map. Also, note that neither compactness nor boundedness of \(X\) is assumed in the above arguments. Later we find that, in the theory of generalized IFSs, the compactness or boundedness (in some sense) of \(X\) is a sufficient condition to obtain the existence and uniqueness of the limit sets (see Theorem 4.5, the main result).
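As a concrete illustration of the Hutchinson technique recalled above, the following Python sketch (our illustration; the paper itself contains no code) iterates the Barnsley operator of the autonomous middle-thirds Cantor IFS \(f_{1}(x)=x/3\), \(f_{2}(x)=x/3+2/3\) on finite point samples of compact subsets of \(\mathbb{R}\), and checks that the Hausdorff distance between iterates started from two different seeds decays geometrically with the contraction constant \(c=1/3\), exactly as the Banach fixed point argument on \((\mathcal{K}(X),d_{H})\) predicts.

```python
# Autonomous middle-thirds Cantor IFS on R: two contractions, c = 1/3.
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]

def barnsley(A):
    """Barnsley operator F(A) = f_1(A) U f_2(A), acting on a finite
    point sample of a compact subset of R."""
    return sorted({f(x) for f in maps for x in A})

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of R."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(a - b) for a in A) for b in B)
    return max(d_ab, d_ba)

# Two very different initial compact sets (finite samples).
A, B = [0.0], [-5.0, 7.0, 40.0]

for n in range(10):
    # d_H contracts by the factor c = 1/3 at every application of F,
    # so both orbits converge to the same limit set (the Cantor set).
    print(f"n = {n:2d}   d_H(F^n A, F^n B) = {hausdorff(A, B):.3e}")
    A, B = barnsley(A), barnsley(B)
```

After a few iterations both point clouds approximate the middle-thirds Cantor set, and the printed distances decay like \((1/3)^{n}\) independently of the seeds; note that this autonomous example does not require \(X=\mathbb{R}\) to be bounded.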
Now, we give the strategy to obtain the main result (Theorem 4.5). To obtain the projection map for general IFSs, we first consider non-autonomous (recursive) iterations generated by a sequence of uniformly contractive mappings on a complete metric space, and we recall results on the existence and uniqueness of the "compatible" sequence for the non-autonomous (recursive) iterations and some properties of the "compatible" sequence. Note that we need the "natural" condition to show the above results by using the techniques in the Banach fixed point theorem (indeed, there is a counterexample; see Example 2.13). By the above argument, we next define the projection map for a general IFS and discuss some properties of the projection map by applying the above results to the non-autonomous (recursive) iterations generated by the general IFS on a complete metric space. In particular, we show the continuity of the projection map for general IFSs. We finally construct the family of the limit sets for a general IFS by the continuity of the projection map. Then we obtain the uniqueness (in some sense) of the family of the limit sets and show that each limit set in the family is expressed as the limit point of the iterations in the sense of the Hausdorff distance (Theorem 4.5, the main result) by the Hutchinson technique. To this end, two points are key to proving the main theorem. Indeed, by using some properties of the Hausdorff distance and the assumption in the definition of the general IFS, we first show the convergence of the limit sets in the sense of the Hausdorff distance with the initial compact set \(\{x\}\in\mathcal{K}(X)\) (\(x\in X\)), and we next show that this convergence in the sense of the Hausdorff distance does not depend on the initial compact set. The rest of the paper is organized as follows. In Section 2, we recall some basic properties of non-autonomous (right) iterations generated by a sequence of contractive mappings on complete metric spaces. In addition, we also present some examples of non-autonomous iterations in this section. In Section 3, we give the definitions of general IFSs and the projection map for general IFSs. Indeed, we first introduce the notion of trees and discuss some properties of the trees. We next introduce the definition of general IFSs on complete metric spaces and the projection map for general IFSs. We also discuss the properties of the projection map for general IFSs in this section. In Section 4, we finally construct the family of limit sets generated by general IFSs and show the uniqueness of the family (in some sense) and properties of the family (the main result). In Section 5, we give an example of general IFSs and discuss its properties. ## 2 Preliminaries In this section, we recall non-autonomous (right) iterations of contractive mappings on a complete metric space in order to consider general IFSs. In Subsection 2.1, we first present the existence and uniqueness (in some sense) of the recursively compatible sequence (see Definition 2.1) under a "natural" condition. Note that the recursively compatible sequence is a generalization of fixed points in the theory of dynamical systems. In Subsection 2.2, we present some examples of non-autonomous iterations of contractive mappings. We first introduce the definition of a sequence of contractive mappings with a uniform contraction constant. Henceforth, \(\mathbb{N}\) is the set of positive integers and \(\mathbb{N}_{0}\) is the set of non-negative integers. **Definition 2.1**.: We say that \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)) is a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\) if \[\rho(f_{j}(x),f_{j}(y))\leq c\ \rho(x,y)\] for all \(j\in\mathbb{N}\) and \(x,y\in X\). Let \(\{f_{j}\}_{j\in\mathbb{N}}\) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\). We say that \(\{x_{m}\}_{m\in\mathbb{N}}\subset X\) is a recursively compatible sequence for \(\{f_{j}\}_{j\in\mathbb{N}}\) if \(f_{m}(x_{m+1})=x_{m}\) for each \(m\in\mathbb{N}\). Note that there exists a unique fixed point \(z_{j}\) of \(f_{j}\) for each \(j\in\mathbb{N}\) since \(X\) is complete. We set \(Z:=\{z_{j}\in X\ |\ j\in\mathbb{N}\}\). Let \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\). Then, for each \(m\in\mathbb{N}\), we call \(\{f_{[m,n]}\}_{n\geq m}\) and \(\{f_{[m,n)}\}_{n>m}\) a non-autonomous (right) iteration of the contractive mappings on \(X\) with a uniform contraction constant \(c\in(0,1)\).
Here, \(f_{[m,n]}\colon X\to X\) and \(f_{[m,n)}\colon X\to X\) are defined by \[f_{[m,n]}:=f_{m}\circ\cdots\circ f_{n}\quad(n\geq m)\quad\text{and}\quad f_{[m,n)}:=f_{m}\circ\cdots\circ f_{n-1}\quad(n>m).\] Also, for each \(m\in\mathbb{N}_{0}\), the mappings \(f_{(m,n]}\colon X\to X\) and \(f_{(m,n)}\colon X\to X\) are defined by \[f_{(m,n]}:=f_{m+1}\circ\cdots\circ f_{n}\quad\text{and}\quad f_{(m,n)}:=f_{m+1}\circ\cdots\circ f_{n-1}\] respectively, whenever the relation between \(m,n\in\mathbb{N}_{0}\) is compatible with the compositions of the contractive mappings \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)). ### Basic properties of non-autonomous iterations on complete metric spaces In this subsection, we recall some results on non-autonomous iterations which are a slight generalization of the Banach fixed point theorem. Note that we do not assume that \(Z\) is bounded unless we mention it in the statements. For the reader's convenience, we give proofs of the results. Henceforth, we sometimes refer to \(x\in X\) as a base point of \(X\) and to \(y\in X\) as a starting point of \(X\). Before we present the main result of this subsection, we give the following lemmas. **Lemma 2.2**.: Let \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\) and let \(z_{j}\in X\) (\(j\in\mathbb{N}\)) be the unique fixed point of \(f_{j}\). If there exists \(x\in X\) such that \(\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty\), then \(\sum_{j\in\mathbb{N}}c^{j}\rho(x^{\prime},z_{j})<\infty\) for each \(x^{\prime}\in X\). Proof.: Let \(x^{\prime}\in X\) and \(x\in X\) with \(\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty\). Then, \[\sum_{j\in\mathbb{N}}c^{j}\rho(x^{\prime},z_{j})\leq\rho(x^{\prime},x)\sum_{j\in\mathbb{N}}c^{j}+\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty\] since \(\sum_{j\in\mathbb{N}}c^{j}<\infty\). Therefore, we have proved our lemma. **Remark 2.3**.: By Lemma 2.2, if there exists \(x\in X\) such that \(\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})=\infty\), then \(\sum_{j\in\mathbb{N}}c^{j}\rho(x^{\prime},z_{j})=\infty\) for each \(x^{\prime}\in X\). Therefore, the property \(\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty\) does not depend on the point \(x\in X\) but only on the iteration \(\{f_{j}\}_{j\in\mathbb{N}}\). **Lemma 2.4** (Collage theorem, Inverse collage theorem [18]).: Let \(f\colon X\to X\) be a contractive mapping on a complete metric space \((X,\rho)\) with a contraction constant \(c\in(0,1)\). Let \(z\in X\) be the unique fixed point of \(f\). Then, for each \(a\in X\), we have \[\rho(f(a),a)\leq(1+c)\ \rho(z,a)\quad\text{and}\quad\rho(z,a)\leq\frac{\rho(f(a),a)}{1-c}.\] We now present an important lemma which is used in Sections 3 and 4. **Lemma 2.5**.: Let \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\) and let \(z_{j}\in X\) (\(j\in\mathbb{N}\)) be the unique fixed point of \(f_{j}\). Suppose that there exists \(x\in X\) such that \[\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty. \tag{2.1}\] Then, for all \(m\in\mathbb{N}\), there exists \(x_{m}\in X\) such that for all \(y\in X\) and \(n\in\mathbb{N}_{0}\), \[\rho(f_{[m,m+n]}(y),x_{m})\leq(1+c)\cdot c^{-m}l_{y}(m+n+1), \tag{2.2}\] where \(l_{y}(n):=\sum_{k=n}^{\infty}c^{k}\rho(y,z_{k})\ \ (n\in\mathbb{N})\).
In addition, \(\{x_{m}\}_{m\in\mathbb{N}}\) has the following properties: (i) \(\{x_{m}\}_{m\in\mathbb{N}}\) is a recursively compatible sequence for \(\{f_{j}\}_{j\in\mathbb{N}}\), and (ii) there exists \(C>0\) such that \(\rho(y,x_{m})\leq C\cdot l_{y}(m)\cdot c^{-m}\) for each \(y\in X\) and \(m\in\mathbb{N}\). Moreover, the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) with the above properties is unique. **Remark 2.6**.: Note that it is necessary to assume that \(\sum_{j\in\mathbb{N}}c^{j}\rho(x,z_{j})<\infty\) for some \(x\in X\) (see Example 2.13). Under this condition, we obtain the inequality (2.2), which shows that for all \(m\in\mathbb{N}\), the sequence \(\{f_{[m,m+n]}(y)\}_{n\in\mathbb{N}_{0}}\) converges to \(x_{m}\in X\) as \(n\) tends to infinity, and that the limit point \(x_{m}\in X\) does not depend on the starting point \(y\in X\) (but the convergence rate \(l_{y}(n)\) does depend on the starting point \(y\in X\)). Proof of Lemma 2.5.: We first show that for each \(m\in\mathbb{N}\), there exists \(x_{m}\in X\) such that the inequality (2.2) holds for all \(y\in X\) and \(n\in\mathbb{N}_{0}\). To show this, let \(m\in\mathbb{N}\) and set \(x_{m}(n):=f_{[m,m+n]}(x)\) (\(n\in\mathbb{N}_{0}\)). Then, by Lemma 2.4, for all \(n_{1},n_{2}\in\mathbb{N}_{0}\) with \(n_{1}<n_{2}\), \[\rho(x_{m}(n_{1}),x_{m}(n_{2})) \leq\sum_{k=n_{1}+1}^{n_{2}}\rho(x_{m}(k-1),x_{m}(k))=\sum_{k=n_{1}+1}^{n_{2}}\rho(f_{[m,m+k-1]}(x),f_{[m,m+k]}(x))\] \[\leq\sum_{k=n_{1}+1}^{n_{2}}c^{k}\rho(x,f_{m+k}(x))\leq c^{-m}(1+c)\sum_{k=n_{1}+1}^{n_{2}}c^{k+m}\rho(x,z_{m+k})\] \[\leq c^{-m}(1+c)\sum_{k=m+n_{1}+1}^{m+n_{2}}c^{k}\rho(x,z_{k}). \tag{2.3}\] It follows that \(\{x_{m}(n)\}_{n\in\mathbb{N}_{0}}\) is a Cauchy sequence in \(X\), and hence there exists \(x_{m}\in X\) such that \(x_{m}(n)\) converges to \(x_{m}\) as \(n\) tends to infinity. Now, let \(y\in X\) and set \(y_{m}(n):=f_{[m,m+n]}(y)\) (\(n\in\mathbb{N}_{0}\)). Note that by Lemma 2.2, \[\sum_{j\in\mathbb{N}}c^{j}\rho(y,z_{j})<\infty.\] By the same argument, we have \[\rho(y_{m}(n_{1}),y_{m}(n_{2}))\leq c^{-m}(1+c)\sum_{k=m+n_{1}+1}^{m+n_{2}}c^{k}\rho(y,z_{k}) \tag{2.4}\] for all \(n_{1},n_{2}\in\mathbb{N}_{0}\) with \(n_{1}<n_{2}\), and there exists \(y_{m}\in X\) such that \(\{y_{m}(n)\}_{n\in\mathbb{N}_{0}}\) converges to \(y_{m}\) as \(n\) tends to infinity. In addition, for all \(n\in\mathbb{N}_{0}\), we have \[\rho(x_{m},y_{m}) \leq\rho(x_{m},x_{m}(n))+\rho(x_{m}(n),y_{m}(n))+\rho(y_{m}(n),y_{m})\] \[\leq\rho(x_{m},x_{m}(n))+c^{n+1}\rho(x,y)+\rho(y_{m}(n),y_{m}).\] It follows that \(x_{m}=y_{m}\) for each \(m\in\mathbb{N}\). Besides, by the inequality (2.4), we have \[\rho(f_{[m,m+n_{1}]}(y),x_{m})=\lim_{n_{2}\to\infty}\rho(y_{m}(n_{1}),y_{m}(n_{2}))\leq c^{-m}(1+c)\sum_{k=m+n_{1}+1}^{\infty}c^{k}\rho(y,z_{k}).\] Therefore, we have proved the inequality (2.2). We next show the properties (i) and (ii) in Lemma 2.5. Indeed, for all \(m\in\mathbb{N}\) and \(n\in\mathbb{N}_{0}\), \[f_{m}(x_{m+1}(n))=f_{m}\circ f_{[m+1,m+1+n]}(x)=f_{[m,m+n+1]}(x)=x_{m}(n+1).\] Since \(x_{m+1}(n)\stackrel{{ n\longrightarrow\infty}}{{\longrightarrow}}x_{m+1}\), \(x_{m}(n+1)\stackrel{{ n\longrightarrow\infty}}{{\longrightarrow}}x_{m}\) and \(f_{m}\) is continuous, we deduce that \(f_{m}(x_{m+1})=x_{m}\) for each \(m\in\mathbb{N}\). In addition, we set \(C:=1+c\ (>0)\) and let \(m\in\mathbb{N}\) and \(y\in X\).
By the inequality (2.4) with \(n_{1}=0\), we have \[c^{m}\rho(y,x_{m})=\lim_{n_{2}\to\infty}c^{m}\rho(y,y_{m}(n_{2}))\leq\lim_{n_{2}\to\infty}c^{m}\left\{\rho(y,f_{m}(y))+\rho(y_{m}(0),y_{m}(n_{2}))\right\}\] \[\leq c^{m}\left\{(1+c)\rho(y,z_{m})+c^{-m}(1+c)\sum_{k=m+1}^{\infty}c^{k}\rho(y,z_{k})\right\}=(1+c)\sum_{k=m}^{\infty}c^{k}\rho(y,z_{k}). \tag{2.5}\] Thus, we have proved the properties (i) and (ii) in Lemma 2.5. We finally show the uniqueness of the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) with the properties (i) and (ii) in Lemma 2.5. Let \(m\in\mathbb{N}\) and let \(\{\tilde{x}_{m}\}_{m\in\mathbb{N}}\) be a sequence in \(X\) with the properties. Then, by the properties (i) and (ii) for \(\{x_{m}\}_{m\in\mathbb{N}}\) and \(\{\tilde{x}_{m}\}_{m\in\mathbb{N}}\), there exists \(\tilde{C}>0\) such that, for each \(n_{1}\in\mathbb{N}_{0}\) and \(y\in X\), \[\rho(x_{m},\tilde{x}_{m}) \leq\rho(f_{[m,m+n_{1}]}(x_{m+n_{1}+1}),f_{[m,m+n_{1}]}(y))+\rho(f_{[m,m+n_{1}]}(y),f_{[m,m+n_{1}]}(\tilde{x}_{m+n_{1}+1}))\] \[\leq c^{-m}\cdot c^{m+n_{1}+1}\rho(y,x_{m+n_{1}+1})+c^{-m}\cdot c^{m+n_{1}+1}\rho(y,\tilde{x}_{m+n_{1}+1})\] \[\leq 2c^{-m}\max\{C,\tilde{C}\}\cdot l_{y}(m+n_{1}+1).\] It follows that \(x_{m}=\tilde{x}_{m}\) for each \(m\in\mathbb{N}\). Hence, we have proved our lemma. As mentioned in Remark 2.6, the rate \(l_{y}\) in Lemma 2.5 depends on the starting point \(y\in X\) of the non-autonomous iterations. To obtain results more closely resembling the Banach fixed point theorem, we need the following lemma. **Lemma 2.7**.: Let \(f_{j}\colon X\to X\) (\(j\in\mathbb{N}\)) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\) and let \(z_{j}\in X\) (\(j\in\mathbb{N}\)) be the unique fixed point of \(f_{j}\). Suppose that there exist \(x\in X\) and \(a\colon\mathbb{N}\to\mathbb{R}\) with \(\sum_{j\in\mathbb{N}}a(j)<\infty\) such that for all \(j\in\mathbb{N}\), \[c^{j}\rho(x,z_{j})\leq a(j). \tag{2.6}\] Then, for all \(m\in\mathbb{N}\), there exists \(x_{m}\in X\) such that for all \(y\in X\) and \(n\in\mathbb{N}_{0}\), \[\rho(f_{[m,m+n]}(y),x_{m})\leq 2\max\{(1+c),\rho(y,x)\}\cdot c^{-m}\cdot l^{\prime}(m+n+1), \tag{2.7}\] where \(l^{\prime}(n):=\max\{c^{n},\sum_{k=n}^{\infty}a(k)\}\) (\(n\in\mathbb{N}\)). In addition, \(\{x_{m}\}_{m\in\mathbb{N}}\) has the following properties: (i) \(\{x_{m}\}_{m\in\mathbb{N}}\) is a recursively compatible sequence for \(\{f_{j}\}_{j\in\mathbb{N}}\), and (ii) for each \(y\in X\), there exists \(C(y)>0\) such that for each \(m\in\mathbb{N}\), \[\rho(y,x_{m})\leq C(y)\cdot\max\left\{1,\ c^{-m}\sum_{k=m}^{\infty}a(k)\right\}\left(=C(y)\cdot l^{\prime}(m)\cdot c^{-m}\right).\] Moreover, the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) with the above properties is unique. **Remark 2.8**.: The inequality (2.7) shows that, for all \(m\in\mathbb{N}\), \(\{f_{[m,m+n]}(y)\}_{n\in\mathbb{N}_{0}}\) converges to \(x_{m}\in X\) as \(n\) tends to infinity with the rate \(l^{\prime}\). In addition, the limit point \(x_{m}\in X\) does not depend on the starting point \(y\in X\), and the starting point affects only the constant in the convergence estimate. However, the convergence is not always exponentially fast (see Example 2.14 and Example 2.15). Proof of Lemma 2.7.: We first show the inequality (2.7). By the assumption (2.6), we have \(\sum_{j=1}^{\infty}c^{j}\rho(x,z_{j})\leq\sum_{j=1}^{\infty}a(j)<\infty\), and hence the condition in Lemma 2.5 is satisfied.
By the inequality (2.2) with \(y=x\), we obtain that

\[\rho(f_{[m,m+n]}(y),x_{m}) \leq\rho(f_{[m,m+n]}(y),f_{[m,m+n]}(x))+\rho(f_{[m,m+n]}(x),x_{m})\]
\[\leq c^{-m}\left\{c^{m+n+1}\cdot\rho(y,x)+(1+c)\sum_{k=m+n+1}^{\infty}a(k)\right\}\]
\[\leq\left((1+c)+\rho(y,x)\right)\cdot c^{-m}\cdot\max\left\{c^{m+n+1},\sum_{k=m+n+1}^{\infty}a(k)\right\}.\]

Therefore, we have proved the inequality (2.7). We next show the properties (i) and (ii) in Lemma 2.7. Since we have shown that the condition in Lemma 2.5 is satisfied, we have already proved the properties (i) and (ii) in Lemma 2.5 (or the inequality (2.5)) for each \(m\in\mathbb{N}\) and \(y\in X\). By the inequality (2.5) with \(y=x\), it follows that

\[c^{m}\rho(y,x_{m}) \leq c^{m}\rho(y,x)+c^{m}\rho(x,x_{m})\leq\rho(y,x)\cdot c^{m}+(1+c)\sum_{k=m}^{\infty}a(k)\]
\[\leq\left(\rho(y,x)+(1+c)\right)\cdot\max\left\{c^{m},\sum_{k=m}^{\infty}a(k)\right\}. \tag{2.8}\]

Thus, we have proved the properties (i) and (ii) in Lemma 2.7. Finally, by the same argument as in the proof of the uniqueness of the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) in Lemma 2.5, we also deduce the uniqueness of the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\). Hence, we have proved our lemma.

**Remark 2.9**.: In Lemma 2.7, if \(a(j)=C^{\prime}\cdot r^{j}\) for some \(r\in[c,1)\) and \(C^{\prime}>0\), then the condition in Lemma 2.5 is satisfied and we obtain the following: for all \(m\in\mathbb{N}\), there exists \(x_{m}\in X\) such that for all \(y\in X\), the sequence \(\{f_{[m,m+n]}(y)\}_{n\in\mathbb{N}_{0}}\) converges to \(x_{m}\) as \(n\) tends to infinity exponentially fast with the rate \(r\). In addition, \(\{x_{m}\}_{m\in\mathbb{N}}\) has the following properties:

(i) \(\{x_{m}\}_{m\in\mathbb{N}}\) is a recursively compatible sequence for \(\{f_{j}\}_{j\in\mathbb{N}}\) and

(ii) \((c/r)^{m}\rho(y,x_{m})\) (\(m\in\mathbb{N}\)) is bounded for each \(y\in X\) (equivalently, for each \(y\in X\), \(c^{m}\rho(y,x_{m})\to 0\) as \(m\to\infty\) exponentially fast with the rate \(r\)).

Moreover, the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) with the above properties is unique. Indeed, by the inequality (2.2) with \(y=x\) and the assumption \(a(j)=C^{\prime}\cdot r^{j}\) (\(j\in\mathbb{N}\)), there exists \(x_{m}\in X\) (\(m\in\mathbb{N}\)) such that for all \(y\in X\) and \(n\in\mathbb{N}_{0}\),

\[\rho(f_{[m,m+n]}(y),x_{m}) \leq\rho(f_{[m,m+n]}(y),f_{[m,m+n]}(x))+\rho(f_{[m,m+n]}(x),x_{m})\]
\[\leq c^{n+1}\cdot\rho(y,x)+c^{-m}\cdot(1+c)C^{\prime}\sum_{k=m+n+1}^{\infty}r^{k}\]
\[\leq\left(\rho(y,x)+\frac{1+c}{1-r}C^{\prime}\left(\frac{r}{c}\right)^{m}\right)\cdot r^{n+1}. \tag{2.9}\]

In addition, since the condition in Lemma 2.5 is satisfied, we have already proved the property (i). Moreover, by the property (ii) in Lemma 2.5 (or the inequality (2.5)) and the assumption \(a(j)=C^{\prime}\cdot r^{j}\) (\(j\in\mathbb{N}\)), it follows that

\[c^{m}\rho(y,x_{m}) \leq c^{m}\rho(y,x)+c^{m}\rho(x,x_{m})\leq c^{m}\rho(y,x)+(1+c)C^{\prime}\sum_{k=m}^{\infty}r^{k}\]
\[\leq\left(\rho(y,x)+C^{\prime}\frac{1+c}{1-r}\right)\cdot r^{m}, \tag{2.10}\]

which shows that \((c/r)^{m}\rho(y,x_{m})\) (\(m\in\mathbb{N}\)) is bounded for each \(y\in X\). Finally, the uniqueness of \(\{x_{m}\}_{m\in\mathbb{N}}\) is deduced by the same argument as in Lemma 2.7 and the inequality (2.10).

**Remark 2.10**.: If \(Z\) is unbounded, then \(\{x_{m}\}_{m\in\mathbb{N}}\) is also unbounded in general, even if we assume the condition in Lemma 2.5, Lemma 2.7 or Remark 2.9 (see Example 2.15).
On the other hand, if \(Z\) is bounded, then the condition in Remark 2.9 is automatically satisfied with \(r:=c\) and \(C^{\prime}:=\sup_{j\in\mathbb{N}}\rho(x,z_{j})\), and we deduce that the unique recursively compatible sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) is bounded, by the property (ii) in Remark 2.9 (or the inequality (2.10)). In particular, if \(X\) is bounded or \(\{f_{j}\}_{j\in\mathbb{N}}\) is autonomous (i.e. \(f_{j}:=f_{1}\) for all \(j\in\mathbb{N}\)), then the condition in Remark 2.9 is automatically satisfied. Note that if \(\{f_{j}\}_{j\in\mathbb{N}}\) is autonomous, then the unique recursively compatible sequence is the constant sequence of the fixed point of \(f_{1}\) (i.e. \(\{x_{m}\}_{m\in\mathbb{N}}=\{z_{1}\}_{m\in\mathbb{N}}\)).

Before we conclude this subsection, we show a corollary of Remark 2.9.

**Corollary 2.11**.: Let \(\{f_{j}\}_{j\in\mathbb{N}}\) be a sequence of contractive mappings on a complete metric space \((X,\rho)\) with a uniform contraction constant \(c\in(0,1)\). For each \(j\in\mathbb{N}\), let \(z_{j}\in X\) be the unique fixed point of \(f_{j}\). Suppose that there exists \(x\in X\) such that

\[\alpha:=\limsup_{j\longrightarrow\infty}\sqrt[j]{\rho(x,z_{j})}<\frac{1}{c}. \tag{2.11}\]

Then, for all \(m\in\mathbb{N}\) and \(r\in\{r>0\mid c\leq r<1,\ \alpha c<r\}\), there exists \(x_{m}\in X\) such that for all \(y\in X\), the sequence \(\{f_{[m,m+n]}(y)\}_{n\in\mathbb{N}_{0}}\) converges to \(x_{m}\) as \(n\) tends to infinity exponentially fast with the rate \(r\). In addition, \(\{x_{m}\}_{m\in\mathbb{N}}\) has the following properties:

(i) \(\{x_{m}\}_{m\in\mathbb{N}}\) is a recursively compatible sequence for \(\{f_{j}\}_{j\in\mathbb{N}}\) and

(ii) \((c/r)^{m}\rho(y,x_{m})\) (\(m\in\mathbb{N}\)) is bounded for each \(y\in X\).

Moreover, the sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) with the above properties is unique.

Proof.: Let \(r\in\{r>0\mid c\leq r<1,\ \alpha c<r\}\). Then, by the assumption (2.11), there exists \(M\in\mathbb{N}\) such that for all \(j\geq M\), we have \(c\cdot\sqrt[j]{\rho(x,z_{j})}<r\), which is equivalent to \(c^{j}\rho(x,z_{j})<r^{j}\). Therefore, we have

\[c^{j}\rho(x,z_{j})\leq C^{\prime}r^{j}\]

for all \(j\in\mathbb{N}\), where \(C^{\prime}:=\max(\{\ c^{j}\rho(x,z_{j})/r^{j}\mid j<M\ \}\cup\{1\})\ (>0)\). It follows that the condition in Remark 2.9 is satisfied and the statement of the corollary holds.

**Remark 2.12**.: The constant \(\alpha\geq 0\) in Corollary 2.11 does not depend on \(x\in X\) if \(Z\) is unbounded. To see this, let \(x,y\in X\) with \(y\neq x\). Note that, since \(Z\) is unbounded, the sequences \(\{\rho(x,z_{j})\}_{j\in\mathbb{N}}\) and \(\{\rho(y,z_{j})\}_{j\in\mathbb{N}}\) are unbounded and we deduce that

\[\alpha:=\limsup_{j\longrightarrow\infty}\sqrt[j]{\rho(x,z_{j})}\geq 1\quad\text{and}\quad\alpha^{\prime}:=\limsup_{j\longrightarrow\infty}\sqrt[j]{\rho(y,z_{j})}\geq 1.\]

We first show that \(\alpha^{\prime}>1\) if \(\alpha>1\).
Note that, since \(\alpha>1\), we may fix \(\beta\in(1,\alpha)\) and choose a subsequence \(n_{k}\in\mathbb{N}\) (\(k\in\mathbb{N}\)) such that \(\sqrt[n_{k}]{\rho(x,z_{n_{k}})}\) converges to \(\alpha\) as \(k\) tends to infinity with the following properties:

\[\sqrt[n_{k}]{\rho(x,z_{n_{k}})}\geq\beta\quad\text{and}\quad\beta^{n_{k}}\geq\rho(x,y)+1\quad\text{for each }k\in\mathbb{N}.\]

Since \(\rho(y,z_{n_{k}})\geq\rho(x,z_{n_{k}})-\rho(x,y)\geq\beta^{n_{k}}-\rho(x,y)\geq 1\) for each \(k\in\mathbb{N}\), we deduce that

\[\frac{1}{n_{k}}\log\rho(x,z_{n_{k}})\leq\frac{1}{n_{k}}\log\left\{\rho(y,z_{n_{k}})+\rho(x,y)\right\}\leq\frac{1}{n_{k}}\log\rho(y,z_{n_{k}})+\frac{1}{n_{k}}\log\left(1+\rho(x,y)\right).\]

Now, let \(N\in\mathbb{N}\). Since there exists \(K\in\mathbb{N}\) such that \(n_{k}\geq N\) for each \(k\in\mathbb{N}\) with \(k\geq K\), we obtain that

\[\frac{1}{n_{k}}\log\rho(x,z_{n_{k}})\leq\sup_{n\geq N}\frac{1}{n}\log\rho(y,z_{n})+\frac{1}{N}\log\left(1+\rho(x,y)\right)\]

for each \(k\geq K\), and by letting \(k\) and then \(N\) tend to infinity, it follows that \(\log\alpha\leq\log\alpha^{\prime}\). Therefore, we have proved that \(\alpha^{\prime}>1\) if \(\alpha>1\). We next show that \(\alpha^{\prime}=1\) if \(\alpha=1\). Indeed, since it is sufficient to show that \(\alpha^{\prime}\leq 1\), we assume that \(\alpha^{\prime}>1\). Then, by switching \(x\) and \(y\) in the above argument, we obtain that \(\alpha>1\), and this contradicts \(\alpha=1\). Thus, we have proved that \(\alpha^{\prime}=1\) if \(\alpha=1\).

On the other hand, the constant \(\alpha\geq 0\) depends on \(x\in X\) if \(Z\) is bounded. In fact, if we consider an autonomous iteration on a complete metric space \(X\) (and assume that \(X\) is not a singleton), then we easily show that \(\alpha=0\) or \(1\), depending on whether \(x=z_{1}\) or not. However, as we have mentioned in Remark 2.10, the condition in Remark 2.9 is automatically satisfied with \(r:=c\) and \(C^{\prime}:=\sup_{j\in\mathbb{N}}\rho(x,z_{j})\) if \(Z\) is bounded. Therefore, by Remark 2.9, we have already obtained a stronger result than Corollary 2.11 if \(Z\) is bounded.

### Examples of the sequence of contractive mappings

In this subsection, we consider some examples of non-autonomous iterations. The following examples show that non-autonomous iterations have properties different from those of autonomous iterations and of non-autonomous iterations on a bounded set. The first example shows that if we do not assume the condition in Lemma 2.5, then the conclusion of Lemma 2.5 does not hold in general.

**Example 2.13**.: Let \(f_{j}\colon\mathbb{R}\to\mathbb{R}\) (\(j\in\mathbb{N}\)) be defined by

\[f_{j}(x):=c(x-a_{j})+a_{j}=cx+(1-c)a_{j}\ (x\in\mathbb{R}),\]

where \(c\in(0,1)\) and \(a_{j}\in\mathbb{R}\). Note that for each \(j\in\mathbb{N}\), \(a_{j}\in\mathbb{R}\) is the unique fixed point of \(f_{j}\), i.e. \(z_{j}=a_{j}\). Then, by induction with respect to \(n\in\mathbb{N}\), for all \(m\in\mathbb{N}\), \(n\in\mathbb{N}_{0}\) and \(x\in\mathbb{R}\), we have

\[f_{[m,m+n]}(x)=c^{n+1}x+(1-c)\sum_{j=0}^{n}c^{j}a_{m+j}. \tag{2.12}\]

Now, we consider an instance which does not satisfy the assumption in Lemma 2.5. Let \(c=1/2\) and \(a_{j}=2^{j+1}\). Note that

\[\sum_{j=1}^{\infty}c^{j}\cdot\rho_{\mathbb{R}}(0,a_{j})=\sum_{j=1}^{\infty}2^{-j}\cdot 2^{j+1}=\sum_{j=1}^{\infty}2=\infty,\]

where \(\rho_{\mathbb{R}}\) is the Euclidean metric on \(\mathbb{R}\). It follows that this instance does not satisfy the assumption in Lemma 2.5.
In addition, by the equation (2.12), we have

\[f_{[m,m+n]}(x)=\frac{1}{2^{n+1}}x+\frac{1}{2}\sum_{j=0}^{n}\frac{1}{2^{j}}\cdot 2^{m+j}=\frac{1}{2^{n+1}}x+2^{m}\cdot(n+1)\]

for each \(x\in\mathbb{R}\), \(m\in\mathbb{N}\) and \(n\in\mathbb{N}_{0}\). Therefore, we obtain that for each \(m\in\mathbb{N}\) and \(x\in\mathbb{R}\), \(f_{[m,m+n]}(x)\) does not converge as \(n\) tends to infinity, which shows that the conclusion of Lemma 2.5 does not hold without the condition in Lemma 2.5.

The following example gives an instance which satisfies the condition in Lemma 2.5 (so that the inequality (2.2) holds) but which does not satisfy the condition in Remark 2.9. In addition, we estimate the convergence rate of the non-autonomous iteration and show that the recursively compatible sequence is unbounded.

**Example 2.14**.: In Example 2.13, we set \(c=1/2\), \(a_{j}=2^{j}/j^{l+1}\) (\(l\in\mathbb{N}\)) and \(x=0\in\mathbb{R}\). Then, note that

\[\sum_{j=1}^{\infty}c^{j}\cdot\rho_{\mathbb{R}}(0,z_{j})=\sum_{j=1}^{\infty}\frac{1}{2^{j}}\cdot\frac{2^{j}}{j^{l+1}}=\sum_{j=1}^{\infty}\frac{1}{j^{l+1}}<\infty, \tag{2.13}\]

where \(\rho_{\mathbb{R}}\) is the Euclidean metric on \(\mathbb{R}\). We first show that this example does not satisfy the condition in Remark 2.9. Indeed, let \(r\in(0,1)\) and suppose that for each \(m\in\mathbb{N}\) there exists \(\tilde{C}_{m}>0\) such that

\[\rho_{\mathbb{R}}(f_{[m,m+n]}(x),x_{m})\leq\tilde{C}_{m}\cdot r^{n}\]

for each \(n\in\mathbb{N}_{0}\). By the equation (2.12), we have

\[f_{[m,m+n]}(x)=\frac{1}{2}\sum_{k=0}^{n}\frac{1}{2^{k}}\frac{2^{m+k}}{(m+k)^{l+1}}=2^{m-1}\cdot\sum_{k=0}^{n}\frac{1}{(m+k)^{l+1}}=2^{m-1}\cdot\sum_{k=m}^{m+n}\frac{1}{k^{l+1}}.\]

Since \(x_{m}=\lim_{n\to\infty}f_{[m,m+n]}(x)=2^{m-1}\cdot\sum_{k=m}^{\infty}1/k^{l+1}\), we deduce that

\[\frac{1}{(m+n+1)^{l+1}}<2^{m-1}\cdot\sum_{k=m+n+1}^{\infty}\frac{1}{k^{l+1}}=\rho_{\mathbb{R}}(f_{[m,m+n]}(x),x_{m})\leq\tilde{C}_{m}\cdot r^{n}\]

for each \(n\in\mathbb{N}_{0}\). It follows that \(1<\tilde{C}_{m}(m+n+1)^{l+1}r^{n}\to 0\) as \(n\) tends to infinity, which is a contradiction. Therefore, we have proved that this example does not satisfy the condition in Remark 2.9. On the other hand, since

\[c^{j}\rho_{\mathbb{R}}(x,z_{j})=1/2^{j}\cdot 2^{j}/j^{l+1}=1/j^{l+1}\]

for each \(j\in\mathbb{N}\) and \(\sum_{j\in\mathbb{N}}1/j^{l+1}<\infty\), we obtain that this example satisfies the condition in Lemma 2.7 with \(a(j):=1/j^{l+1}\) (\(j\in\mathbb{N}\)). Moreover, we have \(k^{l-1}+k^{l}\leq(k+1)^{l}\) and

\[\frac{1}{(k+1)^{l+1}}<\frac{1}{k(k+1)^{l}}=\frac{k^{l-1}}{k^{l}(k+1)^{l}}\leq\frac{(k+1)^{l}-k^{l}}{k^{l}(k+1)^{l}}=\frac{1}{k^{l}}-\frac{1}{(k+1)^{l}}\]

for each \(k,l\in\mathbb{N}\). It follows that

\[c^{-m}\cdot\sum_{k=m+n+1}^{\infty}a(k)=2^{m}\cdot\sum_{k=m+n}^{\infty}\frac{1}{(k+1)^{l+1}}\leq 2^{m}\cdot\sum_{k=m+n}^{\infty}\left\{\frac{1}{k^{l}}-\frac{1}{(k+1)^{l}}\right\}=\frac{2^{m}}{(m+n)^{l}}\]

for each \(m\in\mathbb{N}\) and \(n\in\mathbb{N}_{0}\). By Lemma 2.7, we deduce that

\[\rho_{\mathbb{R}}(f_{[m,m+n]}(y),x_{m})\leq\left((1+c)+\rho_{\mathbb{R}}(y,x)\right)\cdot\max\left\{\frac{1}{2^{n+1}},\frac{2^{m}}{(m+n)^{l}}\right\}\]

for each \(m\in\mathbb{N}\), \(n\in\mathbb{N}_{0}\) and \(y\in\mathbb{R}\). Thus, we obtain the convergence rate of the sequence \(\{f_{[m,m+n]}(y)\}_{n\in\mathbb{N}_{0}}\) in this example. It is obvious that \(\{x_{m}\mid m\in\mathbb{N}\}\) is unbounded, since \(x_{m}=2^{m-1}\cdot\sum_{k=m}^{\infty}1/k^{l+1}\geq 2^{m-1}/m^{l+1}\to\infty\) as \(m\) tends to infinity.
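The divergence in Example 2.13 and the merely polynomial convergence rate in Example 2.14 are easy to check numerically. The following is a minimal sketch (the helper `compose_iterate` and the chosen parameter values are ours, not part of the examples) which evaluates \(f_{[m,m+n]}(x)\) by composing the maps directly:

```python
import math

def compose_iterate(a, c, m, n, x):
    """Evaluate f_{[m,m+n]}(x) for f_j(y) = c*y + (1-c)*a(j),
    applying f_{m+n} first and f_m last, as in the definition of f_{[m,n]}."""
    y = x
    for j in range(m + n, m - 1, -1):
        y = c * y + (1 - c) * a(j)
    return y

c = 0.5

# Example 2.13: a_j = 2^{j+1}; here f_{[1,1+n]}(0) = 2*(n+1), so the
# iterates diverge (the summability condition of Lemma 2.5 fails).
print([compose_iterate(lambda j: 2.0 ** (j + 1), c, 1, n, 0.0)
       for n in (0, 4, 9)])          # [2.0, 10.0, 20.0]

# Example 2.14 with l = 1: a_j = 2^j / j^2; the iterates converge to
# x_1 = sum_{k>=1} 1/k^2 = pi^2/6, but only at rate ~ 1/n.
x1 = math.pi ** 2 / 6
for n in (10, 100, 1000):
    err = abs(compose_iterate(lambda j: 2.0 ** j / j ** 2, c, 1, n, 0.0) - x1)
    print(n, err, n * err)           # n * err approaches 1: not exponential
```

In the second loop the product \(n\cdot\mathrm{err}\) stabilizes near \(1\), consistent with the \(\sim 1/n\) bound obtained above from Lemma 2.7.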
The following example shows that the set \(\{x_{m}\in X\mid m\in\mathbb{N}\}\) is unbounded in general even if the condition in Remark 2.9 holds.

**Example 2.15**.: In Example 2.13, we set \(c\in(0,1)\), \(a_{j}=j\) and \(x=0\in\mathbb{R}\). Note that \(\{z_{j}\mid j\in\mathbb{N}\}=\{a_{j}\mid j\in\mathbb{N}\}=\mathbb{N}\) is unbounded and that there exist \(C>0\) and \(r\in(c,1)\) such that

\[\rho_{\mathbb{R}}(x,z_{j})=j\leq C\cdot\left(\frac{r}{c}\right)^{j} \tag{2.14}\]

for each \(j\in\mathbb{N}\), where \(\rho_{\mathbb{R}}\) is the Euclidean metric on \(\mathbb{R}\). Therefore, the non-autonomous iteration \(\{f_{j}\}_{j\in\mathbb{N}}\) satisfies the condition in Remark 2.9. In addition, let \(n\in\mathbb{N}\) and set \(S_{n}:=\sum_{j=0}^{n}j\cdot c^{j}=\sum_{j=1}^{n}j\cdot c^{j}\). Then, we have

\[(1-c)S_{n} =\sum_{j=1}^{n}j\cdot c^{j}-\sum_{j=1}^{n}j\cdot c^{j+1}=c+\sum_{j=1}^{n-1}(j+1)\cdot c^{j+1}-\sum_{j=1}^{n-1}j\cdot c^{j+1}-n\cdot c^{n+1}\]
\[=c+\sum_{j=1}^{n-1}c^{j+1}-n\cdot c^{n+1}=c+(1-c)^{-1}\left(c^{2}-c^{n+1}\right)-n\cdot c^{n+1}.\]

By the equation (2.12), it follows that

\[f_{[m,m+n]}(y)=c^{n+1}y+(1-c)\sum_{j=0}^{n}c^{j}\cdot(m+j)=c^{n+1}y+(1-c)m\sum_{j=0}^{n}c^{j}+(1-c)S_{n}\]
\[=c^{n+1}y+m\left(1-c^{n+1}\right)+c+(1-c)^{-1}\left(c^{2}-c^{n+1}\right)-n\cdot c^{n+1}\longrightarrow m+c+c^{2}(1-c)^{-1}\]

as \(n\) tends to infinity for all \(m\in\mathbb{N}\) and \(y\in\mathbb{R}\). Thus, we deduce that \(x_{m}=m+c+c^{2}(1-c)^{-1}=m+c/(1-c)\) and that the set \(\{x_{m}\in X\mid m\in\mathbb{N}\}\) is unbounded.

## 3 General iterated function systems

In this section, we present the definition and properties of general IFSs, which are the main notion of this paper. In Subsection 3.1, we first introduce the notion of words and trees, which we need in order to define general IFSs. In Subsection 3.2, we introduce the definition of general IFSs and consider the projection map for general IFSs.

### Words and trees

Let \(I\) be a countable set and \(\mathcal{J}:=\{J\subset I\ |\ \#(J)<\infty\ \}\), where \(\#(A)\) is the cardinality of \(A\) for each set \(A\). Besides, we set \(I^{*}:=\{\phi\}\cup\bigcup_{n\in\mathbb{N}}I^{n}\), where \(\phi\) is not an element of \(I\). We write \(\omega\in I^{m}\ (m\in\mathbb{N})\) as \(\omega_{1}\cdots\omega_{m}\ (\omega_{k}\in I,\ k=1,\ldots,m)\) and \(\omega\in I^{\mathbb{N}}\) as \(\omega_{1}\omega_{2}\cdots\ (\omega_{k}\in I,\ k\in\mathbb{N})\) respectively. For each \(\omega\in I^{*}\cup I^{\mathbb{N}}\), we set

\[|\omega|:=\begin{cases}0&\text{ if }\omega=\phi,\\ n&\text{ if }\omega\in I^{n}\ (n\in\mathbb{N}),\\ \infty&\text{ if }\omega\in I^{\mathbb{N}}.\end{cases}\]

The set \(I\) is often called the alphabet and \(I^{*}\) is called the set of words of finite length.
Moreover, the concatenation \(\omega\omega^{\prime}\) of \(\omega\in I^{*}\) and \(\omega^{\prime}\in I^{*}\cup I^{\mathbb{N}}\) is defined by

\[\omega\omega^{\prime}:=\begin{cases}\omega_{1}\cdots\omega_{|\omega|}\omega_{1}^{\prime}\cdots\omega_{|\omega^{\prime}|}^{\prime}&\text{ if }\omega^{\prime}\in I^{*},\\ \omega_{1}\cdots\omega_{|\omega|}\omega_{1}^{\prime}\omega_{2}^{\prime}\cdots&\text{ if }\omega^{\prime}\in I^{\mathbb{N}}.\end{cases}\]

The maps \(\Pi_{[m,n]}\colon I^{\mathbb{N}}\to I^{n-m+1}\) (\(m,n\in\mathbb{N}\) with \(n\geq m\)) and \(\Pi_{[m,n)}\colon I^{\mathbb{N}}\to I^{n-m}\) (\(m,n\in\mathbb{N}\) with \(n>m\)) are defined by

\[\Pi_{[m,n]}(\omega):=\omega_{m}\cdots\omega_{n}\quad\text{and}\quad\Pi_{[m,n)}(\omega):=\omega_{m}\cdots\omega_{n-1}\quad(\omega:=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}})\]

respectively. For simplicity, we write \(\Pi_{[n,n]}\) as \(\Pi_{n}\), \(\Pi_{[m,n]}(\omega)\) as \(\omega_{[m,n]}\) and \(\Pi_{[m,n)}(\omega)\) as \(\omega_{[m,n)}\) respectively. Also, the maps \(\Pi_{(m,n]}\colon I^{\mathbb{N}}\to I^{n-m}\) and \(\Pi_{(m,n)}\colon I^{\mathbb{N}}\to I^{n-m-1}\) are defined by

\[\Pi_{(m,n]}(\omega):=\omega_{m+1}\cdots\omega_{n}\quad\text{and}\quad\Pi_{(m,n)}(\omega):=\omega_{m+1}\cdots\omega_{n-1}\quad(\omega:=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}})\]

respectively, whenever \(m,n\in\mathbb{N}\) satisfy the corresponding inequalities. Also, for simplicity, we write \(\Pi_{(m,n]}(\omega)\) as \(\omega_{(m,n]}\) and \(\Pi_{(m,n)}(\omega)\) as \(\omega_{(m,n)}\) respectively. Similarly, the maps \(\Pi_{[m,\infty)}\colon I^{\mathbb{N}}\to I^{\mathbb{N}}\) and \(\Pi_{(m,\infty)}\colon I^{\mathbb{N}}\to I^{\mathbb{N}}\) (\(m\in\mathbb{N}\)) are defined by

\[\Pi_{[m,\infty)}(\omega):=\omega_{m}\omega_{m+1}\cdots\quad\text{and}\quad\Pi_{(m,\infty)}(\omega):=\omega_{m+1}\omega_{m+2}\cdots\quad(\omega:=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}})\]

respectively, and we write \(\Pi_{[m,\infty)}(\omega)\) as \(\omega_{[m,\infty)}\) and \(\Pi_{(m,\infty)}(\omega)\) as \(\omega_{(m,\infty)}\) respectively. We endow the alphabet \(I\) with the discrete topology and \(I^{\mathbb{N}}\) with the product topology. We now introduce the definition of trees.

**Definition 3.1**.: Let \(I\) be a countable set and let \(\phi\) be an element not contained in \(I\). We say that a non-empty closed subset \(\mathbb{T}\) of \(I^{\mathbb{N}}\) is a tree with \(I\) if \(S(\mathbb{T},\phi):=\Pi_{1}(\mathbb{T})\in\mathcal{J}\) and

\[S(\mathbb{T},\omega_{[1,n]}):=\{\tau_{n+1}\in I\ |\ \tau\in\mathbb{T},\ \tau_{[1,n]}=\omega_{[1,n]}\ \}\in\mathcal{J}\]

for each \(n\in\mathbb{N}\) and \(\omega\in\mathbb{T}\). For a tree \(\mathbb{T}\), we set the following:

\[\mathbb{T}^{0}:=\{\phi\},\quad\mathbb{T}^{[m,n]}:=\Pi_{[m,n]}(\mathbb{T})\ (m,n\in\mathbb{N}\text{ with }m\leq n)\quad\text{and}\quad\mathbb{T}^{[1,*]}:=\mathbb{T}^{0}\cup\bigcup_{n\in\mathbb{N}}\mathbb{T}^{[1,n]}.\]

Also, we set \(\mathbb{T}^{n}:=\mathbb{T}^{[n,n]}\ (n\in\mathbb{N})\) for simplicity. In addition, we set \(\mathcal{J}_{1}:=\{\Pi_{1}(\mathbb{T})\}\ (\subset\mathcal{J})\) and \(I_{1}:=\Pi_{1}(\mathbb{T})\ (\subset I)\). Besides, for each \(n\in\mathbb{N}\) with \(n\geq 2\), we set

\[\mathcal{J}_{n} :=\{S(\mathbb{T},\omega_{[1,n-1]})\ |\ \omega_{[1,n-1]}\in\mathbb{T}^{[1,n-1]}\ \}\ (\subset\mathcal{J})\text{ and}\]
\[I_{n} :=\bigcup_{\omega_{[1,n-1]}\in\mathbb{T}^{[1,n-1]}}S(\mathbb{T},\omega_{[1,n-1]})\ (\subset I)\]

respectively. Note that \(\#(\mathbb{T}^{[1,n]})<\infty\) for each \(n\in\mathbb{N}\).
In addition, note that \(\#(\mathcal{J}_{n})<\infty\) and \(\#(I_{n})<\infty\) for each \(n\in\mathbb{N}\).

**Remark 3.2**.: By a similar argument, we also define the tree \(\mathbb{T}_{\omega}\) (called the sub-tree of \(\mathbb{T}\) conditioned by \(\omega\in\mathbb{T}^{[1,*]}\)). Indeed, let \(\omega\in\mathbb{T}^{[1,*]}\) and we set the following:

\[\mathbb{T}_{\omega} :=\{\Pi_{(|\omega|,\infty)}(\tau)\in\Pi_{(|\omega|,\infty)}(\mathbb{T})\ |\ \tau\in\mathbb{T},\ \Pi_{[1,|\omega|]}(\tau)=\omega\ \},\quad\mathbb{T}_{\omega}^{0}:=\{\phi\},\]
\[\mathbb{T}_{\omega}^{[m,n]} :=\Pi_{[m,n]}(\mathbb{T}_{\omega})\ (m,n\in\mathbb{N}\text{ with }m\leq n)\quad\text{and}\quad\mathbb{T}_{\omega}^{[1,*]}:=\mathbb{T}_{\omega}^{0}\cup\bigcup_{n\in\mathbb{N}}\mathbb{T}_{\omega}^{[1,n]}.\]

Also, we set \(\mathbb{T}_{\omega}^{n}:=\mathbb{T}_{\omega}^{[n,n]}\ (n\in\mathbb{N})\) for simplicity. We endow \(\mathbb{T}_{\omega}\ (\subset I^{\mathbb{N}})\) with the induced topology. Note that \(\mathbb{T}_{\omega}\subset I^{\mathbb{N}}\) is a tree for each \(\omega\in\mathbb{T}^{[1,*]}\). To show this, let \(\omega\in\mathbb{T}^{[1,*]}\). It is obvious that \(\mathbb{T}_{\omega}\) is closed, since \(\mathbb{T}\) is closed and the map \(\tau\mapsto\omega\tau\) is continuous on \(I^{\mathbb{N}}\). In addition, since \(\Pi_{[m,n]}\circ\Pi_{(|\omega|,\infty)}=\Pi_{[|\omega|+m,|\omega|+n]}\) for each \(m,n\in\mathbb{N}\) with \(m\leq n\), we have

\[S(\mathbb{T}_{\omega},\phi)=\Pi_{1}(\mathbb{T}_{\omega})=\{\Pi_{|\omega|+1}(\tau)\in I\ |\ \tau\in\mathbb{T},\ \Pi_{[1,|\omega|]}(\tau)=\omega\ \}=S(\mathbb{T},\omega)\in\mathcal{J}.\]

Moreover, let \(n\in\mathbb{N}\) and \(\omega^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}\). Since \(|\omega|+n=|\omega|+|\omega^{\prime}|=|\omega\omega^{\prime}|\), we have

\[S(\mathbb{T}_{\omega},\omega^{\prime}) =\{\Pi_{n+1}(\tau^{\prime})\in I\ |\ \tau^{\prime}\in\mathbb{T}_{\omega},\ \Pi_{[1,n]}(\tau^{\prime})=\omega^{\prime}\ \}\]
\[=\{\Pi_{n+1}(\Pi_{(|\omega|,\infty)}(\tau))\in I\ |\ \tau\in\mathbb{T},\ \Pi_{[1,|\omega|]}(\tau)=\omega,\ \Pi_{[1,n]}(\Pi_{(|\omega|,\infty)}(\tau))=\omega^{\prime}\ \}\]
\[=\{\Pi_{|\omega|+n+1}(\tau)\in I\ |\ \tau\in\mathbb{T},\ \Pi_{[1,|\omega|]}(\tau)=\omega,\ \Pi_{[|\omega|+1,|\omega|+n]}(\tau)=\omega^{\prime}\ \}\]
\[=\{\Pi_{|\omega\omega^{\prime}|+1}(\tau)\in I\ |\ \tau\in\mathbb{T},\ \Pi_{[1,|\omega\omega^{\prime}|]}(\tau)=\omega\omega^{\prime}\ \}=S(\mathbb{T},\omega\omega^{\prime})\in\mathcal{J}.\]

Therefore, we have proved that \(\mathbb{T}_{\omega}\) is a tree with \(I\). Note that, by the above argument and the equality \((\mathbb{T}_{\omega})_{\omega^{\prime}}=\mathbb{T}_{\omega\omega^{\prime}}\), we have \(S(\mathbb{T}_{\omega},\omega^{\prime}\tau)=S(\mathbb{T}_{\omega\omega^{\prime}},\tau)\) for each \(\omega\in\mathbb{T}^{[1,*]}\), \(\omega^{\prime}\in\mathbb{T}_{\omega}^{[1,*]}\) and \(\tau\in\mathbb{T}_{\omega\omega^{\prime}}^{[1,*]}\). Obviously, \(\mathbb{T}_{\phi}=\mathbb{T}\), \(\mathbb{T}_{\phi}^{0}=\mathbb{T}^{0}\), \(\mathbb{T}_{\phi}^{[m,n]}=\mathbb{T}^{[m,n]}\) for each \(m,n\in\mathbb{N}\) with \(m\leq n\) and \(\mathbb{T}_{\phi}^{[1,*]}=\mathbb{T}^{[1,*]}\). Note that the papers [5], [6] and [29] introduce the notion of \(V\)-variability by using sub-trees. Indeed, we say that the tree \(\mathbb{T}\) with \(I\) is \(V\)-variable (\(V\in\mathbb{N}\)) if

\[\#(\{\ \mathbb{T}_{\omega_{[1,n]}}\ |\ \omega_{[1,n]}\in\mathbb{T}^{[1,n]}\ \})\leq V\]

for each \(n\in\mathbb{N}\).
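Concretely, when words are represented as tuples, the finite truncations \(\mathbb{T}^{[1,n]}\), the branching sets \(S(\mathbb{T},\omega)\) of Definition 3.1 and the sub-trees \(\mathbb{T}_{\omega}\) of Remark 3.2 can be computed directly. The following small sketch is illustrative only (the golden-mean tree over \(I=\{0,1\}\) and all function names are our own choices, not notation from this paper):

```python
from itertools import product

def S(T_n, w):
    """S(T, w): the letters that can follow the word w in T,
    computed from the set T_n of all length-n words of the tree."""
    m = len(w)
    return {v[m] for v in T_n if v[:m] == w}

def subtree(T_n, w):
    """The length-(n - |w|) words of the sub-tree T_w conditioned by w."""
    m = len(w)
    return {v[m:] for v in T_n if v[:m] == w}

# Example: length-3 words over I = {0,1} with no two consecutive 1's
# (the corresponding subset of I^N is closed and finitely branching,
# hence a tree by Proposition 3.3 below).
T3 = {w for w in product((0, 1), repeat=3)
      if (1, 1) not in (w[:2], w[1:])}
print(sorted(T3))         # [(0,0,0), (0,0,1), (0,1,0), (1,0,0), (1,0,1)]
print(S(T3, (1,)))        # {0}: after the letter 1, only 0 may follow
print(subtree(T3, (0,)))  # the length-2 words of T_(0)
```

This tree is \(2\)-variable in the sense above, since every sub-tree is determined by whether the conditioning word ends in \(0\) or in \(1\).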
We finally prove the following proposition.

**Proposition 3.3**.: Let \(I\) be a countable set and \(\mathbb{T}\) be a subset of \(I^{\mathbb{N}}\). Then, \(\mathbb{T}\) is a tree with \(I\) if and only if \(\mathbb{T}\) is non-empty and compact.

Proof.: Let \(\mathbb{T}\) be a subset of \(I^{\mathbb{N}}\). Assume that \(\mathbb{T}\) is a tree with \(I\). Note that \(\#(I_{n})<\infty\) for each \(n\in\mathbb{N}\) and \(\mathbb{T}\subset\prod_{i=1}^{\infty}I_{i}\). Indeed, let \(\omega=\omega_{1}\omega_{2}\cdots\in\mathbb{T}\). Then, \(\omega_{1}\in\Pi_{1}(\mathbb{T})=I_{1}\) and \(\omega_{n}\in S(\mathbb{T},\omega_{[1,n-1]})\subset I_{n}\) for each \(n\in\mathbb{N}\) with \(n\geq 2\). Since \(\prod_{i=1}^{\infty}I_{i}\) is compact and \(\mathbb{T}\) is closed, we deduce that \(\mathbb{T}\) is compact.

It remains to show that if \(\mathbb{T}\subset I^{\mathbb{N}}\) is non-empty and compact, then \(\mathbb{T}\) is a tree with \(I\). Note that \(\mathbb{T}\) is closed since it is compact, and that \(\Pi_{[1,n]}(\mathbb{T})\subset I^{n}\) is non-empty and compact, hence finite, for each \(n\in\mathbb{N}\). Now, let \(n\in\mathbb{N}\) and \(\omega_{[1,n]}\in\Pi_{[1,n]}(\mathbb{T})\ (\omega\in\mathbb{T})\), and we set

\[S:=\{\tau_{n+1}\in I\ |\ \tau=\tau_{1}\tau_{2}\cdots\in\mathbb{T},\ \tau_{[1,n]}=\omega_{[1,n]}\ \}\ (\neq\emptyset).\]

Since the mapping \(e\colon S\to\Pi_{[1,n+1]}(\mathbb{T})\) defined by \(e(\tau_{n+1}):=\omega_{[1,n]}\tau_{n+1}\ (\tau_{n+1}\in S)\) is well-defined and injective, we have \((0<)\ \#(S)\leq\#(\Pi_{[1,n+1]}(\mathbb{T}))\ (<\infty)\). Thus, we have proved our proposition.

**Remark 3.4**.: If \(\mathbb{T}\) is \(1\)-variable, then the tree \(\mathbb{T}\) has another representation. Indeed, by the definition of \(1\)-variability of \(\mathbb{T}\), we deduce that \(\#(\mathcal{J}_{n})=1\) for each \(n\in\mathbb{N}\) and \(I_{n+1}=S(\mathbb{T},\omega_{[1,n]})\) for each \(\omega_{[1,n]}\in\mathbb{T}^{[1,n]}\ (n\in\mathbb{N})\), and by induction we deduce that \(\mathbb{T}^{1}=\Pi_{1}(\mathbb{T})=I_{1}\) and

\[\mathbb{T}^{[1,n+1]} =\Pi_{[1,n+1]}(\mathbb{T})=\bigcup_{\omega_{[1,n]}\in\mathbb{T}^{[1,n]}}\{\omega_{[1,n]}\}\times S(\mathbb{T},\omega_{[1,n]})=\bigcup_{\omega_{[1,n]}\in\mathbb{T}^{[1,n]}}\{\omega_{[1,n]}\}\times I_{n+1}\]
\[=\mathbb{T}^{[1,n]}\times I_{n+1}=I_{1}\times\cdots\times I_{n}\times I_{n+1}=\prod_{i=1}^{n+1}I_{i}\]

for each \(n\in\mathbb{N}\). Now, let \(x=x_{1}x_{2}\cdots\in\prod_{i=1}^{\infty}I_{i}\). Since \(x\in\mathbb{T}^{[1,n]}\times\prod_{i=n+1}^{\infty}I_{i}\) for each \(n\in\mathbb{N}\), there exists \(\omega^{(n)}=\omega_{1}^{(n)}\omega_{2}^{(n)}\cdots\in\mathbb{T}\) such that \(\omega_{[1,n]}^{(n)}=x_{[1,n]}\), and we deduce that \(\omega^{(n)}\) converges to \(x\) as \(n\) tends to infinity. Since \(\mathbb{T}\) is closed, we have \(x=\lim_{n\to\infty}\omega^{(n)}\in\mathbb{T}\). Therefore, we obtain that

\[\mathbb{T}=\prod_{i=1}^{\infty}I_{i}\quad\text{and}\quad\mathbb{T}^{[m,n]}=\prod_{i=m}^{n}I_{i}\ (m,n\in\mathbb{N}\ \text{with}\ m\leq n).\]

By a similar argument, we also obtain that if \(\mathbb{T}\) is \(1\)-variable, then

\[\mathbb{T}_{\omega}=\prod_{i=|\omega|+1}^{\infty}I_{i}\quad\text{and}\quad\mathbb{T}_{\omega}^{[m,n]}=\prod_{i=|\omega|+m}^{|\omega|+n}I_{i}\ (m,n\in\mathbb{N}\ \text{with}\ m\leq n)\]

for each \(\omega\in\mathbb{T}^{[1,*]}\). Note that this is the case in the paper [28].

### General iterated function systems and the projection maps

In this subsection, we introduce the notion of general IFSs and consider the projection map for general IFSs.
In this paper, a general IFS is a pair of a tree and a family of (uniformly) contractive mappings on a complete metric space. Later, we introduce the family of limit sets for a general IFS by using the projection map for general IFSs. We now introduce the definition of general IFSs.

**Definition 3.5**.: Let \((X,\rho)\) be a complete metric space. We say that a pair \((\{f_{i}\}_{i\in I},\mathbb{T})\) is a general IFS on \((X,\rho)\) with the uniform contraction constant \(c\in(0,1)\) if

(i) \(\mathbb{T}\) is a tree with a set \(I\) and

(ii) \(f_{i}\colon X\to X\ (i\in I)\) is a family of contractive mappings on \(X\) with the uniform contraction constant \(c\), that is, for all \(i\in I\) and \(x,y\in X\),

\[\rho(f_{i}(x),f_{i}(y))\leq c\,\rho(x,y).\]

Note that, for each \(i\in I\), there exists a unique fixed point \(z_{i}\) of \(f_{i}\), since \(X\) is complete.

**Lemma 3.6**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\ (i\in I)\). Suppose that there exists \(x\in X\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Then, there exists a family \(\{x_{\omega}\}_{\omega\in\mathbb{T}}\) of points \(x_{\omega}\in X\) such that

\[\sup\big\{\ \rho(x_{\omega},x_{\omega^{\prime}})\ \big|\ \omega,\omega^{\prime}\in\mathbb{T},\ \omega_{[1,s]}=\omega^{\prime}_{[1,s]}\ \big\}\leq 2\cdot(1+c)\cdot c^{-1}\sum_{k=s+1}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k} \tag{3.1}\]

for each \(s\in\mathbb{N}\).

Proof.: Let \(\omega\in\mathbb{T}\) and we set \(g_{m}:=f_{\omega_{m}}\) (\(m\in\mathbb{N}\)). Note that \(\{g_{m}\}_{m\in\mathbb{N}}\) is a sequence of contractive mappings on \(X\) with a uniform contraction constant \(c\in(0,1)\) (see Definitions 2.1 and 3.5), and since \(\omega_{k}\in I_{k}\) for each \(k\in\mathbb{N}\), we have \(\sum_{k\in\mathbb{N}}c^{k}\rho(x,z_{\omega_{k}})\leq\sum_{k\in\mathbb{N}}\{\max_{i\in I_{k}}\rho(x,z_{i})\}\cdot c^{k}<\infty\), so \(\{g_{m}\}_{m\in\mathbb{N}}\) satisfies the assumption in Lemma 2.5. By the inequality (2.2) with \(m=1\), we deduce that

\[\rho(g_{[1,1+n]}(x),x_{\omega})\leq(1+c)\cdot c^{-1}\sum_{k=n+2}^{\infty}c^{k}\rho(x,z_{\omega_{k}})\leq(1+c)\cdot c^{-1}\sum_{k=n+2}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k} \tag{3.2}\]

for all \(n\in\mathbb{N}_{0}\), where \(x_{\omega}\in X\) is the first element \(x_{1}\) of the recursively compatible sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) for \(\{g_{m}\}_{m\in\mathbb{N}}\) and \(z_{\omega_{k}}\) is the unique fixed point of \(g_{k}=f_{\omega_{k}}\) (\(k\in\mathbb{N}\)). By the same argument, we also deduce that

\[\rho(g^{\prime}_{[1,1+n]}(x),x_{\omega^{\prime}})\leq(1+c)\cdot c^{-1}\sum_{k=n+2}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k} \tag{3.3}\]

for all \(n\in\mathbb{N}_{0}\) and \(\omega^{\prime}\in\mathbb{T}\), where \(g^{\prime}_{m}:=f_{\omega^{\prime}_{m}}\) (\(m\in\mathbb{N}\)) and \(x_{\omega^{\prime}}\in X\) is the first element \(x^{\prime}_{1}\) of the recursively compatible sequence \(\{x^{\prime}_{m}\}_{m\in\mathbb{N}}\) for \(\{g^{\prime}_{m}\}_{m\in\mathbb{N}}\). Now, let \(s\in\mathbb{N}\) and \(\omega,\omega^{\prime}\in\mathbb{T}\) with \(\omega_{[1,s]}=\omega^{\prime}_{[1,s]}\). Note that \(g_{m}=g^{\prime}_{m}\) for each \(m=1,\ldots,s\).
By the inequalities (3.2) and (3.3) with \(n=s-1\in\mathbb{N}_{0}\), we have

\[\rho(x_{\omega},x_{\omega^{\prime}}) \leq\rho(x_{\omega},g_{[1,s]}(x))+\rho(g^{\prime}_{[1,s]}(x),x_{\omega^{\prime}})\]
\[\leq 2\cdot(1+c)\cdot c^{-1}\sum_{k=s+1}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}.\]

Thus, we have proved our lemma.

We now introduce the definition of the projection map on trees.

**Definition 3.7**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\) (\(i\in I\)). Suppose that there exists \(x\in X\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Then, the projection map \(\pi\colon\mathbb{T}\to X\) for \((\{f_{i}\}_{i\in I},\mathbb{T})\) is defined by

\[\pi(\omega):=x_{\omega}\quad(\omega\in\mathbb{T}),\]

where \(x_{\omega}\in X\) (\(\omega\in\mathbb{T}\)) are the elements introduced in Lemma 3.6 (the first elements \(x_{1}\) of the recursively compatible sequences \(\{x_{m}\}_{m\in\mathbb{N}}\) for \(\{f_{\omega_{m}}\}_{m\in\mathbb{N}}\)).

**Remark 3.8**.: By the same argument as in Lemma 2.2, if a general IFS \((\{f_{i}\}_{i\in I},\mathbb{T})\) with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) satisfies the condition

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty\]

for some \(x\in X\) (where \(z_{i}\in X\) is the unique fixed point of \(f_{i}\) (\(i\in I\))), then we have

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(y,z_{i})\right\}\cdot c^{n}<\infty\]

for all \(y\in X\). In addition, if a general IFS \((\{f_{i}\}_{i\in I},\mathbb{T})\) with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) satisfies the above condition for some \(x\in X\), then we can also define the projection map for \((\{f_{i}\}_{i\in I},\mathbb{T}_{\omega})\) for each \(\omega\in\mathbb{T}^{[1,*]}\). Indeed, let \(\omega\in\mathbb{T}^{[1,*]}\). By Remark 3.2, we have \(I^{\omega}_{1}:=\Pi_{1}(\mathbb{T}_{\omega})=S(\mathbb{T},\omega)\subset I_{|\omega|+1}\). In addition, let \(n\geq 2\). By Remark 3.2, we deduce that \(\{\omega\}\times\mathbb{T}^{[1,n-1]}_{\omega}\subset\mathbb{T}^{[1,|\omega|+n-1]}\), \(S(\mathbb{T}_{\omega},\omega^{\prime}_{[1,n-1]})=S(\mathbb{T},\omega\omega^{\prime}_{[1,n-1]})\) for each \(\omega^{\prime}_{[1,n-1]}\in\mathbb{T}^{[1,n-1]}_{\omega}\) and

\[I^{\omega}_{n}:=\bigcup_{\omega^{\prime}_{[1,n-1]}\in\mathbb{T}^{[1,n-1]}_{\omega}}S(\mathbb{T}_{\omega},\omega^{\prime}_{[1,n-1]})\subset\bigcup_{\tau\in\mathbb{T}^{[1,|\omega|+n-1]}}S(\mathbb{T},\tau)=I_{|\omega|+n},\]

and it follows that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I^{\omega}_{n}}\rho(x,z_{i})\right\}\cdot c^{n}\leq\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{|\omega|+n}}\rho(x,z_{i})\right\}\cdot c^{n}\leq c^{-|\omega|}\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Therefore, by Lemma 3.6, the projection map for \((\{f_{i}\}_{i\in I},\mathbb{T}_{\omega})\) is well-defined for each \(\omega\in\mathbb{T}^{[1,*]}\). Henceforth, we also denote by \(\pi\) the projection map for \((\{f_{i}\}_{i\in I},\mathbb{T}_{\omega})\).

We next define the notions used in Theorem 3.10.

**Definition 3.9**.: Let \(I\) be a set.
The shift map \(\sigma\colon I^{\mathbb{N}}\to I^{\mathbb{N}}\) is defined by

\[\sigma(\omega):=\omega_{2}\omega_{3}\cdots\quad(\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}).\]

Similarly, the map \(\sigma_{i}\colon I^{\mathbb{N}}\to I^{\mathbb{N}}\) (\(i\in I\)) is defined by

\[\sigma_{i}(\omega):=i\omega\quad(\omega\in I^{\mathbb{N}}),\]

and, more generally, for each \(\omega^{\prime}\in I^{*}\), the map \(\sigma_{\omega^{\prime}}\colon I^{\mathbb{N}}\to I^{\mathbb{N}}\) is defined by \(\sigma_{\omega^{\prime}}(\omega):=\omega^{\prime}\omega\) (\(\omega\in I^{\mathbb{N}}\)). In addition, for each \(\omega=\omega_{1}\cdots\omega_{n}\in I^{n}\) (\(n\in\mathbb{N}\)), we set \(f_{\omega}:=f_{\omega_{1}}\circ\cdots\circ f_{\omega_{n}}\), and \(f_{\phi}:=\mathrm{id}_{X}\).

We now show the following important theorem of this paper. Note that the following theorem is a generalization of the result in [14].

**Theorem 3.10**.: Let \(\pi\colon\mathbb{T}\to X\) be the projection map for a general IFS \((\{f_{i}\}_{i\in I},\mathbb{T})\) with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty\]

for some \(x\in X\), where \(z_{i}\in X\) is the unique fixed point of \(f_{i}\) (\(i\in I\)). Then, \(\pi\colon\mathbb{T}\to X\) is uniformly continuous on \(\mathbb{T}\). In addition, for each \(\omega\in\mathbb{T}^{[1,*]}\), \(f_{\omega^{\prime}}\circ\pi=\pi\circ\sigma_{\omega^{\prime}}\) on \(\mathbb{T}_{\omega\omega^{\prime}}\) for each \(\omega^{\prime}\in\mathbb{T}^{[1,*]}_{\omega}\).

Proof.: We first show that \(\pi\colon\mathbb{T}\to X\) is uniformly continuous on \(\mathbb{T}\). Let \(\epsilon>0\). Then, there exists \(M\in\mathbb{N}\) such that \(2\cdot(1+c)\cdot c^{-1}\sum_{k=M+1}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}<\epsilon\). We set \(U:=\{\omega_{[1,M]}\}\times\prod_{i=M+1}^{\infty}I\subset I^{\mathbb{N}}\) (\(\omega_{[1,M]}\in\mathbb{T}^{[1,M]}\)) and let \(\tau,\tau^{\prime}\in U\cap\mathbb{T}\). Since \(\tau_{i}=\tau^{\prime}_{i}\) for each \(i=1,\ldots,M\), by Lemma 3.6 we deduce that

\[\rho(\pi(\tau),\pi(\tau^{\prime}))\leq 2\cdot(1+c)\cdot c^{-1}\sum_{k=M+1}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}<\epsilon. \tag{3.4}\]

Therefore, we have proved that \(\pi\colon\mathbb{T}\to X\) is uniformly continuous on \(\mathbb{T}\).

We now show that, for each \(\omega\in\mathbb{T}^{[1,*]}\), \(f_{\omega^{\prime}}\circ\pi=\pi\circ\sigma_{\omega^{\prime}}\) on \(\mathbb{T}_{\omega\omega^{\prime}}\) for each \(\omega^{\prime}\in\mathbb{T}_{\omega}^{[1,*]}\). Let \(\omega\in\mathbb{T}^{[1,*]}\), \(\omega^{\prime}\in\mathbb{T}_{\omega}^{[1,*]}\) and \(\tau\in\mathbb{T}_{\omega\omega^{\prime}}\). Then, we have \(\sigma_{\omega^{\prime}}(\tau)\in\mathbb{T}_{\omega}\). Indeed, by the definition of \(\mathbb{T}_{\omega\omega^{\prime}}\), we have \(\omega\omega^{\prime}\tau\in\mathbb{T}\) and \(\Pi_{(|\omega|,\infty)}(\omega\omega^{\prime}\tau)=\omega^{\prime}\tau\). It follows that

\[\sigma_{\omega^{\prime}}(\tau)=\omega^{\prime}\tau=\Pi_{(|\omega|,\infty)}(\omega\omega^{\prime}\tau)\in\mathbb{T}_{\omega}.\]

Now, recall that, by the condition in Theorem 3.10, \(\{f_{\tau_{m}}\}_{m\in\mathbb{N}}\) is a sequence of contractive mappings on \(X\) with a uniform contraction constant \(c\in(0,1)\) and satisfies the condition in Lemma 2.5.
By the definition of \(\pi\) for the general IFS \((\{f_{i}\}_{i\in I},\mathbb{T}_{\omega\omega^{\prime}})\) and the inequality (2.2) with \(m=1\) and \(y=x\), we have

\[\rho(f_{\tau_{[1,n+1]}}(x),\pi(\tau))\leq(1+c)\cdot c^{-1}\sum_{k=n+2}^{\infty}c^{k}\rho(x,z_{\tau_{k}})\]
\[\leq(1+c)\cdot c^{-1}\sum_{k=n+2}^{\infty}\left\{\max_{i\in I_{k}^{\omega\omega^{\prime}}}\rho(x,z_{i})\right\}\cdot c^{k}\leq(1+c)\cdot c^{-(|\omega\omega^{\prime}|+1)}\sum_{k=n+2}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}\]

for all \(\tau\in\mathbb{T}_{\omega\omega^{\prime}}\) and \(n\in\mathbb{N}_{0}\), where \(z_{\tau_{k}}\) is the unique fixed point of \(f_{\tau_{k}}\) (\(k\in\mathbb{N}\)) and we use the fact that \(\tau_{k}\in I_{k}^{\omega\omega^{\prime}}\subset I_{|\omega\omega^{\prime}|+k}\) for each \(k\in\mathbb{N}\) from Remark 3.8. Thus, by the definition of \(\pi(\tau)\), we have

\[\rho(f_{(\omega^{\prime}\tau)_{[1,n+|\omega^{\prime}|+1]}}(x),f_{\omega^{\prime}}(\pi(\tau)))\leq c^{|\omega^{\prime}|}\cdot\rho(f_{\tau_{[1,n+1]}}(x),\pi(\tau))\]
\[\leq(1+c)\cdot c^{-(|\omega|+1)}\cdot\sum_{k=n+2}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}\]

for each \(n\in\mathbb{N}\), and it follows that \(f_{(\omega^{\prime}\tau)_{[1,n+|\omega^{\prime}|+1]}}(x)\) converges to \(f_{\omega^{\prime}}(\pi(\tau))\) as \(n\) tends to infinity. On the other hand, by the definition of \(\pi(\omega^{\prime}\tau)\), \(f_{(\omega^{\prime}\tau)_{[1,n+|\omega^{\prime}|+1]}}(x)\) converges to \(\pi(\omega^{\prime}\tau)\) as \(n\) tends to infinity, and it follows that \(f_{\omega^{\prime}}(\pi(\tau))=\pi(\omega^{\prime}\tau)=\pi(\sigma_{\omega^{\prime}}(\tau))\) for each \(\tau\in\mathbb{T}_{\omega\omega^{\prime}}\). Hence, we have proved our theorem.

**Remark 3.11**.: In Theorem 3.10, if there exist \(C>0\) and a non-negative-valued sequence \(b\) with \(\sum_{l\in\mathbb{N}}b(l)<\infty\) such that \(\{\max_{i\in I_{l}}\rho(x,z_{i})\}\cdot c^{l}\leq C\cdot b(l)\) for each \(l\in\mathbb{N}\), then results similar to Lemma 3.6 and Theorem 3.10 also hold.

## 4 The family of limit sets for general IFSs

In this section, we introduce the definition of the family of limit sets for general IFSs and show the uniqueness (in some sense) of the family of limit sets for a general IFS. Note that the main result (Theorem 4.5) is a generalization of the result in [14].

**Definition 4.1**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\) (\(i\in I\)). Suppose that there exists \(x\in X\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Then, the non-empty compact subset \(\pi(\mathbb{T})\) of \(X\) is called the limit set for \((\{f_{i}\}_{i\in I},\mathbb{T})\), where the continuous map \(\pi\colon\mathbb{T}\to X\) is the projection map for \((\{f_{i}\}_{i\in I},\mathbb{T})\) introduced in Definition 3.7 (see also Proposition 3.3 and Theorem 3.10).
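Definition 4.1 is effective: by the inequality (3.2), every point \(\pi(\tau)\) of the limit set is the limit of the finite compositions \(f_{\tau_{[1,n]}}(x)\), with error controlled by the tail of the defining series. The following is a minimal numerical sketch; all concrete choices are our own and do not come from the paper (\(X=\mathbb{R}\), \(I=\{0,1\}\), the middle-thirds maps \(f_{0}(x)=x/3\) and \(f_{1}(x)=x/3+2/3\) with \(c=1/3\) and bounded fixed points \(z_{0}=0\), \(z_{1}=1\), so the summability condition holds for any \(x\); the helper `pi_approx` is hypothetical):

```python
# Middle-thirds (Cantor) maps: c = 1/3, fixed points z_0 = 0, z_1 = 1.
f = {0: lambda x: x / 3.0, 1: lambda x: x / 3.0 + 2.0 / 3.0}

def pi_approx(word, x=0.5):
    """f_{w_{[1,n]}}(x): apply f_{w_n} first and f_{w_1} last."""
    for i in reversed(word):
        x = f[i](x)
    return x

# pi(1010...) is the fixed point of f_1 o f_0, i.e. 3/4; by (3.2) the
# error of the n-fold composition decreases like c^n = 3^{-n}.
for n in (4, 8, 16):
    w = [1, 0] * (n // 2)
    print(n, pi_approx(w), abs(pi_approx(w) - 0.75))
```

Here the initial point \(x=0.5\) is irrelevant for the limit, in line with the fact that \(\pi(\tau)\) does not depend on the base point.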
**Remark 4.2**.: By Remark 3.8, if a general IFS \((\{f_{i}\}_{i\in I},\mathbb{T})\) with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) satisfies the condition

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty\]

for some \(x\in X\) (where \(z_{i}\in X\) is the unique fixed point of \(f_{i}\) (\(i\in I\))), then we can also define the limit set \(\pi(\mathbb{T}_{\omega})\) for the general IFS \((\{f_{i}\}_{i\in I},\mathbb{T}_{\omega})\) for each \(\omega\in\mathbb{T}^{[1,*]}\). Note that \(\pi(\mathbb{T}_{\omega})\) is also a non-empty compact subset of \(X\).

To discuss the uniqueness of the limit set for general IFSs, we introduce the notion of the Hausdorff distance and its properties. Let \(\mathcal{K}(X)\) be the set of non-empty compact subsets of a complete metric space \((X,\rho)\). For each \(\epsilon>0\) and each set \(A\subset X\), we set \(A_{\epsilon}:=\{x\in X\ |\ \exists a\in A\ \text{s.t.}\ \rho(a,x)\leq\epsilon\}\). Let \(\rho_{H}\) be the Hausdorff distance on \(\mathcal{K}(X)\) defined by

\[\rho_{H}(A,B):=\inf\{\epsilon>0\ |\ A\subset B_{\epsilon},\ B\subset A_{\epsilon}\}\quad(A,B\in\mathcal{K}(X)).\]

Note that \(\rho_{H}(\{a\},\{a^{\prime}\})=\rho(a,a^{\prime})\) for each \(a,a^{\prime}\in X\) and \(\rho_{H}(A,\{a^{\prime}\})=\sup_{a\in A}\rho(a,a^{\prime})\) for each \(A\in\mathcal{K}(X)\) and \(a^{\prime}\in X\). Also, note that since \((X,\rho)\) is complete, \((\mathcal{K}(X),\rho_{H})\) is also complete (see, for example, [17]). Indeed, if \(\{L_{n}\}_{n\in\mathbb{N}}\) is a Cauchy sequence in \((\mathcal{K}(X),\rho_{H})\), then \(\{L_{n}\}_{n\in\mathbb{N}}\) converges to

\[L:=\bigcap_{n\in\mathbb{N}}\overline{\bigcup_{k\in\mathbb{N},k\geq n}{L_{k}}}^{\rho}\in\mathcal{K}(X) \tag{4.1}\]

as \(n\) tends to infinity, where \(\overline{A}^{\rho}\) is the closure of \(A\subset X\) with respect to the metric \(\rho\).

Before we prove the main theorem, we show a characterization of the projection map for a general IFS \((\{f_{i}\}_{i\in I},\mathbb{T})\).

**Proposition 4.3**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\) (\(i\in I\)). Suppose that there exists \(x\in X\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Then, for each \(\omega\in\mathbb{T}\) and \(A\in\mathcal{K}(X)\), the sequence \(\{f_{\omega_{[1,n]}}(A)\}_{n\in\mathbb{N}}\) of compact subsets converges to the singleton \(\{\pi(\omega)\}\) as \(n\) tends to infinity in the sense of the Hausdorff distance.

Proof.: Let \(\omega\in\mathbb{T}\) and we set \(g_{m}:=f_{\omega_{m}}\) (\(m\in\mathbb{N}\)). Note that \(\{g_{m}\}_{m\in\mathbb{N}}\) is a sequence of contractive mappings on \(X\) with a uniform contraction constant \(c\in(0,1)\) (see Definitions 2.1 and 3.5) and satisfies the condition in Lemma 2.5. Also, note that \(\omega_{k}\in I_{k}\) for each \(k\in\mathbb{N}\).
By the inequality (2.2) with \(m=1\), we deduce that

\[\rho_{H}(f_{\omega_{[1,n]}}(\{x\}),\{\pi(\omega)\})=\rho_{H}(\{f_{\omega_{[1,n]}}(x)\},\{\pi(\omega)\})=\rho(f_{\omega_{[1,n]}}(x),\pi(\omega))\]
\[\leq(1+c)\cdot c^{-1}\sum_{k=n+1}^{\infty}c^{k}\rho(x,z_{\omega_{k}})\leq(1+c)\cdot c^{-1}\sum_{k=n+1}^{\infty}\left\{\max_{i\in I_{k}}\rho(x,z_{i})\right\}\cdot c^{k}\]

for all \(n\in\mathbb{N}\), where \(z_{\omega_{k}}\) is the unique fixed point of \(g_{k}=f_{\omega_{k}}\) (\(k\in\mathbb{N}\)) and we use the fact that \(\pi(\omega)\) is the first element \(x_{1}\) of the recursively compatible sequence \(\{x_{m}\}_{m\in\mathbb{N}}\) for \(\{g_{m}\}_{m\in\mathbb{N}}\). In addition, for each \(\omega\in\mathbb{T}\), \(A\in\mathcal{K}(X)\) and \(n\in\mathbb{N}\), we have

\[\rho_{H}(f_{\omega_{[1,n]}}(\{x\}),f_{\omega_{[1,n]}}(A))\leq c^{n}\cdot\rho_{H}(\{x\},A).\]

Therefore, we have proved our proposition.

**Remark 4.4**.: Proposition 4.3 shows why we call \(\pi\) the projection map, and it shows that, as far as only the construction of the projection map for general IFSs is concerned, we need not assume boundedness or compactness of \((X,\rho)\), and we can choose any non-empty compact subset as the initial set of the iteration. Indeed, the projection map (or the coding map) for IFSs is "usually" defined by the unique element of the intersection of non-increasing compact subsets generated by contractive mappings. For example, in the paper [28] by Rempe-Gillen and Urbański, the projection map on a compact metric space \((X,\rho)\) is defined by the intersection of the non-increasing compact subsets \(\{f_{\omega_{[1,m]}}(X)\}_{m\in\mathbb{N}}\) generated by a sequence of contractive mappings \(\{f_{\omega_{m}}\}_{m\in\mathbb{N}}\). Note that, if a non-increasing sequence \(\{A_{m}\}_{m\in\mathbb{N}}\) of non-empty compact subsets satisfies the condition \(\operatorname{diam}_{\rho}(A_{m})\to 0\) as \(m\) tends to infinity, then the intersection \(\cap_{m\in\mathbb{N}}A_{m}\) is a singleton and is the limit point of the sequence \(\{A_{m}\}_{m\in\mathbb{N}}\) in the sense of the Hausdorff distance.

We now prove the main theorem of this paper.

**Theorem 4.5**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\) (\(i\in I\)). Suppose that there exists \(x\in X\) such that

\[\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty.\]

Then, we have the following properties:

(i) \(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\pi(\mathbb{T}_{\omega\omega_{[1,n]}^{\prime}}))=\pi(\mathbb{T}_{\omega})\) for each \(\omega\in\mathbb{T}^{[1,*]}\) and \(n\in\mathbb{N}\), and

(ii) for each \(A\in\mathcal{K}(X)\), there exists \(D^{\prime}(A)>0\) such that for each \(\omega\in\mathbb{T}^{[1,*]}\),

\[\rho_{H}(A,\pi(\mathbb{T}_{\omega}))\leq D^{\prime}(A)\cdot\max\left\{1,c^{-(|\omega|+1)}b_{x}(|\omega|+1)\right\},\]

where \(b_{x}(l):=\sum_{k\geq l}\{\max_{i\in I_{k}}\rho(x,z_{i})\}\cdot c^{k}\) (\(l\in\mathbb{N}\)).

In addition, the family of compact subsets \(\{K_{\omega}\}_{\omega\in\mathbb{T}^{[1,*]}}\) of \(X\) with the above properties (i) and (ii) is unique.
Moreover, for each \(A\in\mathcal{K}(X)\), there exists \(D(A)>0\) such that for each \(\omega\in\mathbb{T}^{[1,*]}\) and \(n\in\mathbb{N}\),

\[\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(A),\pi(\mathbb{T}_{\omega})\right)\leq D(A)\cdot c^{-(|\omega|+1)}\cdot\max\{c^{n+|\omega|+1},b_{x}(n+|\omega|+1)\}. \tag{4.2}\]

**Remark 4.6**.: Note that it is necessary to assume that \(\sum_{n\in\mathbb{N}}\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}<\infty\) for some \(x\in X\); indeed, if the tree consists of a single sequence, then the general IFS reduces to a non-autonomous iteration (see Example 2.13). Under the condition, we obtain the inequality (4.2), which shows that for all \(\omega\in\mathbb{T}^{[1,*]}\), the sequence \(\{\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(A)\}_{n\in\mathbb{N}}\) of compact subsets converges to \(\pi(\mathbb{T}_{\omega})\) as \(n\) tends to infinity in the sense of the Hausdorff distance. In addition, the inequality (4.2) shows that the limit does not depend on the starting set \(A\in\mathcal{K}(X)\), and that the starting set affects only the constant in the convergence estimate (note that the convergence rate depends on the base point \(x\in X\)). However, the convergence rate is not always exponential; indeed, if the tree consists of a single sequence, then the general IFS reduces to a non-autonomous iteration (see Example 2.14).

Proof of Theorem 4.5.: We first show the property (i) in Theorem 4.5. Indeed, let \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(n\in\mathbb{N}\). Note that, by the definition of sub-trees (see Remark 3.2), \(\omega\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,|\omega|+n]}\) and the set \(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}\) is well-defined for each \(\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}\), and we deduce that \(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}\{\omega^{\prime}_{[1,n]}\}\times\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}=\mathbb{T}_{\omega}\). Then, by Theorem 3.10,

\[\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\pi(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}))=\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}\pi(\sigma_{\omega^{\prime}_{[1,n]}}(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}))=\pi\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}\{\omega^{\prime}_{[1,n]}\}\times\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}\right)=\pi(\mathbb{T}_{\omega}).\]

Therefore, we have proved the property (i) in Theorem 4.5. Note that \(f_{\omega^{\prime}_{[1,n]}}(\pi(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}))\in\mathcal{K}(X)\), since \(f_{\omega^{\prime}_{[1,n]}}\) is continuous on \(X\) for each \(\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}\).

We next show that for each \(n\in\mathbb{N}\), \(\omega\in\mathbb{T}^{[1,*]}\) and \(A\in\mathcal{K}(X)\),

\[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A)\right)\leq c^{n}\cdot\sup_{a\in A}\rho(x,a)\quad\text{and} \tag{4.3}\]
\[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\leq(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+n+1}^{\infty}c^{k}\max_{i\in I_{k}}\rho(x,z_{i}). \tag{4.4}\]

Indeed, by properties of the Hausdorff distance, we have

\[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A)\right)\leq\max_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}\rho_{H}\left(f_{\omega^{\prime}_{[1,n]}}(\{x\}),f_{\omega^{\prime}_{[1,n]}}(A)\right)\]
\[\leq\max_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}c^{n}\cdot\rho_{H}(\{x\},A)=c^{n}\cdot\rho_{H}(\{x\},A)\leq c^{n}\cdot\sup_{a\in A}\rho(x,a).\]

In addition, by the property (i) in Theorem 4.5 and the above argument, we have

\[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\leq c^{n}\cdot\sup\left\{\rho\left(x,\pi(\tau)\right)\ \middle|\ \omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega},\ \tau\in\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}\right\}.\]

On the other hand, recall that \(\omega\omega^{\prime}_{[1,n]}\tau\in\mathbb{T}\) for each such \(\tau\), and that the first element of the recursively compatible sequence for \(\{f_{\tau_{m}}\}_{m\in\mathbb{N}}\) is the \((|\omega|+n+1)\)-th element of the recursively compatible sequence for \(\{f_{(\omega\omega^{\prime}_{[1,n]}\tau)_{m}}\}_{m\in\mathbb{N}}\). By the property (ii) in Lemma 2.5 (or the inequality (2.5)) with \(y=x\) and \(m=|\omega|+n+1\), it follows that

\[c^{n}\cdot\sup_{\tau\in\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}}\rho\left(x,\pi(\tau)\right)\leq c^{n}\cdot(1+c)\cdot c^{-(|\omega|+n+1)}\sum_{k=|\omega|+n+1}^{\infty}c^{k}\rho(x,z_{(\omega\omega^{\prime}_{[1,n]}\tau)_{k}})\]
\[\leq(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+n+1}^{\infty}c^{k}\max_{i\in I_{k}}\rho(x,z_{i}),\]

and the last bound does not depend on \(\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}\). Therefore, we have proved the inequalities (4.3) and (4.4).

Now, we show the property (ii) in Theorem 4.5 and the uniqueness of the family of compact sets with the properties (i) and (ii) in Theorem 4.5. To show the property (ii), let \(A\in\mathcal{K}(X)\) and \(\omega\in\mathbb{T}^{[1,\ast]}\), and we set \(D^{\prime}(A):=(1+c)+\sup_{a\in A}\rho(x,a)\ (>0)\). Note that \(\omega_{1}^{\prime}\in I_{|\omega|+1}\) for each \(\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}\), and by Lemma 2.4 we have

\[\rho_{H}\left(\{x\},\bigcup_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}f_{\omega_{1}^{\prime}}(\{x\})\right)\leq\max_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}\rho_{H}\left(\{x\},f_{\omega_{1}^{\prime}}(\{x\})\right)=\max_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}\rho\left(x,f_{\omega_{1}^{\prime}}(x)\right)\]
\[\leq\max_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}(1+c)\cdot\rho\left(x,z_{\omega_{1}^{\prime}}\right)\leq(1+c)\cdot\max_{i\in I_{|\omega|+1}}\rho\left(x,z_{i}\right). \tag{4.5}\]

By the inequality (4.4) with \(n=1\) and the above inequality, we have

\[\rho_{H}(\{x\},\pi(\mathbb{T}_{\omega})) \leq\rho_{H}\left(\{x\},\bigcup_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}f_{\omega_{1}^{\prime}}(\{x\})\right)+\rho_{H}\left(\bigcup_{\omega_{1}^{\prime}\in\mathbb{T}_{\omega}^{1}}f_{\omega_{1}^{\prime}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\]
\[\leq(1+c)\cdot\max_{i\in I_{|\omega|+1}}\rho\left(x,z_{i}\right)+(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+2}^{\infty}c^{k}\max_{i\in I_{k}}\rho(x,z_{i})\]
\[=(1+c)\cdot c^{-(|\omega|+1)}b_{x}(|\omega|+1).\]

It follows that

\[\rho_{H}(A,\pi(\mathbb{T}_{\omega}))\leq\rho_{H}(A,\{x\})+\rho_{H}(\{x\},\pi(\mathbb{T}_{\omega}))\leq D^{\prime}(A)\cdot\max\left\{1,c^{-(|\omega|+1)}b_{x}(|\omega|+1)\right\}.\]

To show the uniqueness of the family of compact sets with the properties (i) and (ii) in Theorem 4.5, let \(\{K_{\omega}\}_{\omega\in\mathbb{T}^{[1,\ast]}}\) be a family of compact sets with these properties. Also, let \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(n\in\mathbb{N}\). Then, by the property (i) in Theorem 4.5 and properties of the Hausdorff distance, we have

\[\rho_{H}(\pi(\mathbb{T}_{\omega}),K_{\omega})\leq\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\pi(\mathbb{T}_{\omega\omega_{[1,n]}^{\prime}})),\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\{x\})\right)\]
\[\qquad+\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\{x\}),\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(K_{\omega\omega_{[1,n]}^{\prime}})\right)\]
\[\leq c^{n}\max_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}\rho_{H}\left(\pi(\mathbb{T}_{\omega\omega_{[1,n]}^{\prime}}),\{x\}\right)+c^{n}\max_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}\rho_{H}(\{x\},K_{\omega\omega_{[1,n]}^{\prime}}).\]

Moreover, by the property (ii), we deduce that

\[c^{n}\rho_{H}\left(\{x\},\pi(\mathbb{T}_{\omega\omega_{[1,n]}^{\prime}})\right)\leq D^{\prime}(\{x\})\cdot c^{n}\cdot\max\left\{1,c^{-(|\omega\omega_{[1,n]}^{\prime}|+1)}b_{x}(|\omega\omega_{[1,n]}^{\prime}|+1)\right\}\]
\[=D^{\prime}(\{x\})\cdot c^{-(|\omega|+1)}\cdot\max\left\{c^{|\omega|+n+1},b_{x}(|\omega|+n+1)\right\}\to 0\]

as \(n\) tends to infinity (uniformly in \(\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}\)), and by the same argument \(c^{n}\rho_{H}(\{x\},K_{\omega\omega_{[1,n]}^{\prime}})\to 0\) as \(n\) tends to infinity. It follows that \(\pi(\mathbb{T}_{\omega})=K_{\omega}\) for each \(\omega\in\mathbb{T}^{[1,\ast]}\). Thus, we have proved the property (ii) in Theorem 4.5 and the uniqueness of the family of compact sets with the properties (i) and (ii) in Theorem 4.5.

We finally show the inequality (4.2). To show this, let \(A\in\mathcal{K}(X)\), \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(n\in\mathbb{N}\). We set \(D(A):=(1+c)+\sup_{a\in A}\rho(x,a)\ (>0)\).
Then, by the inequalities (4.3) and (4.4), we have

\[\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(A),\pi(\mathbb{T}_{\omega})\right)\]
\[\leq\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(A),\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\{x\})\right)+\rho_{H}\left(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\]
\[\leq c^{n}\cdot\sup_{a\in A}\rho(x,a)+(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+n+1}^{\infty}c^{k}\max_{i\in I_{k}}\rho(x,z_{i})\]
\[\leq D(A)\cdot c^{-(|\omega|+1)}\cdot\max\{c^{n+|\omega|+1},b_{x}(n+|\omega|+1)\}.\]

Hence, we have proved our theorem.

**Remark 4.7**.: In the paper [5], the limit set is defined as the limit point of the sequence \(\{\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(A)\}_{n\in\mathbb{N}_{0}}\) of non-empty compact subsets under the uniform contractivity condition and the following condition: \(\sup_{i\in I}\rho(f_{i}(x),x)<\infty\) for some \(x\in X\). However, Theorem 4.5 shows that, as far as the existence and uniqueness of the family of limit sets for general IFSs is concerned, we obtain them under conditions weaker than the above one. Indeed, by Lemma 2.4,

\[\sup_{i\in I}\rho(x,z_{i})\leq\frac{1}{1-c}\sup_{i\in I}\rho(f_{i}(x),x)<\infty\]

and it follows that \(\sum_{n\in\mathbb{N}}\{\max_{i\in I_{n}}\rho(x,z_{i})\}\cdot c^{n}<\infty\). In the papers [25], [11], [10] and [12], the limit set is generated by compatible compact subsets (that is, the limit set (the Moran set) is generated by the basic sets with the Moran structure). The family of limit sets \(\{\pi(\mathbb{T}_{\omega})\}_{\omega\in\mathbb{T}^{[1,\ast]}}\) for general IFSs in this paper is compatible with the definition of the Moran structure and coincides with the Moran sets. Indeed, by Lemma 3.6, \(\pi(\mathbb{T}_{\omega})\) is compact for each \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(\mathrm{diam}_{\rho}(\pi(\mathbb{T}_{\omega}))\) converges to \(0\) as \(|\omega|\) tends to \(\infty\), uniformly with respect to \(\omega\in\mathbb{T}^{[1,\ast]}\). In addition, by Theorems 3.10 and 4.5, we have \(f_{i}(\pi(\mathbb{T}_{\omega i}))\subset\pi(\mathbb{T}_{\omega})\) and \(f_{i}(\pi(\mathbb{T}_{\omega i}))=\pi(\sigma_{i}(\mathbb{T}_{\omega i}))\) for each \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(i\in S(\mathbb{T},\omega)\).

We finally show the following corollary of Theorem 4.5.

**Corollary 4.8**.: Let \((\{f_{i}\}_{i\in I},\mathbb{T})\) be a general IFS with the uniform contraction constant \(c\in(0,1)\) on a complete metric space \((X,\rho)\) and let \(z_{i}\in X\) be the unique fixed point of \(f_{i}\) (\(i\in I\)).
Suppose that there exists \(x\in X\) such that \[\alpha:=\limsup_{n\to\infty}\sqrt[n]{\max_{i\in I_{n}}\rho(x,z_{i})}<\frac{1}{c}.\] Then, for each \(r\in\{r>0\ |\ c\leq r<1,\ \alpha c<r\}\), we have the following properties: * \(\bigcup_{\omega_{[1,n]}^{\prime}\in\mathbb{T}_{\omega}^{[1,n]}}f_{\omega_{[1,n]}^{\prime}}(\pi(\mathbb{T}_{\omega\omega_{[1,n]}^{\prime}}))=\pi(\mathbb{T}_{\omega})\) for each \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(n\in\mathbb{N}\), and * \((c/r)^{|\omega|+1}\cdot\rho_{H}(A,\pi(\mathbb{T}_{\omega}))\)\((\omega\in\mathbb{T}^{[1,\ast]})\) is bounded for each \(A\in\mathcal{K}(X)\). In addition, the family of compact subsets \(\{K_{\omega}\}_{\omega\in\mathbb{T}^{[1,\ast]}}\) of \(X\) with the above properties (i) and (ii) is unique. Moreover, for all \(A\in\mathcal{K}(X)\) and \(\omega\in\mathbb{T}^{[1,*]}\), the sequence \(\{\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A)\}_{n\in\mathbb{N}}\) of compact subsets converges to \(\pi(\mathbb{T}_{\omega})\) as \(n\) tends to infinity exponentially fast with the rate \(r\), in the sense of the Hausdorff distance. Proof.: Let \(r\in\{r>0\ |\ c\leq r<1,\ \alpha c<r\}\). By a similar argument to the proof of Corollary 2.11, there exists \(D^{\prime}>0\) such that \[\left\{\max_{i\in I_{n}}\rho(x,z_{i})\right\}\cdot c^{n}\leq D^{\prime}\cdot r^{n} \tag{4.6}\] for all \(n\in\mathbb{N}\). It follows that the condition in Theorem 4.5 is satisfied and we obtain the property (i) in Corollary 4.8 for each \(\omega\in\mathbb{T}^{[1,*]}\), and by the inequalities (4.3), (4.4), (4.5) and (4.6) it follows that for each \(n\in\mathbb{N}\), \(\omega\in\mathbb{T}^{[1,*]}\) and \(A\in\mathcal{K}(X)\), \[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A)\right)\leq c^{n}\cdot\sup_{a\in A}\rho(x,a), \tag{4.7}\] \[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\leq(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+n+1}^{\infty}D^{\prime}\cdot r^{k}\quad\text{ and} \tag{4.8}\] \[\rho_{H}\left(\{x\},\bigcup_{\omega^{\prime}_{1}\in\mathbb{T}^{1}_{\omega}}f_{\omega^{\prime}_{1}}(\{x\})\right)\leq(1+c)\cdot\max_{i\in I_{|\omega|+1}}\rho\left(x,z_{i}\right)\leq(1+c)\cdot D^{\prime}\cdot\left(\frac{r}{c}\right)^{|\omega|+1}. \tag{4.9}\] We next show the property (ii) in Corollary 4.8. Let \(A\in\mathcal{K}(X)\) and \(\omega\in\mathbb{T}^{[1,*]}\). By the inequalities (4.9) and (4.8) with \(n=1\), it follows that \[\rho_{H}(A,\pi(\mathbb{T}_{\omega})) \leq\rho_{H}(A,\{x\})+\rho_{H}\left(\{x\},\bigcup_{\omega^{\prime}_{1}\in\mathbb{T}^{1}_{\omega}}f_{\omega^{\prime}_{1}}(\{x\})\right)+\rho_{H}\left(\bigcup_{\omega^{\prime}_{1}\in\mathbb{T}^{1}_{\omega}}f_{\omega^{\prime}_{1}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\] \[\leq\sup_{a\in A}\rho(a,x)+(1+c)\cdot D^{\prime}\cdot\left(\frac{r}{c}\right)^{|\omega|+1}+(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+2}^{\infty}D^{\prime}\cdot r^{k}\] \[\leq\sup_{a\in A}\rho(a,x)+(1+c)D^{\prime}\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+1}^{\infty}r^{k}\] \[\leq\max\left\{\sup_{a\in A}\rho(a,x),\frac{1+c}{1-r}D^{\prime}\right\}\cdot\left(\frac{r}{c}\right)^{|\omega|+1}. \tag{4.10}\] Thus, we have proved the property (ii) in Corollary 4.8. 
We now show the uniqueness of the family of the compact sets with the properties (i) and (ii) in Corollary 4.8. Let \(\{K_{\omega}\}_{\omega\in\mathbb{T}^{[1,*]}}\) be a family of the compact sets with these properties. Also, let \(\omega\in\mathbb{T}^{[1,*]}\) and \(n\in\mathbb{N}\). Then, by the same argument as in the proof of the uniqueness in Theorem 4.5, we have \[\rho_{H}(\pi(\mathbb{T}_{\omega}),K_{\omega})\leq c^{n}\rho_{H}\left(\pi(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}}),\{x\}\right)+c^{n}\rho_{H}(\{x\},K_{\omega\omega^{\prime}_{[1,n]}}).\] Moreover, by the inequality (4.10) with \(A:=\{x\}\), we deduce that \[c^{n}\rho_{H}\left(\{x\},\pi(\mathbb{T}_{\omega\omega^{\prime}_{[1,n]}})\right)\leq c^{n}\cdot\frac{1+c}{1-r}D^{\prime}\cdot\left(\frac{r}{c}\right)^{|\omega\omega^{\prime}_{[1,n]}|+1}=\frac{1+c}{1-r}D^{\prime}\cdot c^{-(|\omega|+1)}\cdot r^{|\omega|+n+1}\to 0\] as \(n\) tends to infinity, and by the same argument \(c^{n}\rho_{H}(\{x\},K_{\omega\omega^{\prime}_{[1,n]}})\to 0\) as \(n\) tends to infinity. It follows that \(\pi(\mathbb{T}_{\omega})=K_{\omega}\) for each \(\omega\in\mathbb{T}^{[1,\ast]}\). Thus, we have proved the uniqueness of the family of the compact sets with the properties (i) and (ii) in Corollary 4.8. We finally show that, for all \(A\in\mathcal{K}(X)\) and \(\omega\in\mathbb{T}^{[1,\ast]}\), the sequence \(\{\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A)\}_{n\in\mathbb{N}}\) converges to \(\pi(\mathbb{T}_{\omega})\) as \(n\) tends to infinity exponentially fast with the rate \(r\), in the sense of the Hausdorff distance. To show this, let \(A\in\mathcal{K}(X)\), \(\omega\in\mathbb{T}^{[1,\ast]}\) and \(n\in\mathbb{N}\). Then, by the inequalities (4.7) and (4.8), we deduce that \[\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A),\pi(\mathbb{T}_{\omega})\right)\] \[\leq\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(A),\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\})\right)+\rho_{H}\left(\bigcup_{\omega^{\prime}_{[1,n]}\in\mathbb{T}^{[1,n]}_{\omega}}f_{\omega^{\prime}_{[1,n]}}(\{x\}),\pi(\mathbb{T}_{\omega})\right)\] \[\leq c^{n}\cdot\sup_{a\in A}\rho(x,a)+(1+c)\cdot c^{-(|\omega|+1)}\sum_{k=|\omega|+n+1}^{\infty}D^{\prime}\cdot r^{k}\] \[\leq\sup_{a\in A}\rho(x,a)\cdot r^{n}+\frac{1+c}{1-r}c^{-(|\omega|+1)}D^{\prime}r^{|\omega|+n+1}\leq\max\left\{\sup_{a\in A}\rho(x,a),\frac{1+c}{1-r}D^{\prime}\left(\frac{r}{c}\right)^{|\omega|+1}\right\}\cdot r^{n}.\] Hence, we have proved our corollary. **Remark 4.9**.: By the same argument as in Remark 2.12, if \(\{\max_{i\in I_{n}}\rho(x,z_{i})\}_{n\in\mathbb{N}}\) is unbounded (if and only if \(\{z_{i}\ |\ i\in\cup_{n\in\mathbb{N}}I_{n}\}\subset X\) is unbounded), then the constant \(\alpha\geq 0\) in Corollary 4.8 does not depend on \(x\in X\). Note that \(\{\pi(\mathbb{T}_{\omega})\}_{\omega\in\mathbb{T}^{[1,\ast]}}\) is not uniformly bounded in general even if the assumption in Corollary 4.8 holds. Indeed, if the tree consists of a single sequence, then the general IFS reduces to a non-autonomous iteration (see Example 2.15). On the other hand, by the same argument as in Remark 2.12, we also deduce that if \(\{\max_{i\in I_{n}}\rho(x,z_{i})\}_{n\in\mathbb{N}}\) is bounded (if and only if \(\{z_{i}\ |\ i\in I\}\subset X\) is bounded), then the constant \(\alpha\geq 0\) in Corollary 4.8 depends on \(x\in X\). 
However, by the same argument as in Remark 2.9, if \(\{\max_{i\in I_{n}}\rho(x,z_{i})\}_{n\in\mathbb{N}}\) is bounded, then the condition in Theorem 4.5 is automatically satisfied and \(\{\pi(\mathbb{T}_{\omega})\}_{\omega\in\mathbb{T}^{[1,\ast]}}\) is uniformly bounded by the property (ii) in Theorem 4.5. ## 5 An example of general IFSs In this section, we consider an example of general IFSs and their limit sets. Indeed, we first give an example of general IFSs which has a connection to the theory of continued fractions, and we later give a proposition indicating the importance of the limit set in this example. Note that while the theory of continued fractions is often discussed in the theory of autonomous IFSs (see [22], [23]), it is not often discussed in the theory of generalized IFSs (see the recent paper [27] in the setting of non-autonomous IFSs). In addition, while the existence of the limit set generated by the IFS in this example already follows from the results above, little attention has been paid to this limit set since the example does not satisfy the central condition (the \(V\)-variability). Therefore, it is important to describe an example of the limit set for general IFSs even if the space \(X\) is bounded. We now give the setting of the example of general IFSs. Let \(I:=\mathbb{N}\) and \(X:=\{z\in\mathbb{C}\ |\ |z-1/2|\leq 1/2\}\), where \(|\cdot|\) is the Euclidean metric on \(\mathbb{C}\). The family \(S_{I}:=\{\phi_{b}\colon X\to X\ |\ b\in I\}\) is called the IFS of regular continued fractions, where, for each \(b\in I\), \[\phi_{b}(z):=\frac{1}{z+b}\quad(z\in X).\] Note that for all \(b\in I\), \(\phi_{b}(X)\subset X\). Indeed, let \(Y:=\{z\in\mathbb{C}\ |\ \Re z\geq 1\}\) and let \(f\colon\hat{\mathbb{C}}\to\hat{\mathbb{C}}\) be the Möbius transformation defined by \(f(z):=1/z\ \ (z\in\hat{\mathbb{C}})\). Since \(f(0)=\infty\), \(f(1)=1\) and \(f(1/2+i/2)=2/(1+i)=1-i\), we have \(f(\partial X)=\partial Y\cup\{\infty\}\), and since \(f(1/2)=2\), we have \(f(X)=Y\cup\{\infty\}\). Therefore, \(f\colon X\to Y\cup\{\infty\}\) is a homeomorphism and we deduce that \(\phi_{b}=f^{-1}\circ g_{b}\) and \(\phi_{b}(X)\subset f^{-1}(Y)\subset X\), where \(g_{b}\colon X\to Y\) is the map defined by \(g_{b}(z):=z+b\ \ (b\in I)\). Moreover, \(S_{I}\) is a family of contractive mappings on \(X\) with uniform contraction constant \(c:=4/5\). Indeed, note that \[|z+b|^{2}=|x+b+iy|^{2}=(x+b)^{2}+y^{2}=x^{2}+2bx+b^{2}+y^{2}\geq|z|^{2}+b^{2}\geq\frac{5}{4}\] for each \(z=x+iy\in X\) and for each \(b\in I\). It follows that \[|\phi_{b}(z)-\phi_{b}(z^{\prime})|=\left|\frac{1}{z+b}-\frac{1}{z^{\prime}+b}\right|=\frac{|z-z^{\prime}|}{|z+b||z^{\prime}+b|}\leq\frac{4}{5}|z-z^{\prime}|\] for each \(z,z^{\prime}\in X\). Therefore, \(S_{I}\) satisfies the condition (ii) in Definition 3.5. **Example 5.1**.: Let \(\alpha>1\) and \[\mathbb{T}_{\alpha}:=\{\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}\ |\ \omega_{1}+\cdots+\omega_{n}<n\alpha\text{ for each }n\in\mathbb{N}\ \}.\] We show that \((S_{I},\mathbb{T}_{\alpha})\) is a general IFS. To show this, note that \(1\ 1\ 1\cdots\in I^{\mathbb{N}}\) is an element of \(\mathbb{T}_{\alpha}\) since \(\alpha>1\). Also, note that \(\mathbb{T}_{\alpha}\subset I^{\mathbb{N}}\) is closed since we endow \(I\) with the discrete topology and \(I^{\mathbb{N}}\) with the product topology, and \(\Pi_{[1,n]}\colon I^{\mathbb{N}}\to I^{n}\) is continuous on \(I^{\mathbb{N}}\) for each \(n\in\mathbb{N}\). 
It remains to show that \(\mathbb{T}_{\alpha}\) is a tree with \(I\). It is easy to show that \(\Pi_{1}(\mathbb{T}_{\alpha})\) is finite. Let \(n\in\mathbb{N}\) and \(\omega\in\mathbb{T}_{\alpha}\). If \(\omega_{n+1}\geq(n+1)\alpha-\sum_{l=1}^{n}\omega_{l}\ (>0)\), then we deduce that \[\omega_{1}+\cdots+\omega_{n}+\omega_{n+1}\geq\sum_{l=1}^{n}\omega_{l}+(n+1)\alpha-\sum_{l=1}^{n}\omega_{l}=(n+1)\alpha,\] which violates the defining condition of \(\mathbb{T}_{\alpha}\). We obtain that \(\#(S(\mathbb{T}_{\alpha},\omega_{[1,n]}))<\infty\) for each \(n\in\mathbb{N}\) and \(\omega\in\mathbb{T}_{\alpha}\), and \(\mathbb{T}_{\alpha}\) is a tree with \(I\). Thus, we have proved that \((S_{I},\mathbb{T}_{\alpha})\) is a general IFS. Note that, by Definitions 3.7 and 4.1, the limit set for \((S_{I},\mathbb{T}_{\alpha})\) is the following non-empty and compact subset: \[\pi(\mathbb{T}_{\alpha}) =\{x_{1}\in X\ |\ \omega\in\mathbb{T}_{\alpha},\ \{x_{m}\}_{m\in\mathbb{N}}\text{ is recursively compatible for }\{f_{\omega_{n}}\}_{n\in\mathbb{N}}\ \}\] \[=\{[0;\omega_{1},\omega_{2},\ldots]\in X\ |\ \omega_{1}+\omega_{2}+\cdots+\omega_{n}<n\alpha\text{ for each }n\in\mathbb{N}\ \}\,,\] where \([0;\omega_{1},\omega_{2},\ldots]\) is the continued fraction defined by \[[0;\omega_{1},\omega_{2},\ldots]:=\frac{1}{\omega_{1}+\frac{1}{\omega_{2}+\cdots}}\quad(\omega:=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}).\] Here, we use the general theory of continued fractions (for example, see [16]). Now, we give a proposition to indicate the importance of the limit sets in the above example by using the results in [7]. For \(A\subset\mathbb{C}\), we denote by \(\dim_{\mathcal{H}}A\) the Hausdorff dimension of \(A\). **Proposition 5.2**.: Let \((S_{I},\mathbb{T}_{\alpha})\) be the general IFS defined above. We set \[X_{\alpha}:=\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\ \limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\omega_{i}<\alpha\ \right\}.\] Then, we have \(\dim_{\mathcal{H}}X_{\alpha}=\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha})\) and \(\dim_{\mathcal{H}}X_{\alpha}(=\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha}))\) converges to \(1\) as \(\alpha\) tends to infinity. Proof.: Let \(\alpha>1\). 
We first show that \(\dim_{\mathcal{H}}X_{\alpha}\leq\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha})\). For each \(N\geq 2\), we set \[\mathcal{I}_{N}:=\left\{\tau\in I^{N-1}\ \middle|\ \begin{array}{l}\text{there exists }\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}\text{ s.t.}\\ \omega_{[1,N-1]}=\tau,\ \sum_{i=1}^{n}\omega_{i}<n\alpha\text{ for all }n\geq N\end{array}\right\}\] \[=\left\{\tau\in I^{N-1}\ \middle|\ \tau=\tau_{1}\cdots\tau_{N-1},\ \sum_{i=1}^{N-1}\tau_{i}+(n-N+1)<n\alpha\text{ for all }n\geq N\right\}\] \[=\left\{\tau\in I^{N-1}\ \middle|\ \tau=\tau_{1}\cdots\tau_{N-1},\ \sum_{i=1}^{N-1}\tau_{i}+1<N\alpha\right\},\] where the second equality holds since the all-ones continuation \(\omega_{i}=1\ (i\geq N)\) minimizes the sums, and the third since \(\alpha>1\), so that the constraint for \(n=N\) implies the constraints for all \(n\geq N\). Note that \(\limsup_{n\to\infty}(\sum_{i=1}^{n}\omega_{i})/n<\alpha\) if and only if there exists \(N\in\mathbb{N}\) with \(N\geq 2\) such that \(\sum_{i=1}^{n}\omega_{i}<n\alpha\) for all \(n\in\mathbb{N}\) with \(n\geq N\), and by direct calculations we have \[\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \begin{array}{l}\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\ \omega_{[1,N-1]}=\tau\text{ and}\\ \sum_{i=1}^{n}\omega_{i}<n\alpha\ \text{ for all }n\geq N\end{array}\right\}\] \[=\left\{\phi_{\tau}([0;\omega_{N},\omega_{N+1},\ldots])\in X\ \middle|\ \begin{array}{l}\omega_{N}\omega_{N+1}\cdots\in I^{\mathbb{N}}\text{ and}\\ \sum_{i=1}^{N-1}\tau_{i}+\sum_{i=N}^{n}\omega_{i}<n\alpha\ \text{ for all }n\geq N\end{array}\right\}.\] Therefore, since \(\phi_{\tau}\) is bi-Lipschitz on \(X\) for each \(N\geq 2\) and \(\tau\in\mathcal{I}_{N}\) (for example, see [22], [15]), we deduce that \[\dim_{\mathcal{H}}X_{\alpha}\leq\sup_{N\geq 2}\sup_{\tau\in\mathcal{I}_{N}}\dim_{\mathcal{H}}\phi_{\tau}\left(\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \begin{array}{l}\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}\text{ and, for all }m\geq 1,\\ \sum_{i=1}^{N-1}1+\sum_{i=1}^{m}\omega_{i}<m\alpha+(N-1)\alpha\end{array}\right\}\right)\] \[=\sup_{N\geq 2}\sup_{\tau\in\mathcal{I}_{N}}\dim_{\mathcal{H}}\left(\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \begin{array}{l}\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}}\text{ and, for all }m\geq 1,\\ \sum_{i=1}^{N-1}1+\sum_{i=1}^{m}\omega_{i}<m\alpha+(N-1)\alpha\end{array}\right\}\right)\] \[=\sup_{N\geq 2}\dim_{\mathcal{H}}\left(\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \begin{array}{l}\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\ \omega_{[1,N-1]}=\mathbb{I},\\ \sum_{i=1}^{n}\omega_{i}<n\alpha\ \text{for all }n\geq N\end{array}\right\}\right)\] \[\leq\dim_{\mathcal{H}}\left(\left\{[0;\omega_{1},\omega_{2},\ldots]\in X\ \middle|\ \begin{array}{l}\omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\\ \sum_{i=1}^{n}\omega_{i}<n\alpha\ \text{for all }n\geq 1\end{array}\right\}\right)=\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha}),\] where \(\mathbb{I}=\underbrace{1\cdots 1}_{N-1}\in\mathcal{I}_{N}\). To show that \(\dim_{\mathcal{H}}X_{\alpha}\geq\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha})\), let \(\epsilon\in(0,\alpha-1)\). Since \[\pi(\mathbb{T}_{\alpha-\epsilon}) \subset\{[0;\omega_{1},\omega_{2},\ldots]\in X\ |\ \omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\ \limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\omega_{i}\leq\alpha-\epsilon\ \}\] \[\subset\{[0;\omega_{1},\omega_{2},\ldots]\in X\ |\ \omega=\omega_{1}\omega_{2}\cdots\in I^{\mathbb{N}},\ \limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\omega_{i}<\alpha\ \}=X_{\alpha},\] we deduce that \(\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha-\epsilon})\leq\dim_{\mathcal{H}}X_{\alpha}\). 
Now, by the continuity of the dimension function \(\alpha\mapsto\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha})\) (Theorem 1 in [7]), it follows that \(\dim_{\mathcal{H}}\pi(\mathbb{T}_{\alpha})\leq\dim_{\mathcal{H}}X_{\alpha}\). Also, by Theorem 1 in [7], the rest of the claims follows. Hence, we have proved our proposition. ## Acknowledgment The author would like to thank Hiroki Sumi, Yuto Nakajima and Mitsuhiro Shishikura for giving helpful comments on Sections 3 and 4. The author would also like to thank Shunsuke Usuki for giving helpful comments on Section 5. The author is supported by JST CREST Grant Number JPMJCR1913.
2308.03254
Electrostatic interactions between charge regulated spherical macroions
We study the interaction between two charge regulating spherical macroions with dielectric interior and dissociable surface groups immersed in a monovalent electrolyte solution. The charge dissociation is modelled via the Frumkin-Fowler-Guggenheim isotherm, which allows for multiple adsorption equilibrium states. The interactions are derived from the solutions of the mean-field Poisson-Boltzmann type theory with charge regulation boundary conditions. For a range of conditions we find symmetry breaking transitions from symmetric to asymmetric charge distribution exhibiting annealed charge patchiness, which results in like-charge attraction even in a univalent electrolyte -- thus fundamentally modifying the nature of electrostatic interactions in charge-stabilized colloidal suspensions.
Hu Ruixuan, Arghya Majee, Jure Dobnikar, Rudolf Podgornik
2023-08-07T02:18:40Z
http://arxiv.org/abs/2308.03254v4
# Electrostatic interactions between charge regulated spherical macroions ###### Abstract We study the interaction between two charge regulating spherical macroions with dielectric interior and dissociable surface groups immersed in a monovalent electrolyte solution. The charge dissociation is modelled via the Frumkin-Fowler-Guggenheim isotherm, which allows for multiple adsorption equilibrium states. The interactions are derived from the solutions of the mean-field Poisson-Boltzmann type theory with charge regulation boundary conditions. For a range of conditions we find symmetry breaking transitions from symmetric to asymmetric charge distribution exhibiting annealed charge patchiness, which results in like-charge attraction even in a univalent electrolyte - thus fundamentally modifying the nature of electrostatic interactions in charge-stabilized colloidal suspensions. + Footnote †: Email: [email protected] + Footnote †: Email: [email protected] Also affiliated with: Department of Physics, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia. ## I Introduction Electrostatic interactions are a fundamental component of molecular forces in the colloid and nanoscale domains [1], dominating in particular various phenomena in biological and biomolecular contexts, as exemplified by their role in the physics of DNA [2] as well as other macromolecules [3], physics of polyelectrolytes [4; 5; 6; 7] and polyelectrolyte brushes [8; 9; 10], protein physics [11], membrane physics [12; 13; 14; 15], physics of nucleic acids [16] and physics of viruses [17; 18], with many outstanding contributions of Fyl Pincus. Nanoscale electrostatics has been modeled at various levels [19], standardly based on the Deryagin-Landau-Verwey-Overbeek (DLVO) paradigm [20], and in particular on the mean-field formulation of the Poisson-Boltzmann (PB) theory [21; 22; 23], which has well-defined limits of applicability, some pertaining to the model and some to the methodology [24; 25]. A very common variety of the PB theory is its linearized version, the Debye-Hückel (DH) theory [26], which in many cases allows for analytic calculations of interactions between macromolecular ions [26; 27; 28; 29]. Within the mean-field PB descriptions of the electrostatic interaction between two charged surfaces [30; 31; 32; 33], one usually assumes constant surface charge density or constant surface potential boundary conditions [21; 34], unless the surface charges are inhomogeneous [35; 36]. Although this simplifies the problem, most common naturally occurring nanoparticle and macromolecular surfaces of interest, e.g., hard colloidal particles [37], soft biological molecules including proteins [38], membranes, and lipid vesicles [39; 40], rarely satisfy either of them [41; 42]. They respond to their environment, especially to the presence of each other, in a way that modifies both the charge density and the surface potential, adjusting them according to the separation between them and the bathing solution conditions [43; 44]. This conceptual framework, with a long history, is formally referred to as _charge regulation_ [45]. Electrostatic interactions between two small, point-like charge-regulated macroions have been studied in the case of the Langmuir adsorption isotherm [46], and their connection with the Kirkwood-Shumaker attractive fluctuation interactions has been elucidated [47; 48; 49]. 
In addition, the charge regulation framework has been applied to planar, chemically identical macroion surfaces with equal adsorption/desorption properties [50; 51; 52; 53], to chemically non-identical surfaces with different adsorption/desorption properties [54; 55; 56; 57], and to patchy surfaces with inhomogeneous charge distribution [58]. Interactions in non-planar systems have been studied to a lesser extent [59; 60] because the electrostatic potential inhomogeneities induced on curved surfaces need to be approached either by additional analytical approximations [61; 62; 63] or by intensive numerical schemes. The important point of departure for us is that, for chemically identical surfaces, the surface charge densities have without exception been assumed to be equal on both surfaces, based upon general symmetry considerations [64; 65; 66]. However, the underlying physical reasoning for such an assumption is not general and not based upon the detailed chemical nature of the surfaces bearing charge. The fact that two surfaces are chemically identical and, therefore, interact in the same way with the adjacent liquid, is _not sufficient_ to infer equal surface charge densities.
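To illustrate how a charge-regulation boundary condition couples surface chemistry to the electrostatic potential, the following is a minimal Python sketch for the simplest possible setting: a single planar surface with dissociable acidic groups in the linearized Debye-Hückel regime, closed by an FFG-type isotherm. This is not the model solved in this paper (which treats two dielectric spheres within the full nonlinear Poisson-Boltzmann theory); the specific isotherm form and all parameter values below are illustrative assumptions.

```python
import math

# Physical constants (SI) and illustrative parameters -- not from the paper.
e     = 1.602e-19        # elementary charge [C]
kT    = 4.11e-21         # thermal energy at room temperature [J]
eps   = 80 * 8.854e-12   # permittivity of water [F/m]
kappa = 1.0e9            # inverse Debye length, ~0.1 M monovalent salt [1/m]
gamma = 2.0e18           # density of dissociable surface sites [1/m^2]
dpH   = 1.0              # pH - pK of the surface groups
chi   = 4.0              # FFG-type lateral interaction strength [kT units]

def theta_update(theta):
    """One fixed-point update for the fraction of dissociated (charged) sites.
    Dissociation AH <-> A- + H+ with an FFG-type isotherm,
        theta/(1-theta) = 10**(pH-pK) * exp(e*psi0/kT - chi*theta),
    closed by the linearized (Debye-Hueckel) relation sigma = eps*kappa*psi0,
    with surface charge sigma = -e*gamma*theta."""
    psi0 = -e * gamma * theta / (eps * kappa)          # surface potential [V]
    ratio = 10**dpH * math.exp(e * psi0 / kT - chi * theta)
    return ratio / (1.0 + ratio)

theta = 0.5
for _ in range(200):                                   # damped fixed-point iteration
    theta = 0.5 * theta + 0.5 * theta_update(theta)
print(f"dissociated fraction theta = {theta:.3f}")
print(f"surface charge sigma = {-e * gamma * theta:.3e} C/m^2")
```

For sufficiently strong lateral interaction \(\chi\), the self-consistency equation in this sketch can develop multiple roots, a single-surface analogue of the multiple adsorption equilibrium states that the Frumkin-Fowler-Guggenheim isotherm allows for.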
2306.14284
Learning Broadcast Protocols
The problem of learning a computational model from examples has been receiving growing attention. For the particularly challenging problem of learning models of distributed systems, existing results are restricted to models with a fixed number of interacting processes. In this work we look for the first time (to the best of our knowledge) at the problem of learning a distributed system with an arbitrary number of processes, assuming only that there exists a cutoff, i.e., a number of processes that is sufficient to produce all observable behaviors. Specifically, we consider fine broadcast protocols, these are broadcast protocols (BPs) with a finite cutoff and no hidden states. We provide a learning algorithm that can infer a correct BP from a sample that is consistent with a fine BP, and a minimal equivalent BP if the sample is sufficiently complete. On the negative side we show that (a) characteristic sets of exponential size are unavoidable, (b) the consistency problem for fine BPs is NP hard, and (c) that fine BPs are not polynomially predictable.
Dana Fisman, Noa Izsak, Swen Jacobs
2023-06-25T16:26:48Z
http://arxiv.org/abs/2306.14284v2
# Learning Broadcast Protocols ###### Abstract The problem of learning a computational model from examples has been receiving growing attention. Models of distributed systems are particularly challenging since they encompass an added succinctness. While positive results for learning some models of distributed systems have been obtained, so far the considered models assume a fixed number of interacting processes. In this work we look for the first time (to the best of our knowledge) at the problem of learning a distributed system with an arbitrary number of processes, assuming only that there _exists_ a cutoff. Specifically, we consider _fine broadcast protocols_; these are broadcast protocols (BPs) with a finite cutoff and no hidden states. We provide a learning algorithm that, given a sample consistent with a fine BP, can infer a correct BP with the help of an SMT solver. Moreover we show that the class of fine BPs is teachable, meaning that we can associate a finite set of words \(S_{B}\) with each BP \(B\) in the class (a so-called characteristic set) so that the provided learning algorithm can infer a correct BP from any consistent sample subsuming \(S_{B}\). On the negative side we show that (a) characteristic sets of exponential size are unavoidable, (b) the consistency problem for fine BPs is NP hard, and (c) fine BPs are not polynomially predictable. ## I Introduction Learning computational models has a long history starting with the seminal works of Gold [23, 24] and Angluin [3]. Questions regarding learning computational models have also raised a lot of interest in the verification community [33, 34]. Many results regarding the learnability of various computational models used in verification have already been obtained (e.g. [8, 10, 13, 5, 1, 14, 4, 35, 21]). Particularly challenging is learning of concurrent computational models, as they offer another level of succinctness, and usually have no unique minimal model. Various results regarding learning concurrent models have already been obtained [11, 19, 31], but only for models with a fixed number of processes. Broadcast protocols (BPs) are a powerful concurrent computational model, allowing the synchronous communication of the sender of an action with an arbitrary number of receivers [16]. BPs have mainly been studied in the context of parameterized verification, i.e., solving the question whether a given property holds for all systems where an arbitrary number of processes executes a given protocol. Esparza et al. [18] have shown that this problem is decidable for safety properties, and undecidable for liveness. The challenge in verifying parameterized systems such as broadcast protocols is that a parameterized system concisely represents an infinite family of systems: for each natural number \(n\) it includes the system where \(n\) processes interact. The system is correct only if it satisfies the specification for any number \(n\) of interacting processes. A variety of approaches has been investigated to overcome this. Some of these are based on the notion of _cutoff_. Generally speaking, a cutoff is a number \(c\) of processes such that a given property holds for any instance of the system with \(n\geq c\) processes if and only if it holds for the cutoff system. Thus, cutoffs provide a complete method for proving safety properties of parameterized systems [32], i.e., for every safety property there exists a cutoff. 
In the literature, many results exist that provide cutoffs for whole _classes of properties_ in a given computational model. For token rings, Emerson and Namjoshi provide cutoffs for branching-time safety and liveness properties [17]. For guarded protocols, cutoffs for linear-time safety and liveness properties have been studied [15, 7, 28]. For rendezvous systems, it has been shown that there are protocols that do not have a cutoff for all linear-time properties, but if a protocol has a cutoff then all of its executions are (\(\omega\)-)regular [2]. Finally, cutoffs also enable the _synthesis_ of implementations for parameterized systems from formal specifications [27, 26, 30], a problem closely related to learning. In this paper, we develop a learning approach for broadcast protocols. Given the expressiveness of BPs and the complexity of the general problem, we make some assumptions to keep the problem manageable. In particular, we assume that the BP under consideration has no hidden states, i.e., every state has at least one broadcast sending action by which it can be recognized, and that there _exists_ a cutoff, i.e., a number \(c\) such that the language derived by \(c\) processes is the same as the language derived by any number greater than \(c\). We call such broadcast protocols _fine_. We note that not all broadcast protocols have a cutoff (whether or not they have hidden states), and that when a cutoff exists the derived language is regular.1 The fact that the derived language is regular also holds in previous work on learning concurrent models (communicating automata [11], workflow Petri nets [19], and negotiation protocols [31]) merely since a finite number of essentially finite state machines is under consideration. We emphasize that this does not reduce the problem to learning a regular language, since the aim is to obtain the concurrent representation, which we show to be much more succinct than a respective DFA for the language. Moreover, the problem we consider goes way beyond what has been considered in previous work in the sense that our approach works if _there exists_ a cutoff, but in contrast to existing approaches it does not require that the cutoff is known a priori, or that the system is assumed to consist of a known fixed number of processes. We focus on passive learning paradigms [12]. Specifically, we consider the following problems: 1) _Consistency_ -- whether there exists a BP with at most \(k\) states that agrees with a given sample, 2) _Inference_ -- given a sample consistent with a BP, return a BP that is consistent with the sample, 3) _Teachability_ -- whether there exists a finite sample \(\mathcal{S}_{B}\), a so-called _characteristic set_, that can be associated with any BP \(B\) such that a learning algorithm can correctly infer a BP equivalent to \(B\) from any sample subsuming \(\mathcal{S}_{B}\) (and consistent with \(B\)), 4) _CS Polynomiality_ -- if the class is teachable, whether characteristic sets are of polynomial size, and 5) _Polynomial Predictability_ -- whether a learner can correctly classify an unknown word with high probability after asking polynomially many membership and draw queries. We show, in Sec.III, that consistency is NP-hard for the class of fine BPs, and prove a few basic properties of BPs relevant to learning in Sec.IV. In Sec.V, we provide an inference algorithm that, given a sample of words that are consistent with a fine BP, can infer a correct BP. 
The inference approach is constraint-based and can be implemented using an SMT solver. In Sec.VI, we show that this class of BPs is _teachable_. In Sec.VII, we show that there exists a family of fine BPs for which a characteristic set of polynomial size cannot be obtained. The same family shows the succinctness of fine BPs, since the minimal corresponding DFA is exponentially larger than the fine BP. In Sec.VIII we show that fine BPs are not polynomially predictable. We have implemented the inference algorithm and report on some preliminary experimental data in Sec.IX. ## II Preliminaries ### _Broadcast Protocols_ Broadcast Protocols (BP). A broadcast protocol \(B=(S,s_{0},L,R)\) consists of a set of states \(S\) with an initial state \(s_{0}\in S\), a set of labels \(L\) and a transition relation \(R\subseteq S\times L\times S\), where \(L=\{a!!,a??\mid a\in A\}\) for some set of actions \(A\). A transition labeled with \(a!!\) is a _sending transition_, and a transition labeled with \(a??\) is a _receiving transition_, also called a _response_.2 For each action \(a\in A\), a receiving transition should be enabled from every state. In addition, following [9], we assume that for each action \(a\), there is a unique state \(s_{a}\) with an outgoing sending transition on \(a!!\). A state \(s\) in a broadcast protocol is said to be _hidden_ if it has no outgoing sending transition. In this paper we consider broadcast protocols with no hidden states. Footnote 2: Some models of BPs also consider rendezvous transitions, usually labeled with \(a!\) and \(a?\), but these can be simulated by broadcast transitions with a quadratic blowup in the number of states. The Counter System \(B^{n}\). For systems composed of \(n\) instances of a given broadcast protocol \(B\), we will assume some ordering \(s_{0},s_{1},\ldots,s_{|S|-1}\) on \(S\), and identify global states with vectors from \([n]^{|S|}\) where \([n]\) denotes the set \(\{0,1,\ldots,n\}\). We sometimes refer to a global state as a state-vector. We use bold font for global states and \(\textbf{q}[i]\) to denote the entry in position \(i\) of a state-vector **q**. For example, let \(\textbf{u}_{j}\) be the unit vector with \(\textbf{u}_{j}[j]=1\) and \(\textbf{u}_{j}[i]=0\) for all \(i\neq j\). Then the global state where all \(n\) instances of \(B\) are in \(s_{0}\) is the vector \(n\cdot\textbf{u}_{0}\). If **q** is a state vector with \(\textbf{q}[i]\geq 1\) we say that \(i\) is _lit_ in **q**. If local state \(i\) has a sending transition on action \(a\) we say that \(a\) is _enabled_ from \(i\); if \(i\) is lit in **q** we also say that \(a\) is _enabled_ from **q**. With each action \(a\) we can associate the _broadcast matrix_ of action \(a\) in \(B\), denoted \(\textbf{M}_{a}\), which is an \(|S|\times|S|\) matrix with \(\textbf{M}_{a}(k,m)=1\) if \((s_{k},a??,s_{m})\in R\), and \(\textbf{M}_{a}(k,m)=0\) otherwise. 
Then, the parallel composition of \(n\) copies of a broadcast protocol \(B=(S,s_{0},L,R)\) is the _counter system_ \(B^{n}=([n]^{|S|},n\cdot\textbf{u}_{0},A,T)\), where \((\textbf{q},a,\textbf{q}^{\prime})\in T\) iff there exists \((s_{i},a!!,s_{j})\in R\) with \(\textbf{q}[i]\geq 1\) and \(\textbf{q}^{\prime}\) is obtained from **q** in the following way: \[\begin{array}{rcl}\textbf{p}&=&\textbf{q}-\textbf{u}_{i}\\ \textbf{p}^{\prime}&=&\textbf{p}\cdot\textbf{M}_{a}\\ \textbf{q}^{\prime}&=&\textbf{p}^{\prime}+\textbf{u}_{j}.\end{array}\] That is, given the current state vector **q**, the state vector \(\textbf{p}=\textbf{q}-\textbf{u}_{i}\) corresponds to the sending process leaving the state \(i\). The state vector \(\textbf{p}^{\prime}=\textbf{p}\cdot\textbf{M}_{a}\) describes the situation after the other processes take the receiving transition on \(a\). Finally, \(\textbf{q}^{\prime}=\textbf{p}^{\prime}+\textbf{u}_{j}\) is the resulting state-vector after the sending process arrives at its target location. An _execution_ of \(B^{n}\) is a sequence \(\textbf{q}_{0},a_{1},\textbf{q}_{1},a_{2},\ldots,a_{m},\textbf{q}_{m}\) such that \((\textbf{q}_{i},a_{i+1},\textbf{q}_{i+1})\in T\) for every \(0\leq i\leq m-1\). We say that the execution is _based on_ the sequence of actions \(a_{1},\ldots,a_{m}\) and that \(B^{n}(a_{1}\ldots a_{m})=\textbf{q}_{m}\). We say that a word \(w\in A^{*}\) is _feasible_ in \(B^{n}\) if there is an execution of \(B^{n}\) based on \(w\). The _language_ of \(B^{n}\), denoted \(L(B^{n})\), is the set of all feasible words for \(n\) processes, and the language of \(B\), denoted \(L(B)\), is the union of \(L(B^{n})\) over all \(n\in\mathbb{N}\). A broadcast protocol \(B\) is said to have a _cutoff_ \(k\in\mathbb{N}\) if for any \(k^{\prime}>k\) it holds that \(L(B^{k})=L(B^{k^{\prime}})\). We say that a broadcast protocol with no hidden states is _fine_ if it has a cutoff. We use \(\mathcal{F}\) for the class of fine BPs. Let \(B_{1}\) and \(B_{2}\) be two BPs. We say that \(B_{1}\) and \(B_{2}\) are _equivalent_ iff \(L(B_{1})=L(B_{2})\). Note that unlike the case of DFAs, there is no unique minimal fine BP, as shown by the example in Fig.1. ### _Passive Learning and Characteristic Sets_ A _sample_ for a BP \(B\) is a set \(\mathcal{S}\) of triples in \(A^{*}\times\mathbb{N}\times\mathbb{B}\) where \(\mathbb{B}=\{\texttt{T},\texttt{F}\}\). A triple \((w,n,\texttt{T})\) is consistent with a BP \(B\) if \(w\) is feasible in \(B^{n}\). Similarly, a triple \((w,n,\texttt{F})\) is consistent with a BP \(B\) if \(w\) is infeasible in \(B^{n}\). A sample is _consistent_ with a BP \(B\) if all triples in it are consistent with \(B\). We consider the following problems related to passive learning a class \(\mathcal{C}\) of broadcast protocols. **Definition II.1** (Consistency).: _Given a sample \(\mathcal{S}\) and \(k\in\mathbb{N}\) determine whether there exists a BP \(B\in\mathcal{C}\) consistent with \(\mathcal{S}\) with at most \(k\) states._ **Definition II.2** (Inference).: _Devise an algorithm that, given a sample \(\mathcal{S}\) that is consistent with some BP in \(\mathcal{C}\), returns a BP \(B\in\mathcal{C}\) that is consistent with \(\mathcal{S}\). 
We refer to such an algorithm as an inference algorithm._ **Definition II.3** (Teachability).: _We say that \(\mathcal{C}\) is teachable if there exists an inference algorithm \(\mathfrak{A}\) for \(\mathcal{C}\) such that for every \(B\in\mathcal{C}\) it is possible to construct a sample \(\mathcal{S}_{B}\) such that \(\mathfrak{A}\) will return a BP \(B^{\prime}\) for which \(L(B^{\prime})=L(B)\) when applied to any sample \(\mathcal{S}\) that subsumes \(\mathcal{S}_{B}\) and is consistent with \(B\). In this case we refer to the sample \(\mathcal{S}_{B}\) as the characteristic set for \(B\)._ **Definition II.4** (Polynomial CS).: _Given that \(\mathcal{C}\) is teachable, determine whether every \(B\in\mathcal{C}\) has a characteristic set of size polynomial in \(B\)._ We use the number of states in \(B\) as a measure for its size. The size of a characteristic set is measured by the sum of the lengths of the words in it. ## III Consistency is NP-Hard for fine BPs We show below that consistency is NP-hard even for fine BPs. We note that hardness is expected since DFA consistency is NP-hard, but it does not directly follow from hardness of DFA consistency. Below we give a direct proof for the NP-hardness of BP consistency. In appendix A we give an alternative proof that goes via a reduction from DFA consistency. Regarding completeness, we note that given a BP \(B\) and a pair \((w,n)\in A^{*}\times\mathbb{N}\) it is possible to check in polynomial time whether \(w\) is feasible in \(B^{n}\) by developing the state vector \(n\cdot\textbf{u}_{0}\) along the word \(w\) in \(B\). Consequently, and since a BP with \(m\) states over a set of actions \(A\) can be described in size polynomial in \(m\) and \(|A|\), if \(m\) is given in unary then BP-consistency is NP-complete. **Theorem III.1**.: _The consistency problem for fine BPs is NP-hard._ Proof.: The proof is by reduction from the problem of _all-eq-3SAT_ and is inspired by a recent proof on the hardness of DFA consistency [29]. The problem of _all-eq-3SAT_ asks, given an all-eq-3CNF formula \(\varphi\), whether it has a satisfying assignment. Here, an all-eq-3CNF formula is a 3CNF formula where in each clause either all literals are positive or all literals are negative. This problem is known to be NP-complete. Let \(\varphi=C_{1}\wedge C_{2}\wedge\ldots\wedge C_{m}\) be an all-eq-3CNF formula over a set of variables \(V=\{x_{1},x_{2},\ldots,x_{n}\}\). We take the number \(k\) to be \(n+m+4\). For the alphabet of the sample we take \(A=\{a_{i}\ |\ 1\leq i\leq m+n\}\cup\{a,b,c,d,e\}\). We devise a sample \(\mathcal{S}\) using four disjoint sets of words: \(P_{1}\), \(P_{2}\), \(N_{1}\), \(N_{2}\). The sample then consists of the triples \(\{(w,1,\text{T})\ |\ w\in P_{1}\}\cup\{(w,2,\text{T})\ |\ w\in P_{2}\}\cup\{(w,1,\text{F})\ |\ w\in N_{1}\}\cup\{(w,2,\text{F})\ |\ w\in N_{2}\}\). The sets are defined as follows \[\begin{array}{l}P_{1}\ =\ \{a_{1}a_{2}\ldots a_{m+n}aa\}\cup\{bcc\}\\ P_{2}\ =\ \{a_{1}a_{2}\ldots a_{i}bcdd\ |\ 1\leq i\leq m,\ C_{i}\text{ is positive}\}\ \cup\ \cdots\end{array}\] A \(c??\) response to the state F makes the words \(a_{1}a_{2}\ldots a_{i}bcee\) feasible in \(B^{2}\) for every negative clause \(C_{i}\) for which \(\overline{x_{j}}\in C_{i}\) (again, provided that a corresponding \(b??\) transition from \(C_{i}\) to \(x_{j}\) exists). The first set of \(N_{2}\) guarantees that \(b??\) from a state corresponding to a certain clause does not land in a state corresponding to another clause. 
The second set of \(N_{2}\) guarantees that \(b??\) from a state corresponding to a certain clause does not land in a state corresponding to a variable that does not belong to the clause. Similar arguments show that if \(\varphi\) has no satisfying assignments then no BP with fewer than \(k\) states agrees with the sample. We note that the BP constructed in the proof of Thm.III.1 has no hidden states and a cutoff (the cutoff is \(2\)). It follows that BP consistency is NP-hard also for the class \(\mathcal{F}_{2}\) of fine BPs with cutoff \(2\). ## IV Properties of Broadcast Protocols Below we establish some properties regarding broadcast protocols that will be useful in devising the learning algorithm. **Lemma IV.1** (Prefix-closedness and monotonicity).: _If \(B\) is a BP then \(L(B)\) is prefix-closed. If \(w\in A^{*}\) is feasible in \(B^{k}\) then \(w\) is feasible in \(B^{\ell}\), that is, \(L(B^{k})\subseteq L(B^{\ell})\) for all \(\ell>k\)._ Proof.: Prefix-closedness holds since if \(a_{1}a_{2}\ldots a_{n}\) is feasible, then for every \(1\leq i\leq n\) the action \(a_{i}\) is feasible after reading the prefix \(a_{1}a_{2}\ldots a_{i-1}\). For monotonicity, for two state vectors \(\mathbf{p}\) and \(\mathbf{q}\) we say that \(\mathbf{q}\geq\mathbf{p}\) if for every \(i\in[|S|]\) we have that \(\mathbf{q}[i]\geq\mathbf{p}[i]\). Note that if \(\mathbf{q}\geq\mathbf{p}\) and \(a\in A\) is enabled in \(\mathbf{p}\), then \(a\) is also enabled in \(\mathbf{q}\). Let \(w=a_{1}a_{2}\ldots a_{m}\) and let \(\mathbf{p}_{0},a_{1},\mathbf{p}_{1},a_{2},\ldots,a_{m},\mathbf{p}_{m}\) be the execution of \(B^{k}\) on \(w\). We can construct an execution \(\mathbf{q}_{0},a_{1},\mathbf{q}_{1},a_{2},\ldots,a_{m},\mathbf{q}_{m}\) of \(B^{\ell}\) by induction on the length of \(w\) such that for every \(i\) we have that \(\mathbf{q}_{i}\geq\mathbf{p}_{i}\), entailing that \(w\) is feasible in \(B^{\ell}\) as well. **Lemma IV.2** (Step by step progress).: _Let \(w\in A^{*}\), \(a\in A\), and \(m<n\). If \(w\in L(B^{m})\) and \(wa\notin L(B^{m})\) yet \(wa\in L(B^{n})\), then \(wa\in L(B^{m+1})\)._ Proof.: Suppose towards contradiction that \(w\in L(B^{m})\) and \(wa\notin L(B^{m})\) yet \(wa\notin L(B^{m+1})\). That is, neither one of the processes that take some sending transition when executing \(w\) in \(B^{m+1}\), nor one of the processes only taking receiving transitions (including the additional process compared to \(B^{m}\)), is in the state that has the sending transition on \(a\). Since any further additional process will behave in the same way as the additional process in \(B^{m+1}\), \(wa\) can never become feasible, contradicting that it is feasible in \(B^{n}\). Recall that fine BPs have no canonical minimal representation, in the sense that, as shown in Fig.1, there can be two non-isomorphic BPs for the same language. The fact that there is no canonical minimal representation is often the main obstacle in obtaining a learning algorithm. The following important lemma asserts that while two minimal fine BPs may not be isomorphic, there is a tight correspondence between them. Since every action can be fired from a unique state, and from every state at least one action can be fired, in a minimal BP the set of actions is partitioned among the states, and if there is a state \(s_{1}\) in \(B_{1}\) whose set of sending transitions is \(A^{\prime}=\{a_{i_{1}},a_{i_{2}},\ldots,a_{i_{k}}\}\) then there should be a state \(s_{2}\) in \(B_{2}\) for which the set of sending transitions is exactly \(A^{\prime}\). 
So we can define such a mapping between the states of two minimal BPs, and it must be that on every word \(w\), if \(\mathbf{p}_{w}\) and \(\mathbf{q}_{w}\) are the state vectors \(B_{1}\) and \(B_{2}\) reach after reading \(w\), resp., then if state \(s_{1}\) is lit in \(\mathbf{p}_{w}\) then the corresponding state \(s_{2}\) (that agrees on the set of sending actions) is lit in \(\mathbf{q}_{w}\). In the following we use \(f^{\textsf{act}}(s)=A^{\prime}\) if \(A^{\prime}\) is the set of sending actions from \(s\). **Lemma IV.3** (Relation between two minimal equivalent BPs).: _Let \(B_{1}\) and \(B_{2}\) be two minimal BPs with sets of states \(S_{1}\) and \(S_{2}\) such that \(L(B_{1})=L(B_{2})\). Then for every \(m\in\mathbb{N}\) it holds that \(L(B_{1}^{m})=L(B_{2}^{m})\) and there exists a bijection \(h:S_{1}\to S_{2}\) satisfying that \(f^{\textsf{act}}(s)=f^{\textsf{act}}(h(s))\) for any \(s\in S_{1}\); and for any \(m\in\mathbb{N}\) and \(w\in A^{*}\)_ * _if reading_ \(w\) _in_ \(B_{1}^{m}\) _leads to global state_ \(\textbf{p}_{w}\) _and reading_ \(w\) _in_ \(B_{2}^{m}\) _leads to global state_ \(\textbf{q}_{w}\) _then for every state_ \(i\)_,_ \(\textbf{p}_{w}[i]\) _is lit if and only if_ \(\textbf{q}_{w}[h(i)]\) _is lit._ Proof.: The proof is by induction on the number \(m\) of interacting processes and the length of \(w\). For \(m=1\) and \(w=\epsilon\), we have \(\textbf{p}_{\epsilon}=\textbf{u}_{i}\) and \(\textbf{q}_{\epsilon}=\textbf{u}_{j}\) for some \(i,j\in\{0,\ldots,|S|-1\}\), where \(|S|:=|S_{1}|=|S_{2}|\). It follows that \(s_{i}\) and \(s_{j}\) are the initial states of \(B_{1}\) and \(B_{2}\), resp. Thus, \(f^{\textsf{act}}(s_{i})=f^{\textsf{act}}(s_{j})\), as otherwise there is an action \(a\) that is enabled in \(B_{1}^{1}\) and not in \(B_{2}^{1}\) or vice versa (and therefore also in any \(B_{1}^{m}\) with \(m\geq 1\)), contradicting that \(L(B_{1})=L(B_{2})\). Hence we can set \(h(i)=j\) and the claim holds. For \(m=1\) and \(w=ua\) for \(u\in A^{*}\) and \(a\in A\), by the induction hypothesis we know that for every state \(i\), \(\textbf{p}_{u}[i]\) is lit if and only if \(\textbf{q}_{u}[h(i)]\) is lit. Since we have only one process, all global states are unit vectors. Thus, it must be that \(\textbf{p}_{u}=\textbf{u}_{i}\) and \(\textbf{q}_{u}=\textbf{u}_{j}\) for some \(i,j\in\{0,\ldots,|S|-1\}\). That is, \(i\) and \(j\) are the indices of the states in \(B_{1},B_{2}\) from which \(a\) is enabled, resp. Let \(i^{\prime},j^{\prime}\in\{0,\ldots,|S|-1\}\) be the indices of the states that \(B_{1},B_{2}\) reach after reading \(ua\). Then \(\textbf{p}_{ua}=\textbf{u}_{i^{\prime}}\) and \(\textbf{q}_{ua}=\textbf{u}_{j^{\prime}}\). Since this is true for any action \(b\) that is feasible in \(B_{1}\) or \(B_{2}\) after \(u\), \(h(i^{\prime})=j^{\prime}\) satisfies the claim. Note that this also shows that any word \(w\) that is feasible in \(B_{1}^{1}\) is also feasible in \(B_{2}^{1}\) and vice versa. Assume the claim holds for \(m\); we show it holds for \(m+1\). Consider a word \(w\). Let \(\textbf{p}_{w}\) and \(\textbf{q}_{w}\) be the state vectors \(B_{1}^{m}\) and \(B_{2}^{m}\) reach after reading \(w\). Then by the induction hypothesis, \(L(B_{1}^{m})=L(B_{2}^{m})\) and for every state \(i\) we have that \(\textbf{p}_{w}[i]\) is lit if and only if \(\textbf{q}_{w}[h(i)]\) is lit. Let \(\textbf{p}^{\prime}_{w}\) and \(\textbf{q}^{\prime}_{w}\) be the state vectors \(B_{1}^{m+1}\) and \(B_{2}^{m+1}\) reach after reading \(w\). 
Then \(\textbf{p}_{w}\) and \(\textbf{p}^{\prime}_{w}\) agree at every index \(i\in\{0,\ldots,|S|-1\}\) but one (and similarly for the **q**'s). For all of these indices the claim holds by the induction hypothesis. Let \(j\) be the index with \(\textbf{p}_{w}[j]\neq\textbf{p}^{\prime}_{w}[j]\), i.e., \(\textbf{p}^{\prime}_{w}[j]=\textbf{p}_{w}[j]+1\). If \(\textbf{p}_{w}[j]\geq 1\), then by the induction hypothesis for every state \(i\) we have that \(\textbf{p}_{w}[i]\) is lit if and only if \(\textbf{q}_{w}[h(i)]\) is lit. In particular, this holds for \(j\). Otherwise, \(\textbf{p}^{\prime}_{w}[j]=1\), which implies that there is at least one action \(a\) enabled from \(\textbf{p}^{\prime}_{w}\) that is not enabled from \(\textbf{p}_{w}\). By the induction hypothesis, \(L(B_{1}^{m})=L(B_{2}^{m})\), and therefore we know that \(a\) is also not enabled from \(\textbf{q}_{w}\). Moreover, by Lemma IV.2, if \(w\in L(B_{2}^{m})\), \(wa\in L(B_{2})\) and \(wa\notin L(B_{2}^{m})\), then \(wa\in L(B_{2}^{m+1})\). Thus, there must be a local state \(k\) enabling \(a\) such that \(\textbf{q}_{w}[k]=0\) and \(\textbf{q}^{\prime}_{w}[k]=1\). Hence, letting \(h(j)=k\) satisfies the claim. ## V Inferring a Broadcast Protocol from a Sample Let \(\mathcal{S}\) be a sample. The inference algorithm \(\mathfrak{I}\) we devise will construct a BP that agrees with \(\mathcal{S}\), such that, moreover, if \(\mathcal{S}\) subsumes a characteristic set for a fine BP \(B\) then the inference algorithm will return a minimal BP equivalent to \(B\). Let \(A_{\mathcal{S}}\) be the set of actions that appear in the sample at least once as feasible. In order to return a BP with no hidden states, we allow the resulting BP to have more actions than in \(A_{\mathcal{S}}\). We use \(A\) for the set of actions used by the constructed BP, \(S\) for the set of states of the constructed BP, and \(s_{0}\) for its initial state. We will construct a set of constraints that define a BP. 
More precisely, we will construct constraints regarding the behavior of three partial functions \(f^{\mathfrak{st}}:A\to S\), \(f^{\mathfrak{th}}:A\to S\) and \(f^{\mathcal{??}}:S\times A\to S\), where \(f^{\mathfrak{st}}(a)\) is the state from which the sending transition on \(a\) is enabled, \(f^{\mathfrak{th}}(a)\) is the target state of that sending transition, and \(f^{\mathcal{??}}(s,a)\) is the state reached from \(s\) by the receiving transition on \(a\). For \(i\in\mathbb{N}\), let \(\mathcal{P}_{i}=\{w\mid(w,i,\texttt{T})\in\mathcal{S}\}\) and \(\mathcal{N}_{i}=\{w\mid(w,i,\texttt{F})\in\mathcal{S}\}\), and write \(a\#_{\mathcal{S}}b\) for two actions \(a,b\) if the sample separates them, i.e., if it contains triples \((wa,n,\texttt{T})\) and \((wb,n,\texttt{F})\) for some word \(w\) and some \(n\) (or with the roles of \(a\) and \(b\) exchanged), so that \(a\) and \(b\) cannot be enabled from the same state. We add the following constraints. 1. For every two actions \(a,b\in A\) with \(a\#_{\mathcal{S}}b\), we require \(f^{\mathfrak{st}}(a)\neq f^{\mathfrak{st}}(b)\). 2. The functions respect the shape of a BP: \(f^{\mathcal{??}}(s,a)\) is defined for every \(s\in S\) and \(a\in A\) (a response is enabled from every state), and every state of \(S\) equals \(f^{\mathfrak{st}}(a)\) for at least one action \(a\) (no hidden states). 3. For a single-letter word \(a\in A\): if \((a,i,\texttt{T})\in\mathcal{S}\) for some \(i\) then \(f^{\mathfrak{st}}(a)=s_{0}\), and if \((a,i,\texttt{F})\in\mathcal{S}\) then \(f^{\mathfrak{st}}(a)\neq s_{0}\). 4. For words with one process: for \(w=a_{1}a_{2}\ldots a_{m}\in\mathcal{P}_{1}\) we require \(q_{\ell-1}=f^{\mathfrak{st}}(a_{\ell})\) for every \(1\leq\ell\leq m\), where \(q_{0}=s_{0}\) and \(q_{\ell}=f^{\mathfrak{th}}(a_{\ell})\); for \(w\in\mathcal{N}_{1}\) we require that there exists \(1\leq\ell\leq m\) such that the above holds for all letters before \(a_{\ell}\) while \(q_{\ell-1}\neq f^{\mathfrak{st}}(a_{\ell})\). This requires that at least one of the letters in the word is not enabled in the state the process reached, implying the entire word is infeasible with one process. 5. For the induction step \(i>1\), let \(w\in\mathcal{P}_{i}\cup\mathcal{N}_{i}\) and assume \(w=a_{1}a_{2}\ldots a_{m}\). We define \(i(m+1)\) variables \(p_{1,0},p_{2,0},\ldots,p_{i,m}\). 
The variable \(p_{j,k}\) indicates the state the \(j\)-th process reaches after the system reads \(a_{1}\ldots a_{k}\). Accordingly, we set \(p_{j,0}=s_{0}\) for every \(1\leq j\leq i\). The state of the processes after reading the next letter, \(a_{\ell+1}\), depends on their state after reading \(a_{\ell}\). Let \(w\in\mathcal{P}_{i}\) with \(w=a_{1}a_{2}\ldots a_{m}\); we add the constraint \(\psi_{w,i}\) defined as follows. \[\psi_{w,i}=\ \ \bigwedge_{1\leq\ell\leq m}\left(\bigvee_{1\leq j\leq i}\left(\big{(}p_{j,\ell-1}=f^{\mathfrak{st}}(a_{\ell})\big{)}\wedge\varphi_{j,\ell}\right)\right)\] where \[\varphi_{j,\ell}=\left(\begin{array}{c}p_{j,\ell}=f^{\mathfrak{th}}(a_{\ell})\quad\wedge\\ \bigwedge_{\begin{subarray}{c}1\leq j^{\prime}\leq i\\ j^{\prime}\neq j\end{subarray}}p_{j^{\prime},\ell}=f^{\mathcal{??}}(p_{j^{\prime},\ell-1},a_{\ell})\\ \end{array}\right)\] Intuitively, \(\psi_{w,i}\) requires that for every letter \(a_{\ell}\) of \(w\) one of the processes, call it \(j\), reached a state from which \(a_{\ell}\) is enabled. The formula \(\varphi_{j,\ell}\) states that the \(j\)-th process took the sending transition on \(a_{\ell}\) and the rest of the processes took the respective receiving transition. Let \(w\in\mathcal{N}_{i}\) with \(w=a_{1}a_{2}\ldots a_{m}\). We then add the following requirement \[\bigvee_{0\leq\ell<m}\left(\psi_{w[..\ell],i}\wedge\bigwedge_{1\leq j\leq i}\left(p_{j,\ell}\neq f^{\mathfrak{st}}(a_{\ell+1})\right)\right)\] where \(w[..\ell]\) denotes the \(\ell\)'th prefix of \(w\), namely \(a_{1}a_{2}\ldots a_{\ell}\), and we let \(\psi_{\epsilon,i}=\top\) for every \(i\). Intuitively, if \(w\) is infeasible with \(i\) processes, then there exists a (possibly empty) prefix \(w[..\ell]\) which is feasible with \(i\) processes, therefore \(\psi_{w[..\ell],i}\) holds, while \(w[..\ell+1]\) is infeasible, meaning none of the \(i\) processes is in a state where \(a_{\ell+1}\) is enabled. **Theorem V.1**.: _Let \(\mathcal{S}\) be a sample that is consistent with some fine BP. Let \(\Psi_{\mathcal{S}}\) be the prescribed constraints with respect to \(\mathcal{S}\). Let \(B\) be a BP that satisfies \(\Psi_{\mathcal{S}}\). Then \(B\) is a BP consistent with \(\mathcal{S}\)._ Proof.: We prove that if \(w\in\mathcal{P}_{i}\) then \(w\) is feasible in \(B^{i}\), and if \(w\in\mathcal{N}_{i}\) then \(w\) is infeasible in \(B^{i}\), by induction first on the length of \(w\) and then on \(i\). For \(w\) of length \(1\), this holds by the constraints in item (3). Let \(w=a_{1}a_{2}\ldots a_{n}\in\mathcal{P}_{i}\). If \(i=1\) then this holds by induction on \(w\) thanks to constraint (4). Next we consider words \(w\) in the sample that are in \(\mathcal{P}_{i}\cup\mathcal{N}_{i}\) for \(i>1\). If \(w\) is already in \(\mathcal{P}_{i-1}\) then by the induction hypothesis it is already feasible for \(i-1\) processes in the constructed BP, and by Lemma IV.1, it is also feasible with \(i\) processes. Otherwise, \(w\in\mathcal{P}_{i}\setminus\mathcal{P}_{i-1}\). In this case, constraint (5) makes sure that every prefix of \(w\) is feasible with \(i\) processes by going letter by letter, and requiring that for the next letter \(a_{\ell}\) one of the \(i\) processes reached the state enabling \(a_{\ell}\) after reading the prefix up to \(a_{\ell-1}\). If \(w\in\mathcal{N}_{i}\) then \(w\) is infeasible with \(i\) processes. 
In this case, there exists a letter \(a_{\ell}\) for \(1\leq\ell\leq m\) such that while \(w[..\ell-1]\) is feasible, \(a_{\ell}\) is not enabled from any of the states that the \(i\) processes reach after reading the (possibly empty) prefix \(w[..\ell-1]\). This is exactly what constraint (5) stipulates. **Corollary 1**.: _There exists an inference algorithm \(\mathfrak{I}\) for the class of fine BPs._ ## VI The class of fine BPs is teachable In order to show that the class of fine BPs is teachable we have to show that every BP \(B\) in the class can be associated with a sample \(\mathcal{S}_{B}\) so that there exists an inference algorithm \(\mathfrak{A}\) that, when applied to any sample \(\mathcal{S}\) that subsumes \(\mathcal{S}_{B}\) and is consistent with \(B\), returns a minimal fine BP that is equivalent to \(B\). Recall that when the class is teachable, the sample \(\mathcal{S}_{B}\) associated with \(B\) is called a characteristic set. We start by describing in Sec.VI-A a procedure \(\mathfrak{G}\) that generates a sample \(\mathcal{S}_{B}\) from a fine BP \(B\). Then in Sec.VI-B we prove that an inference algorithm \(\mathfrak{A}\) can correctly infer a minimal BP \(B^{\prime}\) equivalent to \(B\) from any sample subsuming \(\mathcal{S}_{B}\). ### _Generating a Characteristic Set_ The characteristic set generation algorithm \(\mathfrak{G}\) builds a sequence of trees \(\mathcal{T}_{i}\) starting with \(i=0\) and incrementing \(i\) by one until \(\mathcal{T}_{i+1}=\mathcal{T}_{i}\). The edges of the tree are labeled with action symbols. The name of a node is taken to be the unique sequence of actions \(w\) that leads to it. Thus the root is named \(\varepsilon\) and a child of a node \(w\in A^{*}\) is named \(wa\) for some \(a\in A\). A node \(w\in A^{*}\) in tree \(\mathcal{T}_{i}\) is annotated with \(\textbf{p}_{w,i}\), the state-vector \(B^{i}\) reaches when reading \(w\), if \(w\) is feasible in \(B^{i}\), and with the special symbol \(\bot\) otherwise. We call a node in the tree _positive_ if it is annotated with a state-vector, and _negative_ otherwise. All nodes are either leaves or have exactly \(|A|\) children. Negative nodes are always leaves. The tree \(\mathcal{T}_{0}\) consists of only a root \(\varepsilon\) and is annotated with the state vector of all zeros. The tree \(\mathcal{T}_{i+1}\) is constructed from the tree \(\mathcal{T}_{i}\) by first re-annotating all its nodes: the annotation \(\textbf{p}_{w,i}\) of a positive node is replaced by \(\textbf{p}_{w,i+1}\), and a negative node \(w\) in \(\mathcal{T}_{i}\) may become positive in \(\mathcal{T}_{i+1}\) (if \(w\) is feasible with \(i+1\) processes) and will be annotated accordingly with \(\textbf{p}_{w,i+1}\). Then we check, from every positive node, whether further exploration is needed. A positive node will be declared a leaf if it is of the form \(va\) and it has an ancestor \(u\), a prefix of \(v\), for which \(\textbf{p}_{u,i+1}=\textbf{p}_{v,i+1}\). Otherwise its \(|A|\) children are constructed. That is, once we reach a node whose state-vector is the same as one of its ancestors, we develop its children, but the children are not developed further. The entire process terminates when \(\mathcal{T}_{i+1}=\mathcal{T}_{i}\). Note that since the BP has a cutoff, such an \(i\) must exist. We use \(\mathcal{T}\) for the last tree constructed, namely \(\mathcal{T}_{i+1}\). The sample is then produced as follows. 
### _Proving that \(\mathfrak{G}\) generates characteristic sets_

We first note that for any state \(s\) of the original BP, there exists at least one node \(v\) in the tree where \(s\) is lit (i.e., the entry for \(s\) in the state vector annotating the node is at least one).

**Lemma VI.1**.: _Let **p** be a state vector that is reachable in \(B^{m}\). Then for every shortest word \(w\) that reaches **p** in \(B^{m}\) there exists a node \(w\) in \(\mathcal{T}_{m}\) such that \(\textbf{p}_{w}=\textbf{p}\)._

Proof.: The proof is by induction, first on \(m\) and then on the length of \(w\). For \(m=1\), the construction of the tree clearly guarantees that all states reachable with one process have a respective node in the tree. Suppose **p** is reachable with \(w\) in \(B^{m}\) for \(m>1\). By Lemma IV.2, there exists a prefix \(u\) of \(w\) such that \(u\) is feasible in \(B^{m-1}\). By the induction hypothesis, \(u\) is a node of \(\mathcal{T}_{m-1}\). Suppose \(w=ua_{1}a_{2}\ldots a_{n}\). We can then show by induction on \(i\), for \(1\leq i\leq n\), that \(ua_{1}a_{2}\ldots a_{i}\) is a node of \(\mathcal{T}_{m}\), by simply following the tree construction. 

Next we claim that if the sample \(\mathcal{S}\) subsumes a characteristic set then \(\#_{\mathcal{S}}\) induces an equivalence relation between the actions.

**Lemma VI.2**.: _For two actions \(a\) and \(b\) define \(a\sim_{\mathcal{S}}b\) iff it is not the case that \(a\#_{\mathcal{S}}b\). If \(\mathcal{S}\) subsumes a characteristic set then \(\sim_{\mathcal{S}}\) is an equivalence relation._

Proof.: Clearly \(\sim_{\mathcal{S}}\) is symmetric and reflexive. To see that it is transitive, we first refer to Lem. VI.1 to deduce that for every state \(s\) there exists a node \(v\) in \(\mathcal{T}\) in which \(s\) is lit. Let \(m\) be the minimal index for which \(v\) is in \(\mathcal{T}_{m}\). It follows that for every action \(a\) that is feasible after \(v\) with \(m\) processes we have \((va,m,\top)\in\mathcal{S}\), and for every action \(a\) that is infeasible after \(v\) with \(m\) processes we have \((va,m,\bot)\in\mathcal{S}\). Therefore, for any two actions we have \(a\#_{\mathcal{S}}b\) iff \(a\) and \(b\) are not enabled from the same state, and \(a\sim_{\mathcal{S}}b\) otherwise. Assume now that \(a\sim_{\mathcal{S}}b\) and \(b\sim_{\mathcal{S}}c\). Then \(a\) and \(b\) are enabled from the same state, and \(b\) and \(c\) are enabled from the same state, implying that \(a\) and \(c\) are enabled from the same state, i.e., \(a\sim_{\mathcal{S}}c\) as required. 

**Theorem VI.3**.: _Let \(B\) be a fine minimal BP, and let \(\mathcal{S}_{B}\) be the sample generated for it as above. There is an inference algorithm \(\mathfrak{A}\) such that if \(B^{\prime}\) is the result of applying \(\mathfrak{A}\) to any sample subsuming \(\mathcal{S}_{B}\) and consistent with \(B\), then \(B^{\prime}\) is minimal and \(L(B^{\prime})=L(B)\)._

Proof.: The inference algorithm \(\mathfrak{A}\) we use to prove this claim runs in two steps. First it runs a variation \(\mathfrak{I}^{\prime}\) of the inference algorithm \(\mathfrak{I}\) presented in Sec. V that turns constraint (1) into an iff constraint, i.e.,
adding that \(f^{\mathfrak{st}}(a)=f^{\mathfrak{st}}(b)\) unless \(a\#_{\mathcal{S}}b\). If running \(\mathfrak{I}^{\prime}\) returns that there is no satisfying assignment, then it runs \(\mathfrak{I}\). In both cases Thm. V.1 guarantees that the returned BP is consistent with the given sample. Therefore \(\mathfrak{A}\) is an inference algorithm.

Next we claim that if the given sample subsumes \(\mathcal{S}_{B}\) then \(B^{\prime}\), the resulting BP, is minimal. This holds since Lem. VI.2 ensures that \(\#_{\mathcal{S}}\) defines the desired equivalence \(\sim_{\mathcal{S}}\) between actions, and the revised constraint (1) guarantees that actions are not enabled from the same state if and only if the sample separates them. (Note that any word consistent with the BP cannot separate actions \(a\) and \(b\) if they are enabled from the same state.) Therefore \(\mathfrak{I}^{\prime}\) will not return that there is no satisfying assignment.

Next we note that by Lem. VI.1, for every state vector **p** that is reachable in \(B^{m}\) and every shortest word \(w\) that reaches **p** in \(B^{m}\), there exists a node \(w\) in \(\mathcal{T}_{m}\) such that \(\textbf{p}_{w}=\textbf{p}\). If \(w=a_{1}a_{2}\ldots a_{n}\) then for each \(1\leq i\leq n\) one process took the sending transition \(a_{i}!!\) and the rest of the processes responded with \(a_{i}??\). Constraint (5) makes sure that the assignment to \(f^{\mathfrak{st}}\), \(f^{\mathfrak{th}}\) and \(f^{??}\) respects all the possible options that enabled this, ensuring that for every two options for enabling \(w\) that result in state vectors \(\textbf{p}_{1}\) and \(\textbf{p}_{2}\), respectively, the same states are lit in both \(\textbf{p}_{1}\) and \(\textbf{p}_{2}\). It follows that for any BP \(B^{\prime}\) that adheres to the constraints there exists a mapping \(h\) between the states of \(B\) and \(B^{\prime}\) satisfying the requirements of Lem. IV.3. Therefore \(L(B)=L(B^{\prime})\). 

**Corollary 2**.: _The class of fine broadcast protocols is teachable._

## VII Characteristic sets may be inevitably large

In this section we show that there exist fine BPs for which there is no characteristic set of polynomial size. We start by showing that there exist fine BPs with a cutoff of size quadratic in the size of the BP. We adapt a family of BPs used in [25] for showing a quadratic cutoff for BPs without the restriction of no-hidden-states, and for a slightly different definition of cutoff (reaching a particular state). The adaptation for no-hidden-states is seamless. To work with our definition of cutoff we needed to introduce some auxiliary states. The family is given in Fig. 3. The family is parameterized by three natural numbers \(m\), \(n\), and \(\ell\). The BP \(B_{m,n,\ell}\) has \(n\) states in the lower loop, \(m\) states in the upper loop, and \(\ell\) helper states (overall \(n+m+\ell+4\) states). We use \(H??\) and \(A??\) as shortcuts for \(\{h_{i}??\ |\ i\in[1..\ell]\}\) and \(\{a_{i}??\ |\ i\in[1..n^{\prime}-1]\}\), respectively. From all of the states except for \(\mathsf{m}\), we assume \(c??\) takes us to the \(\bot\) state, and \(a_{\top}??\) takes all of the states to the \(\top\) state (we did not add these transitions to avoid clutter). For any action in \(\{b_{j}\colon j\in[1..m]\}\) and any state, \(b_{j}??\) is a self-loop. One can see that traversing the lower loop requires at least \(n\) processes: one process for each of the \(a_{i}\) transitions, for \(1\leq i\leq n-1\), and one process for the \(H\) transition.
Each \(H\) transition requires that one of the \(h_{i}\) transitions be taken. The structure of the \(h_{i}\) sending transitions thus restricts the number of times \(\mathsf{n}'\) can be reached to \(\ell+1\) (since it can be reached once without executing an \(h_{i}\)). In order to enable \(a_{\top}\) there must be one process in state \(\mathsf{n}'\) that sends \(c!!\) and one process in state \(\mathsf{m}\) responding to it. Therefore, if \(n\) and \(m\) are co-prime, this can occur only once the lower loop has been traversed \(m\) times (which requires \(m\) helper states). Thus a total of \(n\cdot m\) processes is required to make a word using action \(a_{\top}\) feasible. Accordingly, the shortest word using the \(a_{\top}\) action is of length at least \(n\cdot m\). This shows that there exists no characteristic set for this family of size less than quadratic in the size of the given BP.

To obtain larger lower bounds we can introduce additional loops. Let \(p_{1}<p_{2}<p_{3}\) be three different primes, and consider the BP with three loops of sizes \(p_{1}\), \(p_{2}\) and \(p_{3}\) given in Fig. 4. To allow the \(p_{3}\) loop to be traversed at most \(p_{1}\cdot p_{2}\) times, we can introduce \(p_{1}+p_{2}\) helper states and arrange them in a line of size \(p_{1}\) and a loop of size \(p_{2}\), respectively. We also need to introduce an additional state (\(d\) in the figure) to make the transition to \(\top\) conditioned on the synchronization of the three loops, and an additional initialization state (\(i_{3}\) in the figure).

To obtain an exponential-size lower bound we extend this idea as follows. Instead of having two such loops, we have \(k\) such loops of sizes \(p_{1},p_{2},\ldots,p_{k}\), where \(p_{i}\) is the \(i\)-th prime number. The overall size of the BP will be \(2(p_{1}+p_{2}+\ldots+p_{k-1})+p_{k}+2(k-1)+k\). This is since we need \(p_{1}+p_{2}+\ldots+p_{k}\) states for the main loops; \(p_{1}+p_{2}+\ldots+p_{k-1}\) for the helper states (arranged in \(k-2\) loops and one line); \(2(k-1)\) states for the initialization (to place one process in each of the \(k\) main loops, and the \(k-1\) helper loops/line); and \(k\) states for the synchronization of the main loops (these include \(d,\top,\bot\) in Fig. 4).

Let \(\{P_{n}\}_{n\in\mathbb{N}}\) be the family of fine BPs constructed as above with loops for all primes up to size \(n\). Then the size of the BP \(P_{n}\) is quadratic in \(n\). Since all primes are of size at least \(2\), and since the number of primes of size \(n\) or less is \(\Theta(n/\log n)\), the number of processes required to enable \(a_{\top}\) is \(2^{\Omega(n/\log n)}\), and so is the length of a shortest word that includes the \(a_{\top}\) action. Hence a word of length exponential in the size of the BP is required to be in the sample, entailing that this family has no characteristic set of polynomial size.

**Corollary 3**.: _There exists a family of fine BPs with no characteristic set of polynomial size._

We note that the same family also shows that fine BPs can be exponentially smaller than the minimal DFA accepting the same language. Indeed, any DFA for the family \(P_{n}\) has to accept some words that use \(a_{\top}\) as one of the letters, and the smallest such word is of size exponential in \(n\). Therefore, the DFA must have at least that many states.

**Corollary 4**.: _There exists a family of fine BPs for which the corresponding DFA is of exponential size._

Fig. 3: The BP \(B_{m,n,\ell}\) from a family of fine BPs with quadratic cutoffs.

Fig. 4: The BP \(B_{p_{1},p_{2},p_{3}}\). From state \(d\), responding to any action except for \(d??\) takes us to the sink state. For states \(1\) to \(p_{1}\) as well as \(1^{\prime}\) to \(p_{2}\), their actions are self-loops, and responding to any action is a self-loop as well. Responding to \(c!!\) takes everyone to the sink state except states \(p_{1}\) and \(p_{2}\).
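The growth claimed here is easy to reproduce numerically; the following back-of-the-envelope sketch (ours) evaluates the size formula given above against the number of processes the synchronization argument calls for, namely the product of the loop sizes:

```python
# Back-of-the-envelope check (ours): BP size versus the number of
# processes needed to enable a_T in P_n, taken here as the product of
# the loop sizes following the synchronization argument in the text.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

for n in (10, 20, 30, 40):
    ps = primes_upto(n)
    k = len(ps)
    # Size formula from the text: 2(p1+...+p_{k-1}) + p_k + 2(k-1) + k.
    size = 2 * sum(ps[:-1]) + ps[-1] + 2 * (k - 1) + k
    processes = 1
    for p in ps:
        processes *= p
    print(f'n={n:3d}  BP size={size:4d}  processes={processes}')
```

Already for \(n=40\) the BP has fewer than 400 states while the product exceeds \(10^{12}\), which is the gap the corollaries exploit.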
## VIII BPs are not polynomially predictable

In this section we show that fine BPs are not polynomially predictable with membership queries. The learning paradigm of polynomial predictability of a class \(\mathcal{C}\) can be explained as follows. The learner has access to an oracle answering two types of queries with regard to the target concept \(C\in\mathcal{C}\): _membership queries_ (mq) and _draw queries_ (dr). A membership query receives a word \(w\) as input and answers whether \(w\) is or is not in \(C\). A draw query receives no input and returns a pair \((w,b)\) where \(w\) is a word that is randomly chosen according to some probability distribution \(D\) and \(b\) is \(\textsc{mq}(w)\). We assume some bound \(\ell\) on the length of the relevant examples, so that \(D\) is a probability distribution on the set of relevant words. We assume the learner knows \(\ell\), but \(D\) is unknown to her. At some point, the learner is expected to ask for a word whose membership it needs to predict, in which case it is handed a word \(w\) (drawn randomly according to the same distribution \(D\)) and it should then answer whether \(w\) is or is not in \(C\). We say that the class \(\mathcal{C}\) is _polynomially predictable_ with membership queries if, given a bound \(s\) on the size of the target language, the mentioned bound \(\ell\) on the length of relevant examples, and an accuracy parameter \(\epsilon>0\), there exists a learner that will classify the word to predict correctly with probability at least \((1-\epsilon)\), after asking a number of queries that is polynomial in the size of the minimal BP of the target language. We show that under plausible cryptographic assumptions, fine BPs (and hence BPs in general) are not polynomially predictable.

**Theorem VIII.1**.: _Assuming the intractability of any of the following three problems: testing quadratic residues modulo a composite, inverting RSA encryption, or factoring Blum integers, fine broadcast protocols are not polynomially predictable with membership queries._

Proof.: The proof is via a reduction from the class \(\mathcal{D}\) of intersections of DFAs, for which Angluin and Kharitonov have shown that \(\mathcal{D}\) is not polynomially predictable under the same assumptions [6]. We show that given a predictor \(\mathfrak{B}\) for fine BPs we can construct a predictor \(\mathfrak{D}\) for the intersection of DFAs as follows. First, we show how to associate with any given set \(D_{1},D_{2},\ldots,D_{k}\) of DFAs a particular BP \(B\). Let \(D_{i}=(\Sigma,Q_{i},\iota_{i},\delta_{i},F_{i})\) and assume without loss of generality that the states of the DFAs are disjoint and the DFAs are complete (i.e., from every state there is an outgoing transition on every letter). We construct a fine BP \(B=(A,S,s,R)\) with cutoff \(k+1\) as follows (see Fig. 5). Let \(Q=\cup_{i\in[1..k]}Q_{i}\). The set of states \(S\) is \(Q\cup\{\bot,\textsc{s},\textsc{c},\textsc{x}\}\cup\{\textsc{G}_{i},\textsc{H}_{i}\mid i\in[1..k]\}\). The initial state is \(\textsc{H}_{1}\). Let \(\Sigma_{h}=\{h_{j}\mid j\in[1..k]\}\) and \(\Sigma_{g}=\{g_{j}\mid j\in[1..k]\}\).
The set of actions \(A\) is \(\Sigma\cup\Sigma_{h}\cup\Sigma_{g}\cup\{\$,\bot,x,s\}\). The transitions are as follows: for every \(i\in[1..k]\) we have \((\textsc{H}_{i},h_{i}!!,\textsc{G}_{i})\). The receiving transitions on \(h_{i}\) are \((\textsc{H}_{i},h_{i}??,\textsc{H}_{i+1})\) for every \(i\in[1..k-1]\), and \((\textsc{H}_{k},h_{k}??,\textsc{s})\). For state \(\textsc{s}\), the transitions are as follows: \((\textsc{s},s!!,\textsc{c})\), \((\textsc{s},s??,\textsc{c})\). For every \(\sigma\in\Sigma\) we have \((\textsc{c},\sigma!!,\textsc{c})\). The transitions from the \(\textsc{G}_{i}\) states are \((\textsc{G}_{i},s??,\iota_{i})\) for every \(i\in[1..k]\). For every \((q,\sigma,q^{\prime})\in\delta_{i}\) we have \((q,\sigma??,q^{\prime})\). For the \(\bot\) state we have \((\bot,\bot!!,\textsc{x})\) and \((\textsc{x},x!!,\textsc{x})\). With this construction, the membership and draw queries of \(\mathfrak{D}\) can be answered by translating them into queries on \(B\) and passing them to \(\mathfrak{B}\), so that \(\mathfrak{B}\) yields a predictor for the intersection of DFAs. 

## IX Experimental Results

We have implemented our approach in a prototype tool. It uses the Z3 Theorem Prover (v4.12.2) as its underlying SMT solver via its Python bindings (Python 3.9, run from PyCharm). All experiments were run on an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80 GHz, with 32.0 GB RAM.

In the first experiment, we randomly generated BPs with no hidden states as follows. A number \(s\) between \(2\) and \(10\) is chosen randomly for the number of states. Then a number of actions between \(1\) and \(6\) is chosen, and the actions are distributed randomly among the states. Then additional actions are added to states that were not associated with an action, to make sure the BP has no hidden states. Using this method we randomly generated over \(1000\) BPs with no hidden states, and applied \(\mathfrak{G}\) (see Sec. VI-A) to generate a characteristic set for them, with a bound of \(20\) on the cutoff. Out of those, \(422\) successfully terminated. The rest may have a bigger cutoff or no cutoff at all. On the produced characteristic sets, we ran the inference algorithm \(\mathfrak{I}\) (see Sec. V, not assuming that the sample subsumes a characteristic set), limiting the SMT solving time to 2 hours.

The two graphs on top of Fig. 6 show the results of this experiment. The \(x\)-axis shows the cutoff of the BP, the \(y\)-axis the number of words in the CS, the left color bar shows the running time of the SMT solver in seconds, and the right color bar the number of states. Correctness of the obtained BP was verified by means of exhaustive membership queries. We observe that \(92\%\) of the instances run in less than \(2\) minutes, and \(4\%\) run in \(2\) to \(15\) minutes. The hardest instance, which took about an hour and a half, has \(18\) states and a cutoff of \(8\). Other hard instances had a cutoff of \(14\) to \(15\).

Fig. 5: A state \(s\) for which no sending transition is shown has a self-loop on \(s!!\) (for a unique action \(s\)). Responses that are not shown are self-loops.

Fig. 6: Experimental results.

In the second experiment we wanted to test the algorithms on fine BPs with a large cutoff. To this aim, we used the family \(\{B_{n,m,\ell}\}\) of fine BPs from Fig. 3. For \(n\) we used values between \(2\) and \(20\), for \(m\) values between \(2\) and \(7\), and for \(\ell\) values between \(0\) and \(3\). Note that here \(m\) and \(n\) need not be prime or co-prime. Here as well we bounded the cutoff, this time by \(40\). One can clearly see in Fig. 6 (bottom) the exponential growth in the size of the characteristic set: looking, e.g., at \(x=38\), the size of the characteristic set grows from \(3\)K to \(25\)K, \(97\)K, and \(311\)K as the number of helper states grows from \(0\) to \(3\).
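The random generation procedure of the first experiment can be sketched in a few lines; this is our reading of the description above, with the dictionary-based BP encoding used in the earlier sketches being our own assumption:

```python
# Sketch (ours) of the random generation of BPs with no hidden states,
# following the textual description of the first experiment.
import random

def random_bp_no_hidden(max_states=10, max_actions=6):
    n_states = random.randint(2, max_states)
    n_actions = random.randint(1, max_actions)
    # Distribute the drawn actions randomly among the states as sending
    # transitions (f_st maps each action to its source state).
    f_st = {a: random.randrange(n_states) for a in range(n_actions)}
    # Add a fresh action to every state left without a sending
    # transition, so that the BP has no hidden states.
    for st in range(n_states):
        if st not in f_st.values():
            f_st[n_actions] = st
            n_actions += 1
    # Random send targets and a random (total) response function.
    f_th = {a: random.randrange(n_states) for a in f_st}
    f_rcv = {(st, a): random.randrange(n_states)
             for st in range(n_states) for a in f_st}
    return n_states, f_st, f_th, f_rcv
```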
## X Conclusion

We investigated the learnability of the class of fine broadcast protocols. To the best of our knowledge, this is the first work on learning concurrent models that does not assume that a fixed number of processes interact. On the positive side, we have shown a passive learning algorithm that can infer a BP consistent with a given sample, and have proved that this class is teachable, in the sense that a system of characteristic sets (CS) can be associated with each BP in the class so that an inference algorithm can correctly identify the BP when given a consistent sample that subsumes a CS. On the negative side, we have shown that the consistency problem for fine BPs is NP-hard, that characteristic sets may inevitably be of exponential size, and that the class is not polynomially predictable.
2310.09255
Magnetotransport Properties of Epitaxial Films and Hall Bar Devices of the Correlated Layered Ruthenate Sr$_3$Ru$_2$O$_7$
For epitaxial Sr$_3$Ru$_2$O$_7$ films grown by pulsed laser deposition, we report a combined structural and magnetotransport study of thin films and Hall bar devices patterned side-by-side on the same film. Structural properties of these films are investigated using X-ray diffraction and high-resolution transmission electron microscopy, and confirm that these films are epitaxially oriented and nearly phase pure. For magnetic fields applied along the $c-$axis, a positive magnetoresistance of 10\% is measured for unpatterned Sr$_3$Ru$_2$O$_7$ films, whereas for patterned Hall bar devices of channel widths of $10$ and $5\, \mu$m, magnetoresistance values of 40\% and 140\% are found, respectively. These films show switching behaviors from positive to negative magnetoresistance that are controlled by the direction of the applied magnetic field. The present results provide a promising route for achieving stable epitaxial synthesis of intermediate members of correlated layered strontium ruthenates, and for the exploration of device physics in thin films of these compounds.
Prosper Ngabonziza, Anand Sharma, Anna Scheid, Sethulakshmi Sajeev, Peter A. van Aken, Jochen Mannhart
2023-10-13T17:12:37Z
http://arxiv.org/abs/2310.09255v4
Magnetotransport Trends in Epitaxial Films and Electronic Devices of Correlated Layered Ruthenate Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\)

###### Abstract

For epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown by pulsed laser deposition, we report a combined structural and magnetotransport study of thin films and Hall bar devices patterned side-by-side on the same film. Structural properties of these films are investigated using X-ray diffraction and high-resolution transmission electron microscopy, and confirm that these films are epitaxially oriented and nearly phase pure. For magnetic fields applied along the \(c-\)axis, a positive magnetoresistance of 10% is measured for unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films, whereas for patterned Hall bar devices with channel widths of 10 and 5 \(\mu\)m, magnetoresistance values of 40% and 140% are found, respectively. These films show switching behaviors from positive to negative magnetoresistance that are controlled by the direction of the applied magnetic field. The present results provide a promising route for achieving stable epitaxial synthesis of intermediate members of the correlated layered strontium ruthenates, and for the exploration of device physics in thin films of these compounds.

## 1 Introduction

Layered ruthenates are peculiar strongly correlated materials in which several comparable interactions compete to give rise to a variety of novel electronic and magnetic phenomena. Intricate phenomena observed in layered ruthenates range from superconductivity [1] to emergent ferromagnetism and insulator-metal transitions [2], to colossal and large magnetoresistance [3], and other electron- and spin-ordering states [4]. The layered strontium ruthenates of the Ruddlesden-Popper (R-P) phases, Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) (\(n=1,2,3,\infty\)), play a pivotal role in the study of strongly correlated electron systems. Depending on the number \(n\) of RuO\({}_{6}\) octahedra layers in the unit cell, the phenomena observed in these systems include potentially unconventional superconductivity in Sr\({}_{2}\)RuO\({}_{4}\) (\(n=1\)) [5, 6, 7, 1], metamagnetic quantum criticality and an electron nematic fluid in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) (\(n=2\)) [8, 9, 10, 11], an orbital-dependent double metamagnetic transition and characteristics of Hund's metal correlations in Sr\({}_{4}\)Ru\({}_{3}\)O\({}_{10}\) (\(n=3\)) [12, 13, 14, 15], and itinerant ferromagnetism coexisting with localized correlated behavior in SrRuO\({}_{3}\) (\(n=\infty\)) [16, 17, 18, 19]. The rich array of distinct collective phenomena in these materials shows that the Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) compounds are attractive for exploring the rich physics of strongly correlated layered materials.

Beyond fundamental interest, the potential of Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) for applied physics has driven interest in thin films, particularly in Sr\({}_{2}\)RuO\({}_{4}\) (\(n=1\)) films due to their unconventional superconductivity and Shubnikov-de Haas effect [20, 21], and in SrRuO\({}_{3}\) (\(n=\infty\)) films as an excellent electrode material integrated in diverse oxide thin-film devices for a variety of oxide electronic and nanoionic applications [16, 22], and also as a candidate Weyl semimetal magnetic material [23, 24].
However, due to the complexity of synthesizing phase-pure epitaxial films of the intermediate Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) members (\(1<n<\infty\)), only a few publications have reported attempts to epitaxially grow double- and triple-layered strontium ruthenate films, Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) and Sr\({}_{4}\)Ru\({}_{3}\)O\({}_{10}\) [25, 10]. Understanding the synthesis science of epitaxial films of these intermediate members provides a unique opportunity for further exploration of their correlated phases in thin films and electronic devices. Also, precise control of their functional interfaces at the atomic level will open new routes towards the realization of novel interface-induced functionalities in correlated layered ruthenates, which would otherwise not be accessible when using single crystals.

We focus on epitaxial films of the bilayer strontium ruthenate Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). The material Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) is an enhanced Pauli paramagnet in its ground state that undergoes an anisotropic metamagnetic transition (_i.e._, a sudden increase in the magnetization within a small change of applied magnetic field) [26, 27]. Under a magnetic field applied in the out-of-plane direction, Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) displays unusual double metamagnetic transitions at critical magnetic fields of \(\sim 8\) T and 13.5 T [28]. However, for in-plane magnetic fields parallel to the ruthenium-oxygen planes, the metamagnetic behavior shifts to a lower critical field of \(\sim 5.5\) T [27]. In magnetotransport, a peak in the magnetoresistance (MR) has been observed around 5 T under in-plane applied magnetic fields, and it has been attributed to the in-plane metamagnetic behavior of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [29, 10, 30]. Metamagnetic transitions in double- and triple-layered ruthenates have been proposed to arise from magnetic fluctuations due to the presence of flat bands near the Fermi level in the electronic band dispersions [31, 32, 33, 34, 35]. Also, electron nematic behaviors [36], ascribed to a spin-dependent symmetry-breaking Fermi surface distortion, have been observed both in single crystals and in strained epitaxial films of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [8, 10, 29].

Although the magnetotransport properties of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) have been characterized in bulk single crystals [37], single-crystalline nanosheets [30], and compressively strained epitaxial films [10], little is known about the MR characteristics of epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films and thin-film devices patterned side-by-side on the same film. Such samples are ideal for exploring lateral dimensional confinement effects in epitaxial films as a means of tuning their electronic ground states. Also, having various micro- and nano-scale devices fabricated side-by-side on the same film provides an advantage over single-crystal-based devices, because it offers the opportunity to perform a comparative study of magnetotransport properties on the same sample, measured under similar conditions.

In this work, we report on the epitaxy of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) thin films, as well as on the magnetotransport trends of epitaxial films and electronic devices of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). The synthesis of nearly phase-pure Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films was previously reported, but only for epitaxial films grown by molecular beam epitaxy (MBE) [10, 25].
Here, we use pulsed laser deposition (PLD) for the epitaxial growth of these thin films. First, we focus on optimizing the PLD growth conditions for achieving phase-pure epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films. Subsequent structural analyses of these samples, using X-ray diffraction (XRD) and high-resolution scanning transmission electron microscopy (STEM), confirm that these films are epitaxially oriented and nearly phase pure. From magnetization characterization, the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films show no ferromagnetic transition over the entire measured temperature range. Second, for epitaxial films prepared at the optimal growth conditions, we fabricate several Hall bar devices of various channel widths side-by-side on the same Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) sample, and then explore their electronic transport properties. We find that the sheet resistance of the Hall bar devices increases as the active channel width of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films decreases, while the sheet resistance of unpatterned films is higher than that of the structured Hall bar devices. Lastly, we perform a comparative MR study of unpatterned epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films and Hall bar devices. At low temperatures, for magnetic fields applied along the \(c-\)axis (\(B\parallel c\)), with the excitation current perpendicular to the direction of the applied field, we observe a positive MR of 10% in unpatterned films, which is enhanced to 40% and 140% in patterned Hall bar devices with channel widths of 10 and 5 \(\mu\)m, respectively. Furthermore, these films show switching behaviors from positive to negative MR that are controlled by the direction of the applied magnetic field.

## 2 Thin Film Epitaxy

Figure 1(a) illustrates the epitaxy of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films on a TiO\({}_{2}\)-terminated SrTiO\({}_{3}\) substrate. All Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films were grown on (100)-oriented SrTiO\({}_{3}\) single-crystalline substrates (\(5\times 5\times 1\) mm\({}^{3}\)) [Fig. 1(b)]. Prior to deposition, the SrTiO\({}_{3}\) substrates were terminated in situ using a CO\({}_{2}\) laser [38]. We deposited Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films using PLD by ablating a stoichiometric Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) target [39] with an excimer laser (\(\lambda=248\) nm) at 2 Hz to a thickness of 25 nm. During growth, the molecular oxygen pressure in the chamber was kept at \(P_{O_{2}}=8.0\times 10^{-2}\) mbar. Immediately after growth, under the same \(P_{O_{2}}\), the samples were cooled to room temperature at a cooling rate of \(80^{\circ}\)C/min. To establish stable epitaxial growth conditions, we prepared a series of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films at various substrate temperatures (T\({}_{\text{sub}}\)), from 680 to 800\({}^{\circ}\)C in steps of 20\({}^{\circ}\)C. The optimal substrate temperature for the epitaxial growth of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films was found to be T\({}_{\text{sub}}=720^{\circ}\)C. At this temperature, we also prepared a series of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films varying the laser fluence from 1.5 to 3 J/cm\({}^{2}\) in steps of 0.5 J/cm\({}^{2}\), and better-quality films were obtained at a laser fluence of 2.5 J/cm\({}^{2}\).

## 3 Results and Discussion

The surface quality of the epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films was characterized by reflection high-energy electron diffraction (RHEED) and atomic force microscopy (AFM).
The deposition of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) was monitored in situ by RHEED [Fig. 1(c)]. We observe that the intensity of the time-dependent RHEED oscillations remains roughly the same throughout the deposition of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) layers on the SrTiO\({}_{3}\) substrates. The RHEED oscillations indicate that the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grow in a layer-by-layer mode with a smooth surface, as also demonstrated by sharp specular and diffracted RHEED patterns [inset in Fig. 1(c)]. After PLD growth, the surface morphology of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films was investigated using AFM. Figure 1(d) depicts a typical AFM image for a representative 25-nm-thick Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film. The surface of the resulting Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown at the optimal substrate temperature is smooth and exhibits well-pronounced terraces, demonstrating a high-quality surface morphology of these films. For a lateral scan size of \(\sim 5\times 5\,\mu\)m\({}^{2}\), the extracted surface roughness is \(\leq 0.2\) nm for a 25-nm-thick Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film [bottom panel in Fig. 1(d)].

The crystalline quality and phase purity of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films were characterized by XRD. Figure 2(a) shows representative \(\theta-2\theta\) scans for the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films at various substrate temperatures and laser fluences. For the films grown at the optimal growth conditions (T\({}_{\text{sub}}=720^{\circ}\)C and laser fluence \(=2.5\,\text{J/cm}^{2}\)), only the substrate peaks and the phase-pure \(00l\) family of the film diffraction peaks are resolved, which indicates high crystallinity and verifies that the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films are aligned along the \(c\)-axis. The \(002\), \(004\), \(008\), \(00\underline{10}\), \(00\underline{12}\), \(00\underline{14}\), \(00\underline{20}\), and \(00\underline{22}\) Bragg diffraction peaks are well resolved, while the \(006\), \(00\underline{16}\), and \(00\underline{18}\) peaks are not present due to the small film thickness and the low structure factors of these peaks [10]. The extracted out-of-plane lattice parameter for epitaxial films grown at the optimal growth conditions is \(c=20.72\) Å. This value is consistent with the bulk \(c-\)axis lattice constant of 20.7194 Å reported previously for single crystals of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [30, 40, 41, 42]. Extra phases, indicated by (#) in Fig. 2(a), were detected in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown at high substrate temperatures, underlining the role of proper tuning of T\({}_{\rm sub}\) for achieving phase stability in these films.

Figure 2(b) shows a closeup view of the \(\theta-2\theta\) scans around the \(00\underline{10}\) diffraction peak for the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown at various T\({}_{\rm sub}\) and laser fluences. For the films grown at the optimal growth conditions, only the \(00\underline{10}\) peaks are resolved, with noticeable thickness fringes, which highlight phase purity and smooth growth. However, we observed shifting of the \(00\underline{10}\) peaks to lower \(2\theta\) angles for films that were not grown at the optimal growth conditions. The shifting of the peak demonstrates that the \(c\)-axis lattice constant expands, indicating inter-growth of other Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) phases and reduced stability of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films. In particular, the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown at T\({}_{\rm sub}=680^{\circ}\)C exhibited a peak shoulder around the substrate peak, indicating an inter-growth of the SrRuO\({}_{3}\) phase [43].
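As an aside, the out-of-plane lattice constant quoted above follows from Bragg's law applied to the \(00l\) reflections; a minimal sketch, assuming Cu K\(\alpha\) radiation and an illustrative peak position that we back-computed rather than took from the measured data:

```python
# Minimal sketch (ours): extracting the c-axis parameter from a 00l
# XRD peak via Bragg's law, lambda = 2 d sin(theta), with d = c / l.
import math

def c_axis_from_00l(two_theta_deg, l, wavelength=1.5406):
    """Return c in angstrom; the wavelength defaults to Cu K-alpha1."""
    theta = math.radians(two_theta_deg / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    return d * l

# Hypothetical 00(10) peak position chosen to reproduce c ~ 20.72 A:
print(c_axis_from_00l(43.65, 10))
```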
Figure 2(c) shows four reciprocal space maps (RSMs) around the asymmetric pseudocubic \(\bar{1}03_{p}\) reflection peak of the substrate for different \(\phi\) angle orientations. From all four RSMs, it is evident that the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film is fully epitaxially strained and single phase, as confirmed by the extracted pseudocubic in-plane lattice constant of \(a=3.887\) Å. This extracted in-plane lattice parameter is consistent with the literature value for single crystals of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [40, 41].

Although the epitaxial films appear to be single phase from the XRD characterization, it is also essential to use transmission electron microscopy (TEM) to check for inter-growths, which are known to be difficult to discern in XRD patterns of layered R-P materials [44, 45, 25]. To provide a complementary real-space microstructural characterization of these films, cross-sectional TEM imaging was performed. With these studies, we investigated the defect populations of these films, including possible inter-growths of \(n\neq 2\) members of the Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) series.

Figure 1: **Crystal structure and surface quality of epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films**. Schematic representation of (a) the crystal structure and (b) the layout of the double-layered Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) thin film grown by pulsed laser deposition on a (001)-oriented SrTiO\({}_{3}\) substrate. The dashed-line rectangle in (a) delineates the unit cell of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). (c) Time-dependent RHEED intensity oscillations recorded during the growth of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). The inset depicts RHEED patterns of a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film, where the blue and orange rectangles mark the region from which the integrated intensity as a function of time was recorded during the deposition of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). (d) A representative AFM image displaying the surface morphology and (bottom panel) the corresponding surface roughness profile of a typical Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film.

Figure 2: **Structural characterization of epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films**. (a) Representative XRD scans of 25-nm-thick Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films prepared at different growth temperatures and laser fluences, together with a scan of a SrRuO\({}_{3}\) control film (orange curve). For films prepared at the optimal substrate temperature (720\({}^{\circ}\)C), only the SrTiO\({}_{3}\) Bragg reflection peaks (*) and the \(00l\) family of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films' diffraction peaks are resolved. Extra phases (#) in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films are observed at high growth temperatures. (b) Closeup XRD patterns around the \(00\underline{10}\) peak of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\). For growth temperatures higher or lower than 720\({}^{\circ}\)C, the \(00\underline{10}\) peaks of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) shift to lower values. (c) Reciprocal space maps for a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film around the \(\bar{1}03_{p}\) reflection of the SrTiO\({}_{3}\) substrate, where \(p\) refers to pseudocubic indices, measured at four different \(\phi\) angle orientations of the SrTiO\({}_{3}\) substrate.
Figure 3 depicts a cross-sectional STEM image of the entire film thickness of a representative Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film prepared at the optimal growth conditions. For films prepared at the optimal substrate temperature, no surface steps at the substrate and no extended defects throughout the entire film thickness were observed. Selected-area STEM studies of these films corroborate the epitaxial orientations established by XRD and confirm that the growth occurred along the [001] direction with the \(c-\)axis out of plane, the films being fully epitaxially strained to the SrTiO\({}_{3}\) substrate. The STEM images of two representative Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films distinctly show the double-layered structure of the films, the interface between the substrate and the deposited layers being coherent, without misfit dislocations along the interface [Figs. 3(a) and 3(c)]. The annular dark-field (center) and bright-field (right) images in Fig. 3(b) show the Sr and Ru columns with enhanced atomic-number contrast. The Ru atomic columns appear brightest due to the much higher atomic number of Ru compared with Sr, as also indicated in the inset with a superimposed structural model [Fig. 3(b)]. The alternate stacking of the rock-salt SrO layers and the RuO\({}_{6}\) octahedra sheets observed in these images confirms the formation of the desired \(n=2\) member of the R-P structure.

To evaluate the composition of the epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films, electron energy-loss spectroscopy (EELS) measurements were performed on the cross section of the samples during the STEM characterization. Figure 3(d) displays atomically resolved EELS elemental maps of a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film. The black color indicates zero intensity, whereas the other colors indicate the expected chemical composition of the different elements in the film (Ru, Sr, Ti and O) that are resolved in the scanned region. For epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films grown at the optimal growth conditions (T\({}_{\text{sub}}=720^{\circ}\)C and laser fluence = 2.5 J/cm\({}^{2}\)), the only defects detected are infrequent regions of \(n=3\) inter-growth within the \(n=2\) layers [indicated by black arrows in Figs. 3(a) and 3(c)]. Such inter-growths were found in localized regions with insufficiently ordered volume to give rise to detectable diffraction spots in the XRD patterns.

Figure 3: **Microstructural characterization of representative Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films prepared at optimal growth conditions**. (a) Overview scanning transmission electron microscopy (STEM) image of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) in [100] zone-axis orientation. (b) The magnified annular dark-field (center) and annular bright-field (right) images show the film with good structural order. A schematic atomic structural model, displaying the sequence of Sr, Ru and O atoms, is shown overlaying the resolved double-layered atomic structure. (c) High-resolution STEM image of a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film in [100] orientation together with (d) the corresponding electron energy-loss spectroscopy (EELS) elemental mapping of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\)/SrTiO\({}_{3}\) interface, showing the elemental distribution in the sample. Due to the overlap of the Ru \(M_{4,5}\)-edges with the low-intensity Sr \(M_{2,3}\)-edges, residual Sr signals remain when extracting the Ru signal. Black arrows in (a) and (c) point to regions with \(n=3\), which were observed occasionally in films prepared at optimal growth conditions.
Similar inter-growths of extra \(n\) phases in R-P layers were reported in epitaxial films of Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) and Sr\({}_{n+1}\)Ti\({}_{n}\)O\({}_{3n+1}\) (\(n=2\) to 5) grown by MBE [44, 46, 25, 10]. STEM investigations of films grown outside the optimal substrate-temperature window showed an increased defect density and populations of inter-growths of other \(n\)-phases of the R-P series [see Fig. S1 of the supplementary material]. These inter-growths are difficult to control in epitaxial growth using PLD. This is because minor stoichiometric deviations that can exist between the target and the deposited films, together with the fact that the difference in the formation energies among nearby \(n\) members of the Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) series is small, can lead to the formation of mixed \(n\)-phase films of these materials [45].

The magnetization as a function of temperature, \(M\)(T), of the Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films was studied using the vibrating sample magnetometer (VSM) of a physical property measurement system (PPMS). The film was first pre-cooled to 5 K in a field of 0.1 T, and the \(M\)(T) data were then collected while warming to 300 K in the presence of a 0.01 T field. Figure 4 depicts the \(M\)(T) curve (green) of a representative Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film grown on SrTiO\({}_{3}\) at the optimal growth conditions. These magnetization data are obtained after subtraction of the diamagnetic contribution from the SrTiO\({}_{3}\) substrate. The Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films show no ferromagnetic transition, as the \(M\)(T) curve is nearly flat over the measured temperature range. This observation is in agreement with previous reports on the magnetic ground state of single crystals and epitaxial films of this material [25, 26]. In contrast, as expected, the \(M\)(T) data of a SrRuO\({}_{3}\) control film prepared in the same PLD chamber show robust ferromagnetic ordering, as revealed by a sharp paramagnetic-to-ferromagnetic transition at around 150 K, in agreement with previous reports [47, 25, 16].

Now we turn to the device fabrication and electronic transport characteristics of epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films prepared at the optimal growth conditions. To investigate the magnetotransport properties of these films and patterned devices, we fabricated side-by-side on the same film a series of Hall bar devices of various channel widths (from 10 \(\mu\)m to 500 nm) [Fig. 5(a)]. The devices were patterned using electron-beam lithography (EBL) followed by ion-beam etching and the deposition of Ti/Au (5 nm/45 nm) electrodes. Figures 5(b) and 5(c) show top-view scanning electron microscopy (SEM) images of patterned Hall bar devices and a schematic side view of a device together with the transport measurement configuration, respectively. Unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films were measured in a Van der Pauw configuration by wire-bonding aluminum wires to the samples' corners. The electronic transport characteristics of the patterned thin-film devices, which were fabricated side-by-side on the same film after the transport measurements in the Van der Pauw geometry, were studied in a Hall bar configuration with aluminum wires connected to the electrodes [Fig. 5(d)]. For magnetotransport measurements, we used a Quantum Design PPMS operated at 300-5 K. An excitation current of 1 \(\mu\)A was applied.
The MR measurements were performed with the magnetic field (\(B\)) oriented both along the \(c-\)axis (\(B\parallel c\)) and in the \(ab-\)plane (\(B\parallel ab\)) of the film. Figure 6(a) presents the temperature-dependent zero-magnetic-field sheet resistance (\(R_{S}\)) between 5 and 300 K for a 25-nm-thick unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film and for two representative patterned Hall bar devices fabricated side-by-side on this same film after the Van der Pauw measurements. The unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film and the devices show metallic behavior over the entire temperature range. At 5 K, \(R_{S}\) is roughly a factor of 4 higher in the unpatterned film than in the Hall bar devices. The extracted residual resistivity ratio, RRR, defined as \(\rho\)(300 K)/\(\rho\)(5 K), is RRR \(\simeq 6.8\) for 25-nm-thick films. This value is consistent with reported resistivity data of MBE-grown Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films of similar thickness [10]. From the temperature dependence of the normalized resistances, \(R(T)/R\)(5 K), at various applied \(B-\)field orientations, the measured temperature dependence is isotropic in the absence of an applied field, while enhanced scattering is observed for an in-plane applied \(B-\)field of 5 T [see Fig. S2 of the supplementary material]. This enhanced scattering near the metamagnetic transition field was observed in strained Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films [10], and it has been attributed to field-controlled instabilities of the Fermi surface of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [31].

Figures 6(b)-6(d) show the MR at several temperatures for the same unpatterned film and two Hall bar devices fabricated on the same sample after measurements in the Van der Pauw geometry. The magnetoresistance is defined as \(\text{MR}=\left(\frac{R_{\text{xx}}(B)-R_{\text{xx}}(0)}{R_{\text{xx}}(0)}\right)\times 100\), where \(R_{\text{xx}}(B)\) and \(R_{\text{xx}}(0)\) are the longitudinal resistances measured at a field \(B\) and at zero field, respectively. MR data were acquired with the \(B-\)field in the out-of-plane direction (\(B\parallel c-\)axis, \(\theta=0^{\circ}\)), the excitation current being perpendicular to the field (\(B\perp I\)). No hysteresis loop was observed, which indicates that no intrinsic long-range ferromagnetic order occurs. Details on the analysis of MR data are discussed elsewhere [48, 49, 50].

Figure 4: **Magnetic ground state of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films**. Magnetization as a function of temperature of a typical Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film prepared at optimal growth conditions. For comparison, the magnetization data of a SrRuO\({}_{3}\) film are also presented.

For unpatterned films measured in the Van der Pauw geometry at 5 K, a positive MR is measured at low magnetic fields, together with a shallow change of slope slightly above 6 T [Fig. 6(b)]. As the temperature is increased further, this shallow slope at high \(B-\)field disappears and the MR data show a quasi-quadratic field dependence. Positive MR with a change of slope from positive to negative, resulting in an MR peak around 6 T, has been reported in thin Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) single-crystal nanosheets measured at 500 mK with \(B\parallel c-\)axis [30]. The positive MR peak has been associated with the metamagnetic transition in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) [27, 30]. For patterned Hall bar devices, a positive MR with quasi-quadratic field dependence is observed independent of temperature [Figs. 6(c)-6(d)].
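For reference, the MR values quoted in this section follow directly from the definition above; a minimal sketch (the array names and the illustrative sweep are ours, not measured data):

```python
# Minimal sketch (ours): MR(B) in percent from a field sweep, per the
# definition MR = (Rxx(B) - Rxx(0)) / Rxx(0) * 100 given above.
import numpy as np

def magnetoresistance(B, Rxx):
    B, Rxx = np.asarray(B), np.asarray(Rxx)
    R0 = Rxx[np.argmin(np.abs(B))]   # resistance closest to zero field
    return (Rxx - R0) / R0 * 100.0

# Illustrative use with made-up numbers (not the measured data):
B = np.linspace(-9, 9, 7)            # field in tesla
Rxx = 100.0 + 0.5 * B**2             # a quasi-quadratic positive-MR sweep
print(magnetoresistance(B, Rxx))
```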
Figure 5: **Device fabrication and measurement configuration**. (a) Optical micrograph showing a top view of the whole Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film on which several Hall bar devices of different channel widths (\(W\)) were fabricated side-by-side. The channel widths are \(W=10\,\mu\)m, 5 \(\mu\)m, 1 \(\mu\)m and 500 nm for Hall bar devices D-01 to D-04, D-05 to D-09, D-10 to D-14, and D-15 to D-18, respectively. (b) SEM images of representative Hall bar devices of \(W=10\,\mu\)m and 500 nm. (c) Configuration of the magnetotransport measurements for Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) Hall bar devices. The length (\(L\)) between the longitudinal voltage contacts is the same for all Hall bar devices in (a), \(L=200\,\mu\)m. (d) Photograph of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) devices wire-bonded for magnetotransport measurements. Due to the limited number of available contacts on a chip carrier, only two devices on a sample could be bonded at once.

We are not able to resolve an MR peak in these Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films, because MR data were acquired only down to 5 K and up to 9 T. For Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) single crystals, a broad MR peak at 5 K was resolved only for MR data acquired up to 15 T [27]. The MR increases with decreasing temperature, and it evolves into quasi-linear behavior in the high-field limit. For the unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film, a maximum MR (\(T=5\) K, \(B=9\) T) of 10% is obtained [Fig. 6(b)]. However, Hall bar devices fabricated on this same film show large positive MR values of 40% and 140% for devices of channel widths of 10 and 5 \(\mu\)m, respectively [Figs. 6(c)-6(d)]. The positive MR for \(B\parallel c\) is an indication of the paramagnetic behavior of these films, consistent with the resistivity increase with increasing magnetic field known for most materials [51]. Whereas for ferromagnets the MR is negative, in paramagnetic metals the motion of the carriers is deflected by the Lorentz force, which increases the probability of carrier scattering and thereby increases the resistance.

For the same film characterized in Figure 6, we have performed angle-dependent MR measurements at various temperatures. Figure 7(a) depicts representative MR data at 5 K for the unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film. The MR data were acquired for \(\theta\) (the angle between the \(B-\)field and the \(c-\)axis of the sample) values of \(0^{\circ}\), \(45^{\circ}\) and \(90^{\circ}\). As \(\theta\) increases from \(0^{\circ}\) to \(90^{\circ}\), the MR undergoes a gradual transition from positive to negative. For \(\theta=90^{\circ}\), the MR is negative and it reaches a local minimum at a magnetic field of \(\simeq 5.8\) T, which is consistent with the in-plane metamagnetic transition reported for Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) single crystals [27, 28]. MR data of a Hall bar device fabricated on the same film exhibit qualitatively similar switching behaviors from positive to negative MR as the magnetic field is rotated from the out-of-plane (\(B\parallel c-\)axis) to the in-plane direction (\(B\parallel ab-\)plane) [see Fig. S3 of the supplementary material].

Figure 6: **Magnetotransport characteristics of a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) epitaxial film at various temperatures**. (a) Temperature dependence of the zero-field sheet resistance (\(R_{S}\)) of a 25-nm-thick Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film before device fabrication (unpatterned thin film), and the \(R_{S}\) of two representative patterned Hall bar devices (D-03 and D-07) fabricated on the same Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film. The channel widths of D-03 and D-07 are 10 \(\mu\)m and 5 \(\mu\)m, respectively. (b) Magnetoresistance (MR) curves of a Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) film measured at various temperatures in the Van der Pauw geometry before device fabrication (unpatterned thin film), and (c)-(d) for the patterned Hall bar thin-film devices fabricated on the same film after the measurements in the Van der Pauw configuration. MR data were acquired with \(B\parallel c-\)axis, \(\theta=0^{\circ}\) (\(\theta\) is the angle between the \(B-\)field and the \(c-\)axis, as schematically illustrated on the right side of the figure).
Maximum negative MR values (\(T=5\) K, \(B=9\) T, \(\theta=90^{\circ}\)) of -4% and -70% are obtained for the unpatterned film [Fig. 7(a)] and the Hall bar device [Fig. S3], respectively. The observation of a negative MR at \(\theta=90^{\circ}\), accompanied by a local minimum and a shallow positive slope at high magnetic field, is attributed to the coexistence of short-range in-plane ferromagnetic order in the surface layer and metamagnetism in these films. The Ru\({}^{4+}\) spins become aligned by the applied magnetic field, so that spin-dependent scattering decreases, which gives rise to a negative MR. This observation is consistent with a recent magnetic-field-controlled spectroscopic study that resolved in-plane ferromagnetism in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) single crystals that is sensitive to the direction of the applied magnetic field [52]. Furthermore, we conjecture that the high MR values in these films are due to spin fluctuations of the mobile electronic carriers when the material is under applied magnetic fields.

In summary, we have explored an approach to stabilize the epitaxy of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films by varying the substrate temperature and the laser fluence in the PLD process, and investigated the magnetotransport trends in these epitaxial films. Structural characterizations of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films prepared at optimal growth conditions confirm that these films are epitaxially oriented and nearly phase pure. Resistivity measurements of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films and Hall bar devices exhibit characteristic metallic behavior. The magnetization characteristics of the films show no intrinsic ferromagnetic transition over the entire measured temperature range. These films exhibit a qualitatively similar MR behavior as previously reported for bulk single crystals and strained epitaxial films, specifically positive MR for the magnetic field applied along the \(c-\)axis and negative MR for the magnetic field applied along the \(ab-\)plane of the film. However, we achieve large magnetoresistance values of up to 40% and 140% in Hall bar devices with channel widths of 10 and 5 \(\mu\)m, respectively, compared with 10% in unpatterned Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films. This provides an opportunity to further explore the physics of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\)-based electronic devices for the realization and modulation of novel emergent phenomena (\(e.g.\), the nematic phase and non-Fermi-liquid behavior) in submicrometer devices.
Based on these results, future directions are expected to focus on the investigation of magnetotransport properties of quantum structures (\(e.g.\), nanowires and quantum dots) patterned on epitaxial Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\) films, as well as on nanoscale devices fabricated side-by-side on the same film in which magnetotransport properties can be modulated through quantum size effects and applied electric fields. **Supplementary Material** See the supplementary material for additional microstructural characterizations using TEM and magnetotransport measurements of epitaxial Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) films. **Acknowledgments** The authors acknowledge technical support from Sarah Parks. P. Ngabonziza acknowledges startup funding from the College of Science and the Department of Physics & Astronomy at Louisiana State University. A. Scheid acknowledges the invaluable assistance of Tobias Heil for his support with STEM examinations, and Y. Eren Sujolcu for his exceptional efforts in coordinating the STEM examinations and interpreting the results. A. Scheid and P. van Aken acknowledge funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No.823717-ESTEEM3. **Data Availability** The data that support the findings of this study are available from the corresponding author upon reasonable request.
2301.07539
Grouping promotes both partnership and rivalry with long memory in direct reciprocity
Biological and social scientists have long been interested in understanding how to reconcile individual and collective interests in iterated Prisoner's Dilemma. Many effective strategies have been proposed, and they are often categorized into one of two classes, `partners' and `rivals.' More recently, another class, `friendly rivals,' has been identified in longer-memory strategy spaces. Friendly rivals qualify as both partners and rivals: They fully cooperate with themselves, like partners, but never allow their co-players to earn higher payoffs, like rivals. Although they have appealing theoretical properties, it is unclear whether they would emerge in evolving population because most previous works focus on memory-one strategy space, where no friendly rival strategy exists. To investigate this issue, we have conducted large-scale evolutionary simulations in well-mixed and group-structured populations and compared the evolutionary dynamics between memory-one and memory-three strategy spaces. In a well-mixed population, the memory length does not make a major difference, and the key factors are the population size and the benefit of cooperation. Friendly rivals play a minor role because being a partner or a rival is often good enough in a given environment. It is in a group-structured population that memory length makes a stark difference: When memory-three strategies are available, friendly rivals become dominant, and the cooperation level nearly reaches a maximum, even when the benefit of cooperation is so low that cooperation would not be achieved in a well-mixed population. This result highlights the important interaction between group structure and memory lengths that drive the evolution of cooperation.
Yohsuke Murase, Seung Ki Baek
2023-01-16T12:56:05Z
http://arxiv.org/abs/2301.07539v1
# Grouping promotes both partnership and rivalry with long memory in direct reciprocity ###### Abstract Biological and social scientists have long been interested in understanding how to reconcile individual and collective interests in the iterated Prisoner's Dilemma. Many effective strategies have been proposed, and they are often categorized into one of two classes, 'partners' and 'rivals.' More recently, another class, 'friendly rivals', has been identified in longer-memory strategy spaces. Friendly rivals qualify as both partners and rivals: They fully cooperate with themselves, like partners, but never allow their co-players to earn higher payoffs, like rivals. Although they have appealing theoretical properties, it is unclear whether they would emerge in an evolving population because most previous works focus on the memory-one strategy space, where no friendly rival strategy exists. To investigate this issue, we have conducted large-scale evolutionary simulations in well-mixed and group-structured populations and compared the evolutionary dynamics between memory-one and memory-three strategy spaces. In a well-mixed population, the memory length does not make a major difference, and the key factors are the population size and the benefit of cooperation. Friendly rivals play a minor role because being a partner or a rival is often good enough in a given environment. It is in a group-structured population that memory length makes a stark difference: When memory-three strategies are available, friendly rivals become dominant, and the cooperation level nearly reaches a maximum, even when the benefit of cooperation is so low that cooperation would not be achieved in a well-mixed population. This result highlights the important interaction between group structure and memory length in driving the evolution of cooperation. ## Significance Statement In the evolution of cooperation, to what extent is cognitive intelligence essential? This study shows that structure in populations is a critical factor in promoting a sophisticated form of cooperation based on direct reciprocity. When the population is formed in groups, one has to be strict enough to survive within the group, whereas generosity to form cooperation is also essential for inter-group competition. These conflicting demands can be met only by intelligent strategies that sustain robust cooperation among competitors by referring to more than one previous round. This study thus suggests that population structure and cognitive capacity can jointly impact evolution, although these have often been studied independently. ## Introduction A game describes interactions among agents that are governed by a set of rules to specify each agent's possible moves and the resulting outcome from the combination of moves [1]. A wide range of social and biological phenomena are thus covered by the theory of games. A successful strategy in a game can often be constructed by requiring certain reasonable properties in a top-down manner. Then the question is whether natural selection can achieve the same goal in a bottom-up way. Sometimes the answer is straightforward: For a symmetric two-person game, if a symmetric strategy profile \((x,x)\) is the unique strict Nash equilibrium, \(x\) is evolutionarily stable, and replicator dynamics will converge there. However, this is the case only for relatively simple games. If we can construct a weak Nash equilibrium at best, keeping the evolutionary trajectory close to the equilibrium will be hard. 
Or, the evolutionary path can be highly nontrivial when the system has multiple equilibria. Let us consider the iterated Prisoner's Dilemma (IPD) game. It has long been investigated to deepen our understanding of direct reciprocity, one of the fundamental mechanisms to sustain cooperation by means of repeated interactions. Still, the idea that a nontrivial strategy can be derived mathematically by imposing a few requirements is relatively recent: A major breakthrough was the discovery of zero-determinant (ZD) strategies [2], each of which is made to unilaterally enforce a linear relationship between long-term payoffs regardless of the co-player's strategy. An interesting subclass of the ZD strategies consists of 'extortioners,' which guarantee that the player's long-term payoff grows more than the co-player's. However, such extortionate strategies are not favored by selection unless the population size is small enough because they exploit each other so heavily [3, 4]. In contrast, generous ZD strategies make the co-player's payoff higher until mutual cooperation is reached. Those strategies are fairly successful in evolving populations, especially when the mutation rate is moderately high [5]. More importantly, the discovery of ZD strategies has considerably altered our viewpoint on strategic analysis: Recall that a player's payoff depends not only on his or her own strategy but also on the co-player's by the very definition of a game. In a sense, the discovery of ZD strategies has reduced this two-body problem to a one-body problem because a strategy is now characterized by how it restricts the co-player's performance. The restriction imposed by a strategy is an invariant property of that strategy, which can be analyzed, modified, and even designed _a priori_ in terms of long-term payoffs. According to this viewpoint, many well-known strategies are categorized into a couple of classes. Figure 1(a) shows a schematic diagram of the strategy space, in which generous strategies are overall placed on the left whereas stricter strategies are on the right. First, we have "efficient" strategies, which are depicted as the blue area on the left of the figure. This class is also called "self-cooperators" because each of its member strategies maintains full cooperation when it is used by both players even in the presence of implementation errors [6]. Figure 1(b) shows the two players' payoffs, and the blue dot indicates their payoffs when they adopt an efficient strategy. For instance, Win-Stay-Lose-Shift (WSLS) players can recover cooperation from erroneous defection, so WSLS is efficient. By contrast, Tit-For-Tat (TFT) players fall into a series of retaliations after a mistake, so TFT is not efficient. The "partners" constitute a subset of efficient strategies, depicted as the area surrounded by the dashed blue square. The partners are also denoted as "good" [7, 8], and all the memory-one partner strategies have been identified [5, 8]. When one of the players, say, Alice, uses a partner strategy, her co-player Bob cannot obtain a payoff greater than the payoff from mutual cooperation, no matter which strategy he takes. It means that Alice unilaterally restricts their payoffs to the shaded area shown in Fig. 1(c). One of Bob's best responses is taking the same strategy as Alice's to reach full cooperation, which forms a cooperative Nash equilibrium. The other class, rivals, also called "unbeatable" or "defensible" [10], is shown as the red area in Fig. 1(a). 
If Alice plays a rival strategy, she never allows her co-player Bob to get a higher payoff than hers, irrespective of his strategy, thus unilaterally restricting the possible payoffs to the shaded area in Fig. 1(e). This class includes AllD, TFT, and the extortionate ZD strategies. Based on the above two classes, we can now introduce the idea of friendly rival (FR) strategies [10, 11, 12, 13, 14]. These strategies qualify both as partners and as rivals simultaneously, which is indicated as the intersection of partners and rivals in Fig. 1(a). An FR strategy achieves full cooperation against itself and never accepts a payoff lower than the co-player's, as shown in Fig. 1(d). In this sense, one may regard FRs as the strictest partners, or as self-cooperative rivals. It is straightforward to show that FR strategies are evolutionarily robust [5] for any benefit-to-cost ratio and any population size [12]. It has been demonstrated by an evolutionary simulation that one example of FR strategies, called CAPRI, outperforms memory-one strategies overwhelmingly [12]. Nevertheless, the role of FR strategies in the evolution of cooperation remains unclear. FR strategies exist only when memory length is \(m=2\) or longer [11], whereas previous studies on longer-memory strategies are relatively limited [15, 16, 17] compared to those on the memory-1 strategy space [3, 4, 5, 6, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Because of the conflicting requirements, FR strategies are quite rare in the longer-memory strategy spaces: the fractions of FR strategies are \(1.2\times 10^{-4}\) and \(3.8\times 10^{-7}\) among memory-2 and memory-3 strategies, respectively. Whether these tiny fractions of FR strategies seriously impact the evolutionary dynamics of cooperation is nontrivial. According to a recent understanding [24], partners are typically selected when the population size \(N\) and the benefit of cooperation \(b\) are large, resulting in cooperative states. On the other hand, when \(N\) or \(b\) is small, a player has a better chance of survival by being spiteful toward others [27], which lowers the cooperation level. Therefore, we speculate that the selection of FR strategies is more prominent in an environment where both large- and small-population effects are simultaneously present. In this paper, we consider a group-structured population in addition to the standard well-mixed population of size \(N\). In a group-structured population, players are divided into groups and play the IPD game with their in-group members while occasionally imitating strategies of out-group members. The evolutionary dynamics among memory-one strategies in a group-structured population have been studied in detail [28]. We speculate that the group structure plays a more critical role as the memory length increases because FR strategies become available. In a broader context, we study an interplay among different mechanisms of cooperation [29]. Whereas traditional approaches have focused on characterizing each single mechanism, human cooperation is often ensured by multiple mechanisms working simultaneously, and their interactions are becoming an active area of research. For instance, the joint effect of direct reciprocity and structured populations has been studied intensively [19, 22, 28, 30, 31], and a model unifying direct and indirect reciprocity was proposed [32]. Group structure, in particular, is known to contribute to the emergence of reputation-based norms [33], fairness [34], and kinship structure [35, 36]. 
This study aims to add another finding to the literature by showing that the underlying tension between inter- and intra-group dynamics induced by the group structure can guide the evolutionary trajectory of direct reciprocity toward the tiny intersection between partners and rivals. In this paper, we will conduct large-scale Monte Carlo simulations of evolutionary dynamics to see the roles of FR strategies in the evolution of cooperation. Specifically, we compare the evolutionary dynamics within the memory-one and memory-three strategy spaces, whose cardinalities are \(2^{4}=16\) and \(2^{64}\approx 10^{19}\), respectively. We will see that a stark difference is observed in the group-structured population and that cooperation approaches the theoretical optimum because of the FR strategies even when the benefit of cooperation is low. ## Model ### Evolutionary dynamics In this paper, we study the donation game, a special form of the Prisoner's Dilemma (PD). Its payoff matrix is defined as follows: \[\begin{pmatrix}b-1&-1\\ b&0\end{pmatrix}, \tag{1}\] where the benefit and the cost of the donation are \(b\) and \(1\), respectively. When a donor cooperates, they pay a unit amount of cost, and the co-player gets the benefit of \(b>1\), whereas nothing happens when the donor defects. We consider the repeated donation game without discounting the future. Players take unintended actions in each round of the donation game with a small probability \(e(>0)\) because of implementation errors. The long-term payoff of player \(X\) against player \(Y\) is defined as the average over infinitely many rounds: \[\pi_{XY}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\pi_{XY}^{(t)}, \tag{2}\] where \(\pi_{XY}^{(t)}\) is \(X\)'s payoff against \(Y\) in round \(t\). When both strategies have finite memory lengths, the sequence of moves is described as a Markov chain. The long-term payoff always converges to a unique stationary value \(\pi_{XY}\) and can be obtained by a linear-algebraic calculation. (See Methods for details.) Players' strategies are updated according to evolutionary dynamics at a longer time scale. Here, we study two types of populations: one is well-mixed, and the other is structured in groups. The well-mixed population, the most standard model in the literature, assumes that each player plays the game with everyone else in the population with equal likelihood. On the other hand, the group-structured population assumes an internal structure in the population so that players play the game only with in-group members while they may imitate out-group members as well. The well-mixed population is a particular case of the group-structured one, so we describe the latter in detail below. We consider a population of size \(NM\), which is subdivided into \(M\) groups of size \(N\) as shown in Fig. 2. Suppose that a player currently uses strategy \(X\). This focal player is then given a chance to adapt its strategy through either intra-group imitation (with probability \(\mu_{\text{in}}\)), out-group imitation (with probability \(\mu_{\text{out}}\)), or mutation (with probability \(\nu\)), where \(\mu_{\text{in}}+\mu_{\text{out}}+\nu=1\). Figure 1: (a) A schematic diagram of some important strategies showing the four classes of strategies: efficient, partner, friendly rival, and rival strategies. Strategies that tend to cooperate (defect) are shown on the left (right). The bottom panels (b-e) show accessible regions in the payoff space. 
Each blue dot in (b-d) represents the payoffs when both players use a strategy belonging to the given class. Each shaded area in (c-e) indicates possible payoffs when one of the players, \(A\), uses a strategy in the class. The intersection of partners and rivals, indicated by the purple area in (a), defines friendly rivals. In the case of intra-group imitation, the focal player randomly chooses another player in the group as a role model. If the role model has adopted strategy \(Y\), the focal player switches to \(Y\) with probability given by the Fermi function \[f_{X\to Y}^{\text{in}}=\frac{1}{1+\exp\left[\sigma_{\text{in}}\left(\pi_{X}-\pi_{Y}\right)\right]}, \tag{3}\] where \(\sigma_{\text{in}}\) represents the selection strength of intra-group imitation. Next, out-group imitation takes place in a similar manner: The focal player randomly chooses a role model from the other groups with equal probability. If the role model uses strategy \(Y\), the focal player adopts the strategy with probability \[f_{X\to Y}^{\text{out}}=\frac{1}{1+\exp\left[\sigma_{\text{out}}\left(\pi_{X}-\pi_{Y}\right)\right]}, \tag{4}\] where \(\sigma_{\text{out}}\) is the selection strength for out-group imitation. Note that the focal player and the role model are now in different groups so that they do not directly play the game with each other, and this is one of the key differences from a model without group structure. Still, out-group imitation allows strategies to spread from one group to another, just as migration does in genetic evolution models. Finally, when the focal player changes his or her strategy through mutation, the player replaces the current strategy \(X\) with a strategy \(Y\) randomly sampled from a given strategy space (see below). This group-structured population reduces to the standard well-mixed one when \(M=1\) and \(\mu_{\text{out}}=0\). In the following, we assume that intra-group dynamics is faster than both out-group imitation and mutation, whereas the latter two processes have similar time scales, i.e., \(\mu_{\text{out}}\ll\mu_{\text{in}}\) and \(\nu\ll\mu_{\text{in}}\). In this limit, each group contains two strategies at most: When a new strategy \(Y\) appears in a group of resident players with \(X\) through either mutation or out-group imitation, no other mutant strategy will appear until \(Y\) takes over the whole group or dies out. The fixation probability of a \(Y\)-individual in a group of \(X\) is then given as [28] \[\rho_{X\to Y}=\left\{\sum_{j=0}^{N-1}\exp\left[\sigma_{\text{in}}j\frac{(2N-j-3)\pi_{XX}+(j+1)\pi_{XY}-(2N-j-1)\pi_{YX}-(j-1)\pi_{YY}}{2(N-1)}\right]\right\}^{-1}. \tag{5}\] Therefore, the probability for a group of \(X\) to change its strategy to \(Y\) via out-group imitation is given as follows: \[T_{X\to Y}=f_{X\to Y}^{\text{out}}\rho_{X\to Y}. \tag{6}\] Let us define the relative mutation probability as \(r\equiv\nu/(\mu_{\text{out}}+\nu)\), which denotes the frequency of mutation relative to that of out-group imitation. We will see in Results that the ratio between \(\mu_{\text{out}}\) and \(\nu\) plays a pivotal role in determining the evolutionary trajectory. For completeness, we show the results for an alternative model where the time scales for mutations and out-group imitations are completely separated, \(\nu\ll\mu_{\text{out}}\), in the Appendix. Now we are ready to simulate the time evolution of our model (see Methods for more details). We begin by preparing an initial state with \(M\) randomly sampled strategies, one for each group. 
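For concreteness, the update probabilities just introduced can be coded directly. The following is a minimal sketch of Eqs. (3)-(6), assuming the long-term payoffs \(\pi\) have already been computed as described in Methods; the function names are ours.

```python
import math

def fermi(pi_x, pi_y, sigma):
    """Imitation probability, Eqs. (3)-(4): chance of switching from X to Y."""
    return 1.0 / (1.0 + math.exp(sigma * (pi_x - pi_y)))

def fixation_probability(pi_xx, pi_xy, pi_yx, pi_yy, n, sigma_in):
    """Eq. (5): probability that a single Y-mutant takes over a group of N X-players."""
    total = 0.0
    for j in range(n):  # j = 0, ..., N-1; the j = 0 term is exp(0) = 1
        num = ((2 * n - j - 3) * pi_xx + (j + 1) * pi_xy
               - (2 * n - j - 1) * pi_yx - (j - 1) * pi_yy)
        total += math.exp(sigma_in * j * num / (2 * (n - 1)))
    return 1.0 / total

def group_takeover(pi_x, pi_y, pi_xx, pi_xy, pi_yx, pi_yy, n, sigma_in, sigma_out):
    """Eq. (6): T_{X->Y} = f_out * rho, the chance that a group of X adopts Y."""
    return fermi(pi_x, pi_y, sigma_out) * fixation_probability(
        pi_xx, pi_xy, pi_yx, pi_yy, n, sigma_in)
```

With these two quantities in hand, the time evolution proceeds in discrete events.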
We define the event of either a mutant or a resident taking over a group as the unit of time. At each time step, we randomly pick a focal group, whose strategy we denote by \(X\), among the \(M\) groups. Out-group imitation occurs with probability \(1-r\): Out of the other \(M-1\) groups, we choose one of them, say, using \(Y\). The focal group adopts \(Y\) with probability \(T_{X\to Y}\). Otherwise, a mutation event occurs with probability \(r\), and a mutant strategy \(Y\) takes over the focal group with probability \(\rho_{X\to Y}\). The detailed procedure to sample mutants will be explained in the next section. This completes one time step, and these steps are repeated until we obtain sufficient statistics. Figure 2: A schematic diagram of the group-structured population. In this example, the population is divided into \(M=4\) groups of size \(N=3\). (a) The players in the same group play the game with each other. A player’s fitness is determined according to the interactions that he or she has been involved in. Each player’s strategy is updated by (b) intra-group imitation, (c) out-group imitation, and (d) mutation. ### Memory lengths of strategies Although it is common to define the memory length of a strategy by a single integer, here we define memory length as a pair of integers \((m_{1},m_{2})\) to characterize the strategy space in more detail. A memory-\((m_{1},m_{2})\) strategy prescribes an action based on its own moves over the last \(m_{1}\) rounds and its co-player's moves over the last \(m_{2}\) rounds. For instance, TFT prescribes its action based only on the last move taken by the co-player, so its memory is represented as \((m_{1},m_{2})=(0,1)\). So-called "reactive memory-one" strategies belong to this category. As another example, unconditional strategies such as AllC and AllD have \((m_{1},m_{2})=(0,0)\). The set of reactive strategies contains unconditional strategies as a subset. However, when we categorize a given strategy in the following, we use the smallest memory length necessary to represent the corresponding behavior. For instance, the unconditional strategies belong to the memory-\((0,0)\) class but not the memory-\((0,1)\) class. In other words, the set of memory-\((0,1)\) strategies and the set of memory-\((0,0)\) strategies are disjoint. See Fig. 3(a) for more examples. Let \(S(m_{1},m_{2})\) denote the set of strategies that have memory-\((m_{1},m_{2})\). The memory-\(m\) strategy space \(\mathcal{S}(m)\) is defined as the set of the strategies satisfying \(m_{1}\leq m\) and \(m_{2}\leq m\), i.e., \(\mathcal{S}(m)\equiv\bigcup_{m_{1}=0}^{m}\bigcup_{m_{2}=0}^{m}S(m_{1},m_{2})\). The size of the memory-\(m\) strategy space \(|\mathcal{S}(m)|\) equals \(2^{2^{2m}}\) because a strategy prescribes either \(C\) or \(D\) for each of \(2^{2m}\) possible memory states. The number of strategies that have exactly memory-\((m_{1},m_{2})\) is obtained by excluding shorter-memory strategies as \[|S(m_{1},m_{2})|=2^{2^{(m_{1}+m_{2})}}-2\times 2^{2^{(m_{1}+m_{2}-1)}}+2^{2^{(m_{1}+m_{2}-2)}}. \tag{7}\] The number of pure strategies for each memory-length pair is shown in Fig. 3(a). As in Eq. (7) and Fig. 3(a), the number of strategies increases super-exponentially as \((m_{1}+m_{2})\) grows. If we naively sample a strategy from \(\mathcal{S}(m)\), strategies with small memory lengths will almost never appear. 
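These counts are easy to verify directly. Below is a short sketch of Eq. (7), with the \(m_{1}=0\) or \(m_{2}=0\) boundary cases filled in by the same inclusion-exclusion argument (our reading of the text):

```python
def n_maps(k):
    """Number of deterministic maps from the 2**k joint history states to {C, D}."""
    return 2 ** (2 ** k)

def n_exact(m1, m2):
    """Number of strategies with memory exactly (m1, m2); reduces to Eq. (7)
    when m1 >= 1 and m2 >= 1."""
    if m1 == 0 and m2 == 0:
        return 2                                      # AllC and AllD
    if m1 == 0 or m2 == 0:
        return n_maps(m1 + m2) - n_maps(m1 + m2 - 1)  # e.g., 2 reactive (0,1) strategies
    return n_maps(m1 + m2) - 2 * n_maps(m1 + m2 - 1) + n_maps(m1 + m2 - 2)

assert n_exact(1, 1) == 10   # matches the count quoted in the Fig. 3 caption
```

The super-exponential growth in \(m_{1}+m_{2}\) is immediate from these counts, which is why naive uniform sampling over \(\mathcal{S}(m)\) almost never returns a short-memory strategy.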
For this reason, in order to consider interactions among strategies with different memory lengths, we simulate mutation by using the following two-step process: \[\left\{\begin{array}{l}\text{First, we sample $m_{1}$ and $m_{2}$ as two independent integers drawn uniformly from $\{0,1,\ldots,m\}$.}\\ \text{A strategy is then uniformly sampled from $S(m_{1},m_{2})$.}\end{array}\right. \tag{8}\] This scheme allows us to sample shorter-memory strategies such as AllD, AllC, and TFT with significant probabilities. Furthermore, the average memory lengths would be \((m/2,m/2)\) under neutral selection. Efficient, rival, and FR strategies are distributed disproportionately in the strategy space. Figure 3(b-d) shows the fractions of efficient, rival, and FR strategies relative to the total number of memory-\((m_{1},m_{2})\) strategies (see Methods for more details). According to Fig. 3(b,c), the fractions of efficient and rival strategies tend to decrease as \(m_{1}\) or \(m_{2}\) increases, although the decreasing trend is milder for efficient strategies. Figure 3(d) shows that FRs are far rarer than efficient or rival strategies: They exist only when \((m_{1}+m_{2})\geq 4\), and the fraction goes down to \(4\times 10^{-7}\) in \(S(3,3)\). Thus, the chance of finding an FR strategy through random search is negligibly small. ## Results ### Evolution in well-mixed populations First, we show Monte Carlo results for well-mixed populations in Fig. 4. The upper panels (a-d) are the results for \(\mathcal{S}(1)\), whereas the lower panels (e-h) are for \(\mathcal{S}(3)\). As shown in the figure, both \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\) show qualitatively similar behavior: When \(b\) and \(N\) are high, the cooperation level is high, and the population primarily consists of non-FR efficient strategies with few rivals. By contrast, when \(b\) or \(N\) is small, non-FR rivals occupy most of the population, lowering the cooperation level. Figure 3: (a) The numbers of pure memory-\((m_{1},m_{2})\) strategies according to Eq. (7). Some well-known strategies in each class are also shown. (b-d) The fractions of efficient, rival, and FR strategies in each memory-length pair. For instance, there are 10 memory-\((1,1)\) strategies in total. Among them, 20% are efficient strategies, another 20% are rival strategies, and no FRs exist. These fractions are independent of the benefit and the cost of cooperation. One might find it puzzling that memory length is almost irrelevant despite the presence of FRs in \(\mathcal{S}(3)\). However, as shown in Fig. 4(h), FRs actually occupy only a small fraction of \(O(10^{-3})\). Although significantly greater than expected from neutral selection, it is still a negligible fraction compared with non-FR efficient or rival strategies, indicating that FRs play a marginal role in a well-mixed population. This result shows that cooperation is still challenging for \(b=1.5\) even with \(\mathcal{S}(3)\). Although FRs are evolutionarily robust even for \(b=1.5\), other strategies can still replace FRs via neutral drift. In other words, FRs are not successful enough to compensate for their small numbers in \(\mathcal{S}(3)\). The same argument also applies to the All-or-Nothing-3 (AON3) strategy [17]. AON3 is a memory-3 strategy that forms a subgame perfect Nash equilibrium for \(b/c>4/3\). While \(\mathcal{S}(3)\) contains FRs and AON3, it also contains other strategies that can replace them. 
One might wonder why the cooperation level for \(\mathcal{S}(1)\) is higher than that for \(\mathcal{S}(3)\) when \(b\) and \(N\) are high. For instance, when \(b=6\) and \(N=8\), the cooperation level is approximately 1 for \(\mathcal{S}(1)\) while it is around 0.8 for \(\mathcal{S}(3)\). This unusually high cooperation level for \(\mathcal{S}(1)\) is not because WSLS is exceptional but because there is no dangerous mutant that can threaten WSLS in \(\mathcal{S}(1)\). We confirmed by simulations that WSLS is replaced more easily when we add mutants from \(\mathcal{S}(3)\). Typical time series for these simulations are shown in Fig. 5. For \(\mathcal{S}(1)\) with \(b=6\), the population adopts WSLS throughout the observation period as expected. For \(\mathcal{S}(3)\), efficient strategies are again the majority for most of the time, although we observe frequent turnovers. If the benefit is low (\(b=1.5\)), on the other hand, non-FR rival strategies are the majority for both \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\). One noticeable difference in the latter case is the occasional surges of FR strategies, but they do not last long. ### Evolution in a group-structured population Next, we show Monte Carlo results for a group-structured population with group size \(N=2\) and \(M=10^{3}\) groups. Figure 6 shows the cooperation level, together with the fractions of strategies categorized into non-FR efficient, non-FR rival, and FR strategies, where the horizontal axis is the relative mutation rate \(r\). In \(\mathcal{S}(1)\), as shown in Fig. 6(a-d), efficient strategies (typically WSLS) and rivals coexist, so the cooperation level is intermediate. This coexistence is observed in a broad range of \(r\) and is insensitive to \(b\), as reported previously [28]. If we consider \(\mathcal{S}(3)\), the cooperation level is close to 100% [Fig. 6(e-h)]. The results again show little dependence on \(b\), so a high degree of cooperation is possible even with low \(b\). While the cooperation level remains high in a broad range of \(r\), the fractions of strategies show non-trivial dependence on \(r\): When the relative mutation rate \(r\) is higher than \(O(10^{-3})\), which is of the order of \(1/M\), the population is mainly composed of FRs, whereas non-FR efficient strategies replace them as \(r\) decreases. Figure 4: Evolutionary simulations for well-mixed populations. The panels on top (a-d) and bottom (e-h) show the results for \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\), respectively. From left to right, (a,e) the cooperation levels, (b,f) the fractions of non-FR efficient strategies, (c,g) those of non-FR rival strategies, and (d,h) those of FR strategies are shown as functions of the population size \(N\). When out-group imitation occurs less frequently with small \(\sigma_{\text{out}}=3/(b-1)\), the pattern changes to some extent in that non-FR rivals are more favored than non-FR efficient strategies for low \(r\) [Fig. 6(i-l)]. Nevertheless, FRs are always the most prevalent as long as \(r\) is higher than \(O(10^{-3})\). In other words, a high mutation rate helps to promote cooperative behavior supported by FRs. Typical time series in a group-structured population are shown in Fig. 7. For \(\mathcal{S}(1)\), non-FR efficient strategies such as WSLS and non-FR rival strategies stably coexist as shown in Fig. 7(a). 
This is because of the conflicting requirements between in-group and out-group selection: In-group selection favors a rival to take over the group, whereas efficiency is more important in out-group selection for a strategy to spread across different groups. Fig. 8(a) illustrates what typically happens between WSLS and AllD. Because WSLS is an efficient strategy with a higher payoff, WSLS is more likely to be imitated by AllD players via out-group imitation. However, the newly appeared WSLS player cannot spread in the group because WSLS is weak against AllD within the group. The result is that efficient strategies keep wandering among groups and failing to conquer any of them. Note that both large- and small-\(N\) effects can thus be experienced in this group-structured population. As soon as FRs become available in \(\mathcal{S}(3)\), they can survive both the in-group and out-group dynamics. Fig. 8(b) illustrates a typical competition between an FR strategy and AllD. Because of the efficiency of the FR strategy, it is more likely to be imitated by AllD players via out-group imitation. The newly appeared FR player is also good at intra-group selection because of its rivalry. Thus, it can take over the group and eventually the entire population. Once FRs enter the system, they are stable for a long time, as shown in Fig. 7(b). If residents have adopted an FR strategy, their evolutionary robustness assures that no mutant strategy has a fixation probability greater than \(1/(NM)\). The greatest threat is a neutral drift process caused by non-FR efficient strategies cooperating with the residents. For instance, while CAPRI has a strictly higher payoff when pitted against AllC or WSLS, there are some non-FR efficient strategies that tie with CAPRI in \(\mathcal{S}(3)\). These efficient strategies can thus replace FRs via nearly neutral drift. Indeed, Fig. 7(b) shows that efficient strategies coexist with FRs to some extent while the rivals are almost entirely suppressed. The above argument also explains why a higher mutation rate stabilizes cooperation formed by FRs: it introduces rivals into the population, by which potentially threatening efficient strategies can be driven out. Figure 7(c) shows an example time series in a group-structured population when \(r\) is low. In the first half of this time series, an FR strategy occupies the majority, although it is invaded a few times by non-FR efficient strategies. At \(t\approx 3.5\times 10^{8}\), the FR strategy is replaced by an efficient one and thus wiped out from the population. Once this happens, the system is not as stable as in the first half, and we see rivals begin to rise. Figure 5: Typical time series of the fractions of the non-FR efficient, non-FR rival, and FR strategies in the well-mixed populations. The top and bottom panels show the results for \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\), respectively. The population size is \(N=64\), and the other simulation parameters are the same as in Fig. 4. For the sake of better visualization, each time series plots every thousandth data point. Figure 6: Evolutionary simulations for a group-structured population, where the number of groups is \(M=10^{3}\) and each group has \(N=2\) players. The panels on top (a-d) and middle (e-h) show the results for \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\), respectively. The selection strength for intra-group imitation is \(\sigma_{\text{in}}=30/(b-1)\). 
In the bottom panels (i-l), we show the results for \(\mathcal{S}(3)\) with a weaker out-group selection strength \(\sigma_{\text{out}}=3/(b-1)\). From left to right, cooperation levels and the fractions of non-FR efficient strategies, non-FR rival strategies, and FR strategies are shown as functions of the relative mutation probability \(r\). The error probability is \(e=10^{-6}\). Each data point has been obtained by averaging the results over \(10^{2}\) independent runs. Because FRs are so rare, it will take a long time for another FR strategy to appear and settle the situation down. For FRs to survive long, therefore, \(r\) needs to be high enough to suppress non-FR efficient strategies. Specifically, a rough guess would be that \(r\) has to be greater than the fixation probability \(\sim 1/M\) [28] with which non-FR efficient strategies replace FRs through neutral selection. In the Appendix, we show the simulation results for \(M=100\). We find a crossover at the relative mutation rate \(r\sim O(1/M)\), which is consistent with the above argument. ### Evolution of memory lengths Finally, let us check how the memory lengths of strategies evolve. We measured the memory lengths \((m_{1},m_{2})\) of the resident strategy in each group and averaged these over the groups and over time. Figure 9 shows average memory lengths in well-mixed and group-structured populations. We have already seen that the evolutionary process depends on \(b\) and \(N\) in a well-mixed population. When the cooperation level is low, rivals with shorter memory lengths are the majority. More specifically, when \(N\) or \(b\) is low, the memory lengths are shorter than expected from the neutral case of \(m_{1}=m_{2}=1.5\). The opposite is true when \(N\) and \(b\) are high because the memory lengths become longer than the baseline. This observation is consistent with Fig. 3(b-c), which shows that rivals are easily found when memory lengths are small, compared with efficient strategies. A similar trend is also observed for a group-structured population [Fig. 9(b)]: The cooperation level is positively correlated with the average memory lengths. The tendency is particularly striking for \(m_{2}\), which approaches three as \(r\) increases. Figure 8: Typical time evolutions in group-structured populations for (a) memory-1 and (b) memory-3 strategy spaces. (a) For \(\mathcal{S}(1)\), WSLS and AllD often coexist in the population. An AllD player switches to WSLS via out-group imitation as WSLS players have higher payoffs. However, the newly appeared WSLS player is weak against AllD within the group, and it is almost surely replaced by AllD. Thus, the coexisting state lasts for a long time. (b) A different scenario is observed for FR strategies. After an AllD player switches to FR via out-group imitation, the FR player can resist the AllD opponent within the group. In this way, FRs can take over the entire population. Figure 7: Typical time series showing the fractions of non-FR efficient, non-FR rival, and FR strategies. (a,b) \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\) when the relative mutation probability is high (\(r=10^{-2}\)). (c) An example of time series for \(\mathcal{S}(3)\) with a low relative mutation probability (\(r=10^{-4}\)), in which a non-FR efficient strategy replaces an FR strategy. The benefit of cooperation is \(b=3\), and the out-group selection strength is \(\sigma_{\text{out}}=15\). The other simulation parameters are the same as in Fig. 6. 
For the sake of better visualization, each time series plots every millionth data point. More interestingly, there is a notable difference between \(m_{1}\) and \(m_{2}\): We observe \(m_{2}>m_{1}\), which implies that FRs tend to memorize the opponent's history better than their own history of moves. It is also consistent with Fig. 3(d), according to which a greater number of FRs exist when \(m_{2}>m_{1}\). ## Summary and Discussion In this paper, we have studied the evolutionary dynamics of well-mixed and group-structured populations in memory-1 and memory-3 strategy spaces. Our result demonstrates that group structure is an essential factor in manifesting the effects of memory. In group-structured populations, a strategy must succeed in both in-group and out-group selection, but the requirements conflict: In-group selection requires a strategy not to be beaten by its co-players, whereas out-group selection favors self-cooperative strategies. Since these conflicting demands for survival can be accommodated only by FRs, a group-structured population leads to drastically different evolutionary consequences in \(\mathcal{S}(1)\) and \(\mathcal{S}(3)\). Namely, when only \(\mathcal{S}(1)\) is available, we see stable coexistence between WSLS and AllD (or GRIM) irrespective of \(b\), which is consistent with our previous study [28]. When the strategy space expands to \(\mathcal{S}(3)\), by contrast, FRs prevail at \(r\gtrsim O(1/M)\) with a cooperation level \(\approx 100\%\) even for low \(b\) (Fig. 6). Whereas group structure has often been considered in the context of multilevel selection [37], our work proposes another use of it. An optimal condition for FRs is provided by creating different selection pressures depending on whether competition occurs within a group or between groups [38]. Once an FR strategy is adopted, the population plays a cooperative Nash equilibrium [10] with evolutionary robustness [12], combining cooperation and competition in a productive way [39]. In a well-mixed population, memory makes little difference in the evolutionary trajectories whether we consider \(\mathcal{S}(1)\) or \(\mathcal{S}(3)\): The cooperation level is high because of the proliferation of efficient strategies when \(b\) and \(N\) are high, and rivals exhibit a low cooperation level otherwise, as has been reported previously [4, 23]. Although \(\mathcal{S}(3)\) does contain FRs, they fail to play a pivotal role in a well-mixed population because they are easily outnumbered either by efficient strategies or by rivals depending on the environmental condition. Although FRs are observed more frequently than expected from random sampling, it is not enough to compensate for their small number, as shown in Fig. 4(h). Thus, it is hard to form cooperation in well-mixed populations for low \(b\), even when complex strategies using additional memory lengths are available. By introducing group structure, we can see that nearly full cooperation is established because of FRs in the longer-memory strategy space. Interestingly, a higher relative mutation rate \(r\) contributes to the stability of FRs [Fig. 7(b)], which may look strange because FRs would then be challenged by mutants frequently. The reason is that frequent mutation suppresses non-FR efficient strategies, which could potentially replace FRs via neutral drift. FRs are invulnerable to various mutants, while non-FR strategies are often weak against some mutants. This invulnerability makes FRs more advantageous when diverse mutants may appear. 
The mutation rate for which FRs are selected, \(r\gtrsim O(10^{-2})\), might look unusually high, but we note that \(r\) is the frequency of mutation relative to out-group imitation events. Also, note that cultural transmission experiences more frequent explorative "mutation" than that assumed in biological models [40, 41]. We could even argue that a large amount of uncertainty may arise when someone tries to learn a strategy by observation, which could also result in a high effective mutation rate [25]. Even if group structure provides a favorable environment for FRs, one of the natural questions left in this study is how such structure emerges in the first place. It could be a matter of biology, as is the case of _Dictyostelium discoideum_ [42], but it can also be spontaneously induced by co-evolutionary network dynamics of interacting agents playing the PD game [43, 44]. The generality of this co-evolutionary mechanism implies that it can be ubiquitous across many different scales in society. It would be an interesting future direction to investigate the co-evolutionary dynamics of FRs and population structure. Figure 9: The average memory lengths \((m_{1},m_{2})\) of evolved strategies in \(\mathcal{S}(3)\). (a) A well-mixed population and (b) a group-structured one show different behavior. The baselines \(m_{1}=m_{2}=3/2\) are depicted as dotted horizontal lines. The simulation parameters are the same as in Fig. 6 unless otherwise mentioned. The benefit of cooperation \(b=3\) is used in (b). We may think of FRs in terms of the emergence of other-regarding preference [45, 46, 47, 48] in the sense that selection can favor an FR that compares its own payoff with the other player's. The existence of other-regarding preferences is an interesting question because, in classical game theory, every player is assumed to care only about his or her own payoff. As we see in behavioral experiments and everyday experiences, by contrast, people often manifest other-regarding preferences known as 'inequity aversion' [45, 49]. In our model, FR players express 'disadvantageous-inequity aversion' in the sense that they never let their co-players have higher payoffs, whereas they do not care as long as their own payoffs are higher. Such a preference spontaneously emerges by playing FR strategies, while each player tries to increase his or her own payoff through imitation [see Eqs. (3) and (4)]. Another recent study [50] also shows that selection favors learning rules that incorporate other-regarding preferences over selfish learning rules. These findings may help to understand the origin of other-regarding preferences in human society. Note also that the relation between other-regarding preference and group structure via FRs differs from the conventional idea that associates social preference with group selection [51, 52]: In our model, groups do not compete directly, as is often assumed in the group-selection literature [37], and we do not view other-regarding preference as necessarily prosocial [53, 54], especially when it takes the form of rivalry. It is worth noting that the other-regarding preferences for partnership and rivalry lead to a refinement of Nash equilibrium, which still makes total sense among self-interested players. Our study has also given theoretical predictions on the evolution of memory lengths. As shown in Fig. 3, rivals can work well with short memory lengths, whereas cooperation seems to require a higher cognitive capacity. 
In particular, a large fraction of FRs exists when \(m_{2}\geq m_{1}\geq 2\), meaning that an FR player has to remember the co-player's history better than his or her own. For this reason, the average memory length tends to be low (high) in environmental conditions where non-FR rivals (FRs) are favored. Recent studies of learning dynamics between two players predict a 'memory dilemma' in the sense that cooperative strategies with long memory lengths are invaded by simpler, less cooperative strategies [55, 26]. That is not inconsistent with our result, according to which rivalry will be favored in such a small population of two players. So far, our observation seems to be in qualitative agreement with previous studies suggesting that cooperative strategies with longer memory lengths will evolve [16, 17]. However, if we had an even larger strategy space, whether the average memory length in use would keep increasing [56, 16] is an open question. While there is a theoretical minimum memory length \(m=2\) to construct FR strategies for the iterated Prisoner's Dilemma [11], the density of FRs will become extremely low as the memory length grows even longer [see Fig. 3(d)]. If this trend continues, it is practically impossible to discover sophisticated FRs through random mutation from an even longer-memory strategy space. Therefore, as far as a two-person game is considered, there seem to be few reasons to go beyond \(m=3\). A recent experimental study also suggests that the optimal cooperation level occurs when the memory length is around \(m=2\) [57]. On the other hand, long memory may help to identify defectors [16], outperform a wide spectrum of strategies [12], and even reduce the cognitive load by providing a simple generalization [13, 58, 14], although such a complexity cost has not been incorporated in our model. Indeed, a previous simulation study shows that memory length gets continuously longer without a cost to memory capacity [16]. Testing these theoretical predictions and hypotheses with behavioral experiments will deepen our understanding of how much cognitive capacity is required in direct reciprocity [57, 59, 60, 61, 62, 63, 64]. ## Methods ### Calculation of long-term payoffs and cooperation levels In general, strategies for the IPD need to define which action has to be taken after any history of previous interactions. Among infinitely many possible strategies, we focus on those with limited memory lengths. A well-known example is memory-one strategies, which condition their decision on the previous round. The relevant set of history profiles is \(\{CC,CD,DC,DD\}\), where the first and second letters refer to the focal player's and the co-player's last actions, respectively. Thus, a memory-one strategy can be represented as a 4-tuple, \[\mathbf{p}=(p_{CC},p_{CD},p_{DC},p_{DD}), \tag{9}\] where \(p_{ij}\) represents the player's cooperation probability for each given history profile \((i,j)\) from the previous round. We focus on deterministic strategies by setting \(p_{ij}\) to either zero or one. The total number of memory-one deterministic strategies is therefore \(2^{4}=16\). A player may defect despite the intention to cooperate with probability \(e\ll 1\) and vice versa. As a result, instead of the original strategy \(\mathbf{p}\), the player effectively plays \((1-e)\mathbf{p}+e\mathbf{\bar{p}}\), where \(\mathbf{\bar{p}}\) is a vector with elements \(\bar{p}_{ij}:=1-p_{ij}\) for \(i,j\in\{C,D\}\). 
When both players adopt memory-one strategies, the game is represented as a Markov chain, from which one can explicitly compute their payoffs and the cooperation levels. The states of this Markov chain are the possible outcomes of each round. When the players' (effective) strategies \(\mathbf{p}=(p_{CC},p_{CD},p_{DC},p_{DD})\) and \(\mathbf{q}=(q_{CC},q_{CD},q_{DC},q_{DD})\) are given, the transition matrix \(T\) of the Markov chain takes the following form: \[T=\left(\begin{array}{cccc}p_{CC}\cdot q_{CC}&p_{CC}\cdot\bar{q}_{CC}&\bar{p}_{CC}\cdot q_{CC}&\bar{p}_{CC}\cdot\bar{q}_{CC}\\ p_{CD}\cdot q_{DC}&p_{CD}\cdot\bar{q}_{DC}&\bar{p}_{CD}\cdot q_{DC}&\bar{p}_{CD}\cdot\bar{q}_{DC}\\ p_{DC}\cdot q_{CD}&p_{DC}\cdot\bar{q}_{CD}&\bar{p}_{DC}\cdot q_{CD}&\bar{p}_{DC}\cdot\bar{q}_{CD}\\ p_{DD}\cdot q_{DD}&p_{DD}\cdot\bar{q}_{DD}&\bar{p}_{DD}\cdot q_{DD}&\bar{p}_{DD}\cdot\bar{q}_{DD}\end{array}\right), \tag{10}\] where \(\bar{p}_{ij}:=1-p_{ij}\) and \(\bar{q}_{ij}:=1-q_{ij}\) for \(i,j\in\{C,D\}\). If \(e>0\), according to the Perron-Frobenius theorem, \(T\) has a unique invariant distribution \(\mathbf{v}=(v_{CC},v_{CD},v_{DC},v_{DD})\). In particular, the \(\mathbf{p}\)-player's average cooperation level is \(\gamma_{\mathbf{p},\mathbf{q}}:=v_{CC}+v_{CD}\) whereas the \(\mathbf{q}\)-player's cooperation level is \(\gamma_{\mathbf{q},\mathbf{p}}:=v_{CC}+v_{DC}\). Consequently, the \(\mathbf{p}\)-player's long-term average payoff is given by \[\pi_{\mathbf{p},\mathbf{q}}=b\cdot\gamma_{\mathbf{q},\mathbf{p}}-c\cdot\gamma_{\mathbf{p},\mathbf{q}}, \tag{11}\] where \(c=1\) is the cost of cooperation in our donation game. It is straightforward to extend this method to longer-memory strategies. Memory-three strategies determine their subsequent actions based on the previous three rounds of interaction. Let us denote the relevant history profile by six letters separated by a comma, such as \((a_{3}a_{2}a_{1},b_{3}b_{2}b_{1})\), where \(a_{t}\in\{C,D\}\) refers to what the focal player did \(t\) rounds before, and \(b_{t}\in\{C,D\}\) denotes the co-player's move. For instance, a history profile \((CCC,CCD)\) indicates that the focal player continued cooperation over the last three rounds whereas the co-player defected in the previous round. A memory-three strategy prescribes an action for each of the \(2^{6}=64\) history profiles, so it is represented by a 64-tuple, \[\mathbf{p}=(p_{CCC,CCC},p_{CCC,CCD},\ldots,p_{DDD,DDD}), \tag{12}\] where each element \(p_{a_{3}a_{2}a_{1},b_{3}b_{2}b_{1}}\) represents the player's cooperation probability for the given history profile. Similarly to the memory-one case, we work with an effective strategy \((1-e)\mathbf{p}+e\bar{\mathbf{p}}\) in the presence of implementation error with probability \(e>0\). 
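Before setting up the 64-state chain, here is a minimal sketch of the memory-one computation in Eqs. (10)-(11), assuming numpy; the role-swap indexing and the function name are ours:

```python
import numpy as np

def memory_one_payoffs(p, q, b, e=1e-6, c=1.0):
    """Stationary cooperation levels and long-term payoffs, Eqs. (10)-(11).
    p and q are 4-tuples ordered (CC, CD, DC, DD) with entries in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = (1 - e) * p + e * (1 - p)          # effective strategies under error
    q = (1 - e) * q + e * (1 - q)
    qs = q[[0, 2, 1, 3]]                   # q sees each state with the roles swapped
    T = np.empty((4, 4))
    for s in range(4):
        pc, qc = p[s], qs[s]
        T[s] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    evals, evecs = np.linalg.eig(T.T)      # invariant distribution: left eigenvector
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    v /= v.sum()
    gamma_pq = v[0] + v[1]                 # p cooperates in states CC and CD
    gamma_qp = v[0] + v[2]                 # q cooperates in states CC and DC
    pi_p = b * gamma_qp - c * gamma_pq     # Eq. (11)
    pi_q = b * gamma_pq - c * gamma_qp
    return pi_p, pi_q
```

For example, `memory_one_payoffs((1, 0, 0, 1), (1, 0, 0, 1), b=3)` (WSLS against itself) gives payoffs close to \(b-1=2\) for small \(e\). The same linear-algebra recipe carries over to memory three.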
The repeated game between \(\mathbf{p}\) and \(\mathbf{q}\) is now represented by a Markov chain of 64 states, and the transition probability from \((a_{3}a_{2}a_{1},b_{3}b_{2}b_{1})\) to \((a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})\) is written as follows: \[T_{(a_{3}a_{2}a_{1},b_{3}b_{2}b_{1})\rightarrow(a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})}=\begin{cases}0&\text{if }a^{\prime}_{3}\neq a_{2}\text{ or }a^{\prime}_{2}\neq a_{1}\text{ or }b^{\prime}_{3}\neq b_{2}\text{ or }b^{\prime}_{2}\neq b_{1}\\ p_{a_{3}a_{2}a_{1},b_{3}b_{2}b_{1}}\cdot q_{b_{3}b_{2}b_{1},a_{3}a_{2}a_{1}}&\text{if }(a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})=(a_{2}a_{1}C,b_{2}b_{1}C)\\ p_{a_{3}a_{2}a_{1},b_{3}b_{2}b_{1}}\cdot\bar{q}_{b_{3}b_{2}b_{1},a_{3}a_{2}a_{1}}&\text{if }(a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})=(a_{2}a_{1}C,b_{2}b_{1}D)\\ \bar{p}_{a_{3}a_{2}a_{1},b_{3}b_{2}b_{1}}\cdot q_{b_{3}b_{2}b_{1},a_{3}a_{2}a_{1}}&\text{if }(a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})=(a_{2}a_{1}D,b_{2}b_{1}C)\\ \bar{p}_{a_{3}a_{2}a_{1},b_{3}b_{2}b_{1}}\cdot\bar{q}_{b_{3}b_{2}b_{1},a_{3}a_{2}a_{1}}&\text{if }(a^{\prime}_{3}a^{\prime}_{2}a^{\prime}_{1},b^{\prime}_{3}b^{\prime}_{2}b^{\prime}_{1})=(a_{2}a_{1}D,b_{2}b_{1}D)\end{cases} \tag{13}\] Again, as a non-negative, irreducible, and aperiodic matrix, \(T\) has a unique invariant distribution \(\mathbf{v}\), from which the cooperation level of the \(\mathbf{p}\)-player is calculated as \[\gamma_{\mathbf{p},\mathbf{q}}:=\sum_{a_{3},a_{2},b_{3},b_{2},b_{1}}v_{a_{3}a_{2}C,b_{3}b_{2}b_{1}}. \tag{14}\] ### Memory length of a strategy As already mentioned, strategies of memory \((m_{1},m_{2})\), where \(m_{1}\leq m\) and \(m_{2}\leq m\), constitute the memory-\(m\) strategy space. It means that the set of memory-\(m\) strategies includes strategies with shorter memory lengths as special cases. For instance, the memory-one strategy space includes the so-called reactive memory-one strategies that condition the action on the co-player's previous action but not on its own. Since these strategies have \(p_{CC}=p_{DC}\) and \(p_{CD}=p_{DD}\) in common, we can say that \(m_{1}=0\) in this case. Similarly, those with \(p_{CC}=p_{CD}\) and \(p_{DC}=p_{DD}\) can be said to have \(m_{2}=0\) because they are indifferent to the co-player's history. If both \(m_{1}\) and \(m_{2}\) are zero, the strategies are unconditional, with \(p_{CC}=p_{CD}=p_{DC}=p_{DD}\). In general, we calculate \((m_{1},m_{2})\) for a strategy represented by Eq. (12) in the following way: 1. If there exists \((a_{2},a_{1},b_{3},b_{2},b_{1})\) such that \(p_{Ca_{2}a_{1},b_{3}b_{2}b_{1}}\neq p_{Da_{2}a_{1},b_{3}b_{2}b_{1}}\), it has \(m_{1}=3\). 2. Else if there exists \((a_{1},b_{3},b_{2},b_{1})\) such that \(p_{*Ca_{1},b_{3}b_{2}b_{1}}\neq p_{*Da_{1},b_{3}b_{2}b_{1}}\), where \(*\) denotes a wildcard, it has \(m_{1}=2\). 3. Else if there exists \((b_{3},b_{2},b_{1})\) such that \(p_{**C,b_{3}b_{2}b_{1}}\neq p_{**D,b_{3}b_{2}b_{1}}\), it has \(m_{1}=1\). 4. Otherwise, \(m_{1}=0\). A similar algorithm is used for calculating \(m_{2}\) as well. (A code sketch of this procedure is given just below.) ### Judging efficiency and rivalry When a strategy \(\mathbf{p}\) is given, it is straightforward to judge its efficiency: It is an efficient strategy if \(\lim_{e\to 0}\gamma_{\mathbf{p},\mathbf{p}}=1\). 
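Stepping back briefly, the wildcard procedure above for \(m_{1}\) can be coded compactly; a sketch, with the flat state-indexing convention being ours (\(m_{2}\) follows by swapping the roles of \(a\) and \(b\)):

```python
import itertools

def own_memory_length(p):
    """m1 of a deterministic memory-three strategy, following steps 1-4 above.
    p is a length-64 sequence indexed by s = 32*a3 + 16*a2 + 8*a1 + 4*b3 + 2*b2 + b1,
    where 0 = C and 1 = D."""
    act = lambda a3, a2, a1, b3, b2, b1: p[32*a3 + 16*a2 + 8*a1 + 4*b3 + 2*b2 + b1]
    B = list(itertools.product((0, 1), repeat=3))      # all co-player histories
    if any(act(0, a2, a1, *hb) != act(1, a2, a1, *hb)  # step 1: depends on a3?
           for a2 in (0, 1) for a1 in (0, 1) for hb in B):
        return 3
    # a3 is now irrelevant, so fixing a3 = 0 realizes the wildcard *
    if any(act(0, 0, a1, *hb) != act(0, 1, a1, *hb)    # step 2: depends on a2?
           for a1 in (0, 1) for hb in B):
        return 2
    if any(act(0, 0, 0, *hb) != act(0, 0, 1, *hb) for hb in B):   # step 3
        return 1
    return 0                                           # step 4
```

We now return to the efficiency and rivalry checks.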
Numerically, we judge efficiency if \(\gamma_{\mathbf{p},\mathbf{p}}\) for \(e=10^{-4}\) is greater than \(0.99\). We have confirmed that the judgment is not sensitive to the threshold value. Another way of judging is a graph-theoretical method [12], which checks probability currents while adding transitions of probability \(O(e^{k})\) systematically (\(k=0,1,2,\ldots\)). We have compared the linear-algebraic and graph-theoretical methods with various strategies and verified their consistency. We also note that whether a strategy is a partner depends on \(b\), the benefit of cooperation, whereas efficiency is independent of \(b\). This is one of the reasons why we mainly work with efficiency in this paper. To judge rivalry, we use a method based on the Floyd-Warshall algorithm [11, 12, 14, 65]. The idea can be explained as follows: A strategy \(\mathbf{p}\) is a rival if it ensures that its co-player cannot obtain a higher long-term payoff regardless of the co-player's strategy and the initial state, when the error probability \(e\) is zero. We emphasize that the statement must be true even if the co-player's strategy has a long memory and/or if the strategy \(\mathbf{p}\) is known to the co-player. One can check the criterion by constructing a directed weighted graph \(G(\mathbf{p})\). Consider the graph \(G\) for \(\mathbf{p}=\)TFT as an example. As a memory-one strategy, its action depends only on the last round, so the relevant states are \(CC\), \(CD\), \(DC\), and \(DD\), each of which is represented as a node in \(G\). We represent possible transitions among these four nodes as directed edges. For instance, at \(CC\), TFT prescribes C, so the subsequent state is either \(CC\) or \(CD\). Each edge is assigned a weight corresponding to the relative payoff difference between the players. In our example, the self-edge from \(CC\) to \(CC\) thus has weight zero, whereas the edge from \(CC\) to \(CD\) has \(-1\). In this way, we construct \(G(\mathbf{p})\). The point is that only cycles in \(G\) can contribute to the long-term payoff. Let a negative cycle denote a cycle along which the total sum of weights is negative. If a negative cycle exists, the co-player can take advantage of it to exploit the focal player. Conversely, the absence of such negative cycles in \(G(\mathbf{p})\) guarantees that no strategy can obtain a higher long-term payoff than \(\mathbf{p}\). The presence of a negative cycle in a graph can be detected by the Floyd-Warshall algorithm in polynomial time, and this method is straightforwardly extensible to longer-memory strategies [12]. (A code sketch for the memory-one case is given below.) The numbers of strategies in Fig. 3 are obtained in the following way: When \(m_{1}+m_{2}\leq 4\), we can enumerate all the possible strategies and check their efficiency and rivalry one by one. This enumeration approach becomes impractical when \(m_{1}+m_{2}>4\), so we have estimated the fraction of efficient ones and that of rival ones from \(10^{6}\) randomly sampled strategies. As for FR strategies, it is possible to directly obtain the complete list of FR strategies using the algorithm proposed in Ref. [12] because they are infrequent. We thus obtained the exact number of FR strategies instead of relying on Monte Carlo sampling. ### Monte Carlo simulations The Monte Carlo simulations for well-mixed populations have been conducted as follows. We assume the limit of a low mutation rate, in which at most one mutant can compete with the resident strategy, and no other mutation occurs until this mutant takes over the population or dies out. 
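As promised, here is a sketch of the rivalry check for memory-one strategies. The state encoding and the \(-1/0/+1\) weight normalization are ours (only the sign of a cycle sum matters):

```python
def is_rival(p):
    """Defensibility check for a deterministic memory-one strategy
    p = (p_CC, p_CD, p_DC, p_DD), with 1 = cooperate.  Nodes of G(p) are the
    states s = 2*a + b, where a (b) is the focal (co-)player's last move and
    0 = C, 1 = D.  p is a rival iff G(p) contains no negative cycle."""
    INF = float("inf")
    dist = [[INF] * 4 for _ in range(4)]
    for a in (0, 1):
        for b in (0, 1):
            s = 2 * a + b
            a_next = 0 if p[s] == 1 else 1       # p's move is dictated by the state
            for b_next in (0, 1):                # the co-player moves freely
                if (a_next, b_next) == (0, 1):
                    w = -1                       # p cooperates, co-player defects
                elif (a_next, b_next) == (1, 0):
                    w = +1                       # p defects, co-player cooperates
                else:
                    w = 0
                t = 2 * a_next + b_next
                dist[s][t] = min(dist[s][t], w)
    for k in range(4):                           # Floyd-Warshall shortest cycles
        for i in range(4):
            for j in range(4):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return all(dist[i][i] >= 0 for i in range(4))

assert is_rival((1, 0, 1, 0))        # TFT is a rival
assert not is_rival((1, 0, 0, 1))    # WSLS is not (AllD exploits it)
```

Returning to the well-mixed simulations: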
At each time step, a mutant strategy \(Y\) is randomly sampled according to the two-step process [Eq. (8)], and it replaces the resident strategy \(X\) with fixation probability \(\rho_{X\to Y}\) [Eq. (5)] [66]. We iterate this process for \(10^{6}\) time steps, discarding the initial \(10^{5}\) steps. The cooperation level in Fig. 4(a) is calculated as the time average of the cooperation levels of the resident strategies, given by Eq. (14). For the simulations of group-structured populations, we assume that intra-group dynamics is fast enough compared to inter-group dynamics and mutation, i.e., \(\mu_{\text{in}}\gg\mu_{\text{out}}\) and \(\mu_{\text{in}}\gg\nu\). As we have assumed in the case of well-mixed populations, a group is usually occupied by a single resident strategy, which can be replaced by a different one that appears through either mutation or out-group imitation and succeeds in fixation. The Monte Carlo simulations have been conducted as follows. 1. Prepare a set of \(M\) randomly selected strategies as the initial state. 2. Choose one of the \(M\) groups randomly. Let us denote the strategy of this group as \(X\). 3. With probability \(r\), the group undergoes mutation: (a) introduce a mutant strategy \(Y\) according to the two-step sampling scheme [Eq. (8)]; (b) replace \(X\) by \(Y\) with fixation probability \(\rho_{X\to Y}\) [Eq. (5)]. 4. With probability \(1-r\), the group undergoes out-group imitation: (a) choose randomly one of the other groups and let \(Y\) denote its strategy; (b) replace \(X\) by \(Y\) with probability \(T_{X\to Y}\) [Eq. (6)]. 5. Go back to step 2. The above process is repeated for \(10^{9}\) steps, from which we discard the initial \(10^{8}\) steps. We have used OACIS and CARAVAN to manage the simulation results [67, 68]. (A Python skeleton of this loop is sketched at the end of the Appendix.) ## Appendix ### Simulation with different parameters To test the robustness of our simulation results, we have conducted simulations with different parameters. Figure 10 shows results for \(M=10^{2}\). These results are qualitatively similar to those with \(M=10^{3}\) in Fig. 6. The main difference from Fig. 6 is the dependency on the relative mutation probability \(r\) in the memory-three strategy space: For \(M=10^{3}\), non-FR strategies make way for FRs at \(r\approx 10^{-3}\), whereas a similar transition occurs at \(r\approx 10^{-2}\) when \(M=10^{2}\). To see why, let us recall that the biggest threat to FRs is the neutral drift caused by efficient strategies. For FRs to survive long, therefore, \(r\) needs to be high enough, \(\gtrsim O(1/M)\), to suppress non-FR efficient strategies. If \(r\) is even higher, the dynamics is mostly driven by mutation while out-group imitation hardly occurs. As a result, the results become similar to those from a well-mixed population with \(N=2\). We have also investigated the evolution of average memory lengths in these simulations and found similar results to the ones in Fig. 9. ### Full separation of time scales In the main text, we have considered the case where the time scale for out-group imitation is comparable with that of mutation. Here, for the sake of completeness, we study the case where the time scales are completely separated, i.e., \(\nu\ll\mu_{\text{out}}\ll\mu_{\text{in}}\), so that mutation occurs far less frequently than the other processes. Again, once a mutant is introduced, no other mutation occurs until the mutant takes over the population or dies out. 
## Appendix

### Simulation with different parameters

To test the robustness of our simulation results, we have conducted simulations with different parameters. Figure 10 shows results for \(M=10^{2}\). These results are qualitatively similar to those with \(M=10^{3}\) in Fig. 6. The main difference from Fig. 6 is the dependency on the relative mutation probability \(r\) in the memory-three strategy space: For \(M=10^{3}\), non-FR strategies make way for FRs at \(r\approx 10^{-3}\), whereas a similar transition occurs at \(r\approx 10^{-2}\) when \(M=10^{2}\). To see why, let us recall that the biggest threat to FRs is the neutral drift caused by efficient strategies. For FRs to survive long, therefore, \(r\) needs to be high enough, \(\gtrsim O(1/M)\), to suppress non-FR efficient strategies. If \(r\) is even higher, the dynamics is mostly driven by mutation while out-group imitation hardly occurs; as a result, the results become similar to those from a well-mixed population with \(N=2\). We have also investigated the evolution of average memory lengths in these simulations and found similar results to the ones in Fig. 9.

Figure 10: Effects of \(M=10^{2}\), to be compared with Fig. 6. The other parameters are \(N=2\), \(e=10^{-6}\), and \(\sigma_{\text{in}}=30/(b-1)\). For panels (a-h), we have used \(\sigma_{\text{out}}=30/(b-1)\), whereas \(\sigma_{\text{out}}=3/(b-1)\) for the bottom panels. Each simulation runs for \(10^{8}\) time steps, and the results are averaged over 10 independent runs.

### Full separation of time scales

In the main text, we have considered the case where the time scale for out-group imitation is comparable with that of mutation. Here, for the sake of completeness, we study the case where the time scales are completely separated, i.e., \(\nu\ll\mu_{\text{out}}\ll\mu_{\text{in}}\), so that mutation occurs far less frequently than the other processes. Again, once a mutant is introduced, no other mutation occurs until the mutant takes over the population or dies out. The fixation probability that a mutant \(Y\) takes over the population with strategy \(X\) is

\[\Psi_{X\to Y}=\rho_{X\to Y}\,\frac{1}{1+\sum_{j=1}^{M}\eta^{j}}, \tag{15}\]

where \(\eta\equiv T_{Y\to X}/T_{X\to Y}\).[28] Monte Carlo simulations are conducted in the same way as explained in Methods above, and the parameter values are \(M=10^{3}\), \(N=2\), \(e=10^{-6}\), and \(\sigma_{\text{in}}=\sigma_{\text{out}}=30/(b-1)\). Figure 11 shows the results. In \(\mathcal{S}(1)\), the cooperation level strongly depends on \(b\), the benefit of cooperation: the cooperation level is almost 100% for \(b=6\), but it is less than half at \(b=3\) and accompanied by a proliferation of rivals. This is consistent with our previous study.[28] When \(\mathcal{S}(3)\) is available, the cooperation level is less sensitive to \(b\) and actually higher than in \(\mathcal{S}(1)\) except for \(b=6\). The first reason is that the cooperation level for \(b=6\) and \(\mathcal{S}(1)\) is unusually high because of the absence of the dangerous mutants that can threaten WSLS in \(\mathcal{S}(1)\): namely, WSLS is stable against mutants in \(\mathcal{S}(1)\) due to their poor performance, but it is no longer stable against the mutants in \(\mathcal{S}(3)\). The second and more important reason is that FRs contribute to increasing the cooperation level even for \(b=1.5\) and \(b=3\), as shown in the right panel of Fig. 11. We point out that the fraction of FRs does not reach 100% in this setting because they still suffer from the neutral drift due to non-FR efficient strategies when \(r\to 0\).
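For completeness, a minimal sketch (ours, not the paper's code) of evaluating the group-level fixation probability in Eq. (15): the intra-group probability \(\rho_{X\to Y}\) [Eq. (5)] and the imitation probabilities \(T\) [Eq. (6)] are assumed to be supplied as numbers.

```python
import math

def group_fixation_prob(rho_xy, T_xy, T_yx, M):
    """Psi_{X->Y} = rho_{X->Y} / (1 + sum_{j=1}^{M} eta^j) with
    eta = T_{Y->X} / T_{X->Y}, as in Eq. (15)."""
    eta = T_yx / T_xy
    if eta == 1.0:
        return rho_xy / (1.0 + M)
    if math.log(eta) * M > 700:                 # denominator overflows: Psi ~ 0
        return 0.0
    s = eta * (eta ** M - 1.0) / (eta - 1.0)    # geometric series, j = 1..M
    return rho_xy / (1.0 + s)

# e.g., a mutant favored by imitation (eta < 1) in M = 1000 groups:
print(group_fixation_prob(0.5, 0.6, 0.4, 1000))   # ~ 0.5 / 3
```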
2303.11522
Online Learning for Equilibrium Pricing in Markets under Incomplete Information
The study of market equilibria is central to economic theory, particularly in efficiently allocating scarce resources. However, the computation of equilibrium prices at which the supply of goods matches their demand typically relies on having access to complete information on private attributes of agents, e.g., suppliers' cost functions, which are often unavailable in practice. Motivated by this practical consideration, we consider the problem of setting equilibrium prices in the incomplete information setting wherein a market operator seeks to satisfy the customer demand for a commodity by purchasing the required amount from competing suppliers with privately known cost functions unknown to the market operator. In this incomplete information setting, we consider the online learning problem of learning equilibrium prices over time while jointly optimizing three performance metrics -- unmet demand, cost regret, and payment regret -- pertinent in the context of equilibrium pricing over a horizon of $T$ periods. We first consider the setting when suppliers' cost functions are fixed and develop algorithms that achieve a regret of $O(\log \log T)$ when the customer demand is constant over time, or $O(\sqrt{T} \log \log T)$ when the demand is variable over time. Next, we consider the setting when the suppliers' cost functions can vary over time and illustrate that no online algorithm can achieve sublinear regret on all three metrics when the market operator has no information about how the cost functions change over time. Thus, we consider an augmented setting wherein the operator has access to hints/contexts that, without revealing the complete specification of the cost functions, reflect the variation in the cost functions over time and propose an algorithm with sublinear regret in this augmented setting.
Devansh Jalota, Haoyuan Sun, Navid Azizan
2023-03-21T00:53:37Z
http://arxiv.org/abs/2303.11522v2
# Online Learning for Equilibrium Pricing in Markets under Incomplete Information

###### Abstract

The study of market _equilibria_ is central to economic theory, particularly in efficiently allocating scarce resources. However, the computation of _equilibrium_ prices at which the supply of goods matches their demand typically relies on having access to complete information on private attributes of agents, e.g., suppliers' cost functions, which are often unavailable in practice. Motivated by this practical consideration, we consider the problem of setting equilibrium prices in the incomplete information setting wherein a market operator seeks to satisfy the customer demand for a commodity by purchasing the required amount from competing suppliers with privately known cost functions unknown to the market operator. In this incomplete information setting, we consider the online learning problem of learning equilibrium prices over time while jointly optimizing three performance metrics--_unmet demand_, _cost regret_, and _payment regret_--pertinent in the context of equilibrium pricing over a horizon of \(T\) periods. We first consider the setting when suppliers' cost functions are fixed and develop algorithms that achieve a regret of \(O(\log\log T)\) when the customer demand is constant over time, or \(O(\sqrt{T}\log\log T)\) when the demand is variable over time. Next, we consider the setting when the suppliers' cost functions can vary over time and illustrate that no online algorithm can achieve sublinear regret on all three metrics when the market operator has no information about how the cost functions change over time. Thus, we consider an augmented setting wherein the operator has access to hints/contexts that, without revealing the complete specification of the cost functions, reflect the variation in the cost functions over time and propose an algorithm with sublinear regret in this augmented setting.

## 1 Introduction

The study of market mechanisms for efficiently allocating scarce resources traces back to the seminal work of Walras [1]. In his work, Walras investigated the design of pricing schemes to mediate the allocation of scarce resources such that the economy operates at an _equilibrium_, i.e., the supply of each good matches its demand. Market equilibria exist under mild conditions on agents' preferences [2] and, under convexity assumptions on their preferences, can often be computed by solving a large-scale centralized optimization problem. As a case in point, in electricity markets with convex supplier cost functions, the equilibrium prices correspond to the shadow prices [3, 4] of a convex optimization problem that minimizes the sum of the supplier costs subject to a market clearing (or load balance) constraint [5].

While methods such as convex programming provide computationally tractable approaches to computing market equilibria, the efficacy of such centralized optimization approaches for equilibrium computation suffers from several inherent limitations. First, centralized optimization approaches rely on complete information on agents' utilities and cost functions that are often unavailable to a market operator. For instance, with the deregulation of electricity markets, suppliers' cost functions are private information, which has led to strategic bidding practices by suppliers seeking to maximize their profits [6, 7, 8, 9] and has been associated with tens of millions of dollars of over-payments to suppliers [10].
Moreover, even if a market operator has access to some information on agents' utilities and cost functions, such information can typically only provide a noisy or imperfect estimate of their preferences due to inadequate information or uncertainty [11]. In the context of electricity markets, the advent of renewables and distributed energy resources has accompanied a high degree of uncertainty in the supply of energy to meet customer demands at different times of the day and year, as these energy sources are sensitive to weather conditions. To further compound these challenges, agents' preferences in markets such as electricity markets may also be time-varying, e.g., in electricity markets, customer demands may change over time and the cost functions of suppliers may depend on fluctuating weather conditions. Thus, a market operator may need to periodically collect agents' preferences and solve a large-scale centralized optimization problem at each time period to set equilibrium prices, which may be computationally challenging.

Motivated by these practical considerations that limit the applicability of centralized optimization approaches to computing equilibrium prices, in this work, we study the problem of setting equilibrium prices in the incomplete information setting where a market operator seeks to satisfy customer demand for a commodity by purchasing the required amount from competing suppliers with privately known cost functions. We investigate this problem under several informational settings regarding the time-varying nature of the customer demands and supplier cost functions and develop online learning algorithms for each of these settings that iteratively adjust the prices in the market over time. Our proposed algorithms employ the observation that a market operator can effectively learn information on suppliers' costs and equilibrium prices through observations of their cumulative production, relative to the customer demand, at different market prices. To analyze the performance of our algorithms, we combine techniques from online learning and parametric optimization as we seek to simultaneously optimize multiple, often competing, performance metrics pertinent in the context of equilibrium pricing.

### Contributions

In this work, we study the problem of setting equilibrium prices faced by a market operator that seeks to satisfy an inelastic customer demand for a commodity by purchasing the required amount from \(n\) competing suppliers. Crucially, we study this problem in the incomplete information setting when the cost functions of suppliers are private information and thus unknown to the market operator. Since traditional centralized methods of setting equilibrium prices are typically not applicable in this incomplete information setting, we consider the problem of learning equilibrium prices over \(T\) periods to minimize three performance (regret) metrics: (i) _unmet demand_, (ii) _cost regret_, and (iii) _payment regret_. Here, unmet demand refers to the cumulative difference between the demand and the total production of the commodity corresponding to an online pricing policy. Furthermore, cost regret (payment regret) refers to the difference between the total cost of all suppliers (payment made to all suppliers) corresponding to the online allocation and that of the offline oracle with complete information on suppliers' cost functions. For a more thorough discussion of these regret metrics, we refer to Section 3.2.
In this incomplete information setting, we investigate the design of online algorithms to set a sequence of prices that achieve sub-linear regret, in the number of periods \(T\), on the above three performance metrics. To this end, we first consider the setting when suppliers' cost functions are fixed over the \(T\) periods and develop algorithms that achieve a regret of \(O(\log\log T)\) when the customer demand is constant over time (Section 4), and \(O(\sqrt{T}\log\log T)\) when the demand is variable over time (Section 5), for strongly convex cost functions. To establish these regret guarantees for the three performance metrics, we leverage and combine techniques from parametric optimization and online learning. We further demonstrate through an example that if the strong convexity condition on suppliers' cost functions is relaxed, no online algorithm can achieve a sub-linear regret guarantee on all three regret metrics.

Then, we consider the setting when suppliers' cost functions can vary across the \(T\) periods (Section 6) and show that if the operator does not know the process that governs the variation of the cost functions, no online algorithm can achieve sub-linear regret on all three regret metrics. Thus, in alignment with real-world markets, e.g., electricity markets, we consider an augmented setting, wherein the market operator has access to some hints (contexts) that, without revealing the complete specification of the cost functions, reflect the change of the cost functions over time. In this setting, we propose an algorithm that achieves sub-linear regret on all three performance metrics, where the exact dependence of the regret guarantee on \(T\) relies on the statistical properties of the function class that suppliers' cost functions belong to.

## 2 Literature Review

The design of market mechanisms to efficiently allocate resources under incomplete information on agents' preferences and costs has received considerable attention in the operations research, economics, and computer science communities. For instance, mechanism design has enabled designing optimal resource allocation strategies even in settings when certain information is privately known to agents [12, 13, 14, 15]. Furthermore, inverse game theory [16] and revealed preference based approaches [17, 18, 19, 20] have emerged as methods to learn the underlying utilities and costs of agents given past observations of their actions. While, in line with these works, we consider an incomplete information setting wherein suppliers' cost functions are private information, we do not directly learn or elicit suppliers' cost functions to make pricing decisions as in these works and instead study the problem of learning equilibrium prices as an online decision-making problem.

The paradigm of online decision-making has enabled the allocation of scarce resources in settings with incomplete information where data is revealed sequentially to an algorithm designer and has found several applications [21, 22, 23]. Two of the most well-studied classes of online decision-making problems are online linear programming (OLP) and online convex optimization (OCO). While OLP has been studied extensively under different informational settings, including the adversarial [21, 24], random permutation [25, 26], and stochastic input models [27, 28, 29, 30], in this work, we consider the setting when suppliers' cost functions are convex and, in general, non-linear.
Given the prevalence of non-linear objectives in various resource allocation settings, there has been a growing interest in OCO [31], wherein several works [32, 33, 34] have investigated the design of algorithms with near-optimal regret guarantees under the adversarial, random permutation, and stochastic input models. Additionally, there have been many works on a smoothed variant of OCO where the agent must pay switching costs for changing decisions between rounds [35, 36, 37, 38]. As in the works on OCO, we also consider general non-linear convex objectives; however, as opposed to the resource constraints that need to be satisfied over the entire time horizon in these works, we adopt a stronger performance metric wherein we accumulate regret at each period when the customer demand is not satisfied (see Section 3.2 for more details on our performance metrics). Thus, our algorithms are considerably different from the dual-based algorithms, based on applying dual sub-gradient descent, developed in these works on OCO [32, 33, 34].

Our algorithms are inspired by the multi-armed bandit literature and involve a tradeoff between exploration and exploitation in an unknown environment [39, 40]. In a typical multi-armed bandit (MAB) setting, a decision-making agent performs sequential trials on a set of permissible actions (arms), observes the outcome of the actions, and maximizes its rewards. Several extensions of MAB have been proposed over the years, including bandits with partial observations [41], contextual bandits [42, 43], Lipschitz bandits [44], and bandits with constrained resources [45, 46]. These results have contributed to many applications, such as online posted-price auctions [47], dynamic pricing with limited supply [48], and dark pools in the stock market [49]. In typical bandit frameworks, the decision maker's objective is to optimize a single reward function, wherein the rewards are revealed sequentially as part of the observation to the decision maker. However, in our setting, we seek to jointly optimize multiple performance metrics where suppliers' cost functions are not revealed to the market operator (see Section 3.2 for details).

## 3 Model

In this section, we present the offline model of a market operator seeking to set equilibrium prices in the market to satisfy a customer demand for a commodity (Section 3.1) and the performance metrics used to evaluate the efficacy of a pricing policy in the online setting (Section 3.2).

### 3.1 Market Model and Equilibrium Pricing

We study a market run by an operator seeking to meet the customer demand \(d>0\) for a commodity, e.g., energy, by purchasing the required amount from \(n\) competing suppliers. Each supplier \(i\in[n]\) has a cost function \(c_{i}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\), where \(c_{i}(x_{i})\) represents the cost incurred by supplier \(i\) for producing \(x_{i}\) units of the commodity. Furthermore, to meet the customer demand, the market operator posts a price \(p\) in the market, which represents the payment made by the market operator for each unit of the commodity produced by a given supplier. In particular, for producing \(x_{i}\) units of the commodity, a supplier \(i\) receives a payment of \(px_{i}\) from the market operator.
Then, given a posted price \(p\) for the commodity and a cost function \(c_{i}(\cdot)\), each supplier makes an individual decision on the optimal production quantity \(x_{i}^{*}(p)\) that maximizes their total profit, as described through the following optimization problem:

\[\max_{x_{i}\geq 0}\quad px_{i}-c_{i}(x_{i}). \tag{1}\]

The posted price that suppliers best respond to is set by a market operator that seeks to determine an equilibrium price \(p^{*}\) satisfying the following three desirable properties:

1. _Market Clearing_: The total supply equals the total demand, i.e., \(\sum_{i=1}^{n}x_{i}^{*}(p^{*})=d\).
2. _Minimal Supplier Production Cost_: The total production cost of all suppliers, given by \(\sum_{i=1}^{n}c_{i}(x_{i}^{*}(p^{*}))\), is minimal among all feasible production quantities \(x_{i}\geq 0\) for all suppliers \(i\in[n]\) satisfying the customer demand, i.e., \(\sum_{i=1}^{n}x_{i}=d\).
3. _Minimal Payment_: The total payment made to all suppliers, given by \(\sum_{i=1}^{n}p^{*}x_{i}^{*}(p^{*})\), is minimal among all feasible production quantities \(x_{i}\geq 0\) for all suppliers \(i\in[n]\) satisfying the customer demand, i.e., \(\sum_{i=1}^{n}x_{i}=d\).

While these properties are, in general, not possible to achieve simultaneously, e.g., in markets where the supplier cost functions are non-convex [5], in markets where the cost functions \(c_{i}(\cdot)\) of all suppliers are convex, there exists an equilibrium price \(p^{*}\) that satisfies all three properties. Moreover, in markets with convex cost functions, the equilibrium price can be computed through the dual variable of the market clearing constraint of the following convex optimization problem:

\[C^{*}=\min_{x_{i}\geq 0,\forall i\in[n]}\sum_{i=1}^{n}c_{i}(x_{i}),\] (2a)
\[\text{s.t.}\quad\sum_{i=1}^{n}x_{i}=d, \tag{2b}\]

where (2a) is the minimum supplier production cost objective and (2b) is the market clearing constraint. In particular, from the KKT conditions, the optimal solution to Problem (2a)-(2b) satisfies

\[\begin{cases}\sum_{i=1}^{n}x_{i}^{*}=d,\;x_{i}^{*}\geq 0,\;\forall\,i=1,\ldots,n,\\ \frac{\partial c_{i}}{\partial x_{i}}(x_{i}^{*})\geq p^{*},\;\forall\,i=1,\ldots,n,\\ \frac{\partial c_{i}}{\partial x_{i}}(x_{i}^{*})=p^{*},\;\forall\,i\text{ s.t. }x_{i}^{*}>0,\end{cases}\]

so that the optimal dual variable \(p^{*}\) satisfies all three conditions of equilibrium pricing.

While the equilibrium price \(p^{*}\) has several desirable properties, it typically cannot be computed directly by solving Problem (2a)-(2b), as the cost functions of suppliers are, in general, unknown to the market operator. Furthermore, both the cost functions of the suppliers and the customer demands tend to be time-varying, which would require the market operator to periodically re-solve Problem (2a)-(2b) to determine equilibrium prices at short time intervals; this may be computationally prohibitive. To overcome these challenges, in this work, we propose online learning algorithms to learn equilibrium prices over multiple periods in the incomplete information setting when the cost functions of the suppliers are unknown (or only partially known) to the market operator.
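To make the equilibrium computation concrete, here is a minimal sketch (ours, not part of the paper) for assumed quadratic costs \(c_i(x)=a_ix+\frac{1}{2}\mu_ix^2\): each supplier's best response is \(x_i^*(p)=\max(0,(p-a_i)/\mu_i)\), and since the aggregate best response is monotone non-decreasing in \(p\), the price \(p^*\) clearing the market can be found by bisection.

```python
import numpy as np

a = np.array([0.1, 0.2, 0.3])     # linear cost coefficients (assumed)
mu = np.array([1.0, 2.0, 4.0])    # strong convexity parameters (assumed)
d = 1.2                           # customer demand

def supply(p):
    """Aggregate best response: each supplier solves max_x p*x - c_i(x)."""
    return np.maximum(0.0, (p - a) / mu).sum()

lo, hi = 0.0, 1.0
for _ in range(60):               # bisection on the monotone aggregate supply
    mid = 0.5 * (lo + hi)
    if supply(mid) < d:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)
print(f"equilibrium price ~ {p_star:.6f}, supply ~ {supply(p_star):.6f}")
```

At the computed price, every active supplier's marginal cost \(a_i+\mu_ix_i^*\) equals \(p^*\), matching the KKT conditions above.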
### 3.2 Performance Metrics to Set Equilibrium Prices in the Online Setting

We now introduce the online learning setting, wherein the market operator sets prices for the commodity over multiple periods, and present the performance metrics used to evaluate the efficacy of an online pricing policy. In particular, we consider the setting when the market operator seeks to satisfy the customer demand over multiple periods \(t=1,\ldots,T\). At each period \(t\in[T]\), the customer demand for the commodity is given by \(d_{t}\), and each supplier \(i\in[n]\) has a private cost function \(c_{it}(\cdot)\) that is increasing, continuously differentiable, strongly convex, and normalized to satisfy \(c_{it}(0)=0\). We assume that the demand at each period \(t\) lies in a bounded interval, i.e., \(d_{t}\in[\underline{d},\bar{d}]\) for all \(t\) for some \(\underline{d},\bar{d}>0\). Furthermore, for ease of exposition, we normalize the set of feasible prices so that, for any customer demand and realization of supplier cost functions, the corresponding optimal price of the commodity belongs to the interval \([0,1]\). In addition, we note that we consider strongly convex cost functions of suppliers, as opposed to general convex costs, due to the performance limitations of any online algorithm under the incomplete information setting studied in this work for non-strongly convex cost functions (see Section 4.2 for further details).

In this work, we begin by considering the informational setting wherein the cost functions of the suppliers are fixed over time (see Sections 4 and 5) and, upon observing the customer demand \(d_{t}\), the market operator makes a pricing decision \(p_{t}\) that depends on the past observations of supplier productions, i.e., revealed preference feedback in response to set prices as in [50, 51, 52, 53], and the realized customer demands. In particular, over the \(T\) periods, the market operator sets a sequence of prices given by the pricing policy \(\mathbf{\pi}=(\pi_{1},\ldots,\pi_{T})\), where \(p_{t}=\pi_{t}(\{((x^{*}_{it^{\prime}})_{i=1}^{n},d_{t^{\prime}})\}_{t^{\prime}=1}^{t-1},d_{t})\) and \(x^{*}_{it}\) represents the optimal production quantity corresponding to the solution of Problem (1) for supplier \(i\) at period \(t\). When the pricing policy is evident from the context, we will overload the notation and simply write \(\mathbf{\pi}=(p_{1},p_{2},\ldots,p_{T})\). We then consider the informational setting when suppliers' cost functions are time-varying (see Section 6), for which we introduce an augmented problem setting and the corresponding class of online pricing policies in Section 6.2.

We evaluate the efficacy of an online pricing policy \(\mathbf{\pi}\) using three regret metrics: (i) unmet demand, (ii) cost regret, and (iii) payment regret. These regret metrics represent the performance loss of the policy \(\mathbf{\pi}\), relative to the optimal offline algorithm with complete information, on the three desirable properties of equilibrium prices elucidated in Section 3.1. We also note that these performance metrics naturally generalize to the augmented problem setting we consider when suppliers' cost functions are time-varying; for completeness, we present the corresponding generalizations of the regret metrics in Appendix D.

**Unmet Demand:** We evaluate the unmet demand of an online pricing policy \(\mathbf{\pi}\) as the sum of the differences between the demand and the total supplier production corresponding to the pricing policy \(\mathbf{\pi}\) at each period \(t\).
In particular, for an online pricing policy \(\mathbf{\pi}\) that sets a sequence of prices \(p_{1},\ldots,p_{T}\), the cumulative unmet demand is given by

\[U_{T}(\mathbf{\pi})=\sum_{t=1}^{T}\left(d_{t}-\sum_{i=1}^{n}x^{*}_{it}(p_{t})\right)_{+},\]

where \(x^{*}_{it}(p_{t})\) is the optimal production quantity corresponding to the solution of Problem (1) for supplier \(i\) at period \(t\).

**Cost Regret:** We evaluate the cost regret of an online pricing policy \(\mathbf{\pi}\) through the difference between the total supplier production cost corresponding to algorithm \(\mathbf{\pi}\) and the minimum total production cost, given complete information on the supplier cost functions. In particular, the cost regret \(C_{T}(\mathbf{\pi})\) of an algorithm \(\mathbf{\pi}\) is given by

\[C_{T}(\mathbf{\pi})=\sum_{t=1}^{T}\sum_{i=1}^{n}\left(c_{it}(x^{*}_{it}(p_{t}))-c_{it}(x^{*}_{it}(p^{*}_{t}))\right),\]

where the price \(p^{*}_{t}\) for each period \(t\in[T]\) is the optimal dual variable of the market clearing constraint of Problem (2a)-(2b) given the demand \(d_{t}\) and cost functions \(c_{it}\) for all \(i\in[n]\).

**Payment Regret:** Finally, we evaluate the payment regret of an online pricing policy \(\mathbf{\pi}\) through the difference between the total payment made to all suppliers corresponding to algorithm \(\mathbf{\pi}\) and the minimum total payment, given complete information on the supplier cost functions. In particular, the payment regret \(P_{T}(\mathbf{\pi})\) of an algorithm \(\mathbf{\pi}\) is given by

\[P_{T}(\mathbf{\pi})=\sum_{t=1}^{T}\sum_{i=1}^{n}\left(p_{t}x^{*}_{it}(p_{t})-p^{*}_{t}x^{*}_{it}(p^{*}_{t})\right).\]

In this work, we focus on developing algorithms that jointly optimize these three regret metrics over \(T\) periods. In particular, because it is desirable that the performance of the pricing policy improves as the market operator receives more information, we intend to design algorithms that guarantee that all three regret metrics are sub-linear in \(T\). Note that achieving good performance on one of these metrics alone is typically easy: setting very low prices leads to low cost and payment regrets, while setting very high prices leads to no unmet demand. Thus, the challenge in simultaneously optimizing these different regret metrics in the incomplete information setting is to find the right price, i.e., the equilibrium price, at which all these regret metrics are kept small (the sketch below illustrates the three metrics).
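The following minimal sketch (a toy example, not from the paper) evaluates the three metrics for a price sequence, assuming a single supplier with cost \(c(x)=\frac{1}{2}\mu x^2\), so that \(x^*(p)=p/\mu\) and the equilibrium price for demand \(d_t\) is \(p^*_t=\mu d_t\).

```python
import numpy as np

mu = 2.0
d = np.array([0.4, 0.5, 0.6])          # demands d_t
p = np.array([0.7, 1.0, 1.3])          # prices set by some online policy

x = p / mu                              # realized productions x*(p_t)
p_star = mu * d                         # equilibrium prices p*_t
x_star = d                              # equilibrium productions x*(p*_t)

cost = lambda q: 0.5 * mu * q ** 2
U = np.maximum(d - x, 0.0).sum()                    # unmet demand U_T
C = (cost(x) - cost(x_star)).sum()                  # cost regret C_T
P = (p * x - p_star * x_star).sum()                 # payment regret P_T
print(f"U = {U:.3f}, C = {C:.3f}, P = {P:.3f}")
```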
A few comments about our regret metrics are in order. First, our unmet demand metric aligns with real-world markets, e.g., electricity markets, where the demand needs to be satisfied at each period, and over-production at particular periods cannot compensate for unmet demand at subsequent periods. Therefore, our unmet demand metric is a stronger benchmark than the typical constraint violation metrics in the literature on jointly optimizing multiple regret metrics [54, 55, 56], where resource constraints only need to be approximately satisfied in the long run. Formally, \(U_{T}(\mathbf{\pi})=\sum_{t=1}^{T}\left(d_{t}-\sum_{i=1}^{n}x_{it}^{*}(p_{t})\right)_{+}\geq\left[\sum_{t=1}^{T}\left(d_{t}-\sum_{i=1}^{n}x_{it}^{*}(p_{t})\right)\right]_{+}\), where the latter term corresponds to the setting when the customer demand only needs to be satisfied in the long run. Further, since we obtain regret guarantees for the above unmet demand metric using techniques from parametric optimization, our regret guarantees naturally extend to the corresponding stronger notions of the payment and cost regret metrics as well. However, we present our payment and cost regret metrics in alignment with the classical regret metrics in the literature, wherein lower payments (costs) at particular periods can compensate for excess payments (costs) at other periods.

## 4 Fixed Cost Functions and Demand

We now investigate the design of online pricing policies achieving good performance on the three regret metrics, i.e., sub-linear unmet demand, cost regret, and payment regret in the number of periods \(T\). As a warm-up, we first consider the setting when the cost functions of the suppliers and the customer demand are fixed over the \(T\) periods. Formally, the supplier cost functions satisfy \(c_{it}(\cdot)=c_{it^{\prime}}(\cdot)\) for all \(t,t^{\prime}\in[T]\), and the demands satisfy \(d_{t}=d_{t^{\prime}}\) for all periods \(t,t^{\prime}\in[T]\). For ease of exposition, in this section, we drop the subscript \(t\) in the notation for the customer demand (and supplier cost functions) and denote \(d_{t}=d\) (and \(c_{it}(\cdot)=c_{i}(\cdot)\) for all suppliers \(i\)) for all periods \(t\in[T]\). In this setting, we develop an algorithm that achieves a regret of \(O(\log\log T)\) on the three regret metrics when the suppliers' cost functions are strongly convex (Section 4.1). We further present an example demonstrating that if the strong convexity condition on suppliers' cost functions is relaxed, then no sub-linear regret guarantee on all three regret metrics is, in general, possible for any online algorithm (Section 4.2).

### 4.1 Algorithm with Sub-linear Regret for Strongly Convex Cost Functions

In this section, we consider the setting of fixed supplier cost functions and customer demands and present an algorithm that achieves a regret of \(O(\log\log T)\) on the three regret metrics when the suppliers' cost functions are strongly convex. To motivate our algorithm, we first note that since the customer demand and the supplier cost functions are fixed over time, the optimal price \(p^{*}\in[0,1]\) for all periods \(t\in[T]\) is also fixed and given by the dual of the demand constraint of Problem (2a)-(2b). Furthermore, the cumulative production \(x_{t}^{*}(p)=\sum_{i=1}^{n}x_{it}^{*}(p)\) is monotonically non-decreasing in the price \(p\) because the suppliers' cost functions are increasing. Utilizing this monotonicity property, we note that if we set two prices \(p_{1},p_{2}\in[0,1]\) such that the cumulative productions satisfy \(\sum_{i=1}^{n}x_{it}^{*}(p_{1})>d\) and \(\sum_{i=1}^{n}x_{it}^{*}(p_{2})<d\), then \(p_{1}\) and \(p_{2}\) respectively serve as upper and lower bounds on the optimal price \(p^{*}\) when the supplier cost functions and customer demands are fixed over time.

Following these observations, we present Algorithm 1, akin to the algorithm in [47], which maintains a feasible interval for the optimal price \(p^{*}\) and sets a sequence of prices over time to continuously shrink this feasible price set. In particular, the feasible price interval \([a,b]\) is initialized to \(\mathcal{S}_{p}=[0,1]\), and a precision parameter \(\varepsilon\) is set to \(0.5\).
Then, for a given algorithm sub-phase associated with a feasible price interval \([a,b]\), the operator posts prices \(a,a+\varepsilon,a+2\varepsilon,\ldots\) (up to \(b\)) at each period until the total supply exceeds the demand at the offered price. If \(a+k\varepsilon\) for some \(k\in\mathbb{N}\) was the last price such that \(x_{t}^{*}(a+k\varepsilon)\leq d\), then \([a+k\varepsilon,a+(k+1)\varepsilon]\) is set as the new feasible interval for the optimal price, and the precision parameter is reset to \(\varepsilon^{2}\). This process of shrinking the feasible interval and updating the precision parameter is repeated until the length of the feasible interval is smaller than \(\frac{1}{T}\), following which the market operator posts the price at the lower end of the feasible interval for the remaining periods. This process is presented formally in Algorithm 1.

```
Input : Feasible set of prices \(\mathcal{S}_{p}=[0,1]\); precision parameter \(\varepsilon=\frac{1}{2}\)
Set the lower and upper bounds of the feasible price set: \(a\gets 0\), \(b\gets 1\)
while length of the feasible price set is greater than \(\frac{1}{T}\) do
    Offer prices \(a,a+\varepsilon,\ldots,a+(k+1)\varepsilon\) (all of which are \(\leq b\))
        at subsequent periods, where \(a+k\varepsilon\) is the last price such that
        \(\sum_{i=1}^{n}x_{it}^{*}(a+k\varepsilon)<d\)
    Set the new feasible interval to \([a+k\varepsilon,a+(k+1)\varepsilon]\) and
        reduce the precision parameter to \(\varepsilon^{2}\)
end while
for the remaining time periods do
    Offer price \(p_{t}=a\)
end for
```
**Algorithm 1** Feasible Price Set Tracking under Fixed Demand and Costs
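A minimal Python sketch of Algorithm 1 (an illustration, not the authors' code); `supply(p)` stands for the observed aggregate best response \(\sum_i x_i^*(p)\), which we instantiate here with a single assumed quadratic cost \(c(x)=x^2\), so \(x^*(p)=p/2\) and \(p^*=2d\).

```python
def run_algorithm_1(supply, d, T):
    """Sketch of Algorithm 1: shrink a feasible interval [a, b] for the
    equilibrium price by probing prices at an ever-finer precision eps."""
    a, b, eps = 0.0, 1.0, 0.5
    prices = []                                   # prices offered over time
    while b - a > 1.0 / T:
        p = a
        while p <= b and supply(p) < d:           # probe a, a+eps, ... while under-supplying
            prices.append(p)
            p += eps
        prices.append(min(p, b))                  # first price with supply >= d
        a, b = max(p - eps, a), min(p, b)         # new feasible interval
        eps = eps ** 2                            # repeated squaring of the precision
    prices += [a] * max(0, T - len(prices))       # commit to the lower end afterwards
    return prices[:T]

supply = lambda p: p / 2.0        # quadratic cost c(x) = x^2, so x*(p) = p / 2
p_seq = run_algorithm_1(supply, d=0.4, T=10_000)
print(f"final price ~ {p_seq[-1]:.6f} (equilibrium p* = 0.8)")
```

Note how quickly the interval collapses: the repeated squaring of \(\varepsilon\) means only \(O(\log\log T)\) sub-phases are needed.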
While Algorithm 1 is similar to the corresponding algorithm in [47] for the setting of fixed user valuations, our market setting is considerably different from the revenue maximization setting in [47]. First, in this work, suppliers have a continuous rather than a binary action space as in the revenue maximization setting in [47], where consumers either purchase one unit of the resource at the given price or do not purchase it. Furthermore, as opposed to the single regret measure analyzed in [47], we consider and analyze three different regret measures that often compete against each other.

We now present the main result of this section, which establishes that Algorithm 1 simultaneously achieves an \(O(\log\log T)\) regret on the three regret measures studied in this work.

**Theorem 1**.: _The unmet demand, cost regret, and payment regret of Algorithm 1 are \(O(\log\log T)\) if the cost functions of the suppliers are strongly convex._

The proof of Theorem 1 relies on the following Lipschitzness condition between the optimal supplier production and the prices set by the market operator.

**Lemma 2** (Lipschitzness of Production in Prices).: _Suppose that the suppliers' cost functions \(c_{i}(\cdot)\) are \(\mu_{i}\)-strongly convex. Then, at any period \(t\), the optimal production quantity for supplier \(i\) corresponding to the solution of Problem (1) is Lipschitz in the price \(p\), i.e., \(|x_{it}^{*}(p_{1})-x_{it}^{*}(p_{2})|\leq L|p_{1}-p_{2}|\) for some constant \(L>0\) for all \(p_{1},p_{2}\in[0,1]\)._

Proof.: Fix a period \(t\in[T]\). Then, by the first-order optimality condition of Problem (1) for each supplier \(i\), we have that

\[p=c_{i}^{\prime}(x_{it}^{*}(p))\implies x_{it}^{*}(p)=(c_{i}^{\prime})^{-1}(p).\]

Then, by the inverse function theorem, we have that

\[(x_{it}^{*})^{\prime}(p)=\frac{d}{dp}(c_{i}^{\prime})^{-1}(p)=\frac{1}{c_{i}^{\prime\prime}(x_{it}^{*}(p))}\leq\frac{1}{\mu_{i}},\]

where the inequality follows since the function \(c_{i}(\cdot)\) is \(\mu_{i}\)-strongly convex. Hence, at each period \(t\), the optimal supplier production \(x_{it}^{*}\) is \((1/\mu_{i})\)-Lipschitz in the prices.

Lemma 2 establishes that small changes in the price set by the market operator correspond to small changes in the optimal production of suppliers. Using Lemma 2, we now present a proof sketch of Theorem 1 and present its complete proof in Appendix A.

Proof (Sketch).: To establish this result, we first note that we need \(O(\log\log T)\) sub-phases of repeated squaring of the parameter \(\varepsilon\) to reduce \(\varepsilon\) from \(0.5\) to \(\frac{1}{T}\). Due to the monotonicity of the optimal supplier production in the prices, both payment and cost regret are only accumulated when \(p_{t}>p^{*}\). Further, since \(p_{t}>p^{*}\) at most once in each sub-phase, the total payment and cost regret are \(O(\log\log T)\). Next, to bound the unmet demand, we use Lemma 2 to map prices to productions and show that a constant unmet demand is accumulated in each sub-phase of Algorithm 1, resulting in an \(O(\log\log T)\) unmet demand while the length of the feasible price interval is more than \(\frac{1}{T}\), as there are \(O(\log\log T)\) sub-phases. In the final phase, when the length of the feasible price interval is less than \(\frac{1}{T}\) and the price is fixed, we again use Lemma 2 to show that the unmet demand through this phase is constant. Thus, the unmet demand is \(O(\log\log T)\), establishing our claim.

We reiterate that the proof of Theorem 1 crucially relies on the strong convexity of the cost functions of the suppliers, which is necessary to establish the Lipschitzness relation between the optimal production of suppliers and the prices set by the market operator (Lemma 2). As a result, in addition to leveraging tools from the analysis of the corresponding algorithm in [47], our regret analysis additionally uses tools from parametric optimization to develop the necessary sensitivity relations required to analyze the three regret metrics considered in this work. We also note that the \(O(\log\log T)\) regret guarantee obtained in Theorem 1 indicates that, in the setting with fixed supplier cost functions and customer demand, Algorithm 1 incurs little performance loss on all three regret metrics compared to when equilibrium prices are set with complete knowledge of the suppliers' cost functions, since Algorithm 1's prices converge to the equilibrium price \(p^{*}\) super-exponentially fast. Furthermore, the obtained upper bound on the regret of Algorithm 1 compares favorably to the \(\Omega(\log\log T)\) regret lower bound for any online algorithm in the revenue maximization setting with fixed user valuations studied in [47].

### 4.2 Performance Limitations for Non-Strongly Convex Cost Functions

While Algorithm 1 achieves sub-linear regret on all three performance measures in the incomplete information setting for strongly convex cost functions, we note that this result does not generalize to the setting of general convex costs.
In particular, in this section, we show that if the cost functions of the suppliers are linear, then no online algorithm can achieve sub-linear regret guarantees for the unmet demand, cost regret, and payment regret metrics simultaneously. This result highlights the difficulty of the incomplete information setting compared to that with complete information, where the equilibrium price satisfying the three desirable properties in Section 3.1 exists and can be computed through the dual variable of the market clearing constraint of Problem (2a)-(2b) when the cost functions of all suppliers are convex (and not necessarily strongly convex).

To motivate why sub-linear regret cannot be attained simultaneously on the three desirable metrics for linear cost functions, we note that the optimal production of suppliers is zero if \(p<p^{*}\), and their optimal production is the maximum feasible amount if \(p>p^{*}\). Thus, there is a jump discontinuity in the production of suppliers at the price \(p=p^{*}\), and so the production is not Lipschitz in the prices, which breaks a key property (Lemma 2) required to prove Theorem 1. We now formally present an example demonstrating that, even in a market with a single supplier, no online algorithm can achieve sub-linear regret on all three regret measures if the supplier's cost function is linear (a numerical illustration of the discontinuity follows the example).

**Example 1** (Sub-linear Regret is not Possible for Linear Cost Functions).: We consider a market with one supplier with a linear cost function that is fixed over time. In this setting, suppose that the optimal price is \(p^{*}\) and the cost function of the supplier is given by \(c(x)=cx\). Then, given a price \(p\) set by the market operator, the individual decision-making problem for the supplier is to produce a quantity \(x^{*}(p)\geq 0\) that maximizes \((p-c)\cdot x\). Note that if \(p<c\), then \(x^{*}(p)=0\), and if \(p>c\), then the supplier produces the maximum amount that it can feasibly generate. Thus, it is only when \(p=p^{*}=c\) that the supplier's production can match the customer demand \(d\). Since the perfect identification of the optimal price \(p^{*}\) is, in general, not possible under incomplete information on the cost coefficient \(c\), the market operator will either set prices \(p_{t}<p^{*}\) or \(p_{t}>p^{*}\). Next, we observe that there is an unmet demand of \(d\) at any period when \(p_{t}<p^{*}\), as the supplier produces nothing if the price is below \(p^{*}\) by the linearity of the cost function. Thus, to achieve sub-linear unmet demand, the market operator must set the price \(p>p^{*}\) for \(O(T)\) periods; however, doing so results in a linear cost and payment regret, as the supplier produces strictly more than \(d\) units of the commodity when \(p>p^{*}\). Thus, no online algorithm can achieve sub-linear regret on all three regret measures if the cost functions of the suppliers are linear.
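The jump discontinuity in the best response is easy to see numerically. A toy sketch (ours, with an assumed capacity `x_max`):

```python
def best_response(p, c_coef, x_max):
    """Best response under linear cost c(x) = c_coef * x: the profit
    (p - c_coef) * x is linear in x, so the optimum is at a boundary."""
    if p < c_coef:
        return 0.0            # producing anything loses money
    if p > c_coef:
        return x_max          # produce as much as feasible
    return None               # p == c_coef: any quantity is optimal

for p in (0.49, 0.50, 0.51):
    print(p, best_response(p, c_coef=0.5, x_max=10.0))
```

An arbitrarily small price perturbation around \(p^*=c\) flips the production between \(0\) and the capacity, which is exactly the failure of the Lipschitz property in Lemma 2.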
## 5 Fixed Cost Functions and Time-Varying Demand

Having studied the setting of fixed supplier cost functions and customer demands over time, in this section, we investigate a more general market setting where the suppliers' cost functions are static while customer demands can vary across the \(T\) periods. In particular, we suppose that the customer demands for the commodity are time-varying and lie in a continuous but bounded interval, i.e., the customer demand at each period \(t\) is some variable quantity \(d_{t}\in[\underline{d},\overline{d}]\). In this setting, we extend the algorithm developed for fixed supplier cost functions and customer demands (Algorithm 1) and show that the extension achieves a regret of \(O(\sqrt{T}\log\log T)\) on all three performance measures for strongly convex cost functions.

Our approach for the time-varying demand setting builds upon the algorithmic ideas for the fixed demand setting. To address the challenge that the demands can vary within the interval \([\underline{d},\bar{d}]\), we first consider a direct extension of Algorithm 1 to the time-varying demand setting, wherein a feasible price set is maintained for each realized demand. However, as there may be up to \(O(T)\) distinct demand realizations over the \(T\) periods, the worst-case regret of such an algorithm is \(O(T)\). To resolve this issue, we leverage the intuition that customer demands that are close to each other correspond to equilibrium prices that are also close together. Thus, we uniformly partition the demand interval \([\underline{d},\bar{d}]\) into sub-intervals of width \(\gamma\) and treat all demands in the same sub-interval as identical. In particular, any demand lying in a given sub-interval, i.e., \(d_{t}\in[\underline{d}+k\gamma,\underline{d}+(k+1)\gamma]\) for some \(k\in\mathbb{N}\), is treated as a demand equal to the lower bound of that sub-interval. Note that, from the perspective of the algorithm, there are then \(O(\frac{1}{\gamma})\) distinct demands, as opposed to \(O(T)\) possible demand realizations, as the feasible demand interval is partitioned into \(O(\frac{1}{\gamma})\) sub-intervals. Finally, for these \(O(\frac{1}{\gamma})\) distinct demands, corresponding to the lower bounds of the \(O(\frac{1}{\gamma})\) sub-intervals, we apply the aforementioned direct extension of Algorithm 1. Our algorithmic approach is formally presented in Algorithm 2 (a Python sketch follows the pseudocode).

```
Input : Discretized demand intervals \(I_{1},\ldots,I_{K}\) with
    \(I_{k}=[\underline{d}+(k-1)\gamma,\underline{d}+k\gamma]\) such that \(K\gamma=\overline{d}-\underline{d}\)
Initialize a feasible price set \(\mathcal{S}_{k}=(0,1]\), current price \(p_{k}=0\), and
    price precision \(\varepsilon_{k}=1/2\) for each demand interval \(I_{k}\)
for \(t=1,\ldots,T\) do
    Determine \(k_{t}\) such that \(d_{t}\in I_{k_{t}}=:[a_{k_{t}},b_{k_{t}}]\)
    Offer price \(p_{k_{t}}\) to the suppliers
    if width of the feasible price set \(|\mathcal{S}_{k_{t}}|\) is greater than \(\frac{1}{\sqrt{T}}\) then
        /* If production exceeds the target demand, narrow down the search interval */
        if \(\sum_{i=1}^{n}x_{it}^{*}(p_{k_{t}})\geq a_{k_{t}}\) then
            Set \(\mathcal{S}_{k_{t}}\leftarrow(p_{k_{t}}-\varepsilon_{k_{t}},p_{k_{t}}]\)
            Set next price \(p_{k_{t}}\gets p_{k_{t}}-\varepsilon_{k_{t}}\)
            Reset the precision to \(\varepsilon_{k_{t}}\leftarrow\varepsilon_{k_{t}}^{2}\)
        else
            Set next price \(p_{k_{t}}\gets p_{k_{t}}+\varepsilon_{k_{t}}\)
        end if
    end if
end for
```
**Algorithm 2** Feasible Price Set Tracking for Time-Varying Demands
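A minimal Python sketch of Algorithm 2 (an illustration, not the authors' code): it keeps one feasible-price search per discretized demand sub-interval of width \(\gamma=1/\sqrt{T}\), and `supply(p)` again stands for the observed aggregate production.

```python
import math

def run_algorithm_2(supply, demands, d_lo, d_hi, T):
    gamma = 1.0 / math.sqrt(T)
    K = max(1, math.ceil((d_hi - d_lo) / gamma))
    price = [0.0] * K          # current probe price for each sub-interval
    eps = [0.5] * K            # precision parameter for each sub-interval
    width = [1.0] * K          # width of the feasible price set S_k
    offered = []
    for d_t in demands:
        k = min(K - 1, int((d_t - d_lo) / gamma))
        a_k = d_lo + k * gamma              # treat d_t as the lower bound a_k
        p = price[k]
        offered.append(p)
        if width[k] > 1.0 / math.sqrt(T):   # still searching in this interval
            if supply(p) >= a_k:            # over-supply: contract S_k downwards
                width[k] = eps[k]
                price[k] = p - eps[k]
                eps[k] = eps[k] ** 2
            else:                           # under-supply: probe a higher price
                price[k] = min(1.0, p + eps[k])
    return offered

demands = [0.3, 0.5, 0.3, 0.5] * 2500      # T = 10^4 periods
ps = run_algorithm_2(lambda p: p, demands, 0.25, 0.55, len(demands))
print("last prices offered:", [round(p, 4) for p in ps[-4:]])
```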
We now present the main result of this section, which establishes that Algorithm 2 achieves a regret of \(O(\sqrt{T}\log\log T)\) if the sub-interval width \(\gamma=\frac{1}{\sqrt{T}}\) for strongly convex cost functions of suppliers. We note that choosing \(\gamma=\frac{1}{\sqrt{T}}\) optimally balances two different sources of regret in the time-varying demand setting, as is elucidated through the proof sketch of the following theorem.

**Theorem 3**.: _Let the demand sub-interval width \(\gamma=\frac{1}{\sqrt{T}}\). Then, the unmet demand, cost regret, and payment regret of Algorithm 2 are \(O(\sqrt{T}\log\log T)\) if the cost functions of the suppliers are strongly convex._

Proof (Sketch).: For each of the three regret metrics, the regret incurred by Algorithm 2 can be broken down into two parts: (i) the regret incurred by the Algorithm 1 sub-routine for each demand sub-interval, and (ii) the inaccuracy of treating all demands in a given sub-interval as equal to the lower bound of that sub-interval. By invoking Theorem 1, the first part is of order \(O(K\log\log T)\) for all three regret measures, where \(K:=\lceil(\overline{d}-\underline{d})/\gamma\rceil\). Next, since all demands in a given sub-interval are treated as a demand equal to the lower bound of that sub-interval and the suppliers' optimal production is monotonic in the price, every price \(p_{t}\) offered by Algorithm 2 is an under-estimate of the equilibrium price for demand \(d_{t}\). Thus, the second part of the regret is only positive for the unmet demand and is at most \(O(\gamma T)\), as the width of each demand sub-interval is \(\gamma\) and regret is only accumulated over \(T\) periods. Finally, choosing \(\gamma=\frac{1}{\sqrt{T}}\) achieves an optimal balance (up to logarithmic terms) between the two quantities above, i.e., \(O(\gamma^{-1}\log\log T)\) and \(O(\gamma T)\), which establishes the \(O(\sqrt{T}\log\log T)\) regret bound.

For a complete proof of Theorem 3, see Appendix B. We reiterate that Theorem 3 applies to strongly convex cost functions, as with Theorem 1, and that extending this result to general convex cost functions, e.g., linear functions, is, in general, not possible (see Example 1 in Section 4.2). Furthermore, compared to the regret guarantee obtained in Theorem 1, Theorem 3 establishes that the time-varying nature of the customer demand incurs an additional factor of \(O(\sqrt{T})\) in the regret guarantee as compared to the setting with fixed demands. However, we do note that if the set of demand realizations \(D\) is known _a priori_ to be of size \(o(\sqrt{T})\), then the regret guarantee in Theorem 3 can be improved to \(O(|D|\log\log T)\) by running the direct extension of Algorithm 1, wherein a feasible price interval is maintained for each realized demand. Finally, we note that the regret guarantee obtained in Theorem 3 compares favorably to classical \(O(\sqrt{T})\) regret guarantees in the OCO or MAB literature [31].
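To spell out the balance used in the proof sketch of Theorem 3 (a restatement, not an additional result): the per-sub-interval searches contribute \(O(K\log\log T)=O(\gamma^{-1}\log\log T)\), while the discretization contributes \(O(\gamma T)\), so

\[O\big(\gamma^{-1}\log\log T\big)+O\big(\gamma T\big)\;\overset{\gamma=1/\sqrt{T}}{=}\;O\big(\sqrt{T}\log\log T\big)+O\big(\sqrt{T}\big)=O\big(\sqrt{T}\log\log T\big).\]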
## 6 Time-Varying Cost Functions

In this section, we consider the general setting where, in addition to customer demands changing over time, suppliers' cost functions can also vary across the \(T\) periods. Formally, at each period \(t\), each supplier \(i\) has a privately known and time-varying cost function \(c_{it}(\cdot)\). Compared to the setting with fixed cost functions, the fundamental ideas underlying the performance guarantees of Algorithms 1 and 2 do not directly apply to this setting, as suppliers' production may not remain the same when the operator offers the same price at different periods due to the time-varying nature of their cost functions. In fact, in Section 6.1, we show that if the operator does not know the process that governs the variation of the cost functions, then sub-linear regret is impossible to achieve on all three regret metrics. To this end, in Section 6.2, in alignment with real-world markets, we consider an augmented problem setting wherein the market operator is provided with a hint (i.e., context) on the variation in suppliers' cost functions over time, e.g., due to weather conditions in electricity markets, while still keeping the full description of the costs away from the operator. In this setting, where the operator has access to additional hints on suppliers' cost functions, we then develop an algorithm with sub-linear regret on all three regret metrics (see Sections 6.3 and 6.4) through an adaptation of an algorithm from the contextual bandits literature [57].

### 6.1 Impossibility of Setting Equilibrium Price Under Time-Varying Costs

We initiate our study of the setting of time-varying costs by presenting an example that illustrates the impossibility of setting equilibrium prices if the market operator has no information on how the cost functions of suppliers change over time. In particular, Proposition 4 presents a counterexample establishing that, even if suppliers' cost functions are drawn i.i.d. from a known distribution, no online algorithm can achieve sub-linear regret on all three regret metrics as long as the operator is not informed about the outcome of the random draws from the distribution.

**Proposition 4** (Impossibility of Sub-linear Regret for Time-Varying Costs).: _There exists an instance with fixed time-invariant demand and a single supplier whose cost functions are drawn i.i.d. from some (potentially known) distribution such that no online algorithm can achieve sub-linear regret on all three regret metrics._

Proof (Sketch).: We consider a setting with a fixed demand of \(d=1\) at every period and a single supplier whose cost function at each period is drawn from a distribution such that, at each period \(t\), the cost function could be either \(c_{1}(x)=\frac{1}{8}x^{2}\) or \(c_{2}(x)=\frac{1}{16}x^{2}\), each with probability \(0.5\). We suppose that the market operator has knowledge of the distribution from which the supplier's cost function is sampled i.i.d. but does not know the outcome of the random draw at any period, and we show that any pricing strategy adopted by the operator must incur a linear regret on at least one of the three regret metrics for this instance. To this end, we first note that the equilibrium price corresponding to the cost function \(c_{1}(x)\) is \(p_{1}^{*}=\frac{1}{4}\), while that corresponding to the cost function \(c_{2}(x)\) is \(p_{2}^{*}=\frac{1}{8}\). Then, we analyze the total regret, i.e., the sum of the unmet demand, payment regret, and cost regret, for three different price ranges, (i) \(p<\frac{1}{8}\), (ii) \(\frac{1}{8}\leq p\leq\frac{1}{4}\), and (iii) \(p>\frac{1}{4}\), and show that, irrespective of the set price \(p\) at any period \(t\), the expected total regret at any period is at least \(\frac{7}{64}\), i.e., the total regret is at least \(\frac{7}{64}T\). Finally, since the sum of the three regret metrics is linear in \(T\), at least one of the three metrics must be linear in \(T\), establishing our claim.

For a complete proof of Proposition 4, see Appendix C.
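The \(\frac{7}{64}\) bound is easy to verify numerically. The sketch below (ours, not part of the proof) evaluates the expected per-period total regret of every fixed price on a fine grid for the instance above, where \(x^{*}(p)=4p\) under \(c_{1}\) and \(x^{*}(p)=8p\) under \(c_{2}\):

```python
import numpy as np

def total_regret(p, slope, p_star):
    """Per-period unmet demand + cost regret + payment regret for a single
    supplier with c(x) = x^2 / (2 * slope), i.e., x*(p) = slope * p, d = 1."""
    x = slope * p
    unmet = max(1.0 - x, 0.0)
    cost = x ** 2 / (2 * slope) - 1.0 / (2 * slope)   # c(x*(p)) - c(1)
    payment = p * x - p_star * 1.0
    return unmet + cost + payment

ps = np.linspace(0.0, 1.0, 100_001)
expected = [0.5 * total_regret(p, 4, 0.25) + 0.5 * total_regret(p, 8, 0.125)
            for p in ps]
print(f"min expected per-period total regret ~ {min(expected):.6f}"
      f"  (7/64 = {7 / 64:.6f})")
```

The minimum is attained at \(p=\frac{1}{8}\) and equals \(\frac{7}{64}\), consistent with the proof sketch.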
While it was possible to achieve sub-linear regret in the setting with time-varying customer demands (see Theorem 3), Proposition 4 establishes that such a result is, in general, not possible in the setting with time-varying cost functions. The setting with time-varying cost functions is more challenging because the market operator observes customer demands, which it can use to make pricing decisions, but does not observe the cost functions of suppliers. Further, in contrast to the settings in [33, 54], where online gradient descent approaches can simultaneously achieve sub-linear regret for multiple performance metrics, we note that our definition of unmet demand is considerably stronger, as over-production at particular periods cannot compensate for unmet demand at other periods (see Section 3.2 for a further discussion). Thus, Proposition 4 shows that, with the stronger unmet demand metric, it is impossible to jointly optimize the three regret metrics, and it illustrates the difficulty of balancing the three regret metrics, as decreasing the payment or cost regret causes an increase in the unmet demand and vice versa.

### 6.2 Adding Contexts for Time-Varying Costs

Proposition 4 highlights that if the operator does not have any information on the change in suppliers' cost functions over time, it is impossible to achieve sub-linear regret on all three regret metrics. To this end, in this section, we consider a natural augmented problem setting wherein the market operator, without knowing the complete specification of the cost functions of suppliers, additionally has access to a hint (i.e., context) that reflects the variation in the cost functions of suppliers over time. We note that such a setting aligns with real-world markets, e.g., electricity markets, wherein the cost functions of suppliers are private information yet typically vary over time based on observed quantities, such as changes in the ambient weather conditions.

To specify the augmented problem setting with contexts, we first introduce some notation regarding the cost functions of suppliers. In particular, we assume that each supplier's cost function is composed of two parts: (i) an unknown component that is time-invariant, and (ii) a time-varying component that is revealed to the market operator. More precisely, the cost function of each supplier \(i\) is parameterized as follows:

\[c_{it}(\cdot)=c_{i}(\cdot;\phi_{i},\theta_{it}),\]

where \(\phi_{i}\) is private information and \(\theta_{it}\) is the time-varying component of the cost function given to the operator as a _context_. Note that, for any fixed \(\phi_{i}\), the context \(\theta_{it}\) uniquely determines the cost function of supplier \(i\) at time \(t\). We stress that we do not assume any structure on the parameterization of the cost functions, so the time-varying and time-invariant components of the cost functions need not be separable. Further, since the \(\phi_{i}\)'s are unknown, the market operator cannot directly solve Problem (2a)-(2b) to obtain the equilibrium prices in the market.

For the simplicity of exposition, for the remainder of this section, we aggregate all suppliers' cost functions into a combined cost \(c_{t}(\cdot;\theta_{t})=\sum_{i=1}^{n}c_{it}(\cdot;\phi_{i},\theta_{it})\), where \(\theta_{t}=(\theta_{1t},\ldots,\theta_{nt})\) is the time-varying context associated with the combined cost function. Note that doing so is without loss of generality, as all suppliers have convex costs and observe the same prices in the market.
In this augmented problem setting, at each period \(t\), in addition to receiving the customer demand \(d_{t}\), the market operator observes a context \(\theta_{t}\), which it can use, along with the prior history of supplier production quantities, customer demands, and contexts, to set a price \(p_{t}\). In particular, with access to sequentially arriving contexts, the market operator sets a sequence of prices given by the pricing policy \(\mathbf{\pi}=(\pi_{1},\dots,\pi_{T})\), where \(p_{t}=\pi_{t}(\{(x^{*}_{t^{\prime}},d_{t^{\prime}},\theta_{t^{\prime}})\}_{t^{\prime}=1}^{t-1},d_{t},\theta_{t})\) and \(x^{*}_{t}\) represents the sum of the optimal production quantities corresponding to the solution of Problem (1) for each supplier at period \(t\). We then evaluate the performance of this class of pricing policies on the three regret metrics introduced in Section 3.2. Note that we can naturally extend these three metrics to the augmented setting with contexts by plugging in \(c_{it}(\cdot)=c_{i}(\cdot;\phi_{i},\theta_{it})\); for completeness, we present the corresponding definitions explicitly in Appendix D.

### 6.3 Algorithm for Time-Varying Costs with Contexts

We now present an algorithm that simultaneously achieves sub-linear regret for the unmet demand, payment regret, and cost regret metrics in the augmented problem setting introduced in Section 6.2. Our algorithmic approach is inspired by recent ideas in the contextual bandits literature (e.g., [57, 58]) and involves two building blocks. First, we seek to learn how to associate the arriving contexts with the relevant properties of suppliers' cost functions. Next, based on the information inferred from the arriving contexts, our algorithm offers prices to suppliers in the next period. We note that the first step of our approach fundamentally differs from the fixed cost setting, as time-varying supplier costs, unlike time-varying demands, are not observed and are thus unavailable to the operator making pricing decisions. Therefore, the first step of our approach is crucial in learning a descriptive model of the suppliers' response to various contexts and prices.

The task of learning to associate the arriving contexts with the relevant properties of the cost functions is accomplished by an _online regression oracle_. In particular, an online regression oracle performs real-valued online regression and achieves a prediction error guarantee, with a bound denoted \(\textsc{Est}_{\text{sq}}(T)\), relative to the best function in a class \(\mathcal{F}\).

**Definition 1** (Online Regression Oracle).: Consider a function class \(\mathcal{F}:\mathcal{A}\rightarrow\mathcal{B}\). At each time \(t\), the online regression oracle receives an input \(a_{t}\) and computes an estimate \(\hat{b}_{t}=\hat{f}_{t}(a_{t})\), where \(\hat{f}_{t}\) depends on the past history

\[\mathcal{H}_{t-1}=(a_{1},b_{1}),\dots,(a_{t-1},b_{t-1}).\]

Then, the oracle receives the true output \(b_{t}\). The predictors \(\hat{f}_{t}\) of the oracle are almost as accurate as any function in \(\mathcal{F}\) in the sense that

\[\sum_{t=1}^{T}(\hat{b}_{t}-b_{t})^{2}-\inf_{f\in\mathcal{F}}\sum_{t=1}^{T}(f(a_{t})-b_{t})^{2}\leq\textsc{Est}_{\text{sq}}(T).\]

The prediction error \(\textsc{Est}_{\text{sq}}(T)\) of the online regression oracle scales with the "size" of the class \(\mathcal{F}\) in a statistical sense. As an example, if the function class \(\mathcal{F}\) is finite, then the exponential weights update algorithm achieves \(\textsc{Est}_{\text{sq}}(T)\leq\log|\mathcal{F}|\) (see, e.g., [59]).
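A minimal sketch of such an oracle for a finite class (ours, under the assumption that predictions and outputs lie in \([0,1]\)): exponentially weighted averaging over \(\mathcal{F}\), which attains the \(O(\log|\mathcal{F}|)\) bound for the squared loss.

```python
import numpy as np

class ExpWeightsOracle:
    def __init__(self, funcs, eta=0.5):
        self.funcs = funcs                   # candidate functions f: a -> [0,1]
        self.log_w = np.zeros(len(funcs))    # log-weights, uniform prior
        self.eta = eta

    def predict(self, a):
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return float(w @ np.array([f(a) for f in self.funcs]))

    def update(self, a, b):                  # observe the true output b
        losses = np.array([(f(a) - b) ** 2 for f in self.funcs])
        self.log_w -= self.eta * losses

# Hypothetical usage: F contains linear supply maps x = s * p for a few slopes.
oracle = ExpWeightsOracle([lambda p, s=s: min(1.0, s * p) for s in (0.5, 1.0, 2.0)])
for p, x in [(0.3, 0.6), (0.4, 0.8), (0.2, 0.4)]:   # data generated by s = 2
    _ = oracle.predict(p)
    oracle.update(p, x)
print("estimate at p = 0.25:", oracle.predict(0.25))  # shifts toward 0.5 with data
```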
As an example, if the function class \(\mathcal{F}\) is finite, then the exponential weights update algorithm achieves \(\textsc{Est}_{\text{sq}}(T)\leq\log|\mathcal{F}|\) (see e.g. [59]). Therefore, the function class \(\mathcal{F}\) should be rich enough to capture the map between contexts and the variation in cost functions, and also not too large, so that the estimation error remains small. To construct a function class \(\mathcal{F}\) appropriate for our problem setting, we first note that the market operator cannot observe the supplier's costs but can instead use the oracle to regress on the supplier's production \(x^{*}(\cdot;\theta_{t})\), which is directly observable. Note that by the first-order optimality condition on Problem (1), the production and price are related as follows: \[c^{\prime}_{t}(x^{*}(p;\theta_{t});\theta_{t})=p\implies x^{*}(p;\theta_{t})=(c^{\prime}_{t})^{-1}(p;\theta_{t}).\] Note that when the cost functions are strongly convex (i.e., \(c^{\prime}_{t}\) is invertible), the production level \(x^{*}(\cdot;\theta_{t})\) is well-defined as a function of the price \(p_{t}\) and context \(\theta_{t}\). Thus, we define the function class \(\mathcal{F}\) as the possible mappings from the price-context tuple \((p,\theta)\) to the production \(x^{*}\), i.e., the oracle tries to determine the amount of the supplier's production given a price \(p_{t}\) and context \(\theta_{t}\). With our choice of the online regression oracle, we now present our algorithm for the setting of time-varying costs with contexts based on the inverse gap weighting method introduced in [57]. To present our algorithmic approach, we restrict the algorithm's choices to a finite set of \(K\) prices that are uniformly spaced on the interval \([0,1]\), where the performance of our algorithm will depend on the choice of \(K\) (see Theorem 5). Given the oracle's output \(\hat{f}_{t}\) that estimates the production quantity \(x^{*}(p_{t};\theta_{t})\), the greedy choice at each period \(t\) is to match the requested demand \(d_{t}\) as closely as possible, i.e. choose a price \(\hat{p}_{t}\) such that the quantity \(|\hat{f}_{t}(\hat{p}_{t};\theta_{t})-d_{t}|\) is minimized. To balance exploration and exploitation, Algorithm 3 instead samples each price \(p_{t}\) from the set of \(K\) discrete prices according to a probability distribution \(\Delta_{t}\). In effect, Algorithm 3 achieves good exploration by choosing any of the \(K\) prices with some positive probability, which ensures that the online regression oracle has access to a wide-ranging history \(\mathcal{H}_{t-1}\), so it can achieve a low prediction error even when the market operator receives a new context or demand. Furthermore, to minimize the penalty for exploration, the probability distribution \(\Delta_{t}\) is chosen such that it assigns the highest probability to the greedy choice \(\hat{p}_{t}\) under the current oracle estimate \(\hat{f}_{t}\), while assigning a probability to every other price that is roughly inversely proportional to the gap between its unmet or excess demand and that of the greedy choice \(\hat{p}_{t}\). Then, for each period \(t\), given this choice of \(\Delta_{t}\), a price \(p_{t}\) is sampled from this distribution, following which suppliers produce an optimal quantity \(x^{*}(p_{t};\theta_{t})\) of the commodity as given by the solution of Problem (1).
Finally, the oracle is updated with the new context \(\theta_{t}\), customer demand \(d_{t}\), and optimal supplier production to generate a new estimator \(\hat{f}_{t+1}\) for the next period. This process is presented formally in Algorithm 3.

```
Input : Online regression oracle \(\mathcal{O}\) with input pairs \((\theta_{t},p_{t})\) and output \(x_{t}\); uniform \(K\)-cover of possible prices \(0=p_{1}<p_{2}<\dots<p_{K}=1\); exploration parameter \(\gamma>0\)
for \(t=1,\dots,T\) do
    Query the oracle \(\mathcal{O}\) for an estimator \(\hat{f}_{t}\);
    Receive context \(\theta_{t}\) and demand \(d_{t}\);
    Sample price \(p_{t}\) from the probability distribution
        \[\Delta_{t}(p_{i})=\frac{1}{\lambda+2\gamma\left(|\hat{f}_{t}(p_{i};\theta_{t})-d_{t}|-|\hat{f}_{t}(\hat{p}_{t};\theta_{t})-d_{t}|\right)},\]
    with \(\hat{p}_{t}=\arg\min_{p\in\{p_{1},\dots,p_{K}\}}|\hat{f}_{t}(p;\theta_{t})-d_{t}|\) and \(\lambda\in(0,K)\) as the normalization constant;
    Commit \(p_{t}\) and observe the production \(x_{t}=x^{*}(p_{t};\theta_{t})\) corresponding to the solution of Problem (1) given the price \(p_{t}\);
    Update the oracle \(\mathcal{O}\) with \(((\theta_{t},p_{t}),x_{t})\);
end for
```
**Algorithm 3** Online Equilibrium Pricing for Time-Varying Costs

We now present the main result of this section, which establishes that for an appropriate choice of the discretization \(K\) of the price set, Algorithm 3 can achieve sub-linear regret on all three regret metrics, as is elucidated through the following theorem. We highlight that this theorem holds for any sequence of contexts \(\theta_{t}\), which means that \(\theta_{t}\) can be derived from some physical dynamics, drawn from a probability distribution, or even chosen adversarially. **Theorem 5** (Informal).: _With high probability, for any sequence of contexts \(\theta_{t}\) and customer demands \(d_{t}\) over \(T\) periods, the unmet demand, payment regret, and cost regret satisfy:_ \[\mathbb{E}_{p_{t}\sim\Delta_{t},t\in[T]}[U_{T}(p_{1},\dots,p_{T})]\leq O\left(\sqrt{KT\cdot\textsc{Est}_{\text{sq}}(T)}+\frac{T}{K}\right),\] _and similarly for \(\mathbb{E}_{p_{t}\sim\Delta_{t},t\in[T]}[P_{T}(p_{1},\dots,p_{T})]\) and \(\mathbb{E}_{p_{t}\sim\Delta_{t},t\in[T]}[C_{T}(p_{1},\dots,p_{T})]\), where \(K\) is the number of prices in the uniformly discretized price set._ We note that because the pricing policy in Algorithm 3 is probabilistic, we present the regret bounds in expectation with respect to the distributions \(\Delta_{t}\). For the simplicity of exposition, we only present Theorem 5 as an informal statement and defer the rigorous theorem statement and its analysis, which involve introducing additional notation, e.g., quantifying the high probability bound, to Appendix G. Furthermore, we present a sketch of the main ideas used to analyze Algorithm 3 in Section 6.4. **Remark 1**.: Recall that, when the function class \(\mathcal{F}\) is finite, e.g. there are only finitely many possible cost functions, the exponential weights algorithm can achieve \(\textsc{Est}_{\text{sq}}(T)\leq\log|\mathcal{F}|\). Therefore, picking \(K=\sqrt[3]{T/\log|\mathcal{F}|}\), which balances the two terms \(\sqrt{KT\log|\mathcal{F}|}\) and \(T/K\) of the bound, achieves a sublinear regret bound of \(O(T^{2/3}\sqrt[3]{\log|\mathcal{F}|})\).
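A minimal sketch of the per-period sampling rule of Algorithm 3 is given below; the bisection search used to find the normalization constant \(\lambda\) and all variable names are our own illustrative choices, and the oracle can be any online regression oracle (e.g., the exponential-weights sketch above).

```python
import numpy as np

def igw_distribution(fhat, theta, d, prices, gamma):
    """Inverse-gap-weighted distribution over the K discretized prices
    (a sketch of the sampling rule in Algorithm 3).

    fhat(p, theta) is the oracle's estimate of the production x*(p; theta).
    """
    gaps = np.array([abs(fhat(p, theta) - d) for p in prices])
    gaps -= gaps.min()  # gap of the greedy price is zero

    # Choose lambda in (0, K) by bisection so the probabilities sum to one;
    # the total mass is strictly decreasing in lambda, so bisection converges.
    def total(lam):
        return np.sum(1.0 / (lam + 2.0 * gamma * gaps))

    lo, hi = 1e-12, float(len(prices))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > 1.0 else (lo, mid)
    probs = 1.0 / (0.5 * (lo + hi) + 2.0 * gamma * gaps)
    return probs / probs.sum()  # absorb residual numerical error

# One round of Algorithm 3 (x_star stands in for the suppliers' response):
# fhat = lambda p, th: oracle.predict((th, p))
# p_t = np.random.choice(prices, p=igw_distribution(fhat, theta_t, d_t, prices, gamma))
# oracle.update((theta_t, p_t), x_star(p_t, theta_t))
```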
Additionally, in Appendix G, we discuss several other function classes \(\mathcal{F}\), including parametric classes, bounded Lipschitz functions, and neural networks with bounded spectral norm, for which Algorithm 3 achieves a sub-linear regret guarantee by leveraging results from the statistical learning literature (e.g. see [60, 61]). Note that the regret bound obtained for finite function classes carries an additional factor of \(O(T^{1/6})\) over the guarantee for the setting with time-varying demands and fixed costs (Theorem 3). The additional loss in the regret for the setting of time-varying costs is because Algorithm 3 utilizes an online regression oracle to associate the contexts with properties of the supplier's varying cost functions that are unknown to the market operator. Note that by performing regression, Algorithm 3 learns all of the features of the supplier's optimal production as a function of the contexts and prices. In contrast, Algorithm 2 only learns the equilibrium prices for a discrete set of demands and is not concerned with accurately predicting the supplier's production for other demands. Thus, the additional factor of \(O(T^{1/6})\) in the regret is attributable to the fact that Algorithm 3 attempts to solve a more complex statistical problem than in the fixed cost setting. Nevertheless, the guarantee in Theorem 5 still compares favorably to the best known algorithms in related problem settings, e.g., no online algorithm can achieve a regret better than \(O(T^{2/3})\) for a Lipschitz bandit with contexts in \(\mathbb{R}\) (see e.g. [43, 57]). ### Sketch of Main Ideas to Analyze Algorithm 3 In this section, we shall describe the key ideas behind the analysis of Algorithm 3. We first establish the necessary Lipschitzness relations between the problem parameters, which enable us to simultaneously optimize over multiple objectives, as opposed to standard contextual bandit algorithms that typically optimize a single regret metric. We then define a notion of "proxy" regret and show that Algorithm 3 achieves the desired regret bound in the statement of Theorem 5. Finally, Theorem 5 follows as the "proxy" regret serves as an upper bound on the unmet demand, payment regret, and cost regret due to the derived Lipschitz relations. Our first lemma establishes the Lipschitzness between the optimal prices corresponding to the solution of Problem (2a)-(2b) and customer demands, utilizing techniques from parametric optimization. **Lemma 6** (Lipschitzness of Prices in Demands).: _The optimal prices corresponding to the dual variables of the market clearing constraint of Problem (2a)-(2b) are Lipschitz in the demand \(d\), i.e., \(|p^{*}(d_{1})-p^{*}(d_{2})|\leq L_{1}|d_{1}-d_{2}|\) for some constant \(L_{1}>0\) for all \(d_{1},d_{2}\in[\underline{d},\overline{d}]\). Here, \(p^{*}(d)\) is the optimal price corresponding to the dual variable of the market clearing constraint of Problem (2a)-(2b) with a customer demand of \(d\)._ Next, we use Lemma 6 to show that both the payment and cost regret metrics are Lipschitz in the prices. That is, Lemma 7 establishes that small changes in the prices result in only small changes in the cost and payment regret metrics. **Lemma 7** (Lipschitzness of Regret Metrics in Prices).: _Consider an online pricing policy \(\boldsymbol{\pi}\). The payment and cost regret are upper bounded by the absolute difference between the prices \(p_{t}\) corresponding to the online pricing policy \(\boldsymbol{\pi}\) and the equilibrium prices \(p_{t}^{*}\).
Namely, there exist constants \(L_{2},L_{3}>0\) so that_ \[P_{T}(\boldsymbol{\pi})\leq L_{2}\sum_{t=1}^{T}|p_{t}-p_{t}^{*}|,\quad\text{and}\quad C_{T}(\boldsymbol{\pi})\leq L_{3}\sum_{t=1}^{T}|p_{t}-p_{t}^{*}|.\] For proofs of Lemmas 6 and 7, we refer the readers to Appendix E. By employing these two lemmas, we can upper bound the unmet demand, payment regret, and cost regret with the following proxy regret metric: \[\textsc{Reg}(T):=\sum_{t=1}^{T}\mathbb{E}_{p_{t}\sim\Delta_{t}}\left[|x^{*}(p_{t};\theta_{t})-d_{t}|\mid\mathcal{H}_{t-1}\right],\] where \(x^{*}(p_{t};\theta_{t})\) is the optimal production quantity of the suppliers and \(p_{t}\) is chosen according to the distribution \(\Delta_{t}\) as defined in Algorithm 3. Note that the choice of distribution \(\Delta_{t}\) depends on the past history \(\mathcal{H}_{t-1}=\{((p_{1},\theta_{1}),x_{1}),\ldots,((p_{t-1},\theta_{t-1}),x_{t-1})\}\). We can employ Lemmas 6 and 7 to show that the proxy regret upper bounds the unmet demand, payment regret, and cost regret metrics. First, we note that this proxy regret upper bounds the unmet demand because \[\left|x^{*}(p_{t};\theta_{t})-d_{t}\right|\geq\left(x^{*}(p_{t};\theta_{t})-d_{t}\right)_{+}.\] Next, let \(p_{t}^{*}\) be the equilibrium price for demand \(d_{t}\), and note that \(p_{t}\) is the equilibrium price when demand is equal to \(x^{*}(p_{t};\theta_{t})\). So, by Lemma 6, it follows that \[\left|p_{t}-p_{t}^{*}\right|\leq L_{1}\left|x^{*}(p_{t};\theta_{t})-d_{t}\right|,\] where \(L_{1}\) is the Lipschitz constant from Lemma 6. And by applying Lemma 7 to the previous inequality, we conclude that \[\mathbb{E}_{p_{t}\sim\Delta_{t},t\in[T]}[P_{T}]\leq L_{2}\sum_{t=1}^{T}\mathbb{E}_{p_{t}\sim\Delta_{t}}\left|p_{t}-p_{t}^{*}\right|\leq L_{1}L_{2}\sum_{t=1}^{T}\mathbb{E}_{p_{t}\sim\Delta_{t}}\left|x^{*}(p_{t};\theta_{t})-d_{t}\right|=L_{1}L_{2}\textsc{Reg}(T).\] Using a similar line of reasoning, we can also show that the proxy regret serves as an upper bound, up to constants, for the cost regret \(\mathbb{E}_{p_{t}\sim\Delta_{t},t\in[T]}[C_{T}]\). Given the above observation that the proxy regret is an upper bound on the three regret metrics, it suffices to show that Algorithm 3 achieves a sub-linear proxy regret, as is elucidated by the following proposition. **Proposition 8** (Informal).: _With high probability, for any sequence of contexts \(\theta_{t}\) and demands \(d_{t}\), the proxy regret is bounded by_ \[\textsc{Reg}(T)\leq O\left(\sqrt{KT\cdot\textsc{Est}_{\text{sq}}(T)}+\frac{T}{K}\right).\] Note that Theorem 5 is an immediate consequence of Proposition 8. For a complete proof of Proposition 8, see Appendix G. ## 7 Conclusion and Future Work In this work, we studied the problem of setting equilibrium prices to satisfy the customer demand for a commodity in markets where the cost functions of suppliers are unknown to the market operator. Since centralized optimization approaches to compute equilibrium prices are typically not conducive in this incomplete information setting, we studied the problem of learning equilibrium prices online under several informational settings regarding the time-varying nature of the customer demands and supplier cost functions. We first considered the setting when suppliers' cost functions are fixed over the \(T\) periods and developed algorithms with regret guarantees of \(O(\log\log T)\) (and \(O(\sqrt{T}\log\log T)\)) when the customer demand is fixed (or can vary across the periods in a continuous interval) for strongly convex cost functions.
Next, when suppliers' cost functions are time-varying, we showed that no online algorithm achieves sub-linear regret on all three regret metrics when suppliers' cost functions are sampled i.i.d. from a distribution. Thus, we studied an augmented contextual bandit setting where the operator has access to hints (contexts) on how the cost functions change over time, and developed an algorithm that achieves sub-linear regret on all three regret metrics in this setting. There are several directions for future research. First, we note that the discretization in Algorithms 2 and 3 results in memory requirements that depend on the number of periods \(T\). Thus, it would be interesting to investigate the design of algorithms whose memory requirements do not scale with \(T\) but that still achieve similar performance guarantees to Algorithms 2 and 3 in their respective settings. Next, the unmet demand metric adopted in this work involved satisfying the customer demand at each period without the possibility of the rollover of excess supply of the commodity to subsequent periods. Since there is often storage capacity for excess supply in many application settings, it would be worthwhile to study the design of algorithms under a more relaxed unmet demand notion with the possibility of using excess supply at earlier periods to satisfy customer demand at subsequent periods. Furthermore, there is scope to generalize the model to the case in which suppliers' cost functions are non-convex, in which case the operator may need different pricing strategies for each supplier [5].
2304.04836
Generative Knowledge Selection for Knowledge-Grounded Dialogues
Knowledge selection is the key in knowledge-grounded dialogues (KGD), which aims to select an appropriate knowledge snippet to be used in the utterance based on dialogue history. Previous studies mainly employ the classification approach to classify each candidate snippet as "relevant" or "irrelevant" independently. However, such approaches neglect the interactions between snippets, leading to difficulties in inferring the meaning of snippets. Moreover, they lack modeling of the discourse structure of dialogue-knowledge interactions. We propose a simple yet effective generative approach for knowledge selection, called GenKS. GenKS learns to select snippets by generating their identifiers with a sequence-to-sequence model. GenKS therefore captures intra-knowledge interaction inherently through attention mechanisms. Meanwhile, we devise a hyperlink mechanism to model the dialogue-knowledge interactions explicitly. We conduct experiments on three benchmark datasets, and verify GenKS achieves the best results on both knowledge selection and response generation.
Weiwei Sun, Pengjie Ren, Zhaochun Ren
2023-04-10T19:49:55Z
http://arxiv.org/abs/2304.04836v1
# Generative Knowledge Selection for Knowledge-Grounded Dialogues ###### Abstract Knowledge selection is the key in knowledge-grounded dialogues (KGD), which aims to select an appropriate knowledge snippet to be used in the utterance based on dialogue history. Previous studies mainly employ the classification approach to classify each candidate snippet as "relevant" or "irrelevant" independently. However, such approaches neglect the interactions between snippets, leading to difficulties in inferring the meaning of snippets. Moreover, they lack modeling of the discourse structure of dialogue-knowledge interactions. We propose a simple yet effective generative approach for knowledge selection, called GenKS. GenKS learns to select snippets by generating their identifiers with a sequence-to-sequence model. GenKS therefore captures intra-knowledge interaction inherently through attention mechanisms. Meanwhile, we devise a _hyperlink_ mechanism to model the dialogue-knowledge interactions explicitly. We conduct experiments on three benchmark datasets, and verify GenKS achieves the best results on both knowledge selection and response generation. ## 1 Introduction To improve the informativeness of open-domain dialogue agents (Freitas et al., 2020), knowledge-grounded dialogues (KGD) are proposed to leverage external structured (Liu et al., 2019) and unstructured (Dinan et al., 2019) knowledge in dialogue responses. In KGD, it is pivotal to embed factual and conversationally appropriate knowledge in responses. Two classes of approaches are considered to embed knowledge: _end-to-end_ and _pipeline_. End-to-end models, such as FiD (Izacard and Grave, 2021), process the document and generate the response in one shot. However, they tend to misuse knowledge (Adolphs et al., 2021). Pipeline models address this problem by explicitly identifying a specific knowledge snippet to be used in the response (Adolphs et al., 2021). Typically, pipeline KGD approaches have two substeps, i.e., knowledge selection and response generation (Dinan et al., 2019; Kim et al., 2020): The former aims to select knowledge snippets from passages, and the latter generates responses based on them. Knowledge selection plays a vital role in KGD as it directly determines the content of the response (Lian et al., 2019; Meng et al., 2020). In this paper, we focus on selecting knowledge snippets for dialogue to enhance pipeline KGD models. The _Classification_ paradigm dominates knowledge selection studies. In this paradigm, each snippet is independently classified as "relevant" or "irrelevant" (Dinan et al., 2019; Zhao et al., 2020). However, these approaches ignore _knowledge interactions_, which refer to flows of information within the knowledge or between knowledge and dialogues. Figure 1: An example of knowledge-grounded dialogues. The dialogue agent selects a knowledge snippet (i.e., <7>) from passages and generates a response based on it. Intra-knowledge interactions and dialogue-knowledge interactions are denoted by arrows. As shown in Figure 1, we identify two types of knowledge interactions in KGD: **Intra-knowledge interaction**: Intra-knowledge interaction refers to the interactions between snippets. It is worth noting that the meaning of a knowledge snippet is context-dependent and can be ambiguous when taken individually. For example, the <8> snippet in Figure 1, _"This work led to their"_, has a referential element _their_, and it is difficult to identify its meaning without knowing the remaining context of the sentence.
However, given the remaining context, we can quickly infer that it refers to _Lamarr and George Antheil_. _This problem challenges existing methods when selecting knowledge on new topics._ **Dialogue-Knowledge interaction**: Previous works also neglect interactions between dialogue and knowledge. There is a discourse structure and a smooth transition of the involved knowledge in multi-turn dialogue. For example, _Lamarr_'s profession mentioned in the dialogue in Figure 1 is demonstrated in a parallel and multi-perspective manner, while some other cases follow a shallow-to-deep structure in dialogue. Some recent efforts attempt to fix these problems within the classification paradigm; for example, Li et al. (2022) build a semantic graph for passages to capture intra-knowledge interaction, and Kim et al. (2020) propose sequential knowledge selection to model the dialogue-knowledge interaction with latent variables. However, these methods are complicated, lack deep semantic interactions, and struggle to model the two types of knowledge interaction simultaneously. In this work, we propose **GenKS** (**G**enerative **K**nowledge **S**election), a simple yet effective _generative_ model that addresses these challenges. GenKS first assigns an identifier to each snippet, feeds all the snippets into the model simultaneously, and then selects snippets by generating their identifiers with a sequence-to-sequence Transformer model (e.g., BART Lewis et al. (2020)). Compared with KGD methods with the _classification_ paradigm, GenKS captures interactions between knowledge snippets through the _self-attention_ mechanism in Transformer Vaswani et al. (2017). Therefore, GenKS can resolve the ambiguity of snippets using the surrounding context and improve the understanding of knowledge. Moreover, we propose a _hyperlink_ method to capture the dialogue-knowledge interactions explicitly and effectively. Finally, we propose to jointly model knowledge selection and response generation within one generative model. We evaluate our proposed method on three public KGD datasets: Wizard of Wikipedia Dinan et al. (2019), Holl-E Moghe et al. (2018), and CMU_DoG Zhou et al. (2018). The experimental results show that GenKS significantly improves the accuracy of knowledge selection as well as the quality of response generation, establishing a new state-of-the-art on KGD benchmarks. Improvements are particularly significant on unseen topics, outperforming the BART classification model by up to 8.1% absolute. GenKS also achieves the best results as the number of dialogue turns increases, with an average improvement of 10% over the BART classification model in the last three turns. We also compare our model with recent SOTA end-to-end methods Shuster et al. (2021), and find that our model can generate responses with fewer hallucinations while having better controllability and interpretability. The effectiveness of the proposed method is also validated through human evaluation and ablative experiments. Our contributions are summarized as follows: (1) We propose GenKS, which is the first attempt at generative knowledge selection in KGD. (2) GenKS captures intra-knowledge and dialogue-knowledge interactions simultaneously. (3) We propose a hyperlink method to enhance the interactions between dialogue and knowledge. (4) Experiments verify that GenKS establishes a new state-of-the-art on KGD1.
Footnote 1: The code is available at: [https://github.com/sunweiwei/GenKS](https://github.com/sunweiwei/GenKS) ## 2 Related work **Knowledge-grounded dialogues** With the advances in large-scale language models, dialogue agents can now generate high-quality responses using parametric knowledge Thoppilan et al. (2022); Freitas et al. (2020); Bao et al. (2021). However, hallucination remains a challenge, which means that the language model tends to generate plausible-looking statements that are factually incorrect Shuster et al. (2021). To address this problem, knowledge-augmented approaches are applied in dialogue generation Lewis et al. (2020). In knowledge-grounded dialogues (KGD), the dialogue models first select a knowledge snippet from passages and then generate the responses Liu et al. (2018); Dinan et al. (2019). **Knowledge selection** As the critical step in KGD, knowledge selection has been widely studied. The existing methods mainly employ _classification_ models with dual-encoder Dinan et al. (2019); Kim et al. (2020) or cross-encoder Zhao et al. (2020) architectures. However, the classification paradigm is unable to capture the knowledge interaction in KGD Kim et al. (2020); Li et al. (2022). To address this problem, Li et al. (2022) propose a graph-based method to capture the relationship between candidate snippets, while Zhan et al. (2021) and Wu et al. (2021) employ machine reading comprehension models to extract spans from long documents. Sequential knowledge selection has also been proposed to capture the topic transition in conversations Kim et al. (2020); Zhan et al. (2021); Zheng et al. (2020); Meng et al. (2020); Yang et al. (2022). Despite their effectiveness, the existing methods have two drawbacks: (1) they use compact vectors to represent dialogue and knowledge and thus lack deep semantic interactions; (2) they are complicated and struggle to capture intra-knowledge and dialogue-knowledge interactions simultaneously. We address these drawbacks by shifting the modeling paradigm of knowledge selection to identifier generation Sun et al. (2022), and propose GenKS to capture the two types of interaction simultaneously using Transformer Vaswani et al. (2017). **Generative knowledge selection** A generative paradigm for knowledge selection is not foreign to the NLP community; for example, sequence-to-sequence models have been applied to entity retrieval Cao et al. (2021), document ranking Nogueira et al. (2020); Tay et al. (2022), multi-evidence retrieval Min et al. (2021); Yavuz et al. (2022), etc. Our proposed model GenKS differs from existing methods in the following ways: (1) we are the first to explore generative knowledge selection in KGD; (2) we consider the effectiveness of intra-knowledge interaction; (3) we design hyperlinks to capture the interaction between knowledge and dialogue. ## 3 GenKS We provide an overview of GenKS in Figure 2. The dialogue data is first serialized into a sequence. Then a sequence-to-sequence model (i.e., BART) is employed to select knowledge and produce the response by generating the target sequence autoregressively. In this section, we first formulate the task in Section 3.1. Then, we detail the serialization (Section 3.2) and optimization (Section 3.3) methods.
### Problem formulation Suppose that we have a case of knowledge-grounded dialogues \((C,\mathcal{K},r)\), where \(C=(c_{1},...,c_{|C|})\) is a dialogue context that contains \(|C|\) utterances, \(r\) is the response to \(C\), and \(\mathcal{K}=(K_{1},...,K_{|\mathcal{K}|})\) denotes \(|\mathcal{K}|\) passages that are relevant to \(C\); for each \(i\), \(K_{i}=(k_{i,1},...,k_{i,|K_{i}|})\) denotes a passage that contains \(|K_{i}|\) snippets. We define \(m=\sum_{i=1}^{|\mathcal{K}|}|K_{i}|\) as the total number of snippets in \(\mathcal{K}\). A knowledge-grounded dialogue agent is decoupled into two modules: a knowledge selection module \(P(k|C,\mathcal{K})\) that selects a snippet from \(\mathcal{K}\), and a response generation module \(P(r|C,\mathcal{K},k_{s})\), where \(k_{s}\) is the snippet selected by the knowledge selection module. ### Serialization We formulate the knowledge selection task as a procedure of sequence generation. As shown in Figure 2, the dialogue context \(C\) and knowledge candidates \(\mathcal{K}\) are mapped into a sequence and then fed into a sequence-to-sequence model. The model's output is converted back to the selected knowledge \(k\) or the response \(r\). Specifically, we first assign an identifier to each snippet in \(\mathcal{K}\), sequentially starting from <k1> to <km>. Then we convert passages \(\mathcal{K}\) into a sequence using a template that packages snippets with the corresponding identifiers and concatenates them in order; see the green block in Figure 2. Similarly, the dialogue context \(C\) is serialized by adding task prompts, i.e., task description and speaker name, as shown in the blue block in Figure 2. In multi-turn dialogues, the knowledge appearing in the dialogue history hints at the discourse structure of knowledge transition and knowledge expression. Hence we propose a _hyperlink_ method to capture the dialogue-knowledge interaction explicitly. We provide an example of the hyperlink method in Figure 2. We see that the first utterance of User1 refers to a snippet (whose identifier is <k2>) in the passage "Skateboarding". We thus add a hyperlink to the utterance. The hyperlink includes the identifier and the title of the snippet, i.e., annotating [Skateboarding]<k2> at the beginning of this utterance (as shown in the red block in Figure 2). Finally, we splice the passage and dialogue context sequences as input for a Transformer model (i.e., BART). Therefore, the model can capture the intra-knowledge and dialogue-knowledge interactions through the _self-attention_ mechanism (Vaswani et al., 2017). ### Optimization The knowledge selection model is optimized by the cross-entropy loss: \(\mathcal{L}=-\log P(k_{true}|C,\mathcal{K})\), where \(k_{true}\) denotes the label knowledge. Since \(k_{true}\) needs to be labeled manually and is not available in some scenarios (Zhou et al., 2018), we construct pseudo-labels for model training following Zhao et al. (2020) in cases where the knowledge label is absent. In particular, we calculate the F1 score (Dinan et al., 2019) between each knowledge snippet and the response. We use the snippet with the highest score as the pseudo label. Such a method is based on the intuition that human responses provide hints regarding the relevance of the snippets (Zhao et al., 2020; Li et al., 2020).
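A minimal sketch of the serialization and of the F1-based pseudo-labeling described above is given below; the template strings are our own paraphrase of the blocks in Figure 2 rather than the authors' exact prompts, and whitespace tokenization is an illustrative simplification.

```python
from collections import Counter

def serialize(dialogue, passages, hyperlinks=None):
    """Build the GenKS input sequence (a sketch, loosely following Figure 2).

    dialogue   -- list of (speaker, utterance) pairs
    passages   -- list of (title, [snippet, ...]) pairs
    hyperlinks -- optional dict: utterance index -> (title, identifier)
    """
    parts, idx = [], 1
    for title, snippets in passages:
        chunk = [f"Passage: {title}."]
        for s in snippets:                      # assign <k1>, <k2>, ... in order
            chunk.append(f"<k{idx}> {s}")
            idx += 1
        parts.append(" ".join(chunk))
    turns = []
    for i, (speaker, utt) in enumerate(dialogue):
        if hyperlinks and i in hyperlinks:      # prepend the [title]<kj> hyperlink
            title, kid = hyperlinks[i]
            utt = f"[{title}]{kid} {utt}"
        turns.append(f"{speaker}: {utt}")
    return " ".join(parts) + " Dialogue: " + " ".join(turns)

def unigram_f1(a, b):
    """Unigram F1 between two strings, used to build pseudo-labels."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    overlap = sum((ca & cb).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(ca.values()), overlap / sum(cb.values())
    return 2 * precision * recall / (precision + recall)

def pseudo_label(snippets, response):
    """Pick the snippet with the highest F1 against the human response."""
    return max(range(len(snippets)), key=lambda i: unigram_f1(snippets[i], response))
```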
Since both knowledge selection and response generation are modeled with the _generative_ paradigm, we unify the two modules with one joint generative model. In this joint model, the knowledge selection and the response generation are optimized jointly, with shared parameters. To this end, we splice the knowledge identifier \(k_{true}\) and response \(r\) into one sequence (as shown in Figure 2). Then, we optimize the sequence-to-sequence model using the cross-entropy loss on all the tokens of the target sequence. During inference, the model generates the knowledge identifier \(k_{s}\) and the response \(r\) in an autoregressive fashion. We note that the joint model allows the two tasks to mutually enhance each other and improves the model's efficiency. ## 4 Experimental setup ### Datasets We conduct experiments on Wizard of Wikipedia (WoW) (Dinan et al., 2019), Holl-E (Moghe et al., 2018), and CMU_DoG (Zhou et al., 2018). The statistics of these three datasets are shown in Table 6 in the appendix. * **WoW** is an open-domain KGD dataset using Wikipedia passages as background knowledge. The test set of WoW is split into seen and unseen versions, where the unseen test set contains 58 new topics not discussed in the training data. * **Holl-E** focuses on the movie domain. The background knowledge consists of plots, comments, and movie reviews collected from different websites. Holl-E has two versions of the test set: a single-reference test and a multi-reference test. In the multi-reference test, there are multiple human-annotated ground-truth knowledge snippets and corresponding responses for each instance. * **CMU_DoG** also focuses on the domain of movies. The workers discuss a movie in depth given the background knowledge (e.g., introduction, plots, and key scenes). ### Baselines We compare GenKS with baselines of two categories: (i) _End-to-end methods_ that generate responses directly without explicit knowledge selection, and (ii) _Pipeline methods_ that explicitly select a knowledge snippet to be used in the response. Figure 2: Overview of GenKS. The dialogue context and the knowledge are serialized and fed into a seq-to-seq model, BART. The outputs are the identifier of the selected snippet (i.e., <k5>) and the response. The end-to-end methods we consider are: * **BART**[11] that generates responses without access to the external passage and uses knowledge inside model parameters instead. * **BART FiD**[12] concatenates and encodes each candidate knowledge snippet with the dialogue separately and fuses all the encoded representations in the decoder to generate the response. * **BART RAG-DPR** is a baseline adopted by Adolphs et al. (2021), which uses DPR-retrieved passages and produces responses using RAG. * **BART FiD-RAG DPR-Poly**[13] uses DPR-Poly to retrieve passages and uses FiD-RAG to generate the response. Regarding the pipeline baselines, according to their knowledge selection modeling paradigm, we sub-categorize pipeline baselines into four groups: (1) The _Classification methods_ include: * **SKT**[14] proposes sequential knowledge selection. * **DiffKS**[15] captures the knowledge differences between adjacent turns. * **DukeNet**[16] models the knowledge shift and tracking processes with a dual learning scheme. * **KnowledGPT**[15] exploits pre-trained language models in KGD. * **MIKe**[16] distinguishes user-initiative and system-initiative knowledge selection. * **K-Mine**[17] proposes a score-and-aggregate module. * **TAKE**[14] proposes a topic-shift-aware network. (2) The _MRC methods_ include: * **CoLV**[14] proposes a collaborative latent variable model. * **DIALKI**[15] proposes an MRC-based model to extract spans from passages.
(3) The _Graph-based methods_ include: * **Graph**[11] builds a semantic graph upon candidate documents and employs a GNN model. (4) The _Knowledge generation methods_ include: * **K2R**[1] uses a RAG-based model to generate knowledge text and then generates the dialogue response based on it. ### Evaluation metrics In WoW, we choose the perplexity (**PPL**) of the ground-truth responses, unigram **F1**[14], **Knowledge-F1**[13], and the **BLEU-4**[12] score as metrics. In Holl-E, we additionally use **ROUGE-1** and **ROUGE-2** following Meng et al. (2020). In CMU_DoG, we additionally use embedding-based metrics, including **Average**, **Extreme**, and **Greedy**, following Zhao et al. (2020). Footnote 2: [https://github.com/facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI) In addition, we randomly sample 100 examples from the WoW test seen and WoW test unseen sets, respectively, and recruit three experts for human evaluation. The annotators are asked to judge the model-generated response in four ways: * **Fluency**, which measures whether the response is fluent in expression; * **Coherence**, which measures whether the response is coherent with the dialogue context; * **Relevance**, which measures whether the knowledge used in the response is relevant to the dialogue; and * **Factuality**, which measures whether the response's content is factual. In the Factuality evaluation, the experts check the content using Google. The annotators are asked to assign a score in {0, 1} (representing "nonfactual" and "factual") for factuality, and a score in {0, 1, 2} (representing "bad", "fair", and "good") for the others. ### Implementation details We implement GenKS using BART-large (with 400M parameters) [11] in HuggingFace's Transformers library. We truncate the dialogue context to 256 tokens, then truncate the knowledge so that the total length is less than 1024 tokens. During inference, the responses are decoded using greedy search. See Appendix A for more details. Typically, the number of passages in \(\mathcal{K}\) is so large that the input sequence exceeds the maximum input length of BART (i.e., 1024 tokens). To address this problem, we take advantage of a lightweight passage selector based on DistilBERT (with 66M parameters) [14], which aims to rank the passages in \(\mathcal{K}\). Specifically, we concatenate each passage with the dialogue context and encode the sequence using DistilBERT. Finally, the representation of the [CLS] token is used to estimate the relevance score of the passage through a learnable MLP classifier. The passage selector is optimized via a contrastive learning objective (Nogueira and Cho, 2019), in which the model learns to assign a higher score to positive passages than to negative passages. During inference, we keep only the top-1 passage ranked by the passage selector. The passage selector achieves Recall@1 of 75.5%, 76.5%, and 68.0% for the WoW test seen, WoW test unseen, and Holl-E, respectively. ## 5 Experimental results ### Performance on knowledge selection We evaluate the knowledge selection effectiveness of GenKS on WoW and Holl-E3. In Table 1, we compare the knowledge selection accuracy of GenKS with previous pipeline methods. Results show that GenKS outperforms the baselines and achieves the highest accuracy of knowledge selection on both datasets. Footnote 3: We are unable to evaluate the knowledge selection accuracy on CMU_DoG due to the lack of manual labeling.
We find that GenKS particularly excels at topics that do not appear in the training data, as evidenced by its performance on the WoW unseen test split. In comparison, the classification models show a noticeable decrease in accuracy on unseen topics. We also find that the baselines that incorporate intra-knowledge interaction, such as GenKS, Graph, and DIALKI, have a better grasp of knowledge on unseen topics4. Footnote 4: Note that the higher accuracy results on unseen than seen might be due to the smaller number of topics included in the unseen test set. To evaluate the performance of GenKS as the dialogue goes deeper, we compare GenKS with four classification baselines (SKT, DiffKS, KnowledGPT, and BART-CLS) over turns in Figure 3. We find that in the early stages of the conversation, both GenKS and the baseline methods achieve high accuracy. However, as the dialogue dives deeper into a topic, a significant performance decline is observed among the baseline methods. In contrast, GenKS, which explicitly captures the interaction between multi-turn dialogue and knowledge, maintains a relatively high accuracy (around 22%-23%). ### Quality of generated responses We report response generation evaluation results on WoW in Table 2. The results on Holl-E and CMU_DoG are available in Table 7 and Table 8 in the appendix. The results of baselines are cited from the original papers or re-evaluated using officially released checkpoints. Compared with previous pipeline models, GenKS achieves the best performance on almost all metrics. For example, GenKS surpasses KnowledGPT by 0.7% and 2.4% in terms of F1 on WoW seen and WoW unseen, respectively. Note that the improvements on the unseen test set are more notable than on the seen test set, which agrees with the experimental results regarding knowledge selection. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**WoW**} & \multicolumn{2}{c}{**Holl-E**} \\ \cline{2-5} & Seen & Unseen & Single & Multi \\ \hline \multicolumn{5}{l}{_Classification methods_} \\ SKT (Kim et al., 2020) & 26.8 & 18.3 & 29.2 & 39.2 \\ DukeNet (Meng et al., 2020) & 26.4 & 19.6 & 30.0 & 40.3 \\ DiffKS (Zheng et al., 2020) & 25.5 & 19.7 & 33.0 & - \\ KnowledGPT (Zhao et al., 2020) & 28.0 & 25.4 & - & - \\ MIKe (Meng et al., 2021) & 28.4 & 21.5 & 31.9 & 41.8 \\ K-Mine (Lofti et al., 2021) & 29.7 & 28.3 & 31.7 & - \\ TAKE (Yang et al., 2022) & 28.8 & 25.8 & - & - \\ \hline \multicolumn{5}{l}{_Other methods_} \\ CoLV (Zhan et al., 2021) & 30.1 & 18.9 & 32.7 & - \\ DIALKI (Wu et al., 2021) & 32.9 & 35.5 & - & - \\ Graph (Li et al., 2022) & 29.4 & 30.8 & 37.7 & 46.1 \\ \hline **GenKS** & **34.2** & **36.6** & **37.9** & **46.8** \\ \hline \multicolumn{5}{l}{_Variants for comparison_} \\ - BART classification & 29.8 & 29.7 & 34.0 & 44.0 \\ - BART classification w/ position & 30.1 & 31.2 & 34.0 & 44.0 \\ - Hierarchical classification & 30.0 & 31.4 & 33.8 & 43.7 \\ - Without passage selector & 31.4 & 32.0 & 34.5 & 44.4 \\ - Unorder knowledge snippets & 31.8 & 33.3 & 36.5 & 45.8 \\ - Without hyperlink & 33.4 & 35.4 & 36.9 & 45.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Knowledge selection accuracy on WoW (seen and unseen test set) and Holl-E (single reference and multi-reference test set). **Bold** denotes the best results with significant improvements over the previous SOTA (t-test, \(p<0.05\)). _Underline_ denotes the second-best results. Figure 3: Knowledge selection accuracy over different dialogue turns. BART-CLS represents a text-matching model with cross-encoder architecture. GenKS also achieves competitive results
Figure 3: Knowledge selection accuracy over different dialogue turns. BART-CLS represents a text-matching model with cross-encoder architecture. compared to SOTA end-to-end models. For example, GenKS performs comparably to BART FiD-RAG DPR-Poly on WoW seen and outperformed on WoW unseen. ### Ablation study about knowledge selection To analyze the effect of each component in GenKS, we designed several variants and conducted an ablation study about knowledge selection. Results are listed in Table 1, "Variants for comparison". The details of compared variants and the findings are as follows: **BART classification** We use BART to classify each candidate snippet into two classes: "relevant" or "irrelevant". The results show that BART in the classification paradigm performs worse than GenKS by a large margin. **BART classification w/ position** To understand the influence of position bias, we splice the snippet's position into the classification model's input. We find that the results are improved to a certain extent (about 1% improvement), but there is still a clear gap compared with GenKS. **Hierarchical classification** This variant first uses the passage selector model of GenKS to rank the passages and then selects the snippets in the top-ranked passage using BART classification w/ position. The results show that the passage selector does not affect the classification model's performance. **Without passage selector** When the passage selector model of GenKS is removed, the model has more probability of truncating the label knowledge, resulting in an evident decline in performance. **Unorder knowledge snippets** To disable the intra-knowledge interaction, we under the snippets so that order of the snippets is inconsistent with the original passages. This variant shows a decline in selection accuracy, especially on unseen topics, indicating that keeping the order of the snippets in the passage is necessary. **Without hyperlinks** We remove the hyperlinks in the dialogue context. About a 1% accuracy drop is seen, indicating the effectiveness of hyperlinks. ### Ablation study about response generation As shown in Table 2, we also conduct an ablation study about response generation. The details of compared variants and the findings are as follows: **With BART classification knowledge** When replacing the generated identifier with the knowledge selected by BART classification, a performance decline is witnessed -the F1 value drops by 0.7% and 1.8% on Wizard seen and unseen, sustaining the effectiveness of the knowledge selection of GenKS. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**WoW Seen**} & \multicolumn{4}{c}{**WoW Unseen**} \\ \cline{2-9} & PPL & F1 & KF1 & B4 & PPL & F1 & KF1 & B4 \\ \hline _End-to-end models_ & & & & & & & & \\ BART [14] & 14.7 & 20.9 & 17.4 & 1.7 & 18.9 & 18.8 & 15.1 & 0.9 \\ BART FiD [1] & 17.0 & 21.5 & 20.0 & 3.6 & 18.4 & 20.6 & 19.2 & 3.2 \\ BART RAG-DPR [1] & **11.5** & 22.6 & 26.1 & 3.7 & **13.1** & 21.5 & 22.7 & 3.0 \\ BART FiD-RAG DPR-Poly [1] & **11.4** & 22.1 & **29.7** & **4.1** & **13.1** & 21.1 & 27.1 & 3.8 \\ \hline _Pipeline models_ & & & & & & & & \\ DukeNet [16] & 48.3 & 19.3 & 18.5 & 2.4 & 69.4 & 17.1 & 16.5 & 1.7 \\ CoLV [22] & 39.5 & 20.3 & 18.2 & 2.8 & 54.3 & 18.5 & 17.5 & 2.1 \\ KnowledGPT [23] & 19.2 & 22.0 & 23.8 & 3.7 & 22.3 & 20.5 & 22.1 & 3.0 \\ K-Mine [11] & 13.2 & 21.8 & - & - & 16.4 & 21.1 & - & - \\ K2R RAG-DPR [1] & 18.3 & 22.0 & 27.3 & 3.7 & 22.3 & 19.9 & 23.2 & 2.8 \\ K2R BART RAG-DPR [1] & 17.9 & 21.3 & **29.2** & 3.5 & 21.1 & 19.9 & 24.3 & 2.5 \\ \hline **GenKS** & 13.1 & **22.9\({}^{*}\)** & **29.5** & **4.5\({}^{*}\)** & **13.2** & **22.7\({}^{*}\)** & **28.1\({}^{*}\)** & **4.6\({}^{*}\)** \\ \hline _Ablative variants_ & & & & & & & & \\ - With BART classification knowledge & 14.7 & 22.0 & 25.9 & 3.5 & 16.2 & 21.1 & 24.4 & 3.1 \\ - Without identifiers generation & 13.8 & 21.7 & 23.2 & 3.7 & 14.1 & 21.8 & 23.3 & 3.9 \\ - Without hyperlink & 14.2 & 22.1 & 27.2 & 3.9 & 15.5 & 22.3 & 26.9 & 4.2 \\ - With oracle knowledge & 8.9 & 38.8 & 74.2 & 13.1 & 10.5 & 38.9 & 74.5 & 12.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results on WoW seen and unseen test sets in terms of response quality. We compare against the ground-truth dialogue response in terms of perplexity (PPL), F1, Knowledge F1 (KF1), and BLEU-4 (B4). The four groups list previous end-to-end models, previous pipeline models, GenKS, and ablative variants. The best results are highlighted in **bold**, and the second-best results are highlighted with underline. * indicates significant improvements over all baselines with p-value \(<0.05\). **Without identifier generation** This variant removes the identifier generation by directly generating the response. We see notable performance drops, especially in the KF1 metric. The results indicate that explicit training and inference about knowledge selection enable the use of more appropriate knowledge in response generation. **Without hyperlinks** This variant removes hyperlinks from GenKS. It performs worse than GenKS, probably due to its lower knowledge selection accuracy. **Use the oracle knowledge** We replace the model-predicted snippet identifier with the oracle one (the knowledge used by the ground-truth response). The results (e.g., KF1=74) suggest that GenKS can effectively locate and incorporate the corresponding knowledge into the responses following the guidance of the identifier. ### Human evaluation Table 3 shows the human evaluation results. Results show that GenKS consistently outperforms baselines on all datasets. The Fleiss' kappa value is above \(0.60\), indicating substantial agreement among the annotators. GenKS outperforms KnowledGPT by about 0.02 and DukeNet by about 0.20 in terms of the response generation evaluation metrics (i.e., _Fluency_ and _Context Coherence_).
Moreover, for the _Knowledge Relevance_, the annotators agree that GenKS is capable of selecting knowledge that is more relevant to the dialogue and generating more informative responses than the baselines. The _Factuality_ results show that by explicitly identifying the knowledge snippet used in the response, GenKS can reduce the hallucination of response generation. ### Efficiency evaluation To evaluate the efficiency of GenKS, we compare it with previous end-to-end models and pipeline models. The results are presented in Figure 4, which shows that GenKS is more efficient than previous pipeline models. This may be because GenKS integrates knowledge selection and response generation in an end-to-end model, avoiding repeated encoding of the dialogue history and background knowledge. Furthermore, we find that GenKS, as a pipeline method, achieves comparable efficiency to end-to-end models like RAG, but with the added advantage of explicit knowledge selection. ### Analytical experiment **Multi-snippets selection** GenKS initially selects a single snippet following the experimental setup outlined in the baselines Dinan et al. (2019), but it can also select multiple snippets by generating multiple identifiers. To evaluate the performance of this approach, we test a variant of the GenKS model, namely GenKS-2, which selects two snippets by generating two identifiers consecutively. We compare its performance with the original GenKS on the WoW dataset. The results are listed in Table 4, group 1. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c}{**WoW Seen**} & \multicolumn{3}{c}{**WoW Unseen**} \\ \cline{2-7} & F1 & KF1 & B4 & F1 & KF1 & B4 \\ \hline GenKS & 22.9 & 29.5 & 4.5 & 22.7 & 28.1 & 4.6 \\ GenKS-2 & 22.4 & 29.3 & 4.2 & 22.2 & 27.6 & 4.2 \\ \hline GenKS (5 Snippets) & 22.3 & 27.6 & 4.2 & 21.8 & 25.5 & 4.1 \\ GenKS (3 Snippets) & 21.1 & 29.3 & 3.2 & 20.0 & 20.9 & 2.9 \\ GenKS (128 Tokens) & 21.5 & 25.6 & 3.5 & 20.7 & 22.9 & 3.4 \\ GenKS (64 Tokens) & 20.7 & 23.3 & 3.0 & 20.1 & 20.6 & 2.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Analytical experiment results on WoW. The first group compares GenKS and its variant GenKS-2, which selects two snippets instead of one. The second group includes the results of GenKS with different maximum numbers of input snippets or maximum input tokens. Figure 4: Inference time (minutes) on one GPU on the WoW unseen test set. The values of models with \({}^{*}\) are estimated based on the model size and input/output length. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**WoW Seen**} & \multicolumn{4}{c}{**WoW Unseen**} \\ \cline{2-9} & Flu. & Coh. & Rel. & Fact. & Flu. & Coh. & Rel. & Fact. \\ \hline BART & 1.82 & 1.51 & 1.45 & 0.82 & 1.76 & 1.50 & 1.47 & 0.76 \\ BART FiD & 1.88 & 1.70 & 1.55 & 0.84 & 1.85 & 1.67 & 1.53 & 0.82 \\ \hline TMN & 1.59 & 1.41 & 1.08 & 0.62 & 1.42 & 1.30 & 0.98 & 0.59 \\ DukeNet & 1.69 & 1.56 & 1.22 & 0.71 & 1.66 & 1.47 & 1.10 & 0.72 \\ KnowledGPT & 1.89 & 1.67 & 1.58 & 0.87 & 1.87 & 1.68 & 1.51 & 0.83 \\ **GenKS** & **1.90** & **1.72** & **1.69** & **0.89** & **1.91** & **1.71** & **1.67** & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 3: Human evaluation results. Flu, Coh, Rel, and Fact denote Fluency, Coherence, Relevance, and Factuality, respectively.
GenKS-2 performs slightly worse than the original GenKS, likely because the WoW dataset only uses one snippet in response annotation and therefore does not benefit from using multiple snippets Dinan et al. (2019). Nevertheless, the results suggest that the proposed generative knowledge selection approach has the ability to select multiple knowledge snippets. **Hyper-parameter analysis** We also conduct ablation experiments on the number of snippets input to the model and the maximum input tokens. The results are listed in Table 4, group 2. We find that reducing the number or length of knowledge reduces model effectiveness. ### Case study To better understand end-to-end baselines and our model, we provide an example in Table 5, which shows that GenKS appropriately changes its response prediction when provided with different knowledge snippets5. Therefore, GenKS is more controllable and interpretable than end-to-end models, which behave as black boxes. We provide more case studies in Appendix B. Footnote 5: Note that this example only aims to show the output of the model. In fact, according to [https://en.wikipedia.org/wiki/Budweiser](https://en.wikipedia.org/wiki/Budweiser), Budweiser is also a famous lager from the Czech Republic, and American Budweiser is sold and known as Bud through most of the European Union. ## 6 Conclusion In this paper, we have proposed GenKS, a simple yet effective knowledge-grounded dialogue model. GenKS is a generative model, which learns to select knowledge snippets by generating their identifiers. Benefiting from the modeling of intra-knowledge interaction and dialogue-knowledge interaction, GenKS effectively addresses the challenges of _ambiguity_ and _discourse structure_. Our experiments have shown that GenKS establishes a new state-of-the-art on three knowledge-grounded dialogue benchmarks. Notably, GenKS particularly excels at new topics and as the dialogue goes deeper. GenKS also outperforms SOTA end-to-end models. Hence, we believe GenKS reveals a new paradigm for knowledge selection in open-domain dialogue. ## Limitations The limitations of this work include the modular modeling of passage reranking, which reduces the efficiency of the approach. Besides, we only conduct human evaluation on one popular dataset, i.e., Wizard of Wikipedia. Furthermore, the effectiveness of GenKS is only verified on English datasets. Research on other languages poses a new challenge, especially for languages with limited knowledge and annotated data. In future work, we would like to explore more efficient passage reranking techniques for knowledge-grounded dialogues. We will also conduct human evaluation on more datasets. Besides, generative knowledge selection can be extended to future studies about conversational recommendation. ## Ethics statement The paper proposes a knowledge-grounded dialogue system to generate a response using external knowledge. The intended use of this system is to perform chit-chat with the user on topics such as books and movies. The system is developed using large pre-trained language models (i.e., BART), which are trained on large-scale web data known to contain biased or discriminatory content. The datasets (i.e., WoW, Holl-E, CMU_DoG) that we train on also include subjective knowledge (comments on movies) that may express the bias of the writers.
Although the system is able to reduce the hallucination of responses compared to end-to-end models, the outputs from our system may still contain non-factual information and should not be considered as advice for any critical decision-making. ## Acknowledgements This work was supported by the National Key R&D Program of China with grant No. 2020YFB1406704, the Natural Science Foundation of China (62272274, 62202271, 61902219, 61972234, 62072279, 62102234), the Natural Science Foundation of Shandong Province (ZR2021QF129), and the Key Scientific and Technological Innovation Program of Shandong Province (2019JZZY010129). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. \begin{table} \begin{tabular}{p{34.1pt} p{34.1pt}} \hline \hline **Topic** & Budweiser \\ **User** & I think Budweiser taste terrible. Have you ever had it? \\ \hline **Know 1** & Produced in various breweries around the world, Budweiser is a filtered beer available in draft and packaged forms. \\ **Res 1** & Yes, I have. It is produced in various breweries around the world! \\ \hline **Know 2** & Budweiser is an American-style pale lager produced by Anheuser-Busch, currently part of the transnational corporation Anheuser-Busch InBev. \\ **Res 2** & Yes, I have. I know that it is an American-style pale lager produced by Anheuser-Busch. \\ \hline \hline \end{tabular} \end{table} Table 5: Examples of GenKS outputs on the WoW.
2303.05557
Evaporation and information puzzle for 2D nonsingular asymptotically flat black holes
We investigate the thermodynamics and the classical and semiclassical dynamics of two-dimensional ($2\text{D}$), asymptotically flat, nonsingular dilatonic black holes. They are characterized by a de Sitter core, allowing for the smearing of the classical singularity, and by the presence of two horizons with a related extremal configuration. For concreteness, we focus on a $2\text{D}$ version of the Hayward black hole. We find a second order thermodynamic phase transition, separating large unstable black holes from stable configurations close to extremality. We first describe the black-hole evaporation process using a quasistatic approximation and we show that it ends in the extremal configuration in an infinite amount of time. We go beyond the quasistatic approximation by numerically integrating the field equations for $2\text{D}$ dilaton gravity coupled to $N$ massless scalar fields, describing the radiation. We find that the inclusion of large backreaction effects ($N \gg 1$) allows for an end-point extremal configuration after a finite evaporation time. Finally, we evaluate the entanglement entropy (EE) of the radiation in the quasistatic approximation and construct the relative Page curve. We find that the EE initially grows, reaches a maximum and then goes down towards zero, in agreement with previous results in the literature. Despite the breakdown of the semiclassical approximation prevents the description of the evaporation process near extremality, we have a clear indication that the end point of the evaporation is a regular, extremal state with vanishing EE of the radiation. This means that the nonunitary evolution, which commonly characterizes the evaporation of singular black holes, could be traced back to the presence of the singularity.
Mariano Cadoni, Mauro Oi, Andrea Pierfrancesco Sanna
2023-03-09T19:32:03Z
http://arxiv.org/abs/2303.05557v3
# Evaporation and information puzzle for 2D nonsingular asymptotically flat black holes ###### Abstract We investigate the thermodynamics and the classical and semiclassical dynamics of two-dimensional (2D), asymptotically flat, nonsingular dilatonic black holes. They are characterized by a de Sitter core, allowing for the smearing of the classical singularity, and by the presence of two horizons with a related extremal configuration. For concreteness, we focus on a 2D version of the Hayward black hole. We find a second order thermodynamic phase transition, separating large unstable black holes from stable configurations close to extremality. We first describe the black-hole evaporation process using a quasistatic approximation and we show that it ends in the extremal configuration in an infinite amount of time. We go beyond the quasistatic approximation by numerically integrating the field equations for 2D dilaton gravity coupled to \(N\) massless scalar fields, describing the radiation. We find that the inclusion of large backreaction effects (\(N\gg 1\)) allows for an end-point extremal configuration after a finite evaporation time. Finally, we evaluate the entanglement entropy (EE) of the radiation in the quasistatic approximation and construct the relative Page curve. We find that the EE initially grows, reaches a maximum and then goes down towards zero, in agreement with previous results in the literature. Although the breakdown of the semiclassical approximation prevents the description of the evaporation process near extremality, we have a clear indication that the end point of the evaporation is a regular, extremal state with vanishing EE of the radiation. This means that the nonunitary evolution, which commonly characterizes the evaporation of singular black holes, could be traced back to the presence of the singularity. ###### Contents * 1 Introduction * 2 Two-dimensional regular dilatonic black holes * 2.1 Linear dilaton solution * 2.2 Constant dilaton vacua * 2.3 General class of 2D nonsingular, asymptotically-flat black holes with a de Sitter core * 3 Two-dimensional Hayward black hole * 4 Black-hole thermodynamics * 4.1 Thermodynamic potentials and the first principle * 4.2 Thermodynamic stability and second order phase transition * 4.2.1 Specific Heat * 4.2.2 Free energy * 4.3 Including the constant dilaton vacuum * 5 Black-hole evaporation in the quasistatic approximation * 5.1 Evaporation time * 5.2 Time variation of mass and entropy * 5.3 Approaching extremality and breakdown of the semiclassical approximation * 6 Coupling to conformal matter * 6.1 Coupling to matter: shock wave solution * 7 Black hole evaporation and backreaction * 7.1 Adding counterterms and fixing the boundary conditions * 7.2 Hawking flux and apparent horizon trajectory * 7.3 Numerical results * 8 Entanglement Entropy and the Page curve * 9 Conclusions * 10 Acknowledgements * A No divergences in the ground state stress-energy tensor * B Boundary conditions for numerical integration * B.1 Boundary condition at \(x^{+}=x_{0}^{+}\) * B.2 Boundary condition at \(x^{-}\to-\infty\) ## 1 Introduction Despite the huge recent progress on the observational side, achieved through gravitational-wave detection [1, 2] and imaging [3, 4], black holes are still a source of challenges for theoretical fundamental physics. The usual black-hole solutions of general relativity (GR) harbour, shielded behind event horizons, spacetime singularities, i.e. regions where the classical and semiclassical descriptions break down [5, 6].
As discovered more than 50 years ago, black holes behave as thermodynamic systems, whose microscopic description remains, however, still mysterious [7, 8, 9, 10, 11, 12, 13, 14]. They emit thermal radiation, but the description of the information flow during the evaporation has led to the information paradox, which most embodies the apparent incompatibility between quantum mechanics and GR [15, 16, 17]. A possible solution to the information puzzle, which has been pursued in the literature, is linking it to the singularity problem [18, 19, 20, 21, 22, 23]. The presence of a spacetime singularity makes the very notion of a global quantum state for matter fields in the black-hole background ill-defined. The loss of unitarity in the evolution of quantum states could, therefore, be traced back to the ill-defined nature of the latter. The main objection to this argument is that the "unitarity crisis" shows up also for macroscopic black holes, i.e. those with masses hierarchically larger than the Planck mass \(m_{\rm p}\simeq 10^{19}\,\)GeV, that is at energy scales where the singularity cannot play any role. Also the possibility to shift the solution of the problem to the late stages of the evaporation, for instance through the formation of Planck-scale remnants, seems untenable owing to the difficulty of storing/recovering the huge amount of information at these small scales [24, 25]. The recent reformulation and proposal for a solution of the information puzzle [26, 27, 28, 29] seem to bring further arguments against a close relationship between the singularity and the information problem. This is because this novel approach is focused _only_ on reconstructing the correlations between early and late Hawking radiation and, thus, refers mainly to near-horizon physics. There is, however, an important feature of black-hole solutions, which could drastically change the debate about the relationship between the singularity problem and the information paradox. The most commonly used spacetime setup is that of a black-hole solution with a single event horizon. Black holes with two (an inner and an outer) horizons introduce a new ingredient, which drastically changes the rules of the game. First of all, these black holes typically admit a ground state (GS) represented by an extremal configuration, in which the inner and outer horizons merge in a single one. Moreover, the radius of the extremal black hole could be hierarchically larger than the Planck scale [23, 30]. In the near-horizon region and in the extremal regime, the geometry factorizes as a two-dimensional (2D) anti de Sitter (AdS\({}_{2}\)) spacetime times a 2D sphere of constant radius. This opens the way to the intriguing possibility that the information issue could be solved in the final stages of the evaporation process using properties of AdS\({}_{2}\) quantum gravity, e.g., by reconstructing correlations between the two disconnected parts of AdS\({}_{2}\) spacetime [31] or by the topological properties of the fragmented GS [32]. Moreover, there is some evidence that, for black holes with two horizons and without a central singularity [33, 34, 35], the evaporation process could be unitary. Specifically, the inner Cauchy horizon could act as a trapping region for high energy modes, which could be responsible for the release of information at late times, when the two horizons are about to merge [36, 37].
The most natural candidates for testing these ideas are four-dimensional (4D) nonsingular black holes with a de Sitter (dS) core [23, 38, 39, 40, 41, 42, 43, 44]. They appear as static solutions of Einstein's equations sourced by an anisotropic fluid. The corresponding spacetime is asymptotically flat (AF) and at great distances is indistinguishable from the Schwarzschild solution, whereas the singularity at the origin of the radial coordinate is regularized by the inner dS behavior. The latter also produces an additional hair \(\ell\), which could have interesting observational signatures in the geodesic motion of massless and massive particles, the quasinormal mode spectrum and gravitational waves (see, e.g., Refs. [23, 30, 45, 46, 47, 48] and references therein). Another consequence of the dS core is the presence, depending on the value of \(\ell\), of two horizons and an extremal solution. From the thermodynamic point of view, these models are characterized by a second order phase transition: the spectrum has a branch of large unstable configurations and a stable branch of near-extremal solutions. There are two main obstructions that prevent the direct use of such 4D models to address the information paradox. Firstly, we do not have a microscopic model describing the sources of the solutions. We can just give a coarse-grained description in terms of an anisotropic fluid, with equation of state \(p=-\rho\), and a given profile for the energy density \(\rho\). Secondly, there is the usual difficulty of describing the semiclassical dynamics of quantum Hawking radiation in the 4D classical black-hole background, including backreaction effects on the geometry. In this paper, we show that both issues can be addressed by considering 2D dilaton gravity models of AF, nonsingular black holes with a dS core. As we shall show, these models can be formulated at the Lagrangian level and describe, in a simplified setting, the \(S\)-wave sector (radial modes) of their 4D nonsingular counterparts. This will allow us to retain the qualitative features of the higher-dimensional models while keeping their dynamics under control. For concreteness, our investigations will be focused on a particular, but quite relevant, case, namely the 2D Hayward black hole. We will be able to capture the main thermodynamic features of 4D regular models and, at the same time, to describe their evaporation process and to solve the classical and semiclassical dynamics, including the backreaction of Hawking radiation on the geometry. Having the latter under control will allow us to answer some important questions regarding the end point of the evaporation process, the time evolution of the entanglement entropy and the shape of the related Page curve [16, 49]. The main limitation of this approach is obviously represented by the limit of validity of the semiclassical approximation. Nonetheless, our results, together with some known features of AdS\({}_{2}\) quantum gravity, will allow us to have clear indications about the fate of information during the evaporation of nonsingular black holes with a dS core. The structure of this paper is as follows. In Section 2, we review some general properties of 2D dilaton gravity models and we present our class of 2D, nonsingular, AF solutions with a dS core. In Section 3, we introduce the prototype model we will use throughout our paper, namely the 2D Hayward black hole. Section 4 is devoted to the investigation of the thermodynamic properties of our 2D models.
In Section 5, we discuss black-hole evaporation using a quasistatic approximation. The coupling with conformal matter, in the form of \(N\) massless scalars, is introduced in Section 6. We consider, in particular, classical solutions corresponding to a shock wave. In Section 7, we discuss the evaporation process by quantizing matter in the classical gravitational background and by including backreaction effects. The semiclassical dynamics cannot be solved analytically, so we resort to numerical integration. The entanglement entropy of the Hawking radiation is computed in Section 8 and its Page curve is presented. Finally, we draw our conclusions in Section 9. Some details of the calculations concerning the absence of divergences for the stress-energy tensor in the GS and the boundary conditions used for the numerical integration of the field equations are presented in Appendix A and Appendix B, respectively. ## 2 Two-dimensional regular dilatonic black holes The simplest black-hole models can be constructed in a 2D spacetime. However, the pure 2D Einstein-Hilbert action is a topological invariant and a metric theory of gravity has to be built by coupling the Ricci scalar with a scalar field \(\phi\), the dilaton. 2D dilaton gravity is generally described by the action (see Ref. [50] for a review; for a generalization, see Ref. [51]) \[\mathscr{L}=\sqrt{-g}\left[\mathscr{D}(\phi)R+\mathscr{K}(\phi)g^{\mu\nu} \partial_{\mu}\phi\partial_{\nu}\phi+\mathscr{V}(\phi)\right]\,, \tag{1}\] where \(\mathscr{D}\), \(\mathscr{K}\) and \(\mathscr{V}\) are functions of the dilaton, representing, respectively, the coupling with the Ricci scalar \(R\) (the dimensionless inverse Newton constant), the kinetic term of the scalar field and the potential. Using a Weyl transformation of the metric \[g_{\mu\nu}=e^{\mathscr{P}(\phi)}\tilde{g}_{\mu\nu}\,,\qquad\mathscr{P}=-\int^{ \phi}\mathrm{d}\psi\,\frac{\mathscr{K}(\psi)}{\mathscr{D}(\psi)}\,, \tag{2}\] together with a field redefinition \[\mathscr{D}(\phi)\to\phi\,, \tag{3}\] it is always possible to set \(\mathscr{K}=0\) and to recast the lagrangian into the simpler form [52, 53] \[\mathscr{L}=\sqrt{-g}\left[\phi R+\mathscr{V}(\phi)\right]\,. \tag{4}\] This choice of the conformal (Weyl) frame is typically used when dealing with asymptotically AdS black holes, while for AF configurations (which is the focus of the present paper), a conformal frame with \(\mathscr{K}\neq 0\) is generally considered more appropriate [54, 55, 56, 57, 58]. This is particularly true when 2D dilatonic black holes are used to describe the \(S\)-wave sector of higher dimensional models (but see, e.g., Refs. [59, 60, 61, 62, 63]). However, as we shall see below, the description of also AF black holes is much simpler in the conformal frame (4) than in a frame with \(\mathscr{K}\neq 0\). Moreover, the lagrangian (4) is fully characterized by the dilaton potential \(\mathscr{V}(\phi)\); this allows for a simple classification of regular black-hole models in terms of the properties of \(\mathscr{V}\). For these reasons, in the following, we will work in this frame, although this choice will introduce some difficulties concerning the physical interpretation of the parameters we use to describe black holes. The equations of motion stemming from Eq. (4), in the absence of matter fields, read \[R+\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}=0\,; \tag{5a}\] \[\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)\phi-\frac{1} {2}g_{\mu\nu}\mathscr{V}=0\,. 
\tag{5b}\] ### 2.1 Linear dilaton solution Let us now first consider static solutions of Eq. (5b). In this case, the dilaton can be used as a spacelike "radial" coordinate of the 2D spacetime, \[\phi=\lambda r, \tag{6}\] where \(\lambda\) is a constant, with dimensions of the inverse of a length, characterizing the potential \(\mathscr{V}\). This parametrization of the dilaton is particularly useful when the 2D theory is used to describe the \(S\)-wave sector of 4D black holes. In this case, the dilaton is proportional to the radius of the transverse two-sphere. Notice also that the dilaton represents the inverse of the 2D Newton constant. This means that the region \(r\ll\lambda^{-1}\) is in a strong coupling regime, whereas \(r\gg\lambda^{-1}\) is a weak-coupling region. To be consistent, both interpretations of \(\phi\) require limiting the range of variation of the radial coordinate to \(r\in[0,\infty)\). Eq. (6) allows us to write the most general static solution of Eqs. (5a) and (5b) as the linear dilaton solution (LDS) \[ds^{2}=-f(r)\,\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{f(r)}\,,\qquad f=c_{1}+\frac{1}{\lambda}\int\mathrm{d}r\,\mathscr{V} \tag{7}\] where \(c_{1}\) is a dimensionless integration constant, which can be written in terms of the covariant mass \(\mathcal{M}\) defined for a generic 2D dilaton gravity theory [64]. For a static spacetime, \(\mathcal{M}\) is the conserved charge associated with the Killing vector \(\chi^{\mu}=F_{0}\epsilon^{\mu\nu}\partial_{\nu}\phi\), generating time translations 1. \(F_{0}\) is a constant, which is fixed by the normalization of \(\chi^{\mu}\). As we will see in the following, to make contact with four dimensional models, a convenient choice2 is \(F_{0}=-1/\lambda\). Footnote 1: In this case \(\mathcal{M}\) corresponds to the ADM mass of the solution [65, 66]. Footnote 2: Note that this normalization differs only in the sign with respect to the expression of Ref. [64]. In our Weyl frame, \(\mathcal{M}\) reads \[\mathcal{M}=\frac{F_{0}}{2}\left[\int^{\phi}\mathrm{d}\psi\,\mathscr{V}-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\right]=-\frac{F_{0}}{2}c_{1}\lambda^{2}\,. \tag{8}\] Choosing the form of \(\mathscr{V}\) appropriately allows one to generate different solutions. In particular, we focus on dilaton gravity models allowing for AF nonsingular black holes. Useful information for classifying different classes of models can be obtained from the existence of solutions characterized by a constant dilaton, the so-called constant dilaton vacua (CDV). Owing to the \(r\)-dependent parametrization of the dilaton (6), these solutions cannot be obtained as particular LDSs given by Eq. (7) and must be discussed separately. ### 2.2 Constant dilaton vacua The CDV solutions of Eqs. (5) can be obtained by setting \(\phi=\text{constant}\equiv\phi_{0}\). According to Eq. (5b), these vacuum configurations must correspond to zeroes of the potential, \(\mathscr{V}(\phi_{0})=0\). On the other hand, Eq. (5a) tells us that they correspond to 2D spacetimes with constant curvature and can be classified according to the sign of \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{\phi=\phi_{0}}\). We have three possible cases: 1. If \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{\phi_{0}}<0\), \(R>0\) and we have a dS spacetime; 2. If \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{\phi_{0}}>0\), \(R<0\) and we have an AdS spacetime; 3. If \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{\phi_{0}}=0\), \(R=0\) and we have a flat, Minkowski spacetime.
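This classification is easy to carry out explicitly. As a concrete illustration, the following sympy sketch (our own, purely illustrative, with \(\lambda=\ell=1\) chosen for simplicity) locates the zeros of the Hayward-type potential that will be introduced in Eq. (17) below and reads off the character of each CDV from the sign of \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi\):

```python
import sympy as sp

phi = sp.symbols('phi')
lam, ell = 1, 1   # illustrative choices

# Hayward-type dilaton potential, Eq. (17)
V = lam**2 * (phi**4 - 2*ell**3*lam**3*phi) / (phi**3 + lam**3*ell**3)**2
dV = sp.diff(V, phi)

# Real, nonnegative zeros of the potential: candidate CDVs
for phi0 in sp.solve(sp.Eq(V, 0), phi):
    if not (phi0.is_real and phi0 >= 0):
        continue
    slope = sp.simplify(dV.subs(phi, phi0))
    kind = 'dS_2' if slope < 0 else ('AdS_2' if slope > 0 else 'flat')
    print(f'phi_0 = {phi0}: dV/dphi = {slope} -> {kind} vacuum')

# The flat CDV sits formally at phi -> oo, where V and dV/dphi both vanish
print(sp.limit(V, phi, sp.oo), sp.limit(dV, phi, sp.oo))
```

The two finite roots reproduce the dS\({}_{2}\) vacuum at \(\phi=0\) and the AdS\({}_{2}\) vacuum at \(\phi_{0}=\sqrt[3]{2}\,\lambda\ell\) discussed in Section 3 below.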
Notice that the condition for having a flat CDV is rather strong. It requires \(\phi_{0}\) to be both a zero and an extremum of \(\mathscr{V}\). Asymptotic flatness, however, also implies that we always have \(\mathscr{V}(\infty)=\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{\infty}=0\). In this case, formally, we can consider \(\phi=\infty\), which corresponds to a decoupled configuration (the 2D Newton constant vanishes), as a flat CDV. In the first case above, instead, we can define \[\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}\bigg{|}_{\phi_{0}}\equiv-\frac{1}{L_{\text{dS}}^{2}}\,. \tag{9}\] From this, using Eq. (5a), we have \[f(r)=1-\frac{r^{2}}{L_{\text{dS}}^{2}}\,, \tag{10}\] which describes two-dimensional dS (dS\({}_{2}\)) spacetime, with an associated dS length \(L_{\text{dS}}\). In the second case, we define instead \[\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}\bigg{|}_{\phi_{0}}\equiv\frac{1}{L_{\text{AdS}}^{2}}\,. \tag{11}\] Equation (5a) now yields \[f(r)=\frac{r^{2}}{L_{\text{AdS}}^{2}}\,, \tag{12}\] which describes the AdS\({}_{2}\) spacetime, with an associated AdS length \(L_{\text{AdS}}\). ### 2.3 General class of 2D nonsingular, asymptotically-flat black holes with a de Sitter core We are interested in 2D dilatonic black holes, which mimic the behavior of 4D regular black holes. Therefore we have to choose the form of the potential \(\mathscr{V}\) such that: (a) the spacetime has one or at most two horizons; (b) the spacetime curvature remains everywhere finite, in particular the usual 4D curvature singularity at \(r=0\) is removed; (c) the spacetime is AF, with a Schwarzschild subleading behavior at \(r\to\infty\), i.e., \(f\sim 1-c/r\), with \(c\) a constant. Condition (a) requires \(f(r)\) to have at least one zero and to stay positive outside the (outer) horizon, i.e., \[\int_{\phi_{\text{H}}}^{\phi}\mathrm{d}\psi\,\mathscr{V}(\psi)\geq 0\,. \tag{13}\] Condition (b) can be reformulated as a condition on the first derivative of \(\mathscr{V}\). In two dimensions, the Ricci scalar is the only curvature invariant. Therefore, bounding it is sufficient to generate regular spacetime metrics. According to Eq. (5a), this translates into requiring regularity of \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi\). Regularity at \(r=0\) can be achieved in different ways. The simplest and most physical one, which has also been used for 4D models [38, 39, 40, 41, 42, 43, 33, 23], is to impose \(\mathrm{d}\mathscr{V}/\mathrm{d}\phi|_{0}\) to be finite and \[\mathscr{V}(0)=0,\quad\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}\bigg{|}_{0}<0\,. \tag{14}\] According to the general discussion of Section 2.2, this implies that our model must allow for a \(\mathrm{dS}_{2}\) CDV at \(\phi=0\), given by Eq. (10), which will therefore describe the inner core of our black-hole solutions. Using Eq. (10), one can easily find that, in the \(\phi\sim 0\) region, the potential takes the form \[\mathscr{V}\sim-2\frac{\phi}{\lambda\hat{L}^{2}}\,. \tag{15}\] Condition (c), i.e., asymptotic flatness and a Schwarzschild subleading behavior, can be implemented by fixing the asymptotic behavior for \(\phi\to\infty\) \[\mathscr{V}\sim\frac{\lambda^{2}}{\phi^{2}}\,. \tag{16}\] Equation (14) implies the dilaton potential to be zero at \(\phi=0\), and to become negative and decrease near \(\phi\sim 0\). However, it has to grow again, cross the \(\phi\)-axis at a finite value \(\phi=\phi_{1}\), and develop at least one minimum and one maximum to guarantee the positive falloff of \(\mathscr{V}\) at asymptotic infinity, implied by Eq.
(16). The model, therefore, must allow for three different CDV solutions at \(\phi=0\), \(\phi=\phi_{1}\) and \(\phi=\infty\), describing, respectively, dS\({}_{2}\), AdS\({}_{2}\) and flat spacetimes. In principle, the potential could show any number of oscillations, but for simplicity in the following we restrict ourselves to potentials with a single maximum and a single minimum. The qualitative behavior of the general form of our potential \(\mathscr{V}\) is shown in Fig. 1. Quantitatively, the potential will depend on some dimensional parameters. The most natural, minimal choice is a potential depending on two parameters. Since \(\phi\) is dimensionless, the parameter \(\lambda\), introduced above, is needed to give the right dimensions to \(\mathscr{V}\). At least a second parameter, however, which we will call \(\ell\), is needed if we want to express the CDV \(\phi_{0}\) in terms of parameters of the model. The values of such parameters will impact the behavior of the metric function. Indeed, since \(\mathscr{V}=\lambda\,\mathrm{d}f/\mathrm{d}r\) from Eq. (7), we see that the presence of two zeros for \(\mathscr{V}\) implies the existence of a minimum for \(f(r)\). Depending on the value of \(\ell\), this minimum can be above, below or exactly at the \(r\)-axis, producing horizonless, two-horizon or extremal configurations, respectively. The parameter \(\lambda\), instead, does not affect the presence of extrema in \(f(r)\), nor their location in the radial direction.

Figure 1: Qualitative behavior of the potential, characterizing the broad class of the regular models satisfying conditions (a), (b), (c). We restricted ourselves to the case of a single maximum and a single minimum. The horizontal and vertical lines correspond to \(\mathscr{V}=0\) and \(\phi=0\), respectively.

Notice that the AdS\({}_{2}\) CDV describes the near-horizon behavior of the extremal black hole [67, 68]. One can easily construct dilatonic potentials \(\mathscr{V}(\phi)\) behaving as in Fig. 1. Basically, for every spherically-symmetric, regular 4D black hole, characterized by a single metric function \(f\), one can easily construct the corresponding 2D dilaton gravity theory by solving Eq. (7), determining in this way the form of \(\mathscr{V}\). For instance, notable models are those which can be obtained from the Hayward black hole [33], the Gaussian-core black hole [40], the Fan-Wang model [43] or the Bardeen solution [38]. For the sake of concreteness, in the following we will focus on a 2D dilaton gravity model reproducing the Hayward black hole. However, all the considerations of the next sections can be extended to the general class of models described in this section. ## 3 Two-dimensional Hayward black hole One of the simplest cases of potentials behaving as shown in Fig. 1 is given by \[\mathscr{V}(\phi)=\lambda^{2}\frac{\phi^{4}-2\ell^{3}\lambda^{3}\phi}{(\phi^{3}+\lambda^{3}\ell^{3})^{2}}\,, \tag{17}\] where \(\ell\) is a parameter with dimensions of length. The potential has a zero at \(\phi=0\), which gives the dS CDV with a related dS length (9), given in this case by \(L^{2}_{\rm dS}=\lambda\,\ell^{3}\), and goes to zero for \(\phi\to\infty\). The other zero is at \[\left(\frac{\phi}{\lambda}\right)^{3}=2\ell^{3}\quad\Rightarrow\quad\phi_{0}=\sqrt[3]{2}\,\lambda\,\ell\,, \tag{18}\] which gives the AdS\({}_{2}\) CDV and, as we shall see below, describes extremal black holes in the near-horizon regime. The associated AdS length (see Eq.
(11)) is \(L^{2}_{\rm AdS}=3\lambda\,\ell^{3}\). With the potential (17), solving Eq. (7) yields \[f=\frac{2\mathcal{M}}{\lambda}-\frac{1}{\lambda}\frac{r^{2}}{r^{3}+\ell^{3}}\,, \tag{19}\] which interpolates between the Schwarzschild spacetime at great distances and the dS one at \(r\sim 0\), modulo a rescaling of the coordinates \(t\) and \(r\) by the constant quantity \(2\mathcal{M}/\lambda\). This peculiar behavior, in which the mass term in the line element dominates at great distances, was analyzed in 2D very recently in Ref. [69] and termed "mass-dominated" dilaton gravity. The 4D Hayward black hole [33] is described by the metric element \(\mathrm{d}s_{4}^{2}=-f_{\rm H}(r)\,\mathrm{d}t^{2}+f_{\rm H}^{-1}(r)\,\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega_{2}\), with the metric function given by \[f_{\rm H}(r)=1-\frac{2Gmr^{2}}{r^{3}+\ell^{3}}\,, \tag{20}\] where \(m\) is the 4D ADM mass. One can easily check that the (constant) Weyl rescaling of the 2D metric, together with a rescaling of the time coordinate \[g_{\mu\nu}\to\frac{\lambda}{2\mathcal{M}}g_{\mu\nu},\quad t\to\frac{\lambda}{2\mathcal{M}}\,t \tag{21}\] brings the metric into the form \[\mathrm{d}s_{2}^{2}=-\left(1-\frac{1}{2\mathcal{M}}\frac{r^{2}}{r^{3}+\ell^{3}}\right)\mathrm{d}t^{2}+\left(1-\frac{1}{2\mathcal{M}}\frac{r^{2}}{r^{3}+\ell^{3}}\right)^{-1}\mathrm{d}r^{2}\,. \tag{22}\] This transformation leaves the 2D dilaton gravity action invariant up to a constant factor, which does not alter the equations of motion. If we now write the covariant mass of the 2D solution in terms of the mass \(m\) of the 4D black-hole solution (20) we get \[\frac{1}{2\mathcal{M}}\equiv\frac{2m}{\lambda^{2}}\,. \tag{23}\] The 4D metric element of the Hayward black hole can be simply written in terms of the 2D one (22) and the dilaton as \(\mathrm{d}s_{4}^{2}=\mathrm{d}s_{2}^{2}+(\phi/\lambda)^{2}\mathrm{d}\Omega_{2}\). The peculiar relation (23) can be seen as a consequence of both the specific conformal frame chosen here (see the discussion at the beginning of Section 2), which is particularly suited for asymptotically AdS spacetimes, and of the normalization of the Killing vector \(F_{0}\) adopted in Section 2.1. As we shall see, the minus sign in \(F_{0}\) implies that an asymptotic observer in 2D spacetime measures the energy of the system with the opposite sign with respect to the asymptotic observer in the 4D spacetime. Hence, when the 2D mass becomes bigger, the corresponding 4D mass decreases and vice versa, which is reflected in the inverse relation (23). We will further confirm this below, when studying the thermodynamic properties of the 2D model. In the remainder of the paper, we will consider the metric function \(f\) in the form (19). It has a minimum at \[r_{\mathrm{min}}=\sqrt[3]{2}\,\ell\,. \tag{24}\] If \(f(r_{\mathrm{min}})<0\), the metric has two horizons, solutions of \(f(r)=0\), while if \(f(r_{\mathrm{min}})>0\), it has no horizons. If \(f(r_{\mathrm{min}})=0\), the two horizons merge, become degenerate and the configuration becomes extremal, with an event horizon located at \(r_{\mathrm{min}}\equiv r_{\mathrm{ext}}\). Using the latter and setting \(f(r_{\mathrm{ext}})=0\) yields the critical value of \(\ell\) at extremality \[\ell_{c}=\frac{1}{3\sqrt[3]{2}\,\mathcal{M}}\,. \tag{25}\] Thus, for \(\ell<\ell_{c}\) the black hole has two horizons; for \(\ell=\ell_{c}\) the two horizons merge in a single one; whereas for \(\ell>\ell_{c}\) the spacetime has no horizons. Interestingly, the value of \(r_{\mathrm{ext}}\) in Eq.
(24) is the same at which the potential (17) changes sign (see Eq. (18)). Indeed, as is usually the case for two-horizon models [70, 23, 71], the extremal, near-horizon metric is that of AdS\({}_{2}\) spacetime. ## 4 Black-hole thermodynamics ### 4.1 Thermodynamic potentials and the first principle Our 2D black-hole solutions can be considered as thermodynamic systems, characterized by a Hawking temperature \(T_{\mathrm{H}}\), an internal energy \(E\) and an entropy \(S\). The Hawking temperature is given by the standard formula \[T_{\rm H}=\frac{f^{\prime}(r_{\rm H})}{4\pi}=\frac{\mathscr{V}(\phi_{\rm H})}{4\pi}=\frac{r_{\rm H}^{4}-2\ell^{3}r_{\rm H}}{4\pi\lambda\left(r_{\rm H}^{3}+\ell^{3}\right)^{2}}\,, \tag{26}\] where \(r_{\rm H}\) is the radius of the (outer) event horizon. The temperature becomes zero both at extremality, i.e., for \(r_{\rm H}=r_{\rm ext}=\sqrt[3]{2}\,\ell\), and for \(r_{\rm H}\to\infty\), whereas it reaches a maximum at \[r_{\rm H,\ peak}=\frac{\ell}{\sqrt[3]{2}}\sqrt[3]{7+\sqrt{45}}\,. \tag{27}\] The qualitative behavior of \(T_{\rm H}\) is plotted in Fig. 2.

Figure 2: Qualitative behavior of the temperature of the 2D black hole, according to Eq. (26).

The internal energy is usually identified with the black-hole mass. However, in our case, we have chosen a negative normalization of the Killing vector generating time translations, opposite to the usual positive one. Consistency with this normalization requires \(E=-\mathcal{M}\). Using Eq. (19), we can express \(\mathcal{M}\) as a function of the outer event horizon radius \[E=-\mathcal{M}=-\frac{1}{2}\frac{r_{\rm H}^{2}}{r_{\rm H}^{3}+\ell^{3}}\,. \tag{28}\] A negative internal energy is somehow unusual for black holes, but it is perfectly consistent with their description (and normalizations) as thermodynamic systems. Indeed, we will confirm this below by using the euclidean action approach and proving the consistency of the first law of thermodynamics. The energy \(E\), thus, is always negative and goes to its maximum value \(E=0\) as \(r_{\rm H}\to\infty\), whereas it reaches its minimum value \(E_{\rm ext}=-(3\,\sqrt[3]{2}\,\ell)^{-1}\) for the extremal black hole, whose radius is given by Eq. (24)3. Moreover, for \(r_{\rm H}\geq r_{\rm ext}\), as expected, \(E(r_{\rm H})\) is a monotonically increasing function. Footnote 3: Notice that, according to Eq. (23), \(E=0\) corresponds to the 4D Hayward black hole with an infinite mass \(m\), whereas \(E_{\rm ext}\) corresponds to the mass \(m_{\rm ext}\) of the extremal 4D black hole. Let us now calculate the entropy of the 2D Hayward black hole using the Euclidean action formalism, which allows us to compute the partition function \(\mathcal{Z}\) of the thermodynamic ensemble in terms of the Euclidean action \(\mathcal{I}\), i.e., \(\mathcal{I}=-\ln\mathcal{Z}\). By a Wick rotation of the time \(t\), the action of the lagrangian (4) becomes the euclidean bulk action \(\mathcal{I}_{\rm b}\). It has been shown that, in order to have a well-defined variational principle for "mass-dominated" dilaton gravity theories, the action must be supported by (one half) the usual Gibbons-Hawking-York (GHY) boundary term \(\mathcal{I}_{\rm GHY}\) plus an additional one, containing the normal derivative of the dilaton [69].
The action reads \[\mathcal{I}=\mathcal{I}_{\rm b}+\mathcal{I}_{\rm GHY}+\mathcal{I}_{\partial\phi}=-\frac{1}{2}\int\mathrm{d}^{2}x\,\sqrt{-g}\,(\phi R+\mathscr{V})-\frac{1}{2}\int_{r=r_{\infty}}\mathrm{d}\tau\,\sqrt{h}\,\left[\phi\,\mathcal{K}-n^{\mu}\partial_{\mu}\phi\right]\,. \tag{29}\] We enclose the system into a hypersurface at constant \(r=r_{\infty}\), where we define an induced, one-dimensional, metric \(h\), whose extrinsic curvature is described by \(\mathcal{K}_{\mu\nu}\) (\(\mathcal{K}\) is its trace). \(\tau\) is the euclidean time, which is periodic with period equal to the inverse of the temperature \(T_{\rm H}^{-1}\equiv\beta\). All quantities on the boundary will be evaluated at the cutoff \(r=r_{\infty}\) and then we will let \(r_{\infty}\) go to infinity. Moreover, one could add a purely topological, Einstein-Hilbert term to the action \(\mathcal{I}_{\rm topo}\propto\phi_{0}\int d^{2}x\,\sqrt{-g}R\), with \(\phi_{0}\) a constant, which only changes the value of the entropy by an additional constant \(S_{0}\), depending on \(\phi_{0}\). This constant value can be identified as the entropy of the extremal configuration. Let us now evaluate the boundary term on the LDS. The induced metric with euclidean signature reads \(h_{\mu\nu}=h_{00}=f\). The extrinsic curvature is defined in terms of the normal vector to the hypersurface \(n_{\mu}\) as \(\mathcal{K}_{\mu\nu}\equiv\frac{1}{2}\nabla_{\mu}n_{\nu}+\frac{1}{2}\nabla_{\nu}n_{\mu}\), where the normal vector reads, in this case, \(n_{\mu}=f^{-1/2}\,\delta_{\mu}^{r}\). Therefore, when evaluated on the solution for the dilaton, we have \(\mathcal{I}_{\rm GHY}=-\frac{\lambda}{4}\beta\,r\,f^{\prime}|_{r=r_{\infty}}\), which vanishes in the limit \(r_{\infty}\to\infty\) when \(f\) is given by Eq. (19). The remaining boundary term, instead, gives \[\mathcal{I}_{\partial\phi}=\frac{\beta}{2}\,\lambda\,f\bigg{|}_{r=r_{\infty}}=\beta\mathcal{M}\,. \tag{30}\] Usually, one has to add a counterterm to the boundary action, needed to regularize divergences arising in the limit \(r_{\infty}\to\infty\). This counterterm is written in terms of the extrinsic curvature of the boundary embedded in flat spacetime. In our case, such a term is not needed because there are no divergences. Moreover, there is no contribution from flat spacetime (\(f(r)=1\)) since Eq. (29) gives \(\mathcal{I}_{\text{flat}}=2\int dt\,dr\,\phi^{\prime\prime}\), which is zero for the LDS (7). We now evaluate the bulk action. Using Eq. (7), we have \[\mathcal{I}_{\text{b}}=-\frac{1}{2}\int\text{d}^{2}x\,\sqrt{-g}\left(-\phi f^{\prime\prime}+\mathscr{V}\right)=-\frac{\beta\lambda}{2}\left[-rf^{\prime}+2f\right]_{r_{\text{H}}}^{r_{\infty}}=-2\beta\mathcal{M}-2\pi\lambda r_{\text{H}}\,, \tag{31}\] where we used \(f(r_{\text{H}})=0\), \(r_{\infty}f^{\prime}(r_{\infty})\to 0\) and \(f(r_{\infty})\to 2\mathcal{M}/\lambda\) for \(r_{\infty}\to\infty\), and \(f^{\prime}(r_{\text{H}})=4\pi T_{\text{H}}=4\pi/\beta\). Combining Eqs. (30) and (31) yields \(\mathcal{I}=-\beta\mathcal{M}-2\pi\lambda r_{\text{H}}=-\ln\mathcal{Z}\), where \(\mathcal{Z}\) is the partition function. The internal energy and entropy, thus, read \[E=-\partial_{\beta}\ln\mathcal{Z}=-\mathcal{M}\,; \tag{32a}\] \[S=\ln\mathcal{Z}-\beta\partial_{\beta}\ln\mathcal{Z}=2\pi\lambda r_{\text{H}}=2\pi\phi(r_{\text{H}})\,. \tag{32b}\] Equation (32a) confirms Eq. (28), as expected. The black-hole entropy, instead, scales as the dilaton, i.e., as the inverse 2D Newton constant, evaluated at the horizon.
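As a quick numerical sanity check of these results (a minimal sketch of ours, not part of the derivation; \(\lambda=\ell=1\) are illustrative choices), one can verify by finite differences that Eqs. (26), (28) and (32b) satisfy the first law \(\mathrm{d}E=T_{\rm H}\,\mathrm{d}S\) along the whole branch of black-hole solutions:

```python
import numpy as np

lam, ell = 1.0, 1.0   # illustrative choices

def T_H(rH):
    # Hawking temperature, Eq. (26)
    return (rH**4 - 2*ell**3*rH) / (4*np.pi*lam*(rH**3 + ell**3)**2)

def E(rH):
    # Internal energy E = -M, Eq. (28)
    return -0.5 * rH**2 / (rH**3 + ell**3)

def S(rH):
    # Entropy, Eq. (32b), dropping the topological constant S_0
    return 2*np.pi*lam*rH

rH = np.linspace(2**(1/3)*ell + 0.01, 10.0, 2000)  # outside extremality
dE = np.gradient(E(rH), rH)
TdS = T_H(rH) * np.gradient(S(rH), rH)

# First law dE = T_H dS should hold along the whole branch
print('max |dE - T dS| =', np.abs(dE - TdS).max())
```

The residual vanishes up to finite-difference accuracy, anticipating the analytical consistency check performed below.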
Equation (32b) is the usual formula for the entropy of 2D black holes [73, 74] and represents the extension to two spacetime dimensions of the usual area law in higher dimensions. This is also quite evident when the 2D black hole is derived from the dimensional reduction of the 4D one, with the dilaton playing the role of the radius of the transverse \(S^{2}\) sphere, as is the case here. Contrary to standard 4D regular black-hole solutions [23], here the entropy naturally follows the area law. This is because our 2D solutions do not require external matter sources and, therefore, are not coupled to a stress-energy tensor, which in the 4D case describes an anisotropic fluid. In general, for regular models, the latter is characterized by a density that depends on the ADM mass of the model, which introduces extra (bulk) terms altering the area-scaling of the entropy [23, 75]. Adding the contribution \(S_{0}\) of the topological action leads to \[S=S_{0}+2\pi\lambda r_{\text{H}}\,. \tag{33}\] Because of the minus sign in Eq. (28), we need to check the consistency of our derivation with the first principle of thermodynamics. By differentiating \(E\) and \(S\), given respectively by Eqs. (28) and (32b), with respect to \(r_{\text{H}}\), and using Eq. (26) for \(T_{\text{H}}\), we can easily check that the identity \(dE=T_{\text{H}}\text{d}S\) is satisfied. The previous results allow us to compute the energy difference between two configurations, characterized by the two values \(\phi_{1}\) and \(\phi_{2}\) of the dilaton, in terms of the integral of the dilaton potential. This can be done in all generality by integrating the first law, considering Eqs. (26) and (32b) \[\Delta E_{1,2}\equiv E(\phi_{1})-E(\phi_{2})=\frac{1}{2}\int_{\phi_{2}}^{\phi_{1}}\mathscr{V}(\phi_{\text{H}})\,\text{d}\phi_{\text{H}}\,. \tag{34}\] Let us assume \(\phi_{2}<\phi_{1}\) and that \(\phi_{2}\) represents a black-hole configuration. Then, for \(\phi_{2}\), the condition (13) holds, whether or not \(\phi_{1}\) represents a black hole. Therefore, \(\Delta E_{1,2}>0\) and the black-hole energy increases monotonically, as already seen by analysing Eq. (28). The consequence of this is that the configuration retaining the least internal energy will be the one with the least dilaton, i.e., the extremal configuration. ### 4.2 Thermodynamic stability and second order phase transition Let us now investigate the thermodynamic stability of our regular black-hole solutions. This can be done by studying the specific heat and the free energy. #### 4.2.1 Specific Heat The specific heat of our solutions is given by \[C_{\rm H}=\frac{{\rm d}E}{{\rm d}T_{\rm H}}=\frac{{\rm d}E}{{\rm d}\phi_{\rm H}}\left(\frac{{\rm d}T_{\rm H}}{{\rm d}\phi_{\rm H}}\right)^{-1}=4\pi\frac{{\rm d}E}{{\rm d}\phi_{\rm H}}\left(\frac{{\rm d}\mathscr{V}}{{\rm d}\phi_{\rm H}}\right)^{-1}\,. \tag{35}\] In Section 4.1, we showed that the internal energy of black-hole configurations is always increasing with \(\phi_{\rm H}\). Therefore, the sign of \(C_{\rm H}\) is determined by the sign of \({\rm d}\mathscr{V}/{\rm d}\phi_{\rm H}\). As already discussed in Section 2, requiring the potential to satisfy the minimal requirements listed in Section 2.3 implies \(\mathscr{V}\) to be necessarily nonmonotonic. Moreover, imposing a dS-like behavior in the interior constrains the potential to have another zero at \(\phi_{\rm H}=\phi_{0}\), which further restricts the interval to \(\phi_{\rm H}\in[\phi_{0},\infty)\) where we have at least an extremum.
The potential, thus, shows a single maximum in this interval, located at \(\phi_{\rm peak}\) (see Fig. 1). Here, \({\rm d}\mathscr{V}/{\rm d}\phi_{\rm H}\) changes sign (from positive to negative) as the potential falls monotonically at infinity. Therefore, we have * For \(\phi_{0}\leq\phi_{\rm H}<\phi_{\rm peak}\), \({\rm d}\mathscr{V}/{\rm d}\phi_{\rm H}>0\) and therefore \(C_{\rm H}>0\). This corresponds to a branch of thermodynamically stable configurations; * For \(\phi_{\rm H}>\phi_{\rm peak}\), \({\rm d}\mathscr{V}/{\rm d}\phi_{\rm H}<0\) and therefore \(C_{\rm H}<0\). This is, instead, the branch of thermodynamically unstable configurations; * For \(\phi_{\rm H}=\phi_{\rm peak}\), \({\rm d}\mathscr{V}/{\rm d}\phi_{\rm H}=0\) and \(C_{\rm H}\to\infty\), which signals the onset of a second order phase transition. This general discussion also applies to the 2D Hayward black hole, described by the potential (17). Using Eqs. (26) and (28), Eq. (35) reads \[C_{\rm H}=\frac{\pi\lambda\left(\ell^{3}+r_{\rm H}^{3}\right)\left(2\ell^{3}r_{\rm H}-r_{\rm H}^{4}\right)}{\ell^{6}-7\ell^{3}r_{\rm H}^{3}+r_{\rm H}^{6}}\,, \tag{36}\] which diverges at the peak temperature (27). In terms of the event horizon radius we have * An unstable branch of large black holes (\(r_{\rm H}\gg r_{\rm ext}\)), with negative specific heat; * A stable branch of black holes close to extremality (\(r_{\rm H}\gtrsim r_{\rm ext}\)) with positive specific heat.

Figure 3: Qualitative behavior of the specific heat of the 2D black hole. The vertical dashed line corresponds to the peak of the potential, \(\phi_{\rm peak}\).

\(C_{\rm H}\) goes to zero at extremality and, near \(r_{\rm H}\sim r_{\rm ext}\), it behaves as \[C_{\rm H}\sim 2\pi\lambda(r_{\rm H}-r_{\rm ext})+{\cal O}\left[(r_{\rm H}-r_{\rm ext})^{2}\right]\,, \tag{37}\] which scales linearly with \(r_{\rm H}\), as the specific heat of AdS\({}_{2}\) black holes [76]. This is fully consistent with the AdS\({}_{2}\) behavior of the 4D LDS Hayward metric in the near-extremal, near-horizon regime. However, differently from its four-dimensional counterpart, for which the AdS\({}_{2}\) spacetime represents only an approximate solution, the AdS\({}_{2}\) also appears as an exact CDV solution of the 2D model. #### 4.2.2 Free energy The analysis of the specific heat allows us to distinguish between stable and unstable branches, but it is not sufficient to select the energetically preferred configurations. To this end, we need to analyze the difference in the free energy between different configurations sharing the same temperature. We will consider LDS on different thermodynamic branches, with the same temperature, but different dilaton (and, therefore, different horizon radius). In our analysis, we will not consider the CDV solutions, but only LDS. We consider the situation depicted in Fig. 4, where we present a qualitative plot of the temperature \(T(\phi)\) of our 2D black holes. We focus on two different configurations, \(\phi_{1}\) and \(\phi_{2}\), in the stable and unstable branches, respectively, but with the same temperature \(T(\phi_{1})=T(\phi_{2})\equiv T\). We now evaluate the free energy difference \(\Delta F_{2,1}\equiv F(\phi_{2})-F(\phi_{1})=\Delta E_{2,1}-T\Delta S_{2,1}\). Using Eqs. (32b) and (34), we have \[\Delta F_{2,1}=2\pi\int_{\phi_{1}}^{\phi_{2}}\left[T(\phi)-T\right]\,{\rm d}\phi\,.
\tag{38}\] From a geometric point of view, \(\Delta F_{2,1}/2\pi\) represents the area limited by the graph of \(T(\phi)\) (solid blue line in Fig. 4) and the straight line \(T\) (dashed horizontal line in Fig. 4). Since \(T(\phi)\) shows a single maximum located at \(\phi=\phi_{\rm peak}\) and \(\phi_{1}<\phi_{\rm peak}<\phi_{2}\), the relation \(T(\phi)>T\) holds throughout the interval \([\phi_{1},\phi_{2}]\). This implies that \(\Delta F_{2,1}\) is strictly positive, i.e., \(F(\phi_{2})>F(\phi_{1})\). Therefore, generic configurations in the stable branch are energetically preferred with respect to configurations in the unstable branch. These results confirm those obtained in Section 4.2.1 and show that configurations in the stable branch retain the least free energy and are, thus, thermodynamically favoured.

Figure 4: Temperature \(T(\phi)\) as a function of \(\phi\). Here, we highlighted the two configurations corresponding to \(\phi_{1}\) and \(\phi_{2}\), respectively in the stable and unstable branch, and their common temperature \(T=T(\phi_{1})=T(\phi_{2})\). The areas representing \((\phi_{2}-\phi_{1})T\) and the right-hand side of Eq. (38) are coloured in blue and red, respectively.
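The positivity of Eq. (38) can also be checked explicitly. The sketch below (ours, purely illustrative, with \(\lambda=\ell=1\) and an arbitrarily chosen common temperature) finds a stable/unstable pair \(\phi_{1}\), \(\phi_{2}\) sharing the same temperature and evaluates \(\Delta F_{2,1}\) by quadrature:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

lam, ell = 1.0, 1.0

def T(phi):
    # Temperature as a function of the horizon value of the dilaton
    # (Eq. (26) with phi_H = lam * r_H)
    r = phi / lam
    return (r**4 - 2*ell**3*r) / (4*np.pi*lam*(r**3 + ell**3)**2)

phi_ext = 2**(1/3) * lam * ell                              # extremal configuration
phi_peak = lam * ell / 2**(1/3) * (7 + np.sqrt(45))**(1/3)  # Eq. (27)

# Pick a temperature below the maximum and find the two configurations
# sharing it, one on each branch
T0 = 0.7 * T(phi_peak)
phi1 = brentq(lambda p: T(p) - T0, phi_ext, phi_peak)   # stable branch
phi2 = brentq(lambda p: T(p) - T0, phi_peak, 1e3)       # unstable branch

# Free-energy difference, Eq. (38)
dF, _ = quad(lambda p: 2*np.pi*(T(p) - T0), phi1, phi2)
print(f'phi1 = {phi1:.3f}, phi2 = {phi2:.3f}, Delta F_21 = {dF:.4f} > 0')
```

The quadrature returns a strictly positive \(\Delta F_{2,1}\) for any admissible choice of \(T_{0}\), in agreement with the geometric argument above.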
### 4.3 Including the constant dilaton vacuum So far, in our thermodynamic considerations, we have considered only the LDSs. We have already seen in Section 2, however, that our dilaton gravity model allows, in its spectrum, a solution with constant dilaton describing an AdS\({}_{2}\) spacetime, the CDV. This solution represents a GS of the theory, with different asymptotics not only with respect to the "excited" LDS, but also to the extremal one. In fact, the LDS, including the extremal one, are metrically AF and the dilaton depends linearly on \(r\). Conversely, the CDV describes an AdS\({}_{2}\) spacetime and the dilaton is identically constant. Extremal LDSs, in the near-horizon approximation, are described by an AdS\({}_{2}\) spacetime, endowed, however, with a linear dilaton, the so-called linear dilaton vacuum (LDV) [68]. The situation described above is very common for 4D charged black holes [77], for which we have both an extremal, AF solution, described in the near-horizon approximation by AdS\({}_{2}\) with a linear dilaton, and an AdS\({}_{2}\times\mathrm{S}^{2}\) solution (our CDV). In the context of 2D dilaton gravity, it has been shown that the AdS\({}_{2}\) CDV does not admit finite energy excitations, i.e., it is separated from the AdS\({}_{2}\) LDV by a mass gap [78]. Moreover, there is the additional difficulty that the two spacetimes have different asymptotics (linearly varying versus constant dilaton). The latter point makes it conceptually problematic to compare the free energies of the two configurations and to assess which one is thermodynamically favoured. These difficulties can be circumvented, and one can show, computing the free energy, that the AdS\({}_{2}\) CDV is energetically preferred with respect to the AdS\({}_{2}\) LDV [68]. Here, we will use a similar procedure for the solutions of the 2D Hayward model and compare the free energy of the AdS\({}_{2}\) CDV with that of the extremal LDS. Let us first note that we can formally consider zero-mass thermal excitations of the CDV using a Rindler-like coordinate transformation, which generates a horizon with a related temperature \[\mathrm{d}s^{2}=-\left(\frac{r^{2}}{L_{\mathrm{AdS}}^{2}}-4\pi^{2}T_{\mathrm{H}}^{2}L_{\mathrm{AdS}}^{2}\right)\,\mathrm{d}t^{2}+\left(\frac{r^{2}}{L_{\mathrm{AdS}}^{2}}-4\pi^{2}T_{\mathrm{H}}^{2}L_{\mathrm{AdS}}^{2}\right)^{-1}\,\mathrm{d}r^{2}\,. \tag{39}\] We can now evaluate the bulk euclidean action of the CDV (39), considering that the dilaton is constant \(\phi=\phi_{\mathrm{CDV}}\) and the potential is zero when evaluated in the CDV. We have 4 Footnote 4: We are implicitly considering also a renormalization contribution to the action, to renormalize the divergent contribution at infinity due to the AdS asymptotics. \[\mathcal{I}_{\mathrm{b}}^{\mathrm{CDV}}=-\frac{1}{2}\int\mathrm{d}^{2}x\,\sqrt{-g}\,\left(-\phi_{\mathrm{CDV}}\,f^{\prime\prime}\right)=-\frac{\beta\phi_{\mathrm{CDV}}}{2}4\pi T_{\mathrm{H}}=-2\pi\phi_{\mathrm{CDV}}\,. \tag{40}\] The free energy \(F^{\mathrm{CDV}}=-T\,\ln\mathcal{Z}\) reads \[F^{\mathrm{CDV}}=-2\pi\phi_{\mathrm{CDV}}\,T_{\mathrm{H}}\,. \tag{41}\] We can now compute the difference in the free energy between a generic black-hole configuration of the LDS and the CDV. Using the result of Eq. (32b) for the entropy of the LDS and the equation \(F=E-T_{\mathrm{H}}S\) we obtain \[\Delta F^{\mathrm{BH-CDV}}\equiv F^{\mathrm{BH}}-F^{\mathrm{CDV}}=-\mathcal{M}-2\pi T_{\mathrm{H}}\left(\phi_{\mathrm{H}}-\phi_{\mathrm{CDV}}\right)\,. \tag{42}\] Since, for every black-hole solution, \(\phi_{\mathrm{H}}>\phi_{\mathrm{CDV}}\), we have \(F^{\mathrm{BH}}<F^{\mathrm{CDV}}\), and thus the black-hole configuration is always thermodynamically preferred. However, this is true whenever \(T_{\mathrm{H}}\neq 0\). At \(T_{\mathrm{H}}=0\), we do not have thermal contributions to the free energy anymore and \(\Delta F\) reduces to the difference of the mass contributions. On the other hand, at \(T_{\mathrm{H}}=0\) the semiclassical approximation is broken, as signaled by the generation of the mass gap (see Section 5.3) [78, 68, 32], and thus we cannot rely on the euclidean action approach anymore. This does not allow one to define a proper mass for the CDV, consistently with the fact that pure AdS\({}_{2}\) spacetime does not admit finite energy excitations. One could argue, following the argument of Ref. [68], that at \(T_{\mathrm{H}}=0\) the only contribution to \(\Delta F^{\mathrm{BH-CDV}}\) comes from the mass difference, which, due to the absence of finite energy excitations, should diverge, making the CDV energetically preferred with respect to the extremal LDV. ## 5 Black-hole evaporation in the quasistatic approximation In the following, we will describe the evaporation process of our regular 2D black hole working in the quasistatic approximation and in the semiclassical regime, in which the mass is slowly varying with time so that it can be considered almost constant for each individual evaporation step. In this way, the backreaction of the geometry due to the radiation is not taken into account in a fully dynamic way, but it is described in a very simplified, rough manner. The dynamic character of this backreaction will be, instead, fully taken into account in the next section, where we will consider the coupling of gravity to the matter fields describing Hawking radiation.
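Before proceeding, it is worth fixing the numerical coefficient of the 2D emission law used below. The short sympy check that follows (ours, purely illustrative) evaluates the general-dimension Stefan-Boltzmann expression quoted in Eq. (43) below at \(d=1\) and recovers the \(\pi/6\) factor of Eq. (44):

```python
import sympy as sp

d = 1
r = sp.symbols('r', positive=True)

# Stefan-Boltzmann constant and emitting "surface" in d+1 dimensions, Eq. (43)
sigma = d * sp.gamma(sp.Rational(d, 2)) * sp.zeta(d + 1) / (2 * sp.pi**(sp.Rational(d, 2) + 1))
area = 2 * sp.pi**sp.Rational(d, 2) / sp.gamma(sp.Rational(d, 2)) * r**(d - 1)

# For d = 1 the product reduces to pi/6, the coefficient in Eq. (44)
print(sp.simplify(sigma * area))   # -> pi/6
```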
We expect our quasistatic approximation to hold for black holes very far from extremality and to break down in the near-extremal regime, where the semiclassical approximation is not capable of describing the dynamics. Since our black holes behave as black bodies with a Planckian thermal spectrum, we use the Stefan-Boltzmann (SB) law to describe the time variation of the internal energy, which in arbitrary \(d+1\) dimensions reads [79] \[\frac{\mathrm{d}E}{\mathrm{d}t}=-\sigma\mathcal{A}_{d-1}T_{\mathrm{H}}^{d+1}\,,\qquad\sigma=\frac{d\,\Gamma\left(\frac{d}{2}\right)\zeta\left(d+1\right)}{2\pi^{d/2+1}}\,,\qquad\mathcal{A}_{d-1}=\frac{2\pi^{d/2}}{\Gamma\left(\frac{d}{2}\right)}r^{d-1}\,, \tag{43}\] where \(\sigma\) is the SB constant, \(\mathcal{A}_{d-1}\) is the \((d-1)\)-dimensional emitting surface, \(\Gamma(x)\) and \(\zeta(s)\) are the gamma and Riemann zeta functions, respectively. In the present case, \(d=1\), and thus \[\frac{\mathrm{d}E}{\mathrm{d}t}=-\frac{\mathrm{d}\mathcal{M}}{\mathrm{d}t}=-\frac{\pi}{6}T_{\mathrm{H}}^{2}\,, \tag{44}\] where we used the fact that \(E=-\mathcal{M}\). ### 5.1 Evaporation time To compute the evaporation time, we express \(\mathcal{M}\) as a function of the event horizon radius and use \(\mathrm{d}\mathcal{M}/\mathrm{d}t=(\mathrm{d}\mathcal{M}/\mathrm{d}r_{\mathrm{H}})(\mathrm{d}r_{\mathrm{H}}/\mathrm{d}t)\) together with Eqs. (26), (28) and (44), to obtain \[\frac{\mathrm{d}r_{\mathrm{H}}}{\mathrm{d}t}=-\frac{1}{12\lambda}T_{\mathrm{H}}\,. \tag{45}\] Inverting and integrating yields the evaporation time required to pass from an initial configuration with event horizon radius \(r_{\mathrm{H},0}\gg r_{\mathrm{ext}}\) to a final one with radius \(r_{\mathrm{H},\;\mathrm{final}}\) \[\Delta t=-12\lambda\int_{r_{\mathrm{H},0}}^{r_{\mathrm{H},\;\mathrm{final}}}\frac{\mathrm{d}r_{\mathrm{H}}}{T_{\mathrm{H}}}\,. \tag{46}\] From this expression, it is already evident that, as \(T_{\mathrm{H}}\to 0\), i.e., as we approach extremality, \(\Delta t\to\infty\). In other words, reaching the extremal configuration, in the quasistatic semiclassical approximation, requires an infinite time, consistently with the thermodynamic stability analysis of the previous subsections. For the particular model under investigation, described by Eq. (19), Eq. (46) reads \[\Delta t=-48\pi\lambda^{2}\int_{r_{\mathrm{H},0}}^{r_{\mathrm{H},\;\mathrm{final}}}\mathrm{d}r_{\mathrm{H}}\,\frac{(r_{\mathrm{H}}^{3}+\ell^{3})^{2}}{r_{\mathrm{H}}^{4}-2\ell^{3}r_{\mathrm{H}}}=-8\pi\lambda^{2}\left[2r_{\mathrm{H}}^{3}-3\ell^{3}\ln r_{\mathrm{H}}+9\ell^{3}\,\ln\left(r_{\mathrm{H}}^{3}-2\ell^{3}\right)\right]_{r_{\mathrm{H},0}}^{r_{\mathrm{H},\;\mathrm{final}}}\,. \tag{47}\] In the extremal limit \(r_{\mathrm{H},\;\mathrm{final}}\to\sqrt[3]{2}\,\ell=r_{\mathrm{ext}}\) we have a logarithmic divergence, as expected. This is consistent with the behavior of 4D regular models (see [80, 81, 82] and references therein). ### 5.2 Time variation of mass and entropy Let us now compute how the mass of our 2D Hayward black hole evolves in time due to the emission of Hawking radiation, according to the SB law (44). This can be done by first inverting \(r_{\mathrm{H}}\) and \(T_{\mathrm{H}}\) to express them as functions of \(\mathcal{M}\), and then by numerically solving Eq. (44) written in the form \(\frac{\mathrm{d}\mathcal{M}}{\mathrm{d}t}=\frac{\pi}{6}T_{\mathrm{H}}^{2}(\mathcal{M})\). As a boundary condition, we impose \(\mathcal{M}(t=0)=0.1\,\lambda\). We also set \(\lambda=\ell=1\).
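In practice, this integration can be reproduced with a few lines of scipy. The following sketch illustrates the procedure just described (it is not the production code used for the figures; the function names and integration window are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

lam, ell = 1.0, 1.0
M_ext = 1.0 / (3 * 2**(1/3) * ell)   # extremal mass, from Eq. (25)

def r_of_M(M):
    # Invert Eq. (28) on the outer-horizon branch r_H > r_ext;
    # clamp to avoid overshooting extremality through rounding errors
    M = min(M, M_ext * (1 - 1e-12))
    return brentq(lambda r: r**2 / (2 * (r**3 + ell**3)) - M,
                  2**(1/3) * ell, 1e6)

def T_of_M(M):
    # Hawking temperature, Eq. (26), expressed through the mass
    r = r_of_M(M)
    return (r**4 - 2 * ell**3 * r) / (4 * np.pi * lam * (r**3 + ell**3)**2)

# Quasistatic Stefan-Boltzmann law, Eq. (44): dM/dt = (pi/6) T_H(M)^2
sol = solve_ivp(lambda t, y: [np.pi / 6 * T_of_M(y[0])**2],
                (0.0, 2e3), [0.1 * lam], rtol=1e-8)

print('M(t_final) =', sol.y[0, -1], ' vs  M_ext =', M_ext)
# Entropy measured with respect to the extremal configuration, Eq. (33)
print('S - S_ext =', 2 * np.pi * lam * (r_of_M(sol.y[0, -1]) - 2**(1/3) * ell))
```

With these choices the mass climbs monotonically towards \(\mathcal{M}_{\rm ext}\) without ever reaching it, reproducing the behavior of Fig. 5(a).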
Using the expression for \(r_{\mathrm{H}}\) as a function of the mass obtained by inverting Eq. (28), for \(\mathcal{M}=0.1\), we have \(r_{\mathrm{H}}\simeq 4.96\), which is about twice as large as the radius pertaining to the temperature peak (27) (\(r_{\mathrm{H,\ peak}}\simeq 1.9\) with \(\ell=1\)), which confirms that the initial state belongs to the unstable branch. We have also checked that the final results are independent of the value of the initial mass. The result of the numerical integration is shown in Fig. 5(a). We see that, at large times, the mass asymptotes the extremal one. The mass increases during the evaporation due to the negative normalization of the Killing vector generating time translations. Indeed, the internal energy \(E=-\mathcal{M}\) is decreasing, as it should be during the evaporation. We can also derive the time evolution of the black-hole entropy. This can be done by simply combining the solution of Eq. (44) together with the function \(r_{\mathrm{H}}(\mathcal{M})\) (obtained by inverting Eq. (28)) and Eq. (33). The result is reported in Fig. 5(b) and confirms the fact that, as the solution asymptotes the extremal one, the entropy reduces to zero. This is true only if we subtract the contribution \(S_{0}\) of Eq. (33). These results will be extended in Section 7, where we will go beyond the quasistatic approximation and we will consider the full dynamics of the backreaction of Hawking radiation on the background geometry. We will show that the inclusion of the latter causes the evaporation process to take place in a _finite_ time.

Figure 5: **Figure (a):** Numerical solution of Eq. (44) (blue solid line), which shows the evolution of the mass of an evaporating black hole as a function of time, in the quasistatic approximation. We see that at large times, the numerical solution asymptotes the extremal mass (orange dashed line). For the numerical integration, we set \(\mathcal{M}(t=0)=0.1\,\lambda\) as an initial condition. **Figure (b):** Time variation of the entropy of the black hole according to Eq. (33). At late times, the entropy goes to zero, after subtracting the topological term \(S_{0}\). In both figures, we set \(\lambda=\ell=1\).

### 5.3 Approaching extremality and breakdown of the semiclassical approximation

The divergence of the evaporation time for black holes approaching extremality, found in the previous section, signals the breakdown of the semiclassical approximation. Let us now study in detail how an excited configuration approaches the extremal limit, by solving Eq. (45) at leading order around extremality, i.e., around \(r_{\mathrm{H}}\sim\sqrt[3]{2}\,\ell\). Near extremality, at leading order, the temperature varies linearly with \(r_{\mathrm{H}}\), \[T_{\mathrm{H}}\simeq\frac{r_{\mathrm{H}}-\sqrt[3]{2}\,\ell}{6\pi\lambda\ell^{3}}+\mathcal{O}\left[\left(r_{\mathrm{H}}-\sqrt[3]{2}\ell\right)^{2}\right]\,, \tag{48}\]
Moreover, \(\mathcal{M}(t)\) behaves, near extremality, as \[\mathcal{M}\simeq\frac{\left(\sqrt[3]{2}\ell+\alpha_{1}e^{-\frac{t}{72\pi \lambda^{2}\ell^{3}}}\right)^{2}}{2\left[\ell^{3}+\left(\sqrt[3]{2}\ell+ \alpha_{1}e^{-\frac{t}{72\pi\lambda^{2}\ell^{3}}}\right)^{3}\right]}\,, \tag{50}\] which reduces to \(\frac{1}{3\sqrt[3]{2}\ell}=\mathcal{M}_{\rm ext}\) only for \(t\to\infty\), confirming the numerical results of the previous subsection. The entropy, instead, approaches exponentially that of the extremal configuration at \(t\to\infty\), according to Eq. (49). However, one must question the validity of the semiclassical approximation near extremality. The latter breaks down when the energy of Hawking quanta, which is of order \(T_{\rm H}\), becomes comparable with the energy of the black-hole energy above extremality \(\Delta E=|\mathcal{M}-\mathcal{M}_{\rm ext}|\) (see, e.g., Ref. [32]). The energy scale at which this breakdown occurs determines the mass gap separating the CDV from the continuous part of the spectrum (the LDS) [78, 32]. This mass gap can be determined by expanding Eq. (28) near extremality, which yields \[\Delta E\simeq\frac{\left(r_{\rm H}-\sqrt[3]{2}\ell\right)^{2}}{6\ell^{3}}=6 \pi^{2}\lambda^{2}\ell^{3}\,T_{\rm H}^{2}\,. \tag{51}\] From \(\Delta E\simeq T_{\rm H}\), we easily find the energy gap \[E_{\rm gap}\simeq\frac{1}{6\pi^{2}\ell^{3}\lambda^{2}}\,. \tag{52}\] This result is consistent with those obtained for 4D black holes with two horizons merging into a single one, in particular for charged black holes. For instance, in the Reissner-Nordstrom case, the energy gap behaves as \(E_{\rm gap}\propto Q^{-3}\), where \(Q\) is the black-hole charge [32]. As one can expect, here the role of \(Q\) is played by \(\ell\)5. Figure 6(a) shows the time variation of \(\Delta E=|\mathcal{M}-\mathcal{M}_{\rm ext}|\): the intersection with the horizontal dashed line, corresponding to \(\Delta E\sim E_{\rm gap}\), identifies the time at which the semiclassical approximation breaks down. Footnote 5: An important difference is that, contrary to \(Q\), \(\ell\) is not related to any conserved quantity at infinity and thus it is not associated to any thermodynamic potential. It is interesting to notice that the limit of validity of the semiclassical approximation sets also the limit of validity of the quasistatic one, which cannot be valid when the former is broken. In fact, the quasistatic approximation is valid in the initial stages, when the black hole is macroscopic, which essentially evaporates as a GR one. It remains also valid for most of the evaporation process because the evaporation time is much larger than the typical black-hole time scale \(1/\Delta E\) (see also, e.g., Refs. [82, 83]). Only at extremality, when \(\Delta E\) goes to zero, the two time scales become comparable. Moreover, it should be considered that the time at which the semiclassical approximation breaks down (which can be read from Fig. 6(a)) is close to the time at which the evaporation process reaches the maximum of the temperature (27), where we have the onset of the second order phase transition (see Fig. 6(b)). ## 6 Coupling to conformal matter In the previous section we have described the black-hole evaporation process in the semiclassical and quasistatic approximations. Within these approximations, the backreaction effects of the geometry on the presence of Hawking radiation is completely encoded in the change in time of the black-hole mass \(\mathcal{M}\). 
As we have seen above, this may be a good approximation in the early stages of the evaporation, but it is expected to fail at later times. Another shortcoming of the approximation is that the backreaction is not fully dynamic, since it does not involve the full metric solution. Its role is simply encoded in the variation of the black-hole mass. In this section, we will give an exact semiclassical description of the evaporation process by studying the coupling of our 2D model to quantum conformal matter, in the form of \(N\) massless scalar fields. This coupling is most easily analyzed in the conformal gauge, where the 2D metric reads \[\mathrm{d}s^{2}=-e^{2\rho(x^{+},\,x^{-})}\mathrm{d}x^{+}\mathrm{d}x^{-}\,, \tag{53}\] where \(e^{2\rho}\) is the conformal factor of the metric. The transition from the metric in the Schwarzschild gauge (7) to that in the form Eq. (53) is realized by using the coordinates \(x^{\pm}=t\pm r_{*}\), where \(r_{*}\equiv\int\mathrm{d}r/f\) is the tortoise coordinate. The system of coordinates \(x^{\pm}\) does not cover the interior of the black hole, but only the region outside the outer horizon. Indeed, \(x^{+}-x^{-}\to-\infty\) corresponds to the horizon (\(x^{+}\to-\infty\) gives the past horizon, while \(x^{-}\to\infty\) gives the future horizon), while \(x^{+}-x^{-}\to\infty\) corresponds to asymptotic infinity. The field equations stemming from Eq. (4) are now \[8e^{-2\rho}\partial_{+}\partial_{-}\rho=-\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}\,; \tag{54a}\] \[\partial_{+}^{2}\phi-2\partial_{+}\rho\partial_{+}\phi=0\,;\] (54b) \[\partial_{-}^{2}\phi-2\partial_{-}\rho\partial_{-}\phi=0\,;\] (54c) \[\partial_{+}\partial_{-}\phi+\frac{\mathscr{V}}{4}e^{2\rho}=0\,. \tag{54d}\] The solution reads \[e^{2\rho}=f=\frac{2\mathcal{M}}{\lambda}+\frac{1}{\lambda^{2}}\int^{\phi}\mathrm{d}\psi\,\mathscr{V}\,\equiv\frac{2\mathcal{M}}{\lambda}+\mathcal{J}\,; \tag{55a}\] \[\int^{\phi}\frac{\mathrm{d}\psi}{e^{2\rho}}=\int^{\phi}\frac{\mathrm{d}\psi}{\frac{2\mathcal{M}}{\lambda}+\mathcal{J}}=\lambda r_{*}=\frac{\lambda}{2}\left(x^{+}-x^{-}\right)\,, \tag{55b}\] where we defined \(\mathcal{J}\equiv\frac{1}{\lambda^{2}}\int^{\phi}\mathrm{d}\psi\,\mathscr{V}(\psi)\).

### Coupling to matter: shock wave solution

We now couple 2D dilaton gravity to the \(N\) massless scalar fields describing conformal matter. The full action reads \[\mathcal{S}=\frac{1}{2}\int\mathrm{d}^{2}x\,\sqrt{-g}\,\left[\phi R+\mathscr{V}-\frac{1}{2}\sum_{i=1}^{N}\left(\nabla f_{i}\right)^{2}\right]\,.
\tag{56}\] The stress-energy tensor of matter fields reads \[T_{\mu\nu}=-\frac{1}{4}g_{\mu\nu}\sum_{i=1}^{N}g^{\rho\sigma}\partial_{\rho}f_{i}\partial_{\sigma}f_{i}+\frac{1}{2}\sum_{i=1}^{N}\partial_{\mu}f_{i}\partial_{\nu}f_{i}\,. \tag{57}\] The field equations (54) now become \[8e^{-2\rho}\partial_{+}\partial_{-}\rho=-\frac{\mathrm{d}\mathscr{V}}{\mathrm{d}\phi}\,; \tag{58a}\] \[\partial_{+}^{2}\phi-2\partial_{+}\rho\partial_{+}\phi=-T_{++}=-\frac{1}{2}\sum_{i=1}^{N}\partial_{+}f_{i}\partial_{+}f_{i}\,;\] (58b) \[\partial_{-}^{2}\phi-2\partial_{-}\rho\partial_{-}\phi=-\frac{1}{2}\sum_{i=1}^{N}\partial_{-}f_{i}\partial_{-}f_{i}\,;\] (58c) \[\partial_{+}\partial_{-}\phi+\frac{\mathscr{V}}{4}e^{2\rho}=0\,;\] (58d) \[\partial_{+}\partial_{-}f_{i}=0\quad\Rightarrow\quad f_{i}=f_{i,+}(x^{+})+f_{i,-}(x^{-})\,. \tag{58e}\] The matter-coupled field equations above admit an exact solution if we consider an ingoing shock wave starting at \(x^{+}=x_{0}^{+}\) and propagating in the \(x^{-}\) direction, while no energy flux is present in the \(x^{+}\) direction \[T_{++}=-\mathcal{M}\,\delta\left(x^{+}-x_{0}^{+}\right)=\frac{1}{2}\sum_{i=1}^{N}\partial_{+}f_{i}\partial_{+}f_{i}\,,\qquad T_{--}=0\,. \tag{59}\] The minus sign in \(T_{++}\) is again due to the normalization of the Killing vector of the metric. From Birkhoff's theorem, we can write the full solution by patching, on the infall line \(x^{+}=x_{0}^{+}\), the vacuum solution together with the one after the shock wave [62]. For \(x^{+}\leq x_{0}^{+}\): at the end of Section 4.1, we showed that, in the linear dilaton case, the GS of the theory, i.e., the state retaining the least internal energy, is the extremal black-hole configuration, characterized by a mass \(\mathcal{M}_{\mathrm{ext}}\) given by Eq. (25). The vacuum solution (before the shock wave) is therefore equivalent to Eq. (55) with \(\mathcal{M}=\mathcal{M}_{\mathrm{ext}}\) \[e^{2\rho}=\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}+\mathcal{J}\,,\qquad\int^{\phi}\frac{\mathrm{d}\psi}{\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}+\mathcal{J}}=\frac{\lambda}{2}\left(x^{+}-x^{-}\right)\,. \tag{60}\] For \(x^{+}>x_{0}^{+}\): since \(T_{--}=0\), the solution is now (see Ref. [62]) \[e^{2\rho}=\left(\frac{2\mathcal{M}}{\lambda}+\mathcal{J}\right)F^{\prime}(x^{-})\,;\] \[\int^{\phi}\frac{\mathrm{d}\psi}{\mathcal{J}+\frac{2\mathcal{M}}{\lambda}}=\frac{\lambda}{2}\left[x^{+}-x_{0}^{+}-F(x^{-})\right]\,; \tag{61}\] \[F^{\prime}(x^{-})\equiv\frac{\mathrm{d}F(x^{-})}{\mathrm{d}x^{-}}=\frac{\mathcal{J}_{0}+\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}}{\mathcal{J}_{0}+\frac{2\mathcal{M}}{\lambda}}\,,\] where \(\mathcal{J}_{0}\equiv\mathcal{J}|_{x^{+}=x_{0}^{+}}\) and \(F(x^{-})\) is a function needed to map the old coordinate \(x^{-}\) of the observer in the GS solution into a new coordinate, which pertains to an observer in the excited solution. Its form is fixed by requiring continuity of the metric function across the shock wave at \(x^{+}=x_{0}^{+}\). This defines the function up to an integration constant, which is fixed by requiring the continuity of the dilaton across the shock wave. As usual, and as we will see in more detail in Section 7.2, the function \(F(x^{-})\) generates the Hawking flux of particles, which can be described in terms of the change of the coordinate \(x^{-}\) defined by \(F\). In fact, in 2D, the flux of Hawking particles can be described in terms of the Schwarzian derivative of the function \(F(x^{-})\) (see, e.g., Refs. [62, 84]).
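As a quick illustration of how the Schwarzian derivative encodes a thermal flux, the short symbolic check below (a sketch of ours, not taken from the paper) evaluates \(\{F,x\}=F'''/F'-\tfrac{3}{2}(F''/F')^{2}\) for an exponential map of the Kruskal type, \(F(x)=e^{2\kappa x}\), for which the Schwarzian is the constant \(-2\kappa^{2}\); up to normalization conventions, a constant Schwarzian is the hallmark of a steady flux at a temperature proportional to \(\kappa\).

```python
import sympy as sp

x, kappa = sp.symbols('x kappa', positive=True)

def schwarzian(F, var):
    """{F, var} = F'''/F' - (3/2) (F''/F')**2, as in Eq. (67)."""
    F1, F2, F3 = (sp.diff(F, var, k) for k in (1, 2, 3))
    return sp.simplify(F3/F1 - sp.Rational(3, 2)*(F2/F1)**2)

F = sp.exp(2*kappa*x)        # Kruskal-type exponential map
print(schwarzian(F, x))      # -> -2*kappa**2, i.e. a constant (thermal) value
```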
Due to the sign of the shock wave (59) and to the minus sign in Eq. (28), the mass \(\mathcal{M}\) of the excited states is _less_ than the extremal mass, as the shock wave increases the internal energy of the system. The physical picture we expect is therefore the following (see also Fig. 7). The shock wave increases the internal energy of the initial system, i.e., the GS, extremal configuration: the degenerate horizon splits into two apparent horizons. According to the thermodynamic analysis and the results of Section 5.2, we then expect that, when dynamically evolving the system, these two horizons will meet again at the end of evaporation, and merge to give again the extremal configuration. We will confirm this in the next section by taking into account the dynamic contributions of the backreaction of the radiation on the geometry and dynamically describing the evaporation process by numerically integrating the field equations.

Figure 7: Metric functions with varying mass \(\mathcal{M}\). The blue solid line refers to the extremal configuration with \(\mathcal{M}=\mathcal{M}_{\mathrm{ext}}\). The orange dashed line, instead, refers to an excited state (after the shock wave) with \(\mathcal{M}=0.9\,\mathcal{M}_{\mathrm{ext}}\), which presents two event horizons.

## 7 Black hole evaporation and backreaction

The evaporation and its backreaction effects on the spacetime geometry are studied by quantizing the conformal matter on the curved 2D background. An important consequence of the curvature of the spacetime is that the otherwise classically traceless stress-energy tensor acquires a nonzero trace, proportional to the Ricci scalar, which is the so-called conformal anomaly [85] \[\langle T^{\mu}_{\mu}\rangle=\frac{N}{24}R\,, \tag{62}\] where \(N\) is the number of matter fields. This can be accounted for by adding, to the classical action, the nonlocal Polyakov term \[{\cal S}_{\rm Pol}=-\frac{N}{96}\int{\rm d}^{2}x\,\sqrt{-g}\,R\Box^{-1}R\,, \tag{63}\] where \(\Box^{-1}\) is the scalar Green function. Using Eq. (62), we can derive the expectation value of \(T_{+-}\), which in the conformal gauge is entirely local \[\langle T_{+-}\rangle=-\frac{N}{12}\partial_{+}\partial_{-}\rho\,. \tag{64}\] The nonlocal effects stemming from the action (63) are, instead, encoded in the other components of the stress-energy tensor \(\langle T_{\pm\pm}\rangle\), which can be obtained from covariant conservation of the latter \[\langle T_{--}\rangle=-\frac{N}{12}\left[\partial_{-}\rho\partial_{-}\rho-\partial_{-}^{2}\rho+t_{-}(x^{-})\right]\,; \tag{65a}\] \[\langle T_{++}\rangle=-\frac{N}{12}\left[\partial_{+}\rho\partial_{+}\rho-\partial_{+}^{2}\rho+t_{+}(x^{+})\right]\,. \tag{65b}\] Here, \(t_{\pm}(x^{\pm})\) are integration functions, which depend on the boundary conditions and therefore encode the nonlocal effects of the Polyakov action (63). These functions are also sensitive to the choice of the coordinates. Indeed, under a conformal transformation of the coordinates \(x^{\pm}\to y^{\pm}(x^{\pm})\), the conformal factor transforms as \[\rho(y^{+},\,y^{-})=\rho(x^{+},\,x^{-})-\frac{1}{2}\ln\frac{{\rm d}y^{+}}{{\rm d}x^{+}}\frac{{\rm d}y^{-}}{{\rm d}x^{-}}\,, \tag{66}\] which, plugged into Eqs.
(64), (65a) and (65b), yields the anomalous transformation of the stress-energy tensor with the Schwarzian derivative \[\left(\frac{{\rm d}y^{\pm}}{{\rm d}x^{\pm}}\right)^{2}T_{\pm\pm}(y^{\pm})=T_{\pm\pm}(x^{\pm})-\frac{N}{12}\left\{y^{\pm},x^{\pm}\right\}\,,\qquad\left\{y,x\right\}=\frac{y^{\prime\prime\prime}}{y^{\prime}}-\frac{3}{2}\frac{y^{\prime\prime 2}}{y^{\prime 2}}\,, \tag{67}\] where the prime indicates a derivative with respect to \(x\). The form of the stress-energy tensor in the new coordinate system is preserved if the \(t_{\pm}\)'s transform as \[\left(\frac{{\rm d}y^{\pm}}{{\rm d}x^{\pm}}\right)^{2}t_{\pm}(y^{\pm})=t_{\pm}(x^{\pm})+\left\{y^{\pm},x^{\pm}\right\}\,. \tag{68}\] Including the conformal anomaly, the field equations (58) become \[8e^{-2\rho}\partial_{+}\partial_{-}\rho=-\frac{{\rm d}\mathscr{V}}{{\rm d}\phi}\,; \tag{69a}\] \[\partial_{+}^{2}\phi-2\partial_{+}\rho\partial_{+}\phi={\cal M}\,\delta\left(x^{+}-x_{0}^{+}\right)+\frac{N}{12}\left[\left(\partial_{+}\rho\right)^{2}-\partial_{+}^{2}\rho+t_{+}(x^{+})\right]\,;\] (69b) \[\partial_{-}^{2}\phi-2\partial_{-}\rho\partial_{-}\phi=\frac{N}{12}\left[\left(\partial_{-}\rho\right)^{2}-\partial_{-}^{2}\rho+t_{-}(x^{-})\right]\,;\] (69c) \[\partial_{+}\partial_{-}\phi+\frac{\mathscr{V}}{4}e^{2\rho}=-\frac{N}{12}\partial_{+}\partial_{-}\rho\,;\] (69d) \[f_{i}=f_{i,+}(x^{+})+f_{i,-}(x^{-})\,. \tag{69e}\] Eqs. (69) can be solved once suitable initial conditions fixing the functions \(t_{\pm}(x^{\pm})\) are imposed. These can be determined assuming the GS as the initial state. In conventional AF models, like the CGHS one [54], the GS is pure Minkowski spacetime. One can therefore define a global coordinate transformation in which the conformal metric (53) is manifestly flat, i.e., we can define a system of coordinates in which \(e^{2\rho}=\text{constant}=1\). One can then assume that there is no incoming radiation (except for the classical shock wave) and that there is no net outgoing flux, so that \[\langle T_{\mu\nu}\rangle_{\text{GS}}=0 \tag{70}\] identically, which implies \(t_{\pm}=0\) on the GS in this system of coordinates. One can then transform back to the original coordinates and exploit the anomalous transformation (68) to obtain their final form in the new coordinates. In the case under consideration, however, we saw that the GS does not correspond to Minkowski spacetime (which is only reached asymptotically), but it is given by the extremal configuration (60). This, of course, prevents us from defining a global coordinate transformation which brings \(e^{2\rho}\to\text{constant}\). Despite this difficulty, we can still use Eq. (70) as a boundary condition, similarly to the CGHS model. Once the boundary conditions on the GS have been imposed, the solution before the shock wave \((x^{+}<x_{0}^{+})\) is Eq. (60), the vacuum one, while after the shock wave \((x^{+}>x_{0}^{+})\) it is given by an evaporating black-hole solution.

### Adding counterterms and fixing the boundary conditions

In order to preserve the physically motivated boundary condition (70), we can follow Refs. [62, 65] and modify the usual Polyakov action (63) by adding the most general local covariant counterterms with no second-order derivatives \[\mathcal{S}_{\text{Pol}}=-\frac{N}{96}\int\text{d}^{2}x\,\sqrt{-g}\,\left[R\Box^{-1}R-4\mathscr{A}(\phi)R+4\mathscr{B}(\phi)\left(\nabla\phi\right)^{2}\right]\,, \tag{71}\] where \(\mathscr{A}\) and \(\mathscr{B}\) are functions of the scalar field.
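The transformation rule (68) states that \(t_{\pm}\) transforms as a Schwarzian cocycle, and its consistency under composition of maps can be verified symbolically. The following check (an illustrative sketch of ours, not taken from the paper) confirms the composition law \(\{z,x\}=(\mathrm{d}y/\mathrm{d}x)^{2}\{z,y\}+\{y,x\}\), which underlies Eq. (68), on a pair of sample maps.

```python
import sympy as sp

x = sp.symbols('x')
yv = sp.symbols('y', positive=True)

def schwarzian(expr, var):
    # {expr, var} = expr'''/expr' - (3/2) (expr''/expr')**2
    d1, d2, d3 = (sp.diff(expr, var, k) for k in (1, 2, 3))
    return sp.simplify(d3/d1 - sp.Rational(3, 2)*(d2/d1)**2)

y = sp.exp(x)                 # sample map y(x)
z_in_y = yv**3 + yv           # sample map z(y)
z = z_in_y.subs(yv, y)        # composed map z(y(x))

lhs = schwarzian(z, x)
rhs = sp.diff(y, x)**2 * schwarzian(z_in_y, yv).subs(yv, y) + schwarzian(y, x)
print(sp.simplify(lhs - rhs))   # -> 0: the cocycle property holds
```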
The presence of these new terms, of course, does not alter the classical limit \(N\to 0\). Notice also that additional counterterms were already employed in, e.g., the CGHS model, in order to make the theory exactly solvable [86, 87]. In Ref. [62], the addition of the counterterms was necessary to prevent \(\langle T_{\mu\nu}\rangle_{\text{GS}}\) from diverging for \(\phi\to\infty\). In the present case, it can be shown that divergences are absent due to the peculiar properties of the potential outlined in Section 2.3 (see Appendix A). Nevertheless, adding counterterms is needed to implement the boundary condition (70) in a consistent way. With the new terms, the components of the stress-energy tensor (64), (65a) and (65b) become \[\langle T_{+-}\rangle =-\frac{N}{12}\left(\partial_{+}\partial_{-}\rho+\partial_{+}\partial_{-}\mathscr{A}\right)\,; \tag{72a}\] \[\langle T_{\pm\pm}\rangle =-\frac{N}{12}\left[\partial_{\pm}\rho\partial_{\pm}\rho-\partial_{\pm}^{2}\rho+2\partial_{\pm}\rho\partial_{\pm}\mathscr{A}-\partial_{\pm}^{2}\mathscr{A}-\mathscr{B}\partial_{\pm}\phi\partial_{\pm}\phi+t_{\pm}(x^{\pm})\right]\,. \tag{72b}\] We now impose the boundary condition (70). Requiring \(t_{\pm}(x^{\pm})=0\) on the GS completely fixes the two functions \(\mathscr{A}\) and \(\mathscr{B}\) (see also Ref. [62]) \[\mathscr{A}(\phi) =-\frac{1}{2}\ln\left(\frac{2\mathcal{M}_{\text{ext}}}{\lambda}+\mathcal{J}_{\text{GS}}\right)\,=-\rho_{\text{GS}}\,, \tag{73a}\] \[\mathscr{B}(\phi) =-\left(\partial_{\phi}\rho_{\text{GS}}\right)^{2}=-\frac{1}{4\left(\frac{2\mathcal{M}_{\text{ext}}}{\lambda}+\mathcal{J}_{\text{GS}}\right)^{2}}\left(\frac{d\mathcal{J}_{\text{GS}}}{d\phi}\right)^{2}\,, \tag{73b}\] where the subscript GS indicates that \(\mathcal{J}\) is computed at extremality. Equations (72a) and (72b) now read \[\langle T_{+-}\rangle=-\frac{N}{12}\left(\partial_{+}\partial_{-}\rho-\frac{\partial_{+}\partial_{-}\mathcal{J}_{\rm GS}}{2\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)}+\frac{\partial_{-}\mathcal{J}_{\rm GS}\partial_{+}\mathcal{J}_{\rm GS}}{2\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)^{2}}\right)\,; \tag{74}\] \[\langle T_{\pm\pm}\rangle=-\frac{N}{12}\bigg{[} \partial_{\pm}\rho\partial_{\pm}\rho-\partial_{\pm}^{2}\rho-\frac{\partial_{\pm}\rho\partial_{\pm}\mathcal{J}_{\rm GS}}{\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}}+\frac{\partial_{\pm}^{2}\mathcal{J}_{\rm GS}}{2\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)}+ \tag{75}\] \[-\frac{\partial_{\pm}\mathcal{J}_{\rm GS}\partial_{\pm}\mathcal{J}_{\rm GS}}{2\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)^{2}}+\frac{\partial_{\pm}\phi_{\rm GS}\partial_{\pm}\phi_{\rm GS}}{4\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)^{2}}\left(\frac{d\mathcal{J}_{\rm GS}}{d\phi}\right)^{2}\bigg{]}\,.\] Since the scalar field is a function of \(x^{+}\) and \(x^{-}\), while we are treating \(\mathcal{J}\) as a function of \(\phi\), it is convenient to rewrite all derivatives of \(\mathcal{J}\) with respect to the coordinates as derivatives with respect to \(\phi\) (to lighten the notation, we indicate differentiation with respect to \(\phi\) with a subscript \({}_{,\phi}\)). With the new components of the stress-energy tensor (74) and (75), and also using Eq.
(60), the field equations (69) become \[8e^{-2\rho}\partial_{+}\partial_{-}\rho =-\mathscr{V}_{,\phi}\,; \tag{76a}\] \[\partial_{+}^{2}\phi-2\partial_{+}\rho\partial_{+}\phi =\mathcal{M}\,\delta\left(x^{+}-x_{0}^{+}\right)+\frac{N}{12} \left[\partial_{+}\rho\partial_{+}\rho-\partial_{+}^{2}\rho-\frac{\lambda}{2} \partial_{+}\rho\,\mathcal{J}_{\rm GS,\phi}+\right.\] \[\left.+\frac{\lambda^{2}}{8}\left(\frac{2\mathcal{M}_{\rm ext}}{ \lambda}+\mathcal{J}_{\rm GS}\right)\mathcal{J}_{\rm GS,\phi\phi}+\frac{ \lambda^{2}}{16}\left(\mathcal{J}_{\rm GS,\phi}\right)^{2}\right]\,;\] (76b) \[\partial_{-}^{2}\phi-2\partial_{-}\rho\partial_{-}\phi =\frac{N}{12}\bigg{[}\partial_{-}\rho\partial_{-}\rho-\partial_{ -}^{2}\rho+\frac{\lambda}{2}\partial_{-}\rho\,\mathcal{J}_{\rm GS,\phi}+ \frac{\lambda^{2}}{8}\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J} _{\rm GS}\right)\mathcal{J}_{\rm GS,\phi\phi}+\frac{\lambda^{2}}{16}\left( \mathcal{J}_{\rm GS,\phi}\right)^{2}\bigg{]}\,;\] (76c) \[\partial_{+}\partial_{-}\phi+\frac{\mathscr{V}}{4}e^{2\rho} =-\frac{N}{12}\left[\partial_{+}\partial_{-}\rho+\frac{\lambda^{2}}{8} \left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right) \mathcal{J}_{\rm GS,\phi\phi}\right]\,. \tag{76d}\] ### Hawking flux and apparent horizon trajectory We now derive the asymptotic form of the Hawking flux in our model. This can be done by studying the behavior of \(\langle T_{\mu\nu}\rangle\) at future null infinity \(x^{+}\to\infty\). In this region, we are considering \(\phi\to\infty\). We are, therefore, in the decoupling regime, where the gravitational coupling is weak, so that the effects of backreaction can be approximately neglected. This means that the solution in the region of interest (\(x^{+}>x_{0}^{+}\)) corresponds to the classical one (61). In this limit, \(\mathcal{J}\to 0\), \(\mathcal{J}_{,\phi}\to 0\) and \(\mathcal{J}_{,\phi\phi}\to 0\). We have thus \[\langle T_{+-}\rangle \to 0\,; \tag{77a}\] \[\langle T_{++}\rangle \to 0\,;\] (77b) \[\langle T_{--}\rangle \to\frac{N}{24}\left\{F,x^{-}\right\}\,,\qquad\left\{F,x^{-} \right\}=\frac{F^{\prime\prime\prime}}{F^{\prime}}-\frac{3}{2}\left(\frac{F^{ \prime\prime}}{F^{\prime}}\right)^{2}\,, \tag{77c}\] where now \({}^{\prime}\) indicates differentiation with respect to \(x^{-}\). This result agrees with that of Ref. [62], as it is naturally expected. As it is noted there, this expression diverges once the (outer) event horizon is reached, due to the choice of coordinates adopted. One way to solve this problem is to redefine the \(x^{-}\) coordinate as \(\hat{x}^{-}\equiv F(x^{-})\) and exploit the anomalous transformation (67). This leads to a well-behaved expression at the horizon, which reads \[\langle\hat{T}_{--}\rangle=\frac{N}{24}\frac{\left\{F,x^{-}\right\}}{F^{\prime 2 }}\,. \tag{78}\] Using the form of \(F\) given by Eq. (61), we obtain, approaching the horizon \[\langle\hat{T}_{--}\rangle=\frac{N}{192}\left[\mathscr{V}(\phi_{\rm H})\right]^{ 2}\,\propto T_{\rm H}^{2}\,. \tag{79}\] The proportionality relation is the same as the SB law (44), which confirms the Planckian nature of the emitted spectrum. Here, however, the flux is modified with respect to standard singular models due to the specific form of the potential (17). Since the outgoing flux of Hawking radiation is positive, we expect the (outer) apparent event horizon to recede. To see this, we closely follow the approach adopted in Ref. [88]. 
The apparent horizon trajectory \(\hat{x}^{-}=\hat{x}^{-}(x^{+})\) can be derived using the definition of apparent horizon, which satisfies \(\partial_{+}\phi=0\). This implies \[0=\frac{\mathrm{d}}{\mathrm{d}x^{+}}\partial_{+}\phi\bigg{|}_{x^{-}=\hat{x}^{-}}=\partial_{+}^{2}\phi+\left(\frac{\mathrm{d}\hat{x}^{-}}{\mathrm{d}x^{+}}\right)\partial_{+}\partial_{-}\phi\,, \tag{80}\] from which follows \[\partial_{+}\partial_{-}\phi=-\partial_{+}^{2}\phi\,\left(\frac{\mathrm{d}\hat{x}^{-}}{\mathrm{d}x^{+}}\right)^{-1}\,. \tag{81}\] Combining Eqs. (76a), (76b) and (76d) into the above yields \[\frac{\mathrm{d}\hat{x}^{-}}{\mathrm{d}x^{+}}=\frac{N}{12}\frac{\partial_{+}\rho\partial_{+}\rho-\partial_{+}^{2}\rho-\frac{\partial_{+}\rho}{2\lambda}\,\mathscr{V}_{\mathrm{GS}}+\frac{e^{2\rho_{\mathrm{ext}}}}{8}\,\mathscr{V}_{\mathrm{GS},\phi}+\frac{\mathscr{V}_{\mathrm{GS}}^{2}}{16\lambda^{2}}}{\frac{\mathscr{V}}{4}e^{2\rho}+\frac{N}{96}\left(e^{2\rho_{\mathrm{ext}}}\,\mathscr{V}_{\mathrm{GS},\phi}-e^{2\rho}\,\mathscr{V}_{,\phi}\right)}\,. \tag{82}\] The qualitative behavior of the trajectory of the apparent horizon is determined by the sign of the right-hand side of the expression above. In order to assess the latter, we would need the full solution of Eqs. (76a) and (76d), which, however, can only be obtained numerically. This will be done in the next section. Here, as a first test, we exploit the fact that the full solution of the field equations approaches the classical one in the asymptotic region \(\phi\to\infty\), where the coupling with matter fields and backreaction effects become negligible. In this region, the solution is then given by Eq. (61). Inserting this solution into Eq. (82), it can be shown that \(\mathrm{d}\hat{x}^{-}/\mathrm{d}x^{+}\) is indeed positive. This implies a receding outer apparent horizon, as expected. Moreover, it confirms the qualitative picture we expect from evaporation, outlined in Section 5 and at the end of Section 6.1: the outer horizon recedes and approaches the horizon of the extremal GS. Notice that, due to the limitations caused by the adopted system of coordinates, we are able to describe the behavior of the outer horizon only.

### Numerical results

We now numerically solve the equations of motion given in Eqs. (76). This will allow us to capture the full dynamics of the evaporation process, also taking backreaction effects into account. A numerical study of evaporating 2D models was performed in the past for the CGHS [89, 90, 91] and other (singular or regular) 2D models [61, 92]. To numerically integrate the equations of motion, we construct a spacetime lattice by means of a grid of null lines and we impose two different sets of boundary conditions, one along \(x^{+}\) and one along \(x^{-}\):

* At the shock wave, i.e., at \(x^{+}=x_{0}^{+}\), we require the solution to coincide with the GS, given by Eq. (60);
* Above the shock wave, along \(\mathcal{I}^{-}\), i.e., at \(x^{-}\to-\infty\), where backreaction effects are expected to be negligible, we require the solution to match Eq. (61), the classical one.

Of course, we cannot numerically set a condition at infinity, so we choose a reasonably large negative value. Here we set \(x_{\infty}^{-}=-220\). This value is found to be the minimal one for which the numerical solution coincides, in the classical limit \(N\to 0\), with Eq. (61), for every value of \(x^{+}\) and \(x^{-}\), within a reasonably small numerical error.
For values of \(x_{\infty}^{-}\) greater than \(-220\), the numerical solution deviates from the expected analytical one, while for smaller values the results are the same as those obtained with \(x_{\infty}^{-}=-220\), but with a much higher integration time. Both the boundary conditions at \(x^{+}=x_{0}^{+}\) and at \(x^{-}\to-\infty\) are given in implicit form. We therefore first need to integrate and invert the corresponding expressions (the details of this computation are reported in Appendix B). To numerically integrate the field equations, we also need to select an appropriate integration interval along \(x^{+}\), from the shock wave at \(x^{+}_{0}\) up to a maximum value \(x^{+}_{\rm max}\). We chose the interval \(x^{+}\in[x^{+}_{0},5]\) (where we set \(x^{+}_{0}=1\)). We expect the general results to hold also for larger values of \(x^{+}_{\rm max}\); we set it equal to \(5\) to keep the integration time reasonable. For larger values of \(x^{+}_{\rm max}\), the time required to complete a computation increases considerably, given the high computational cost of the algorithm. The interval on the \(x^{+}\)-axis is then discretized into a number \(n_{\rm steps}\) of small intervals, with length \(\Delta x=(x^{+}_{\rm max}-x^{+}_{0})/n_{\rm steps}\). The number of steps was set equal to \(n_{\rm steps}=1000\). We checked that, for larger values of \(n_{\rm steps}\), the results of the integration remain qualitatively the same, at the price of having, again, a much longer computational time. Each point of the discretized \(x^{+}\)-axis is labeled by an index \(i\). We choose to discretize the derivatives in the \(x^{+}\) direction accordingly, \[\partial_{+}\phi = \frac{\phi(x^{+}_{i+1},x^{-})-\phi(x^{+}_{i},x^{-})}{\Delta x}+\mathcal{O}(\Delta x)\,, \tag{83}\] \[\partial_{+}\rho = \frac{\rho(x^{+}_{i+1},x^{-})-\rho(x^{+}_{i},x^{-})}{\Delta x}+\mathcal{O}(\Delta x)\,. \tag{84}\] Notice that, with this choice, our algorithm converges to the solution only at first order in \(\Delta x\). However, since we are interested in the qualitative behavior of the solutions, Eqs. (83) and (84) represent a good approximation for the derivatives of \(\phi\) and \(\rho\). Along \(x^{-}\) at fixed \(x^{+}\), the field equations reduce to ordinary differential equations. Therefore, for each step in the \(x^{+}\) direction, we numerically integrate the equations along \(x^{-}\) by means of a 4th-order Runge-Kutta algorithm. The outcome, thus, is a list of \(x^{-}\)-profiles of \(\phi\) and \(\rho\) for each point of the discretized interval on \(x^{+}\) (see Fig. 8). For all the cases considered here, the values of the parameters are set equal to \(\lambda=1\) and \(\ell=1\). The mass of the evaporating solution is fixed equal to \(\mathcal{M}=0.1\) (in these units). As a first test, we have verified the accuracy of the integration algorithm in the absence of backreaction, i.e., for \(N=0\), by comparing the numerical solution with the analytical classical one (61). Overall, we find that the relative difference between the numerical and the analytical solutions is smaller than \(1\,\%\) as long as we consider large negative values of \(x^{-}\), while it increases for \(x^{-}\to 0\), staying however \(\lesssim 20\,\%\) (see Fig. 9).

Figure 8: Schematic representation of the numerical algorithm adopted to numerically integrate Eqs. (76a) and (76d). We discretize the \(x^{+}\)-axis and, for each interval on the latter, we numerically integrate the field equations along \(x^{-}\) using a Runge-Kutta algorithm.
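To make the structure of the scheme concrete, the fragment below sketches the marching strategy just described: a first-order forward difference in \(x^{+}\) (Eqs. (83)-(84)) combined with an RK4 sweep along \(x^{-}\) at each slice. It is a schematic skeleton of ours only; the names `boundary_data` and `rhs_76` are placeholders standing for the boundary conditions of Appendix B and the \(x^{-}\) evolution extracted from Eqs. (76).

```python
import numpy as np

x0, xmax, nsteps = 1.0, 5.0, 1000        # x+ interval and resolution (Sec. 7.3)
dx = (xmax - x0) / nsteps
xm = np.linspace(-220.0, 0.0, 4001)      # x- grid; boundary data set at x- = -220

def rk4_sweep(y0, rhs, xgrid):
    """Integrate dy/dx- = rhs(x-, y) along the x- grid with classical RK4."""
    y = np.empty((len(xgrid),) + np.shape(y0))
    y[0] = y0
    for i in range(len(xgrid) - 1):
        h = xgrid[i + 1] - xgrid[i]
        k1 = rhs(xgrid[i], y[i])
        k2 = rhs(xgrid[i] + h/2, y[i] + h/2*k1)
        k3 = rhs(xgrid[i] + h/2, y[i] + h/2*k2)
        k4 = rhs(xgrid[i] + h, y[i] + h*k3)
        y[i + 1] = y[i] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

# Outer loop (schematic): march in x+ with the first-order update of
# Eqs. (83)-(84), storing an x- profile of the fields at every slice.
# for i in range(nsteps):
#     profile = rk4_sweep(boundary_data(i), rhs_76, xm)   # placeholders
```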
We checked that increasing \(n_{\text{steps}}\) leads to an improvement in the accuracy of the integration algorithm (the relative differences decrease), without, however, altering the qualitative final results and at the price of having a much longer computational time. Although a relative discrepancy of \(20\,\%\) may seem quite important, one should consider that, in the presence of backreaction (see below), the extremal solution is reached at values of \(x^{-}\) for which the relative discrepancy always stays below \(5\,\%\). As in the following we are not interested in the exact details of the evaporation process, but rather in its qualitative evolution and outcome, we will adopt \(n_{\text{steps}}=1000\) anyway, favouring time efficiency over high precision. After this preliminary test, we analyse three different cases of increasing \(N\): \(N=0\), \(N=24\) and \(N=2400\), to study the backreaction effects in different regimes. The \(x^{-}\)-profiles of \(\phi\) and \(\rho\), together with their variations \(\Delta\phi=|\phi(x^{-})-\phi_{\text{ext}}(x^{-})|\), \(\Delta\rho=|\rho(x^{-})-\rho_{\text{ext}}(x^{-})|\) with respect to the extremal configurations, are computed numerically for several values of \(x^{+}\) in the range \(x^{+}\in[x_{0}^{+},5]\). At \(x^{+}=x_{0}^{+}\), the numerical solutions match the extremal ones exactly, as they should according to the boundary condition imposed at the shock wave. For simplicity, in Figs. 10 to 12, we only show the plots for \(x^{+}=5\) and \(N=0\), \(24\), \(2400\). The plots for the other values of \(x^{+}\) in the range considered here have the same qualitative behavior. Moreover, given the increase in computational errors near \(x^{-}\sim 0\), we performed the integration in the range \(x^{-}\in[-220,0]\) for convenience. We have checked that the results do not differ from those shown, even if we extend the \(x^{-}\) axis to positive values: the convergence to the extremal solution either does not occur on the entire axis or always occurs in the \(x^{-}<0\) region. For \(N=0\), i.e., in the absence of backreaction, we see, as expected, that the black-hole solution remains different from the extremal one for every value of the coordinate \(x^{-}\). For \(N=24\), namely when backreaction effects begin to become relevant, we see that, although \(\Delta\phi\) and \(\Delta\rho\) remain different from zero for all values of \(x^{-}\), they begin to decrease towards zero after reaching a maximum. For \(N\gg 24\), i.e., \(N\sim 2400\), when backreaction effects become stronger, we see that \(\Delta\phi\) and \(\Delta\rho\) always vanish at some finite (negative) value of \(x^{-}\). In general, the larger \(N\), i.e., the stronger the backreaction effects, the faster the evaporating configuration reaches the extremal GS. As remarked above, the convergence to the extremal configuration occurs at values of \(x^{-}\) for which relative numerical errors are less than \(5\,\%\).

Figure 9: Relative difference between the numerical (obtained by setting \(N=0\)) and the exact (61) solutions at \(x^{+}=x_{0}^{+}\) and \(x^{+}=5\). As can be seen, the difference increases as we move towards small negative values of \(x^{-}\). The numerical integration has been carried out on the \(x^{+}\) interval \([x_{0}^{+},x_{\text{max}}^{+}]=[1,5]\) and, in the \(x^{-}\) direction, on the interval \([-220,0]\). The parameters of the integration are the following: \(\lambda=\ell=1\), \(\mathcal{M}=0.1\) (in these units), \(n_{\text{steps}}=1000\), so that \(\Delta x=4\cdot 10^{-3}\). We checked that, for higher values of \(n_{\text{steps}}\), the relative differences decrease, without however altering the qualitative final results and with a much higher computational time.
It is very important to notice that the convergence of the excited, evaporating solution towards the extremal one is non-monotonic. As one can see clearly from the plots shown (but the same happens also for other values of \(N\) not shown here), \(\Delta\phi\) and \(\Delta\rho\) stay almost flat in the region of large \(\phi\) (corresponding to \(x^{-}\ll 0\)). Then, they reach a sharp maximum at relatively large values of \(x^{-}\) before falling rapidly towards zero. This behavior cannot be traced back to backreaction effects, since it is present also in the \(N=0\) case. The sharp maximum seems to be related to the presence of the maximum in the potential \(\mathscr{V}\) at (relatively) small values of the dilaton (see Fig. 1), thus to a self-interaction effect of the dilaton. On the other hand, this maximum in \(\mathscr{V}\) is also responsible both for the presence of two horizons (instead of only one) and for the small/large-black-hole phase transition (see Sections 2.3 and 4). We will come back to this intriguing point in Section 9. Summarizing, the numerical integration of Eqs. (76) clearly shows that, differently from what was obtained in the rough quasistatic description, the effect of the backreaction is to bring the excited, evaporating solution back to the extremal state after a _finite_ time, when \(N\) is chosen to be sufficiently large, i.e., at least \(N\sim\mathcal{O}(10^{2}-10^{3})\).

Figure 10: **Upper figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) dilaton solutions (left figure), and difference between the two (right figure), as functions of \(x^{-}\). **Lower figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) metric solution (left figure), and difference between the two (right figure), as functions of \(x^{-}\). All figures are evaluated at \(x^{+}=5\) and with \(N=0\), in units where \(\lambda=\ell=1\).

Figure 11: **Upper figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) dilaton solutions (left figure), and difference between the two (right figure), as functions of \(x^{-}\). **Lower figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) metric solution (left figure), and difference between the two (right figure), as functions of \(x^{-}\). All figures are evaluated at \(x^{+}=5\) and with \(N=24\), in units where \(\lambda=\ell=1\).

Figure 12: **Upper figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) dilaton solutions (left figure), and difference between the two (right figure), as functions of \(x^{-}\). **Lower figures:** Comparison between the numerical (solid blue line) and analytical extremal (dashed orange line) metric solution (left figure), and difference between the two (right figure), as functions of \(x^{-}\). All figures are evaluated at \(x^{+}=5\) and with \(N=2400\). The \(x^{-}\) axis has been cut at the point where the numerical solution matches the extremal one.
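A minimal sketch of the diagnostic behind Figs. 10-12 (our illustration, not the authors' code): given \(x^{-}\)-profiles of the numerical and extremal solutions on a common grid, it computes \(\Delta\phi\), \(\Delta\rho\) and locates the first point where the evaporating solution has merged with the extremal one, within a prescribed tolerance.

```python
import numpy as np

def merge_diagnostics(phi_num, rho_num, phi_ext, rho_ext, x_minus, tol=1e-6):
    """Return Delta_phi, Delta_rho and the first x- where both drop below tol
    (None if they never do, as in the N = 0 run)."""
    dphi = np.abs(phi_num - phi_ext)
    drho = np.abs(rho_num - rho_ext)
    merged = np.flatnonzero((dphi < tol) & (drho < tol))
    x_merge = x_minus[merged[0]] if merged.size else None
    return dphi, drho, x_merge
```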
## 8 Entanglement Entropy and the Page curve

In this section, we compute the entanglement entropy (EE) of Hawking radiation, described here as a collection of \(N\) massless scalar fields, in the 2D nonsingular black-hole geometry. By assuming that the evaporation process is quasistatic, we also determine the time variation of the EE and construct the related Page curve. The EE of the radiation can be computed by using Kruskal coordinates, covering the region outside the outer event horizon of the black hole, \[\kappa X^{\pm}=\pm e^{\pm\kappa\,x^{\pm}}\,\longleftrightarrow x^{\pm}=\pm\frac{1}{\kappa}\ln\left(\pm\kappa\,X^{\pm}\right)\,, \tag{85}\] where \(\kappa\) is the surface gravity at the outer event horizon. In these coordinates, the conformal factor of the metric (53) can be written as \[e^{2\rho}=\frac{f(r)}{-\kappa^{2}X^{+}X^{-}}\,. \tag{86}\] The entanglement entropy of \(N\) massless scalar fields in two spacetime dimensions on a line can be evaluated by tracing out the degrees of freedom in a spacelike slice \([x,y]\) connecting two points. The resulting expression is\({}^{6}\) (see, e.g., Refs. [27, 28, 93, 94]) \[S_{\rm matter}=\frac{N}{6}\ln d^{2}(x,y)\,, \tag{87}\] where \(d(x,y)\) is the geodesic distance between \(x\) and \(y\). Footnote 6: This expression should also depend on ultraviolet cutoffs, which are here considered as additive constants. In principle, Eq. (87) is valid for a QFT on a flat spacetime [95], but it has been generalized to _static_ curved spacetimes [93], where \(d^{2}(x,y)\) reads \[d^{2}(x,y)=-\left[X^{+}(x)-X^{+}(y)\right]\left[X^{-}(x)-X^{-}(y)\right]e^{\rho(x)}e^{\rho(y)}\,. \tag{88}\] To compute the entanglement entropy, we construct a spacelike surface encompassing different regions of the black hole. In Fig. 13, this surface is \(\Sigma_{\rm L}\cup I\cup\Sigma_{\rm R}\), where \(\Sigma_{\rm L}\) and \(\Sigma_{\rm R}\) are two hypersurfaces in the _outside_ regions of the two copies of the black hole, where an observer collects Hawking radiation. They are the portions of the hypersurface where the radiation degrees of freedom are defined. They are anchored to two timelike surfaces (dashed black lines in Fig. 13) at the points \(b_{+}=(t_{b},b)\) (right wedge) and \(b_{-}=(-t_{b}+{\rm i}\beta/2,b)\) (left wedge). The surface \(I\) defines, instead, the interior region of the black hole. The radiation quantum state over the whole hypersurface \(\Sigma_{\rm L}\cup I\cup\Sigma_{\rm R}\) is pure. When tracing out the interior degrees of freedom in \(I\), we obtain the mixed state of the radiation described by the density matrix \(\rho_{\rm rad}\), which can therefore be used to compute the entanglement entropy. This is reminiscent of the thermofield double state of the black hole [31, 96]: the entanglement entropy takes into account the correlations between the two disjoint copies of the black hole (right and left wedges). Since we have radiation outside the black hole, there will be two copies of this thermal bath (the two regions \(\Sigma_{\rm L}\) and \(\Sigma_{\rm R}\)). In our case, Eq. (88) reads \[d^{2}(b_{+},b_{-})=-\frac{1}{\kappa^{2}}\left[e^{\kappa t_{b}+\kappa b}-e^{-\kappa t_{b}+\kappa b}e^{i\kappa\beta/2}\right]\left[-e^{-\kappa t_{b}+\kappa b}+e^{\kappa t_{b}+\kappa b}e^{-i\kappa\beta/2}\right]\frac{f(b)}{e^{2\kappa b}}\,. \tag{89}\] This is valid off-shell.
On shell, \(\kappa\to\kappa_{\rm H}\), we have \(\frac{\beta\kappa_{\rm H}}{2}=\pi\), and thus \[d^{2}(b_{+},b_{-})=\frac{4f(b)}{\kappa_{\rm H}^{2}}\cosh^{2}(\kappa_{\rm H}t_{b})\,. \tag{90}\] Finally, the EE of the matter fields is \[S_{\rm matter}=\frac{N}{6}\ln\left[\frac{4f(b)}{\kappa_{\rm H}^{2}}\cosh^{2}(\kappa_{\rm H}t_{b})\right]\,. \tag{91}\] As stressed above, this is valid as long as we consider the static case. However, \(\kappa_{\rm H}\) varies due to the evaporation process. To get a qualitative picture of the behavior of the entropy in time, we can assume that the evaporation process happens in an adiabatic way, so that we can use a quasistatic approximation. The evaporation is thus again modelled in terms of a sequence of static states of varying mass. As we have seen explicitly in Section 5.3, the quasistatic approximation is reliable as long as the semiclassical one is valid. In a first approximation, therefore, we can use the time variation of the event horizon \(r_{\rm H}=r_{\rm H}(t)\), computed as a solution of the SB law (45), and plug it into the expression of the surface gravity \[\kappa_{\rm H}(r_{\rm H})=\frac{f^{\prime}(r_{\rm H})}{2}=\frac{r_{\rm H}^{4}-2\ell^{3}r_{\rm H}}{2\lambda\left(\ell^{3}+r_{\rm H}^{3}\right)^{2}}\,. \tag{92}\] The qualitative result (obtained neglecting the irrelevant constant \(\frac{N}{6}\ln\left[4f(b)\right]\)) is plotted in Fig. 14. As in singular black-hole models, initially the entanglement entropy of the radiation grows. However, this growth reaches a maximum at the "Page time" \(t_{\rm P}\) and then the entropy starts decreasing, due to the peculiar form of the surface gravity, which is related to the absence of a singularity. This feature was found before for nonsingular black holes in Ref. [36], where it was noted that the presence of the dS core traps Hawking modes, which cause a decrease in entropy once freed from the trapping region. We find here a similar behavior. This release of information could also be related to the peculiarities of the inner horizon, which has negative surface gravity, causing an outburst of energy in the final stages of the evaporation [37], a process similar to mass inflation. The assumptions used so far are the validity of the quasistatic and the semiclassical approximations. As we have seen in Section 5.3, the semiclassical approximation (and hence also the quasistatic one) breaks down near extremality, when we reach the energy gap (52). This happens at the time corresponding to the dashed orange line in Fig. 14.

Figure 13: Penrose diagram of the maximally extended spacetime of the nonextremal configuration. The two points \(b_{+}\) and \(b_{-}\), belonging to the right and left wedges, respectively, are highlighted, and represent points anchored to two timelike curves (dashed black lines in the two wedges). The union of the three hypersurfaces \(\Sigma_{\rm L}\cup I\cup\Sigma_{\rm R}\) (the red and orange curves) is a spacelike surface and the state defined on it is pure. The radiation is defined on \(\Sigma_{\rm L}\) and \(\Sigma_{\rm R}\) and its state is mixed.

Therefore, we have to cut the Page curve when the semiclassical approximation breaks down. What happens beyond this point cannot be inferred from our semiclassical description of the dynamics. In particular, we cannot assess whether the decrease in EE continues until it becomes zero at extremality, as would be expected for an evaporation process that leaves behind a quantum pure state.
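Equation (91) is straightforward to evaluate along the quasistatic history. The sketch below is an illustration under the same assumptions as the earlier quasistatic snippet; the array `r_of_t` stands for a solution \(r_{\rm H}(t)\) obtained there, and \(N=24\) is an arbitrary sample value. It drops the constant \(\frac{N}{6}\ln[4f(b)]\), evaluates \(\frac{N}{6}\ln\left[\cosh^{2}(\kappa_{\rm H}t_{b})/\kappa_{\rm H}^{2}\right]\) with \(\kappa_{\rm H}\) from Eq. (92), and uses a numerically safe \(\ln\cosh\) to avoid overflow at large arguments.

```python
import numpy as np

lam = ell = 1.0
N = 24   # sample number of matter fields

def kappa_H(r):   # surface gravity, Eq. (92)
    return (r**4 - 2.0*ell**3*r) / (2.0*lam*(ell**3 + r**3)**2)

def log_cosh(x):  # overflow-safe ln cosh(x)
    x = np.abs(x)
    return x + np.log1p(np.exp(-2.0*x)) - np.log(2.0)

def S_matter(t, r_of_t):
    """Eq. (91) up to the constant (N/6) ln[4 f(b)]."""
    k = kappa_H(r_of_t)
    return (N/6.0) * (2.0*log_cosh(k*t) - 2.0*np.log(k))

# t, r_of_t: time grid and r_H(t) from the quasistatic integration sketched
# in Section 5; plotting S_matter(t, r_of_t) gives the qualitative Page curve.
```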
Results from AdS\({}_{2}\) quantum gravity indicate the occurrence of a quantum phase transition from the LDS vacuum to the AdS\({}_{2}\) CDV [68]. Similarly to what happens in the case of extremal charged black holes, the near-extremal, near-horizon state of 4D nonsingular Hayward black holes, described by the AdS\({}_{2}\times S^{2}\) spacetime, could have a purely topological entropy content, explained in terms of AdS\({}_{2}\) fragmentation [32].

Figure 14: Qualitative time variation of the entanglement entropy of matter fields, according to Eq. (91), where we considered the time variation of the surface gravity, calculated using the solution of Eq. (45). The vertical dashed green line corresponds to the maximum of the curve (the Page time \(t_{\rm P}\)), while the vertical dashed orange one indicates the time \(t_{\rm B}\) at which the semiclassical approximation should break down.

## 9 Conclusions

In this paper we have investigated the thermodynamics and the classical and semiclassical dynamics of 2D, AF, nonsingular dilatonic black holes with a dS core. The aim has been to understand both the end point of the black-hole evaporation and the related information flow. Our analytic and numerical results give a clear indication that the latter leads to a regular extremal configuration in a finite amount of time. This conclusion is supported both by a thermodynamic analysis, showing that extremal configurations are energetically preferred, and by the numerical integration of the semiclassical field equations, which allows us to take into account the full dynamics of the backreaction. Concerning the information flow during the evaporation process, the Page curve we have constructed clearly shows the presence of a maximum at the Page time, followed by a descent, in which information is recovered from the hole. These features also have a nice physical explanation in terms of the trapping/flow of Hawking modes in the region between the inner and outer horizons. On the other hand, the intrinsic limitation of our approach, the validity of the semiclassical approximation, prevents us from drawing a definitive conclusion on the issue. Near extremality, the semiclassical approximation breaks down and a mass gap is generated, which is a well-known feature of AdS\({}_{2}\) quantum gravity. This gap separates the extremal configuration, endowed with a linear dilaton, from another vacuum of the theory, the AdS\({}_{2}\) CDV. There are strong indications that, once the former is reached during evaporation, a quantum phase transition occurs that causes a transition to the latter. A similar phase transition between the extremal LDS and the AdS\({}_{2}\) CDV seems to occur also in the case under consideration. If this could be independently confirmed, it would imply that the end point of the evaporation process is the full AdS\({}_{2}\) spacetime endowed with a constant dilaton, which is a perfectly regular configuration. This would be a clear confirmation that the evaporation process is unitary. In terms of the 4D solution, which we are modelling here with 2D dilaton gravity, this corresponds to a (topology-changing) phase transition between an extremal solution, which is an AF spacetime with an infinitely long throat, and an AdS\({}_{2}\times S^{2}\) spacetime, describing instead the near-horizon region. Our investigation has also revealed some intriguing, peculiar features of the semiclassical dynamics of regular black holes with a dS core.
The approach to extremality of excited black-hole solutions is not monotonic, but presents a sharp maximum of \(\Delta\rho\) and \(\Delta\phi\) near extremality, at relatively small values of the dilaton. The presence of this maximum can be explained in terms of the self-interaction of the dilaton, i.e., the presence of a maximum in the dilaton potential \(\mathscr{V}(\phi)\). This, in turn, is what determines the presence of two horizons, instead of a single one, the presence of a large/small-black-hole phase transition and also the decreasing tail in the Page curve for the EE of Hawking radiation. Since the behavior of the dilatonic potential is essentially determined by the presence of a dS core, all the previous features are a consequence of the absence of singularities, which is achieved exactly thanks to this dS behavior in the black-hole interior. This strongly supports the hypothesis that the nonunitary evolution of the evaporation process has to be traced back to the presence of a singularity in the black-hole interior.

## 10 Acknowledgements

We thank D. Grumiller for valuable comments and for having drawn our attention to Ref. [69].

## Appendix A No divergences in the ground state stress-energy tensor

In Ref. [62], counterterms are added to the Polyakov action (as in Eq. (71)) to eliminate a divergence in the stress-energy tensor of the GS, \(\langle T_{\mu\nu}\rangle\sim\phi^{2a}\), which, for \(a>0\), diverges at asymptotic infinity \(\phi\to\infty\). We show that, in our case, the rapid fall-off of the potential at infinity allows us to avoid this divergent behavior. We consider, as an example, \(\langle T_{+-}\rangle_{\rm GS}\), but similar considerations also hold for the other components of \(T_{\mu\nu}\). Using Eq. (64), we have \[\langle T_{+-}\rangle_{\rm GS}=-\frac{N}{12}\partial_{+}\partial_{-}\rho\bigg{|}_{\rm GS}\,, \tag{93}\] where \(\rho\), computed at the GS, is given by Eq. (60) \[\rho_{\rm GS}=\frac{1}{2}\ln\left(\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}\right)\,,\qquad\mathcal{J}\equiv\frac{1}{\lambda^{2}}\int^{\phi}\mathrm{d}\psi\,\mathscr{V}(\psi)\,. \tag{94}\] Therefore \[\partial_{-}\rho_{\rm GS}=\frac{1}{2}\frac{\partial_{-}\mathcal{J}_{\rm GS}}{\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}}=\frac{1}{2}\frac{\partial_{-}\phi_{\rm GS}\,\mathcal{J}_{\rm GS,\phi}}{\frac{2\mathcal{M}_{\rm ext}}{\lambda}+\mathcal{J}_{\rm GS}}=-\frac{1}{4\lambda}\,\mathscr{V}_{\rm GS}\,, \tag{95}\] where \(\mathcal{J}_{,\phi}=\lambda^{-2}\mathscr{V}\) and we used the vacuum solution Eq. (55b). Differentiating with respect to \(x^{+}\) yields \[\partial_{+}\partial_{-}\rho_{\rm GS}=-\frac{1}{4\lambda}\partial_{+}\phi\,\mathscr{V}_{\rm GS,\phi}=-\frac{1}{8}e^{2\rho}\mathscr{V}_{\rm GS,\phi}\,. \tag{96}\] For \(\phi\to\infty\), \(e^{2\rho}\to\mathrm{constant}\), while \(\mathscr{V}_{,\phi}\sim-\phi^{-3}\to 0\). So we do not have divergences.

## Appendix B Boundary conditions for numerical integration

### Boundary condition at \(x^{+}=x^{+}_{0}\)

For our numerical integration, we set \(\lambda=1\) and \(\ell=1\). When the shock wave is turned on, the solution is set equal to the extremal configuration (60). Our procedure to implement this boundary condition is the following:
1. We first integrate the equation for the dilaton (the expression on the right of Eq. (60)), leading to an implicit relation between the dilaton and the coordinates;
2. We numerically invert it to obtain explicitly \(\phi=\phi(x^{+},x^{-})\);
3.
We plug the result into the equation for \(e^{2\rho}\). The first point is achieved by solving the differential equation \[\frac{\mathrm{d}r_{*}}{\mathrm{d}\psi}=\frac{1}{\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}+\mathcal{J}(\psi)}\,. \tag{97}\] This integral can be done analytically. With \(\mathcal{M}_{\mathrm{ext}}=\frac{1}{3\sqrt[3]{2}\,\ell}\) and \(\ell=\lambda=1\), it reads \[r_{*,\mathrm{ext}}=2^{5/3}\ln\left(\left|\sqrt[3]{2}-\phi\right|\right)+\frac{3\phi}{2^{2/3}}+\frac{3}{\sqrt[3]{2}-\phi}+\frac{\ln\left(2\phi+\sqrt[3]{2}\right)}{2\sqrt[3]{2}}\,. \tag{98}\] We then numerically invert the result to get \(\phi=\phi(x^{+},x^{-})\) and plug it into Eq. (60) to get \(\rho=\rho(x^{+},x^{-})\), both evaluated at extremality.

### Boundary condition at \(x^{-}\to-\infty\)

The procedure is exactly the same as before, with \(\mathcal{M}_{\mathrm{ext}}\) replaced by a different value of the mass. Here, we choose \(\mathcal{M}=0.1\,\lambda\). The coordinate \(r_{*}\) as a function of \(\phi\) in this case reads \[r_{*}\simeq-1.37\ln(|0.47-\phi|)+25.42\ln(|4.96-\phi|)+5\phi+0.95\ln(\phi+0.43)\,. \tag{99}\] Again, we numerically invert this expression to get \(\phi=\phi(x^{+},x^{-})\). In this case, however, the quantity we are (improperly) calling \(r_{*}\) contains the function \(F(x^{-})\), which has to be computed to fully obtain \(\phi\) as a function of the coordinates. Starting from \[\mathscr{F}[\phi(x^{+},x^{-})]\equiv\int^{\phi}\frac{\mathrm{d}\psi}{\frac{2\mathcal{M}}{\lambda}+\mathcal{J}}=\frac{\lambda}{2}\left[x^{+}-x_{0}^{+}-F(x^{-})\right]\,, \tag{100}\] we note that, when evaluated at \(x^{+}=x_{0}^{+}\), it reads \[\mathscr{F}[\phi(x_{0}^{+},x^{-})]=-\frac{\lambda}{2}F(x^{-})\,, \tag{101}\] from which we get \[\phi(x_{0}^{+},x^{-})\equiv\phi_{0}(x^{-})=\mathscr{F}^{-1}\left[-\frac{\lambda}{2}F(x^{-})\right]\,. \tag{102}\] This expression for \(\phi(x_{0}^{+},x^{-})\) can be used in Eq. (61) to compute \(\mathcal{J}_{0}\). This gives us a differential equation for \(F\), which reads \[F^{\prime}(x^{-})=\frac{\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}+\mathcal{J}_{0,\mathrm{ext}}(x^{-})}{\frac{2\mathcal{M}}{\lambda}+\mathcal{J}_{0}\left[\phi_{0}(x^{-})\right]}\equiv\frac{\frac{2\mathcal{M}_{\mathrm{ext}}}{\lambda}+\mathcal{J}_{0,\mathrm{ext}}(x^{-})}{\frac{2\mathcal{M}}{\lambda}+\mathcal{J}_{0}\left(\mathscr{F}^{-1}\left[-\frac{\lambda}{2}F(x^{-})\right]\right)}\,. \tag{103}\] This equation is solved numerically. The integration constant is chosen so that, once the solution for \(F\) is plugged into the dilaton solution (61) at \(x^{+}=x_{0}^{+}\), the dilaton is equal to the extremal one computed at \(x_{0}^{+}\). This guarantees the continuity of the scalar field across the shock wave. Once \(F(x^{-})\) is computed, we plug it into \(\phi=\phi(x^{+},x^{-})\). With this and Eq. (61), we also obtain \(\rho=\rho(x^{+},x^{-})\) above extremality.
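As an illustration of step 2 of the procedure above, the following sketch of ours inverts the closed form (98) by bracketed root finding on the branch \(\phi>\sqrt[3]{2}\) (outside the extremal horizon), with \(\lambda=\ell=1\) as in the text; on this branch \(r_{*,\mathrm{ext}}(\phi)\) is monotonic, so the inversion is well posed.

```python
import numpy as np
from scipy.optimize import brentq

CBRT2 = 2.0**(1.0/3.0)

def rstar_ext(phi):
    """Tortoise coordinate at extremality, Eq. (98) with lambda = ell = 1."""
    return (2.0**(5.0/3.0)*np.log(abs(CBRT2 - phi))
            + 3.0*phi/2.0**(2.0/3.0)
            + 3.0/(CBRT2 - phi)
            + np.log(2.0*phi + CBRT2)/(2.0*CBRT2))

def phi_of_rstar(rs, lo=CBRT2*(1.0 + 1e-6), hi=1e6):
    """Numerically invert Eq. (98) on the branch phi > cbrt(2)."""
    return brentq(lambda p: rstar_ext(p) - rs, lo, hi)

print(phi_of_rstar(-10.0))   # e.g. the dilaton value at r_* = -10
```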
2308.08301
Proposal for Numerical Benchmarking of Fluid-Structure Interaction in Cerebral Aneurysms
Computational fluid dynamics is intensively used to deepen the understanding of aneurysm growth and rupture in the attempt to support physicians during therapy planning. Numerous studies have assumed fully-rigid vessel walls in their simulations, whose sole hemodynamics may fail to provide a satisfactory criterion for rupture risk assessment. Moreover, direct in-vivo observations of intracranial aneurysm pulsation have been recently reported, encouraging the development of fluid-structure interaction for their modelling and for new assessments. In this work, we describe a new fluid-structure interaction benchmark setting for the careful evaluation of different aneurysm shapes. The studied configurations consist of three real aneurysm domes positioned on a toroidal channel. All geometric features, meshing characteristics, flow quantities, comparisons with a rigid-wall model and corresponding plots are provided. Reported results emphasize the alteration of flow patterns and hemodynamic descriptors when moving from the rigid-wall model to the complete fluid-structure interaction framework, thereby underlining the importance of the coupling between hemodynamics and the surrounding vessel tissue.
Aurèle Goetz, Pablo Jeken Rico, Yves Chau, Jacques Sédat, Aurélien Larcher, Elie Hachem
2023-08-16T12:02:01Z
http://arxiv.org/abs/2308.08301v1
# Proposal for Numerical Benchmarking of Fluid-Structure Interaction in Cerebral Aneurysms

###### Abstract

Computational fluid dynamics is intensively used to deepen the understanding of aneurysm growth and rupture in the attempt to support physicians during therapy planning. Numerous studies have assumed fully-rigid vessel walls in their simulations, whose sole hemodynamics may fail to provide a satisfactory criterion for rupture risk assessment. Moreover, direct in-vivo observations of intracranial aneurysm pulsation have been recently reported, encouraging the development of fluid-structure interaction for their modelling and for new assessments. In this work, we describe a new fluid-structure interaction benchmark setting for the careful evaluation of different aneurysm shapes. The configurations consist of three real aneurysm domes positioned on a toroidal channel. All geometric features, meshing characteristics, flow quantities, comparisons with a rigid-wall model and corresponding plots are provided. The results emphasize the alteration of flow patterns and haemodynamic descriptors between the different cases and in particular with respect to the rigid-wall model, thereby underlining the importance of fluid-structure interaction modelling. intracranial aneurysm, haemodynamics, fluid-structure interaction, arterial tissue modelling.

## 1 Introduction

Intracranial aneurysms (IAs) are pathological dilations of blood vessels that bear the risk of rupture and subsequent subarachnoid haemorrhage, which is associated with high mortality and morbidity rates [1]. According to prevalence studies, around 3% of the world population hosts at least one of these aneurysms [2]. When identified, they raise the question of a potential clinical intervention, which also entails a non-negligible threat [3, 4]. As a result, there is a need for risk-evaluation tools in order to comprehensively assess the stability of IAs. So far, clinical decisions have been mostly based on the size, shape and location of the bulge. However, it has been shown that the risk evaluation accuracy is limited when using these parameters only [5]. That is why research efforts are currently being made to model and simulate patient-specific inner haemodynamics of IAs through Computational Fluid Dynamics (CFD) in order to aid physicians in decision-making. Computational models of aneurysm biomechanics indeed hold great promise for risk stratification, as haemodynamic features reveal key correlations with future aneurysm growth [6, 7]. Performing numerical simulations for large numbers of aneurysm cases has raised multiple challenges in the scientific community, ranging from the efficient and systematic generation of adapted computational meshes to the solving of coupled systems of equations complemented with complex rheology models and tailored boundary conditions [8]. Most of these challenges aim at enriching the simulation fidelity towards real patient-specific predictive modelling. Among them, moving from rigid arterial wall modelling to adequate Fluid-Structure Interaction (FSI) simulations stands as a key research goal. Pure CFD modelling based on the rigid-wall assumption has been shown to overestimate Wall Shear Stresses (WSS) [9; 10], which in turn casts doubt on its ability to provide satisfactory criteria for rupture risk assessment. Fidelity can be refined by simulating the vascular flow in conjunction with vessel wall deformation via relevant coupled FSI modelling.
The early research effort in this field was carried out by Torii et al. [11; 9; 12]. The authors compared the results of fully-coupled FSI simulations using elastic and hyperelastic neo-Hookean wall behaviour [11]. They later investigated three aneurysms through elastic FSI simulations, comparing the obtained results to a fully-rigid configuration [9]. It has been suggested that the need for FSI modelling is geometry-dependent, with WSS in areas of flow impingement being overestimated in the rigid configuration. Going a step further, a few studies reported the specific benefit of FSI simulations when the wall thickness is modelled accurately [10; 13]. Voss et al. imaged the geometry of a single aneurysm dome using micro-CT after the tissue was resected in surgical clipping [13]. Subsequent FSI simulations of the acquired geometry compared the results obtained with a uniform-thickness configuration (\(0.3\,mm\)) and with the specific thickness distribution measured in micro-CT, revealing peak local stress variations of around 50%. A few other research teams have proposed similar studies [10; 14], but always investigating different cohorts of aneurysm cases composed of only a few specimens. This, along with different modelling assumptions, undermines comparisons between them, thereby limiting reproducibility of the reported results. At the same time, it is important to recall that FSI in intracranial aneurysms is a complex multi-parametric problem, which would require large cohorts of investigated cases in order to draw robust conclusions. However, simulating fluid-structure coupled physics involves a certain algorithmic complexity and is computationally costly. In addition, until recently, routine clinical imaging techniques did not allow the visualization of wall movement in the brain, which did not encourage the development of arterial compliance modelling for intracranial aneurysms. As a result, no general consensus has been reached yet on the relevance of FSI modelling in the context of IA risk assessment. Over the past few years, IA pulsations have been directly observed through clinical imaging [15; 16; 17; 18]. This opens new perspectives for the FSI modelling of brain aneurysms. Indeed, in-vivo data acquisition has substantially benefited the modelling of aortic aneurysms, with the prescription of patient-specific wall thicknesses [19] and even local tissue stiffness estimation through 4D-flow analysis. However, this has hardly been conceivable in the brain, as most aneurysms have wall thicknesses ranging between \(30\,\mu m\) and \(400\,\mu m\) [20; 21], hence falling below common medical imaging resolutions. Supported mostly by the development of very precise ECG-gated 4D-CTA [16; 17], the situation is progressively changing, and future research will surely benefit from additional in-vivo data to feed FSI models. So far, aneurysm pulsation has been mostly expressed in terms of overall bulge volume variation over a cardiac cycle, with peak reported values of 20% [18]. Even though these measurements suffer from large uncertainty, especially for small bulges [22; 23], this new insight motivates the development of FSI modelling of IAs beyond the scarce existing literature. Furthermore, publications have already reported that IAs demonstrate very different mechanical properties and thicknesses [24; 21], due to several biological phenomena linked with their formation and growth [7], inducing potential local weaknesses in pathological tissue.
If future medical equipment allows the in-vivo localization of these weaker spots, as done post-mortem in [13], FSI models will surely contribute to building precise rupture risk assessment tools. On top of that, even if measurement data is still lacking, assessing the sensitivity of FSI results to several physical parameters will give crucial insights for the future of IA modelling.

In this work, we describe a new benchmark setting using an idealized geometry for the careful evaluation of different aneurysm dome shapes in interaction with the blood flow dynamics. Although the analysis of patient-specific geometries remains an ultimate goal, idealized IA geometries are still missing for studying FSI-related phenomena in a more controlled manner. Idealized geometries of sidewall aneurysms have been investigated almost exclusively through rigid-wall CFD simulations [25, 26, 27], showing the sensitivity of haemodynamics to various geometrical parameters. They proved to be very useful, in particular for studying the impact of certain modelling assumptions and boundary conditions [27], or for highlighting specific trends that could be reproduced. In this work, we propose a novel FSI modelling framework based on the Variational Multiscale Method (VMS) for both the fluid and solid solvers. It is employed to investigate the proposed benchmark setting along with three real aneurysm domes. The introduced benchmark geometry is extremely versatile, as bulge shapes can be substituted while keeping the general case settings unchanged. The simplified problem offers a better environment to draw conclusions from a smaller manifold of explored configurations. All geometric features, meshes, flow quantities, comparisons with the rigid-wall model and corresponding plots are provided. We took care that the test cases are easy to reproduce while still supporting the necessary conclusions on the system's sensitivity to physical and geometrical parameters. The design of the proposed benchmark is described in the following section, along with the employed methods for solving the coupled physics. Then, after validating the method on the well-established pressure wave benchmark proposed in [28], we explore the possibilities of the proposed geometry and shed light on bulge shapes that might benefit substantially from the modelling of compliant arterial tissue.

## 2 Materials and Methods

### Design of the proposed FSI case

#### 2.1.1 A simple but versatile geometry

A cut view of the proposed benchmark is shown in Figure 1. With an inlet diameter of \(1.4\,mm\), the case mimics a simplified sidewall aneurysm located at the last segments (C6-C7) of a human Internal Carotid Artery (ICA) [29]. These segments, named ophthalmic (C6) and communicating (C7), are fully located in the subarachnoid space and constitute commonly known locations for sidewall IAs [30]. The basic aneurysm bulge is designed as a perfect sphere (\(4.4\,mm\) diameter) intersecting the toroidal geometry. Proportions are inspired by previous work [27, 26]. The wall thickness is set to \(200\,\mu m\), as will be discussed in the following section. Given that they are both unrealistic and a potential source of problems in finite-element simulations, the sharp angles at the neck are smoothed, resulting in the 3D geometry visible in Figure 2. In the following, we refer to this idealized geometry as R (for Reference).

Figure 1: Schematic illustration of the proposed benchmark (dimensions are given in mm).
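For readers who want to re-create the reference geometry before the meshes are introduced in Section 2.1.5, the sketch below outlines the torus-plus-sphere construction using the Gmsh Python API mentioned later in the text. It is a minimal illustration, not the authors' released script: the torus major radius, the half-torus angle, and the sac placement are assumptions to be read off Figure 1, and the neck smoothing of the final geometry is omitted.

```python
import math
import gmsh

# Minimal sketch of the idealized geometry: a half-torus channel (both ends
# on the y = 0 plane) fused with a spherical sac. Dimensions in mm.
R_TUBE = 0.7    # tube radius, from the 1.4 mm inlet diameter
R_SAC = 2.2     # sac radius, from the 4.4 mm sphere diameter
R_MAJOR = 5.0   # torus major radius -- placeholder, to be read off Figure 1

gmsh.initialize()
gmsh.model.add("idealized_sidewall_IA")
occ = gmsh.model.occ

torus = occ.addTorus(0, 0, 0, R_MAJOR, R_TUBE, angle=math.pi)
# Sphere centred on the sidewall of the channel (placement is an assumption).
sac = occ.addSphere(0, R_MAJOR, R_TUBE + 1.2, R_SAC)
lumen, _ = occ.fuse([(3, torus)], [(3, sac)])
occ.synchronize()

gmsh.option.setNumber("Mesh.MeshSizeMax", 0.12)  # core element size of Sec. 2.1.5
gmsh.model.mesh.generate(3)
gmsh.write("idealized_IA.msh")
gmsh.finalize()
```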
Going a step further, three realistic bulge shapes taken from [31] have been employed, adapting them manually to fit the neck of our idealized geometry. The resulting shapes, presented in Figure 3, will be referred to as S1, S2, S3 (for Specific). The benchmark's modularity provides a functional exploration framework while preserving the biological and computational environment of IA modelling.

#### 2.1.2 Choice of physical parameters

FSI in the context of IAs is undermined by uncertainty regarding pathological tissue characteristics. Research efforts have been carried out to measure the material properties of resected IA domes after surgical clipping [33, 24, 34], analysing the alteration of collagen fibre architectures and reporting a substantial scatter in ultimate stress and Young moduli between specimens. Even on the surface of a single bulge, it has been shown that distinct regions can exhibit very different material properties [34]. IA wall thickness follows a similar trend. Already in 1978, the authors of [21] studied 45 clipped IAs and presented a diverse spectrum of wall thickness profiles. This pathological tissue heterogeneity has been further quantified in several publications [34, 35]. However, due to the lack of _in-vivo_ patient-specific data, most previous FSI studies modelling IAs employed constant wall thicknesses and isotropic mechanical properties [11, 10], with the Neo-Hookean or Mooney-Rivlin models being commonly applied [11, 9, 10, 13]. Even though implementing more realistic anisotropic models such as HGO [36] is technically not a problem, applying them in a meaningful way is arduous, as the literature does not provide insights into the preferred orientation of fibres in pathological tissue. Furthermore, it has often been reported that, while hyperelastic modelling is required given the observed deformation level of arterial walls, the specific non-linearity employed has only little effect on the obtained results [10, 11]. This supports our choice of a Neo-Hookean model, as long as no further information is available about patient-specific tissue characteristics. In addition, relying on a widely applied model makes the case easier to reproduce. Regarding the selected wall characteristics, a typical Young modulus of \(E=1\,\mathrm{MPa}\) (with Poisson ratio \(\nu=0.45\)) is frequently reported [11, 9, 10, 13], and prescribed wall thicknesses range from \(200\,\mu m\) [6, 10] to \(350\,\mu m\) [14], with some studies reducing it drastically at the aneurysm bulge [9, 6]. In our case, to circumvent the difficult choice of absolute parameter values based on experimental measurement and characterization, we decide to rely on the rare visual evidence of brain arterial wall movement. As mentioned in the introduction, recent studies have reported IA pulsation observations through direct _in-vivo_ imaging [15, 16, 17, 18]. We use these volume variation measurements as an approximate reference and adjust the wall stiffness to calibrate the amplitude of the movement. As no guideline exists to vary the thickness and stiffness of the walls in a meaningful way, we prefer to alter the stiffness only, viewed as a global wall structural resistance. In the following, we explore three configurations (R1, R2, R3) with \(E_{1}=1.38\,\mathrm{MPa}\) (Lame coefficient \(\mu_{1}=0.4\,\mathrm{MPa}\)), \(E_{2}=E_{1}/2\), \(E_{3}=E_{2}/2\).
These configurations should result in pressure-induced volume changes that approximately cover the spectrum of reported values [18]. To draw the focus solely on the aneurysm bulge's interaction with the flow, we decide to drastically raise the stiffness of the artery to \(E_{artery}=5E_{1}\). The bulge area, where the lower stiffness is prescribed, is shown in red in Figure 2. Note that the intermediary stiffness \(E_{2}\) is employed for the Specific shapes (S1-3). Thanks to the abundant literature, choosing fluid properties is less problematic. A shear-thinning Carreau-Yasuda rheology model is implemented as in Eq. (1). The employed parameters, based on [37], are summarized in Figure 2.

\[\mu(\dot{\gamma})=\mu_{\infty}+(\mu_{0}-\mu_{\infty})\,(1+(\lambda\dot{\gamma})^{a})^{(n-1)/a} \tag{1}\]

Figure 2: Overview of the case settings and parameters. The generalized inflow waveform has been adapted from [32]. The area coloured in red is where a lower solid stiffness is prescribed (\(E_{aneurysm}\)). B.C. stands for Boundary Conditions.

#### 2.1.3 Boundary conditions

In the fluid, inlet velocities and outlet pressure conditions are prescribed as follows:

\[\begin{split}\mathbf{v}(\mathbf{x},t)&=V(t)\left(1-\left(\frac{||\mathbf{x}-R_{torus}||}{r_{torus}}\right)^{2}\right)\,\mathbf{e}_{y},\quad\mathbf{x}\in\Gamma_{f,in},\\ P(t)&=P_{0}+R_{d}\int\limits_{\Gamma_{f,out}}\mathbf{v}(\mathbf{x},t)\cdot(-\mathbf{e}_{y})\,d\Gamma,\quad\text{on}\,\Gamma_{f,out},\end{split} \tag{2}\]

where \(V(t)\) is built from the waveform plotted in Figure 2, corresponding to an averaged internal carotid pulse reported in [32]. It has been scaled with respect to the inlet diameter to reach realistic flow rates and prefixed with a \(0.2\,s\) linear ramp for a smoother initialization. At the outflow, contrary to regular CFD simulations, the absolute value of the pressure is of major importance. To reach plausible deformations of the aneurysm membrane, physiological pressures have to be applied. Pressures in the vascular system lie between 80 and \(120\,mm\,Hg\) for healthy patients. These pressure variations occur over a cardiac cycle and result from the hydraulic resistance of the posterior vasculature (\(R_{d}\)), mostly attributable to brain capillaries. To account for this, the outflow pressure is scaled with respect to the flow rate (i.e. adjusting \(R_{d}\)) to keep the pressure in the system between given bounds, similar to [11]. Considering the lack of information regarding the non-linear behaviour of pathological aneurysm tissue, and for the sake of easier reproducibility, we decide to vary the outflow pressure between 0 and \(40\,mm\,Hg\) rather than pre-stressing the diastolic structure as done in [10]. Thus, we set \(P_{0}=-3.683\,kPa\) and \(R_{d}=2.246\,kPa.s.mL^{-1}\). For the solid, boundary conditions are straightforward. Nodes situated on the inflow/outflow plane (\(y=0\)) are kept fixed, whereas a traction-free condition is prescribed on \(\Gamma_{s,ext}\).

Figure 3: Overview of the Specific cases adapted from the open-source _IntrA_ dataset [31].
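To make the case specification concrete, the snippet below collects the Carreau-Yasuda law of Eq. (1) and the inlet/outlet conditions of Eq. (2) in one place. It is a sketch under stated assumptions: the rheology parameters are typical literature values for blood (the benchmark's actual values, based on [37], are given in Figure 2 and not restated in the text), and the waveform \(V(t)\) is a crude placeholder for the scaled ICA pulse of Figure 2.

```python
import numpy as np

# --- Carreau-Yasuda viscosity, Eq. (1). Parameter values are placeholders
# (typical blood values); the benchmark's values are those of Figure 2.
MU_0, MU_INF = 0.16, 0.0035           # Pa.s (assumed)
LAM, A_CY, N_CY = 8.2, 0.64, 0.2128   # s, -, - (assumed)

def carreau_yasuda(gamma_dot):
    return MU_INF + (MU_0 - MU_INF) * (1.0 + (LAM * gamma_dot) ** A_CY) ** ((N_CY - 1.0) / A_CY)

# --- Boundary conditions, Eq. (2), with the values quoted above.
R_TUBE = 0.7e-3               # m, inlet radius (1.4 mm diameter)
P0 = -3.683e3                 # Pa
R_D = 2.246e3 / 1e-6          # Pa.s/m^3 (2.246 kPa.s per mL)

def V(t):
    """Placeholder centreline velocity [m/s] with the 0.2 s start-up ramp."""
    ramp = min(t / 0.2, 1.0)
    return ramp * (0.4 + 0.2 * np.sin(2.0 * np.pi * (t % 0.8) / 0.8))

def inlet_velocity(radial_dist, t):
    """Parabolic profile magnitude along e_y at a distance radial_dist from the axis."""
    return V(t) * (1.0 - (radial_dist / R_TUBE) ** 2)

def outlet_pressure(q_out):
    """Resistance condition: P = P0 + R_d * Q_out, with Q_out in m^3/s."""
    return P0 + R_D * q_out

print(carreau_yasuda(100.0), outlet_pressure(3e-6))  # ~3 mL/s gives ~23 mmHg
```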
#### 2.1.4 Quantities of interest

One of the major goals of this practical benchmark lies in the ability to assess the sensitivity of haemodynamics to FSI modelling. The most widely used metric for rupture risk assessment of IAs is the Wall Shear Stress (WSS) applied by the blood flow in the bulge. WSS is associated with remodelling pathways of IAs [39, 38, 7, 6] leading to their formation and growth, due to the interaction between abnormal blood flow and the endothelial cells of the vessels [40]. As multiple definitions exist, the one employed in this work is provided in Eq. (3) for the sake of reproducibility:

\[\mathbf{\tau}_{WSS}=\mathbf{n}\times\left[\left(\mathbf{\sigma}_{f}\cdot\mathbf{n}\right)\times\mathbf{n}\right]=\mathbf{\sigma}_{f}\cdot\mathbf{n}-\left[\left(\mathbf{\sigma}_{f}\cdot\mathbf{n}\right)\cdot\mathbf{n}\right]\mathbf{n} \tag{3}\]

where \(\mathbf{n}\) is the unit normal vector at the wall and \(\mathbf{\sigma}_{f}\) the Cauchy stress tensor defined as \(\mathbf{\sigma}_{f}=-p_{f}\mathbf{I}+\mu_{f}(\nabla\mathbf{v}+\nabla^{T}\mathbf{v})\). This vectorial definition allows computing another important metric known as the Oscillatory Shear Index (OSI):

\[\text{OSI}=\frac{1}{2}\left(1-\frac{||\int_{t_{0}}^{t_{0}+T}\mathbf{\tau}_{WSS}\,dt||}{\int_{t_{0}}^{t_{0}+T}||\mathbf{\tau}_{WSS}||\,dt}\right) \tag{4}\]

In the presented results, we emphasize these two indicators and record them over the second cardiac cycle only (from \(t_{0}=1\,s\) to \(t_{0}+T=1.8\,s\)) in order to limit any transient effect associated with the initial flow development. WSS is always reported as a scalar quantity corresponding to the Euclidean norm of \(\mathbf{\tau}_{WSS}\). WSS is recorded at systole (\(t=1.08\,s\)) and averaged in time to yield the Time-Averaged WSS (TAWSS).

#### 2.1.5 Meshing

The primary variables computed when solving the Navier-Stokes equations are velocity and pressure. In our case, the velocity is obtained as a P1 field (see the next section) and gradients are computed through post-processing. As a consequence, the mesh resolution is of major importance, especially in the vicinity of the walls, to resolve high gradients properly. Therefore, we use a boundary layer in the fluid domain as shown in Figure 4. We employ a geometric progression (factor \(\alpha=1.2\)) between successive layers, a minimal element thickness of \(10\,\mu m\) and a total boundary layer size of \(0.3\,mm\). The isotropic element size of the core mesh is set to \(0.12\,mm\), and the solid thickness is divided into 6 equal layers of \(0.033\,mm\). This results in meshes composed of \(1.2\,M\) and \(0.5\,M\) elements for the fluid and the solid of the reference case (R), respectively. All the meshes have been generated with the Gmsh [41] Python package and are available on GitHub.

Figure 4: Views of the employed meshes with a focus at the inlet.
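As a reproducibility aid, the following sketch evaluates the TAWSS and the OSI of Eqs. (3)-(4) from a precomputed time series of WSS vectors on the wall; the trapezoidal time integration and the array layout are our own choices, not prescribed by the benchmark.

```python
import numpy as np

def wss_metrics(tau, dt):
    """tau: (n_steps, n_nodes, 3) WSS vectors sampled over one cardiac cycle
    (here t0 = 1 s to t0 + T = 1.8 s); dt: sampling step [s]."""
    mean_vec = np.trapz(tau, dx=dt, axis=0)                        # time integral of tau
    mag_int = np.trapz(np.linalg.norm(tau, axis=2), dx=dt, axis=0)
    period = dt * (tau.shape[0] - 1)
    tawss = mag_int / period                                       # time-averaged |tau|
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec, axis=1) / mag_int) # Eq. (4)
    return tawss, osi

# Synthetic example: 161 samples over T = 0.8 s at 200 wall nodes.
rng = np.random.default_rng(0)
tau = rng.normal(size=(161, 200, 3))
tawss, osi = wss_metrics(tau, dt=0.8 / 160)
print(tawss.shape, float(osi.min()), float(osi.max()))  # OSI lies in [0, 0.5]
```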
### Modelling the physics

#### 2.2.1 Fluid solver

When simulating haemodynamics in compliant arteries, keeping a fitted fluid-structure coupling interface is decisive in obtaining precise WSS estimates (see Section 2.1.4). Furthermore, as mesh deformations remain moderate, an Arbitrary Lagrangian-Eulerian (ALE) [42] description appears as the most practical approach. Thus, we define \(\Omega_{f,t}\subset\mathbb{R}^{n}\) the fluid spatial domain at time \(t\in[0,T]\), with \(n\) the spatial dimension, and \(\psi\) the ALE mapping from \(\Omega_{f,0}\) to \(\Omega_{f,t}\). The associated relative velocity is denoted \(\mathbf{v}_{\mathbf{m}}\). Let \(\Gamma_{f}\) be the boundary of \(\Omega_{f}\). We consider the mixed formulation in velocity \(\mathbf{v}\) and pressure \(p_{f}\) of the transient incompressible Navier-Stokes equations:

\[\rho_{f}\partial_{t}\mathbf{v}+\rho_{f}((\mathbf{v}-\mathbf{v}_{\mathbf{m}})\cdot\nabla)\,\mathbf{v}-\nabla\cdot\mathbf{\sigma}_{\mathbf{f}}=\mathbf{f},\quad\text{in}\,\Omega_{f,t}. \tag{5}\]

\[\nabla\cdot\mathbf{v}=0,\quad\text{in}\,\,\Omega_{f,t}. \tag{6}\]

where \(\rho_{f}\) is the fluid mass density and \(\mathbf{f}\) the source term. We rely on a P1-P1 finite element discretization for solving the mixed formulation (Eqs. (5), (6)), combined with a Variational Multiscale (VMS) method as described in [43]. This method ensures accuracy and stability [44] even for convection-dominated flows by enriching both velocity and pressure with residual-based subscales. In the ALE framework, the convective velocity is altered by the mesh velocity \(\mathbf{v}_{\mathbf{m}}\), which tracks the movement of the coupling interface (\(\Gamma_{FSI}\)). The adaptive mesh displacement keeps boundaries fitted and consequently saves the cost of interpolating between subdomains. A \(C^{2}\)-smooth \(\mathbf{v}_{\mathbf{m}}\) field can be obtained, for instance, by solving the diffusion equation [45]:

\[\nabla\cdot(\gamma\nabla\mathbf{v}_{\mathbf{m}}) =0,\quad\text{on}\,\Omega_{f},\]
\[\mathbf{v}_{\mathbf{m}} =\partial_{t}\mathbf{u},\quad\text{on}\,\Gamma_{FSI}, \tag{7}\]
\[\mathbf{v}_{\mathbf{m}} =0,\quad\text{on}\,\Gamma_{f}\setminus\Gamma_{FSI},\]

where \(\mathbf{u}\) stands for the displacement of the solid interacting with the considered fluid domain. The diffusion coefficient \(\gamma\) is taken as the squared inverse distance to the interface \(\Gamma_{FSI}\), in order to better distribute the mesh deformation over the entire grid and to keep the boundary-layer mesh as intact as possible. Many other solutions exist and the interested reader can refer to [46, 47].
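The distance-weighted extension of Eq. (7) can be illustrated with a deliberately simplified stand-in for the finite-element solve: a Jacobi iteration on a graph Laplacian whose edge weights follow the same \(\gamma=1/d^{2}\) coefficient. Node, edge, and distance inputs are assumed to come from the mesh; this illustrates the weighting strategy, not the solver used in this work.

```python
import numpy as np

def extend_displacement(n_nodes, edges, dist, bc, n_iter=500):
    """edges: iterable of (i, j) mesh edges; dist: (n_nodes,) distance to
    Gamma_FSI; bc: dict {node: 3D displacement} on the interface and the
    remaining fluid boundary (zero there, per Eq. (7))."""
    gamma = 1.0 / np.maximum(dist, 1e-9) ** 2      # stiffer near the interface
    disp = np.zeros((n_nodes, 3))
    for i, v in bc.items():
        disp[i] = v
    nbrs = [[] for _ in range(n_nodes)]
    for i, j in edges:
        w = 0.5 * (gamma[i] + gamma[j])            # edge weight from gamma
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    for _ in range(n_iter):                        # weighted-Laplacian smoothing
        new = disp.copy()
        for i, nb in enumerate(nbrs):
            if i in bc or not nb:
                continue
            new[i] = sum(w * disp[j] for j, w in nb) / sum(w for _, w in nb)
        disp = new
    return disp

# Tiny demo: a 4-node chain where node 0 sits on the interface.
d = extend_displacement(4, [(0, 1), (1, 2), (2, 3)],
                        dist=np.array([0.0, 1.0, 2.0, 3.0]),
                        bc={0: np.array([0.1, 0.0, 0.0]), 3: np.zeros(3)})
print(d.round(4))
```

Because \(\gamma\) is large near \(\Gamma_{FSI}\), nodes adjacent to the interface follow it almost rigidly while the deformation is absorbed farther away, which is exactly what protects the boundary-layer mesh.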
#### 2.2.2 Solid solver

The compliant arterial tissue is modelled using the Lagrangian equations of solid dynamics. Let \(\Omega_{s,0}\) and \(\Omega_{s,t}\) define the initial and current solid spatial domains, with \(\mathbf{\phi}\) the mapping between the two domains. We distinguish here between the material Lagrangian coordinate \(\mathbf{X}\) and the updated Lagrangian coordinate \(\mathbf{x}\). The displacement of a solid particle is given by \(\mathbf{u}=\mathbf{x}-\mathbf{X}=\mathbf{\phi}(\mathbf{X},t)-\mathbf{X}\) and the deformation gradient is defined as \(\mathbf{F}=\nabla_{\mathbf{X}}\mathbf{\phi}\). The Jacobian determinant is thus \(J=\det[\mathbf{F}]\). The momentum and continuity equations for solid dynamics read:

\[\rho_{s}\partial_{tt}\mathbf{u}-\nabla_{\mathbf{x}}\cdot\,\mathbf{\sigma}_{\mathbf{s}}=0,\quad\text{in}\,\,\Omega_{s,t}. \tag{8}\]

\[\rho_{s}J=\rho_{s_{0}},\quad\text{in}\,\,\Omega_{s,t}. \tag{9}\]

where \(\rho_{s}\), \(\partial_{tt}\mathbf{u}\), and \(\mathbf{\sigma}_{\mathbf{s}}\) designate the solid density, the second material derivative of the displacement and the symmetric Cauchy stress tensor, respectively. For modelling the intrinsically hyperelastic nature of arterial tissue [10, 11, 36], we rely on the Helmholtz free energy formalism. Let \(\mathbf{C}\) denote the right Cauchy-Green strain tensor given by \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\) and \(\mathbf{S}=J\mathbf{F}^{-1}\mathbf{\sigma}_{\mathbf{s}}\mathbf{F}^{-T}\) the second Piola-Kirchhoff stress tensor. The Helmholtz free energy function \(\Psi(\mathbf{C})\) is defined by:

\[\mathbf{S}=2\partial_{\mathbf{C}}\Psi(\mathbf{C}). \tag{10}\]

This free energy function is decomposed into its volumetric and deviatoric contributions, leading to the classical split:

\[\Psi(\mathbf{C})=U(J)+W(\bar{\mathbf{C}}). \tag{11}\]

where \(J=\sqrt{\det[\mathbf{C}]}\) and \(\bar{\mathbf{C}}=J^{-\frac{2}{3}}\mathbf{C}\) is the deviatoric part of \(\mathbf{C}\). We consider a Neo-Hookean deviatoric and a Simo-Taylor [48] volumetric model, which yields:

\[U(J)=\frac{1}{4}\kappa(J^{2}-1)-\frac{1}{2}\kappa\,\text{ln}J, \tag{12}\]

\[W(\bar{\mathbf{C}})=\frac{1}{2}\mu_{s}(\mathrm{tr}[\bar{\mathbf{C}}]-3)=\frac{1}{2}\mu_{s}(\bar{I}_{1}-3). \tag{13}\]

where \(\kappa\) and \(\mu_{s}\) are material properties, and \(\bar{I}_{1}=\mathrm{tr}[\bar{\mathbf{C}}]\) is the first invariant of \(\bar{\mathbf{C}}\). The Cauchy stress tensor can similarly be split into its volumetric and deviatoric parts:

\[\boldsymbol{\sigma_{s}}=p_{s}\mathbf{I}+\mathrm{dev}[\boldsymbol{\sigma_{s}}]. \tag{14}\]

\[p_{s}=2J^{-1}\mathbf{F}\partial_{\mathbf{C}}U(J)\mathbf{F}^{T}=U^{\prime}(J)=\frac{1}{2}\kappa(J-J^{-1}), \tag{15}\]

\[\mathrm{dev}[\boldsymbol{\sigma_{s}}]=2J^{-1}\mathbf{F}\partial_{\mathbf{C}}W(\bar{\mathbf{C}})\mathbf{F}^{T}=\mu_{s}J^{-\frac{5}{3}}\mathrm{dev}[\mathbf{F}\mathbf{F}^{T}]. \tag{16}\]

The final system of equations to be solved is:

\[\rho_{s}\partial_{tt}\boldsymbol{u}-\nabla_{\mathbf{x}}p_{s}-\nabla_{\mathbf{x}}\cdot\,\mathrm{dev}[\boldsymbol{\sigma_{s}}]=0,\quad\mathrm{in}\ \Omega_{s}. \tag{17}\]

\[\nabla_{\mathbf{x}}\cdot\boldsymbol{u}-\frac{1}{\kappa}p_{s}=0,\quad\mathrm{in}\ \Omega_{s}. \tag{18}\]

As for the fluid, we rely on a stabilized method for tackling the solid finite-element problem. More details about the method and implementation are given in [49].

#### 2.2.3 Coupling

Dynamic and kinematic coupling conditions must be enforced at the fluid-solid interface, whose normal field is denoted \(\boldsymbol{n}\):

\[\boldsymbol{v}=\partial_{t}\boldsymbol{u},\quad\mathrm{on}\ \Gamma_{FSI}, \tag{19}\]
\[\boldsymbol{\sigma_{f}}\cdot\boldsymbol{n}=\boldsymbol{\sigma_{s}}\cdot\boldsymbol{n},\quad\mathrm{on}\ \Gamma_{FSI}.\]

The interface continuity constraints (19) are imposed using a partitioned, iterative scheme [50]. This sub-iterative process appears especially crucial when fluid and solid densities come close or when dealing with slender solid geometries, in order not to suffer from coupling instabilities commonly known as the _added-mass effect_ [51, 52]. We employ a classic Dirichlet-to-Neumann coupling, where the velocity of the solid is imposed as a Dirichlet condition at the fluid boundary, while the normal fluid stress is enforced as a Neumann condition on the solid. Reaching FSI convergence can be mathematically viewed as finding a fixed point of the composition of the solid and fluid operators, \(\mathcal{S}\circ\mathcal{F}\). We define the FSI residual as follows:

\[\mathbf{r}_{t}^{k}=\boldsymbol{u}_{t}^{k}-\tilde{\boldsymbol{u}}_{t}^{k}=\mathcal{S}\circ\mathcal{F}(\tilde{\boldsymbol{u}}_{t}^{k})-\tilde{\boldsymbol{u}}_{t}^{k}, \tag{20}\]

where \(\tilde{\boldsymbol{u}}_{t}^{k}\) is the predicted displacement of the solid used at sub-increment \(k\). Time is only incremented after this fixed point has been reached within a given tolerance: \(||\mathbf{r}_{t}^{k}||<tol_{FSI}\), \(||\cdot||\) being the Euclidean norm scaled with the number of nodes in the mesh. For the proposed test case (R or S), this tolerance is set to \(10^{-4}\,mm\). For the fixed-point algorithm to converge more quickly, under-relaxation is widely employed [53, 54, 45]. It consists in using only a fraction of the algorithm's new solution (\(\boldsymbol{u}_{t}^{k}\)) to build the next guess. Mathematically, relaxing the solution with a relaxation parameter \(\omega\) is written:

\[\tilde{\boldsymbol{u}}_{t}^{k+1}=\tilde{\boldsymbol{u}}_{t}^{k}+\omega(\boldsymbol{u}_{t}^{k}-\tilde{\boldsymbol{u}}_{t}^{k})=\tilde{\boldsymbol{u}}_{t}^{k}+\omega\mathbf{r}_{t}^{k} \tag{21}\]

From within the large family of relaxation methods [55], we choose the momentum-accelerated Aitken \(\Delta^{2}\) scheme for its well-studied properties and extensive use in the community. The dynamic relaxation parameter \(\omega_{t}^{k}\) can be computed at every subincrement \(k\) using the following formula [55]:

\[\omega_{t}^{k}=-\omega_{t}^{k-1}\frac{(\mathbf{r}_{t}^{k-1})^{T}(\mathbf{r}_{t}^{k}-\mathbf{r}_{t}^{k-1})}{||(\mathbf{r}_{t}^{k}-\mathbf{r}_{t}^{k-1})||^{2}} \tag{22}\]

However, one should note that Eq. (21) cannot be used to initialize the predicted displacement \(\tilde{\boldsymbol{u}}_{t}^{1}\) when starting a new timestep, as no previous subincrement exists. As a result, a linear predictor based on previously converged displacements is employed. Similarly, Eq. (22) can only be used from the end of the second subincrement onwards. Before that, a fixed value \(\omega_{0}=0.1\) is employed.
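The partitioned loop of Eqs. (20)-(22) can be summarized as follows; `fluid_solve` and `solid_solve` are placeholders for the two stabilized solvers exchanging interface data, and the residual scaling mirrors the node-scaled Euclidean norm mentioned above (the exact scaling used in the solver is one plausible reading).

```python
import numpy as np

def fsi_subiterations(u_pred, fluid_solve, solid_solve,
                      tol=1e-4, omega0=0.1, max_it=50):
    """u_pred: flat array of predicted interface displacements for this timestep."""
    u_tilde = u_pred.copy()
    omega, r_prev = omega0, None
    n_nodes = max(u_tilde.size // 3, 1)
    for _ in range(max_it):
        traction = fluid_solve(u_tilde)        # F: displacement -> interface stress
        u_new = solid_solve(traction)          # S: stress -> displacement
        r = u_new - u_tilde                    # residual, Eq. (20)
        if np.linalg.norm(r) / n_nodes < tol:  # node-scaled norm (assumed scaling)
            return u_new
        if r_prev is not None:                 # Aitken update, Eq. (22)
            dr = r - r_prev
            omega = -omega * float(r_prev @ dr) / float(dr @ dr)
        u_tilde = u_tilde + omega * r          # relaxed update, Eq. (21)
        r_prev = r
    return u_tilde

# Toy composition S(F(u)) = 0.5*u + 1 with fixed point u = 2.
u = fsi_subiterations(np.zeros(3), lambda u: 0.5 * u + 1.0, lambda t: t)
print(u.round(4))
```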
## 3 Results

### Validation with the pressure wave benchmark

To assess the accuracy and convergence of the employed FSI method, the well-known pressure wave benchmark [28] is implemented. This FSI case has been reproduced in several studies [56, 57, 54, 58] and remains the only widespread benchmark modelling the coupling between arterial tissue and blood flow. Geometrical features are described in Figure 5. A fluid kinematic viscosity of \(\nu_{f}=10^{-6}\,m^{2}/s\) is set, along with fluid and solid densities of \(\rho_{f}=10^{3}\,kg/m^{3}\) and \(\rho_{s,0}=1.2\times 10^{3}\,kg/m^{3}\), respectively. In the original work of Formaggia et al. [28], a Saint Venant-Kirchhoff hyperelastic model is used with \(E=0.3\,MPa\) and \(\nu=0.3\). In our case, the Neo-Hookean formulation described in Section 2.2.2 is employed. In the context of rather small deformations, the two models have been reported to behave similarly [59]. The fluid is initially at rest and the tube is clamped at both ends. On \(\Gamma_{f,out}\), the pressure is set to zero, whereas \(1.3\times 10^{3}\,Pa\) is prescribed on \(\Gamma_{f,in}\) for \(3\,ms\) before being relaxed to zero, creating a pressure wave that travels along the tube. We reproduced the case and solved it with a timestep of \(\delta t=0.05\,ms\), using first and second-order backward differentiation formulas for the fluid and solid time derivatives, respectively. The employed timestep is halved compared to the original benchmark, which was required in our case due to the more dissipative behaviour of our stabilized solid solver [49]. The tolerance for assessing FSI subiteration convergence is set to \(10^{-9}\,mm\). Three different meshes have been tested to assess the convergence of the method, whose properties are summarized in Table 1. As shown in Figure 5, a boundary layer is implemented in the fluid to better capture the physics at the interface.
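The inlet forcing of this validation case reduces to a step pulse; a one-line helper such as the following reproduces it (the pressure-in-time framing is ours; the solver applies it as a normal stress on \(\Gamma_{f,in}\)).

```python
def inlet_pressure(t):
    """Pressure-wave benchmark [28]: 1.3e3 Pa on Gamma_f,in for 3 ms, then 0."""
    return 1.3e3 if t <= 3e-3 else 0.0

print(inlet_pressure(0.002), inlet_pressure(0.005))
```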
For this pressure wave benchmark, the accuracy of the results is usually assessed based on the radial and longitudinal displacements of a point situated at the middle of the tube, on the inner surface (see point A in Figure 5). The obtained curves for the three meshes are displayed in Figure 7, along with reference curves taken from [54, 58, 57]. The quality of the fit is encouraging and supports the results reported in the following. Pressure contours are also reported in Figure 6, showing excellent agreement with the work of Lozovskiy et al. [58].

Figure 6: Middle cross-section view of pressure contours along with streamlines for different times: (a) \(t=4\ ms\), (b) \(t=6\ ms\), (c) \(t=8\ ms\) and (d) \(t=10\ ms\). Displayed results correspond to the medium mesh (M in Table 1). Structural displacements have been amplified by a factor of 10.

Figure 7: Radial (a) and longitudinal (b) displacements of point A (see Figure 5) for the three meshes introduced in Table 1. Results from [54], [58] and [57] have been reproduced for comparison.

### FSI in the proposed idealized aneurysm (R)

Before moving to complex bulge shapes, the impact of FSI modelling on the spherical aneurysm is assessed in this section. To better visualize the 3D flow patterns involved, systolic velocity streamlines are given in Figure 9, both in the rigid and compliant configurations. Similarly, the pressure field is shown in Figure 10. To easily understand the impact of the modelling type on the aneurysm flow profile and on the domain's expansion, Figure 11 gives a systolic velocity iso-surface along with an overlay of the diastolic and systolic fluid domain shapes. In the latter, it can be observed that modelling tissue compliance creates a deeper swirl of the flow inside the bulge, due to an opening of the neck mostly in the \(z\)-direction. Apart from this shift, the general flow pattern is barely affected by the compliant tissue modelling. In both cases, the flow separates at the neck, with a small fraction entering the bulge, impinging on the wall, and resulting in high WSS values (TAWSS rises up to around \(200\,dyn.cm^{-2}\) in the FSI case). The membrane's movement is dominantly pressure-driven, with almost no tilting of the aneurysm in response to the inflow jet at the neck, in agreement with previous studies [11]. Contrary to the velocity, the pressure field is substantially affected by the modelling choice. Indeed, the geometry's compliance allows for delayed flow variations between the inlet and outlet, which directly affects the prescribed boundary conditions (2). For instance, the system stores volume at systole, resulting in a lower outflow pressure. It can also be observed that the compliant modelling relaxes the pressure peak at the impingement area. The impact of the modelling choice on the quantities of interest (see Section 2.1.4) is reported in Figures 12 and 13. As the bulge opens under fluid stress, the wall inclination at the impingement area increases. As a result, this impact spot is shifted up, explaining the small region of higher TAWSS (in blue) for the FSI modelling in Figure 12. Apart from this specific spot, compliant modelling generally lowers the WSS peak values in the bulge, as reported in Table 2, confirming trends from previous studies [10, 9]. However, as the flow penetrates farther into the aneurysm dome, WSS at the fundus increases and the high OSI values are shifted towards the back of the bulge. For R2, fundus TAWSS varies by 15.7% with respect to the rigid configuration (see Table 2).
All the reported effects are naturally amplified by lower stiffness values, which result in larger bulge deformations. For the explored configurations, the observed volume variations and displacements lie in the range of previously reported values [15, 18], with R3 lying in the extreme upper part of the spectrum. Even though the variables of interest change quantitatively, the general distributions remain very similar, even when pushing the material properties to low values (R3). The embedded symmetry of the reference case (R) is naturally reflected in the flow patterns, resulting in smaller compliance-driven changes compared to patient-specific cases [10, 13]. Similar to the pressure wave benchmark, convergence has been assessed for R2, and velocity profiles are reported in the Supplementary Materials for several mesh refinements and time steps. Using the retained mesh (described in Section 2.1.5) and a timestep of \(\Delta t=5\,ms\), reference validation curves are proposed in Figure 8. These plots provide the systolic velocity (\(t=1.08\,s\)) along the \(y\) axis, which vertically goes through the aneurysm bulge. The plots are given for both rigid and compliant models to ease a step-by-step reproduction of the presented results.

Figure 8: Reference systolic velocity profiles (\(t=1.08\,s\)) along the \(y\) axis (see Figure 1) with rigid and compliant (R2) walls.

Figure 9: Systolic flow lines for both rigid and compliant wall modellings.

Figure 10: Systolic pressure for both rigid and compliant wall modellings.

\begin{table}
\begin{tabular}{l l l l l}
Case Id. & Max volume variation [\%] & Max displacement of \(\Gamma_{FSI}\) [mm] & Systolic WSS drop at the rigid impingement spot [\%] & Fundus TAWSS increase [\%] \\ \hline
R1 & 6.8 & 0.20 & 4.1 & 8.9 \\ \hline
R2 & 11.5 & 0.26 & 9.8 & 15.7 \\ \hline
R3 & 20.9 & 0.33 & 22.7 & 22.7 \\ \hline
S1 & 12.4 & 0.23 & 2.5 & 22.8 \\ \hline
S2 & 16.7 & 0.45 & 16.3 & 79.0 \\ \hline
S3 & 12.4 & 0.21 & -0.8 & 47.6 \\
\end{tabular}
\end{table}
Table 2: Comparison of FSI and rigid models through quantities of interest. All WSS-based relative differences are computed with the compliant model as the reference. The fundus location is identified on the rigid configuration as the point featuring the lowest TAWSS.

Figure 11: Arbitrary systolic velocity iso-surface (left) and fluid domain extension (right), for both rigid (blue) and compliant (red) tissue modellings. Geometries are viewed from the side (along the \(z\) axis).

### Exploring different bulge shapes (S)

To explore implications beyond a fully spherical bulge, the specific shapes S1-3 are thoroughly examined in Figures 9-13, similarly to R2. Please note that, by construction, all the cases share the same neck geometry, which allows drawing the focus solely on the bulge's topology. The S1-3 cases share some general flow aspects with the reference spherical geometry. Using FSI modelling, the flow swirls deeper and the impingement area shifts upwards in the bulge, resulting in an overall lower TAWSS. Although inspired by a real aneurysm shape, S1 does not deviate significantly from R2, even when looking at the details. S2 and S3, on the contrary, exhibit very different flow mechanisms, leading to large deviations between rigid and compliant tissue modelling. First, the S2 geometry expands mainly in the impingement area (as displayed in Figure 11), altering the inflow jet and reducing the WSS predicted by the rigid model at the impact region by 16.3%.
As shown in Figure 9, the initially split aneurysmal flow (rigid configuration) merges into one general swirl in the compliant configuration, due to the outward bulging of the solid in the impingement area. As a consequence, the dome's exposure to the flow increases, locally raising the TAWSS by 79%. The overall flow change is also illustrated by shifted OSI patterns, reflecting the alteration of the swirls when the compliant tissue behaviour is modelled. The last investigated case, S3, also shows a noticeable change in flow pattern when compliant modelling is employed. Contrary to S1-2, the flow does not easily penetrate the dome due to the orientation of the bulge with respect to the parent vessel. This results in a secondary recirculation at the dome in the rigid case, as commonly observed in high aspect-ratio IAs. This slow recirculation, owing to its low inertia, is significantly altered when compliant modelling is employed. The rotation of the flow is indeed modified by the expansion and contraction of the wall, as shown in Figure 9. Even though the TAWSS is not altered significantly, the systolic WSS at the fundus changes by 47.6%. The OSI pattern is also largely impacted, as it is very sensitive to flow orientation (see Figure 13). Overall, the reported results show how different bulge shapes may respond very differently to compliant tissue modelling.

## 4 Discussion

### Impact of the wall modelling

Initiation and growth of IAs are sparked by abnormal flow conditions through different cascades of biological reactions [40, 38]. This tissue remodelling has been shown to be strongly correlated with haemodynamic metrics such as the ones investigated in this work [7]. The most commonly studied metric remains the WSS, for which both high and low values are considered a threat to an IA's stability over time. While high values indicate areas of fluid stress concentration, low values are associated with blood stagnation, which can trigger an inflammatory response of the walls and the development of atherosclerosis [60]. The OSI complements the description of the wall's vicinity by giving insights into flow orientation changes over a cardiac cycle. The fact that these metrics can change significantly depending on the employed wall modelling suggests that rigid CFD results may yield inaccurate risk estimations. Compliant tissue modelling affects WSS extreme values, typically by narrowing their range. In the second specific case (S2), the irregular bulge topology features a weak spot where the largest deformation occurs. Because this structural weakness is located in the impingement area where most of the flow enters the bulge, the overall pattern is modified accordingly, and peak systolic WSS values decrease by more than 15%. Concerning the low values, this case has also revealed a 79% TAWSS increase at the dome due to the slight inflation of the bulge, which renders the fundus more accessible. Such slow-flow regions are particularly sensitive to wall compliance, as seen in the OSI patterns of S2 and S3. For the latter case, the general haemodynamics have substantially changed at the dome, even leading to local flow inversions. The OSI has drastically increased there, as the wall-induced flow pattern is more prone to changes of orientation over a cardiac cycle. This information appears essential in terms of thrombus formation risk assessment, as some indicators are classically built based on OSI values [61].
Even though not studied here, the flow residence time is also commonly employed as an indicator of IA stability and to predict potential thrombus formation, for instance when numerically evaluating the outcome of flow-diverters [62]. Such a flow metric will also undoubtedly reveal a different behaviour under compliant modelling and should be explored in future work to assess the impact on different aneurysms. All in all, considering that bulge geometries with daughter sacs or intricate dome topologies are very common and often feature zones of blood stagnation, we deem it crucial to employ compliant wall modelling to retrieve accurate slow-flow patterns. While S2 and S3 feature noticeable differences in terms of classical haemodynamic risk metrics, the results have shown that S1's haemodynamics are barely altered, most likely because of the case's regular shape and the bulge's alignment with the impinging flow. The diverse responses towards wall modelling observed in this study lead to the hypothesis that compliance-related effects are strongly shape-dependent.

Figure 12: TAWSS difference between the two wall modelling assumptions (rigid - compliant).

Figure 13: OSI difference between the two wall modelling assumptions (rigid - compliant).

This motivates our sensitivity study, along with future research efforts to assess the generalizability of the drawn conclusions on large-scale patient cohorts and to help identify critical cases for which compliant modelling would be most beneficial. The three specific cases reported in this work pave the way towards this improved comprehension. Contrary to other patient-specific data inputs, the lumen's topology is systematically acquired in clinical routine when an aneurysm is detected, already rendering such studies feasible. However, as our results suggest, particular attention has to be dedicated to the segmentation quality of magnetic resonance angiography, along with the level of smoothing employed when generating the computational domains [63], as topological details can yield large deviations when compliant modelling is employed. Lastly, an important aspect regarding outflow boundary conditions has to be stressed. It is interesting to note that almost all previous publications reporting FSI simulations of IAs investigated bifurcation cases [11, 10, 14, 13, 20]. In this study, a single outlet branch has been considered for the sake of simplicity, thus circumventing the treatment of the outflow split. If outflow boundary conditions have already proven to be a sensitive aspect of CFD simulations [27], they are even more so for FSI. Indeed, several outflow branches featuring different radii will inevitably expand differently under internal pressure, thus altering the hydraulic resistance of the outflow paths and modifying the flow split. This change can in turn substantially impact the general flow pattern. This observation suggests that the reported differences between rigid-wall and FSI simulations could be amplified when considering intricate patient-specific geometries, which almost always feature vascular branches in the vicinity of the aneurysm.

### Limitations and perspectives

The geometries investigated in this work remain idealized and, although efforts have been made to tailor the aneurysm bulge with different shapes, a gap with patient-specific cases remains. The drawn conclusions give a grasp of the relevance of FSI modelling but are not to be directly transposed to clinical applications.
However, validating the highlighted trends on large-scale patient cohorts should stand as a future goal for the community, as previous studies remained limited to a few investigated cases. Following the same line, boundary conditions could be extended to reach high-fidelity modelling of realistic aneurysms. The outflow pressure could be computed through a Windkessel model, and the solid structure pre-loaded as done in [10]. These refinements do not represent a major technical burden, but their relevance in such a simplified geometry remains unclear, motivating our inclination towards simplicity. Finally, realistic tissue material behaviour and thickness should be prescribed to reach accurate conclusions. Nevertheless, considering the lack of patient-specific data and of general guidelines regarding pathological tissue modelling, we believe that this belongs to future work. Progress in imaging technology will give more information about these missing parameters, allowing accurate modelling tools to prove their efficiency. If locally varying wall properties can be assessed _in-vivo_, FSI simulations will appear even more relevant. As local weaknesses are prone to result in larger deformations and constitute critical concentration points that jeopardize IAs' stability, FSI will surely contribute to high-fidelity risk assessment tools. We hope that future work will provide insights into a categorization of IAs based on their sensitivity to FSI modelling. While only a few shapes have been investigated as part of this study, the versatility of the proposed case allows exploring a large manifold of realistic aneurysms in a controlled environment. Identifying the aneurysm phenotypes that benefit most from the compliant modelling of arterial tissue stands as one of our future research goals.

## 5 Conclusion

This work introduced a novel idealized sidewall aneurysm geometry for assessing the relevance of FSI modelling in various configurations. Different tissue stiffness values as well as four aneurysm bulge shapes have been investigated, keeping all other simulation parameters unchanged. Well-known haemodynamic metrics such as WSS and OSI have been computed using both rigid and compliant wall modelling, revealing significant flow changes linked to the aneurysm topology. Bulges featuring blebs in jet impingement areas, as well as aneurysms with slow recirculating flow regions, have been shown to be more sensitive to FSI modelling. The haemodynamic differences highlighted in this work suggest that modelling compliant vessels may be crucial for some IAs in order to predict their future growth and risk of rupture. Differences between rigid and compliant modelling could be further amplified by patient-specific pathological tissue data, although the literature does not yet provide such insights. Progress in medical imaging, along with large-scale studies, will certainly help to stress the limits of the widely-employed rigid-wall assumption. In all scenarios, FSI models like the one introduced in this work have to be developed to improve the comprehension of IAs and to assess the relevance of wall tissue modelling assumptions.

## Acknowledgement

This project has received funding from the European Research Council (ERC) under the European Union's Horizon Research and Innovation programme (grant No. 101045042). We would like to thank the Neuro-Interventional department of the Nice University Hospital for their availability and support regarding the medical aspects of this work.
2304.01796
Influence of Myocardial Infarction on QRS Properties: A Simulation Study
The interplay between structural and electrical changes in the heart after myocardial infarction (MI) plays a key role in the initiation and maintenance of arrhythmia. The anatomical and electrophysiological properties of scar, border zone, and normal myocardium modify the electrocardiographic morphology, which is routinely analysed in clinical settings. However, the influence of various MI properties on the QRS is not intuitively predictable. In this work, we have systematically investigated the effects of 17 post-MI scenarios, varying the location, size, transmural extent, and conductive level of the scarring and border zone area, on the forward-calculated QRS. Additionally, we have compared the contributions of different QRS score criteria for quantifying post-MI pathophysiology. The propagation of electrical activity in the ventricles is simulated via an Eikonal model on a unified coordinate system. The analysis has been performed on 49 subjects, and the results imply that the QRS is capable of identifying MI, suggesting the feasibility of inversely reconstructing infarct regions from the QRS. The sensitivity of the different QRS criteria varies across the 17 MI scenarios, which is informative for solving the inverse problem.
Lei Li, Julia Camps, Zhinuo Wang, Abhirup Banerjee, Blanca Rodriguez, Vicente Grau
2023-04-04T13:54:59Z
http://arxiv.org/abs/2304.01796v2
# Influence of Myocardial Infarction on QRS Properties: A Simulation Study

###### Abstract

The interplay between structural and electrical changes in the heart after myocardial infarction (MI) plays a key role in the initiation and maintenance of arrhythmia. The anatomical and electrophysiological properties of scar, border zone, and normal myocardium modify the electrocardiographic morphology, which is routinely analysed in clinical settings. However, the influence of various MI properties on the QRS is not intuitively predictable. In this work, we have systematically investigated the effects of 17 post-MI scenarios, varying the location, size, transmural extent, and conductive level of the scarring and border zone area, on the forward-calculated QRS. Additionally, we have compared the contributions of different QRS score criteria for quantifying post-MI pathophysiology. The propagation of electrical activity in the ventricles is simulated via an Eikonal model on a unified coordinate system. The analysis has been performed on 49 subjects, and the results imply that the QRS is capable of identifying MI, suggesting the feasibility of inversely reconstructing infarct regions from the QRS. The sensitivity of the different QRS criteria varies across the 17 MI scenarios, which is informative for solving the inverse problem.

Keywords: Myocardial Infarction, Sensitivity Analysis, Simulation, Cardiac Digital Twin

## 1 Introduction

Myocardial infarction (MI) is a major cause of mortality and disability worldwide [12, 30]. Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from MI. In particular, the position, size, and shape of the scarring region and the border zone could provide important information for the selection of patients and the delivery of therapies for MI. The electrocardiogram (ECG) is one of the most commonly used clinical diagnostic tools for MI [33]. It can provide useful information about the heart rhythm and reveal abnormalities related to the conduction system [15]. For example, ST-segment elevation and T-wave inversion are widely investigated indicators of cardiac remodeling associated with different stages of MI [11]. In contrast, QRS patterns have received less attention when analyzing ECG abnormalities associated with MI. It is not yet fully clear how QRS abnormalities reflect MI characteristics, with some previous papers reporting conflicting results [27, 32].

_In-silico_ computer ECG simulations offer a powerful tool for mechanistic investigations into MI characteristics [22, 32]. For example, Arevalo et al. constructed a cardiac computational model, where simulations of the electrical activity were executed for arrhythmia risk stratification of MI patients [1]. Wang et al. developed a multi-scale cardiac modeling and ECG simulation framework for mechanistic investigations into the pathophysiological ECG and mechanical behavior post-MI [32]. Costa et al. employed a computational ventricular model of porcine MI to investigate the impact of model anatomy, MI morphology, and EP personalization strategies on simulated ECGs. Que et al. designed a multi-scale heart-torso computational model to simulate pathological 12-lead post-MI ECGs with various topographies and extents for ECG data augmentation [24]. In this work, we investigate the association between QRS abnormalities and MI characteristics in a unified coordinate system.
In this preliminary study, we only investigate the QRS morphology rather than the complete ECG cycle, as the QRS simulation is quite efficient compared to the whole cycle. For each subject, we examine 17 MI scenarios, summarize their effects on the simulated QRS, and identify the scenarios with the most significant alterations in QRS morphology. This study highlights the potential of the QRS to improve the identification and localization of MI and further facilitate patient-specific clinical decision-making. It also demonstrates the feasibility of developing a cardiac "digital twin" deep computational model for the inference of MI by solving an inverse problem. The computational model provides an integrated perspective for each individual that incorporates features from multi-modality data on cardiac systems. To the best of our knowledge, this is the first sensitivity analysis of the QRS complex for quantifying the MI characteristic variation in cardiac electrical activities.

## 2 Methodology

### Anatomical Model Construction

To obtain anatomical information, we generate a subject-specific 3D biventricular tetrahedral mesh from multi-view cardiac magnetic resonance (CMR) images for each subject, using the method outlined in [2]. We employ the cobiveco coordinate reference system for mesh representation to ensure a symmetric, consistent and intuitive biventricular coordinate system across various geometries [25]. The cobiveco coordinate is represented by \((tm,ab,rt)\), where \(tm\), \(ab\), and \(rt\) refer to the transmural, apicobasal, and rotational coordinates, respectively. Figure 1 presents the cobiveco coordinate system.

Figure 1: Consistent biventricular coordinates in the cobiveco system.

We use ellipsoids with radii \(tm_{r}\), \(ab_{r}\), and \(rt_{r}\) to represent infarct regions in the myocardium, defined as

\[\frac{(tm_{i}-tm_{0})^{2}}{tm_{r}{}^{2}}+\frac{(ab_{i}-ab_{0})^{2}}{ab_{r}{}^{2}}+\frac{(rt_{i}-rt_{0})^{2}}{rt_{r}{}^{2}}\leq 1, \tag{1}\]

where \((tm_{0},ab_{0},rt_{0})\) is the center coordinate of the infarct region. To study the effects of MI location at a population level, we employ the American Heart Association (AHA) 17-segment model [14] and consistently select the infarct areas and the transmural extent via cobiveco.

### Electrophysiological Simulation

Cardiac electrophysiology is simulated via an efficient orthotropic Eikonal model [4, 31] that incorporates a human-based Purkinje system into the formulation of the root node (RN) activation times. The simulation is performed on the cobiveco mesh generated in Sec. 2.1 and can be defined as

\[\begin{cases}\sqrt{\nabla^{T}t\mathcal{V}^{2}\nabla t}=1,\\ t(\Gamma_{0})=pk(\Gamma_{0})-\min(pk(\Gamma_{0})),\end{cases} \tag{2}\]

where \(\mathcal{V}\) are the orthogonal conduction velocities (CVs) along the fibre, sheet (transmural), and sheet-normal directions, \(t\) is the time at which the activation wavefront reaches each point in the mesh, \(\Gamma_{0}\) is the set of RN locations on the endocardium, and \(pk\) is a Purkinje-tree delay function from the His bundle to every point in the mesh. Thus, the earliest activation time at the RNs is defined as their delay from the His bundle through the Purkinje tree, normalized by the earliest activation.
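Before moving to the QRS computation, the ellipsoidal region selection of Eq. (1) can be sketched as follows. The array layout and the border-zone construction (a larger concentric ellipsoid minus the scar core) are our own illustrative choices, and the periodicity of the rotational coordinate \(rt\) is ignored, exactly as in Eq. (1).

```python
import numpy as np

def infarct_mask(coords, center, radii):
    """coords: (n_nodes, 3) cobiveco (tm, ab, rt) values per mesh node;
    returns a boolean mask of nodes inside the ellipsoid of Eq. (1)."""
    d = (coords - np.asarray(center)) / np.asarray(radii)
    return np.sum(d ** 2, axis=1) <= 1.0

# Toy example with fake coordinates; real values come from the cobiveco mesh.
coords = np.random.default_rng(1).uniform(size=(1000, 3))
center = (0.5, 0.4, 0.6)                       # (tm0, ab0, rt0), hypothetical
scar = infarct_mask(coords, center, radii=(0.5, 0.15, 0.10))
border = infarct_mask(coords, center, radii=(0.6, 0.20, 0.15)) & ~scar
print(int(scar.sum()), "scar nodes,", int(border.sum()), "border-zone nodes")
```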
The QRS can be calculated from the activation time map via a pseudo-ECG equation [10] for a 1D cable source with constant conductivity at a given electrode location \((x^{\prime},y^{\prime},z^{\prime})\):

\[\phi_{e}(x^{\prime},y^{\prime},z^{\prime})=\frac{a^{2}\sigma_{i}}{4\sigma_{e}}\int-\nabla V_{m}\cdot\left[\nabla\frac{1}{r}\right]dx\,dy\,dz, \tag{3}\]

where \(V_{m}\) is the transmembrane potential, \(\nabla V_{m}\) is its spatial gradient, \(r\) is the Euclidean distance from a given point \((x,y,z)\) to the electrode location, \(a\) is a constant that depends on the fibre radius, and \(\sigma_{i}\) and \(\sigma_{e}\) are the intracellular and extracellular conductivities, respectively. The QRS is obtained by evaluating this integral throughout the ventricular activation sequence period. For the measurement of electrode locations, we utilize the automated 3D torso reconstruction pipeline from the CMR images [26]. Note that the pseudo-ECG method can efficiently produce normalized ECG signals with a level of morphological information comparable to bidomain simulations [19].

For the simulations, we consider electrophysiological heterogeneities in the infarct regions, including _seven locations (see Fig. 2), two transmural extents (transmural and subendocardial MI), two different sizes, and two different sets of slower CVs in the infarct areas_ [18]. Note that for the comparison of different MI sizes and CV reduction extents, we only report on lateral MI as an illustrative case. Therefore, for each subject we simulate 17 heterogeneous MI scenarios and one normal ECG as the baseline. Figure 3 provides examples of the generated MI heterogeneity scenarios. We vary the CVs of infarcted and healthy myocardial areas during the simulation, as slower CVs have been observed in the infarcted human heart [8]. Conduction pathways for electrical propagation in the infarct regions might exist, as observed in clinical data [28]. Therefore, we set the CVs of the scarring and border zone areas to 10% and 50% (another CV set: 5% and 25%) of the values in healthy myocardium, respectively.
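A discrete reading of Eq. (3) sums the integrand over mesh elements; the sketch below makes the element-wise layout explicit. It is an illustration of the formula, not the pipeline's implementation: \(\nabla(1/r)\) is evaluated at element centroids, and the constant \(k\) lumps \(a^{2}\sigma_{i}/(4\sigma_{e})\).

```python
import numpy as np

def pseudo_ecg(grad_vm, centroids, volumes, electrode, k=1.0):
    """grad_vm: (n_elems, 3) spatial gradient of Vm; centroids: (n_elems, 3);
    volumes: (n_elems,); electrode: (3,). Returns phi_e at one time instant."""
    d = np.asarray(electrode) - centroids            # points towards the electrode
    r3 = np.linalg.norm(d, axis=1) ** 3
    grad_inv_r = d / r3[:, None]                     # grad(1/r) at the centroid
    integrand = -np.einsum("ij,ij->i", grad_vm, grad_inv_r)
    return k * np.sum(integrand * volumes)

# Toy evaluation with random element data and an electrode outside the mesh.
rng = np.random.default_rng(0)
g = rng.normal(size=(500, 3))
c = rng.uniform(size=(500, 3))
v = np.full(500, 1e-3)
print(float(pseudo_ecg(g, c, v, electrode=[0.5, 0.5, 2.0])))
```

Sampling \(\phi_{e}\) over the activation sequence (with \(V_{m}\) built from the Eikonal activation times, for instance as a fixed-shape upstroke shifted by \(t\)) then yields the simulated QRS trace for each electrode.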
### Univariate Sensitivity Analysis

In the sensitivity analysis, we introduce a global QRS measure, dynamic time warping (DTW), to calculate the dissimilarity of QRS complexes with different lengths [4]. Moreover, we investigate four local QRS criteria corresponding to QRS abnormalities of MI reported in the literature, namely _QRS duration prolongation_ [6], _pathological Q-waves_ [9], _fragmented QRS (fQRS)_ [7], and _poor R wave progression (PRWP)_ [13]. An example of each QRS abnormality is illustrated in Fig. 4.

\begin{table} \begin{tabular}{l|c c c c c c c} \hline Location & Septal & Apical & Ext anterior & Lim anterior & Lateral & Inferior & Inferolateral \\ \hline Transmural extent & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Size & & & & & ✓ & & \\ CV in MI region & & & & & ✓ & & \\ \hline \end{tabular} \end{table}
Table 1: Summary of the investigated MI scenarios. Ext: extensive; Lim: limited.

Figure 2: Seven MI locations on the 17-segment AHA map.

The QRS duration is the time interval between the beginning of the Q wave and the end of the S wave. Pathological Q waves are described as the presence of a Q wave with duration \(\geq\) 0.03 s and/or amplitude \(\geq\) 25% of the R-wave amplitude [9]. fQRS is defined as the number of additional spikes within the QRS complex [7]. PRWP refers to the absence of the normal increase in amplitude of the R wave in the precordial leads when advancing from lead V1 to V6 [13]. Different definitions of PRWP exist in the literature [17]. In this work, we employ criteria including an R-wave amplitude of 2 mm or less in lead V3/V4 and the presence of reversed R-wave progression such as R of V5 \(<\) R of V6 or R of V2 \(<\) R of V1, or any combination of these.

## 3 Experiments and Results

### Data Acquisition and Activation Property Configuration

We collect 49 subjects with paired ECGs and CMR images, including cine short-axis, two- and four-chamber long-axis, localizer, and scout slices, from the UK Biobank study [3]. The locations of the root nodes are set to seven fixed homologous locations to allow comparisons [5]. Specifically, four left ventricular (LV) earliest activation sites (LV mid-septum, LV basal-anterior paraseptal, and two LV mid-posterior) and three in the right ventricle (RV), namely RV mid-septum and two RV free wall, are selected as root nodes. The CVs along the fibre, sheet, sheet-normal, and sparse/dense endocardial directions are set to 65 cm/s, 48 cm/s, 51 cm/s, and 100/150 cm/s, respectively, in agreement with velocities reported for healthy human ventricular myocardium in [21, 29].

Figure 4: Sketch map of normal QRS and MI-related QRS abnormalities.

Figure 3: Illustration of several MI scenarios, including different MI locations, sizes, and transmural extents.

### Results

#### 3.2.1 QRS Differences Depending on MI Characteristics

To investigate the sensitivity of the QRS to the 17 MI scenarios, we compare the dissimilarity of each of them with the baseline, as well as the dissimilarity between them, as shown in Fig. 5. There are clear and significant morphological changes in the post-MI QRS compared to the normal QRS, especially for inferolateral, extensive anterior, and apical MIs. However, the differences from the healthy QRS are, as expected, highly reduced when we reduce the size of the lateral MI or its transmurality. In addition, there is significant variation in the QRS of lateral MI among different subjects. As Fig. 6 (a-b) shows, the QRS of lateral MI can range from substantially different to almost identical to the baseline. The extent of transmurality has evident effects on QRS morphology at each infarct location: as expected, transmural scars tend to present more evident morphological changes in the QRS than subendocardial ones. Even for the septal scars, where the transmural and subendocardial QRS dissimilarities are the smallest (\(DTW^{max}=0.2\) and \(DTW^{avg}=0.3\)), one can still observe a morphology difference (see Fig. 6 (c)). Nevertheless, the differences in QRS between infarct locations appear to be larger than those depending on the extent of transmurality, suggesting that the QRS has a higher sensitivity for localizing MI than for predicting its transmural extent. The major QRS morphological variation across different degrees of CV reduction appears to be the QRS duration, which is not unexpected. However, in our limited tests for this purpose, we obtained particularly unusual QRS simulation results when we significantly reduced the CVs in the MI regions. Therefore, the CV configuration of MI areas during simulation remains an open question that demands more exploration in the future.

Figure 5: (a) QRS dissimilarity of each MI scenario in each lead compared to the baseline; (b) QRS dissimilarity between each MI scenario. The full name of each MI scenario is omitted here. \(DTW^{max}\) and \(DTW^{avg}\) refer to the maximum and average dynamic time warping (DTW) values over all leads, respectively. transmu: transmural; subendo: subendocardial.
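The global dissimilarity behind Figure 5 can be reproduced with a textbook DTW recursion such as the one below; windowing and normalization details are our own simplifications rather than the paper's exact configuration.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1D traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Compare a baseline QRS with a longer, lower-amplitude post-MI QRS (synthetic).
qrs_base = np.sin(np.linspace(0.0, np.pi, 90))
qrs_mi = 0.7 * np.sin(np.linspace(0.0, np.pi, 110))
print(round(float(dtw(qrs_base, qrs_mi)), 3))
```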
The sensitivity of different QRS leads for detecting the infarct location varies. As Fig. 5 (a) shows, most infarct locations are reflected in the QRS of leads I, V5, and V6, whereas septal MI is reflected in leads V1-V4 and V3-V4 for subendocardial and transmural cases, respectively. This result is generally consistent with those reported in clinical practice [23]. In general, larger scars tend to result in QRS changes appearing in more leads.

#### 3.2.2 Sensitivity of Different QRS Criteria for MI Classification

The changes in QRS morphology for different MI scenarios are reflected from various perspectives. Here, we introduce several QRS criteria and compare the contribution of each of them to infarct detection. Apical, extensive anterior, and inferolateral MIs tend to present a prolongation of the QRS duration, as Fig. 6 (d) shows. PRWP mainly occurred in extensive anterior, septal, and apical MIs, similar to what has been reported in the literature [13, 20]. Specifically, the R-wave amplitude in septal MI is sometimes flattened, while the R wave of V6 tends to be larger than that of V5 in apical MI, as Fig. 6 (c) shows. The prevalence of fQRS is more common in the inferior lead (lead II) than in the anterior leads (leads V3 and V4) and the lateral leads (leads V5 and V6), similar to the results reported by Liu et al. [16]. The presence of fQRS in lead II and in leads V3-V4 indicates inferolateral MI and extensive anterior MI, respectively. In contrast, the pathological Q wave fails to distinguish MI from healthy subjects in our simulation system.

Figure 6: (a-b) QRS morphology examples of lateral MIs with different sizes and CV settings. Here, MI and baseline QRS are labeled in red and grey, respectively; (c) QRS morphology differences between transmural and subendocardial septal MIs, and poor R wave progression examples occurring in apical and septal MIs. The R wave progression is labeled with a black dashed line; (d) QRS duration of MI and baseline.

## 4 Discussion and Conclusion

In this paper, we have presented a sensitivity analysis of the QRS for the identification of 17 MI scenarios via Eikonal simulation. The results have demonstrated the potential of the QRS to improve ECG-based prediction of MI characteristics and further facilitate patient-specific clinical decision-making. They also demonstrate the feasibility of developing a cardiac "digital twin" deep computational model for the inference of MI. Limitations of our study at this point include the assumption of a known set of RNs and the limited variation in our anisotropic CVs. Moreover, we currently only consider cardiac anatomical information and electrode nodes, but ignore the torso geometry. Its introduction might provide relevant information about its influence on ECG patterns. In the future, we will extend this work by introducing non-invasive personalization of the ventricular activation sequences for a more realistic representation of the cardiac conduction system. Furthermore, this analysis could be applied to the whole ECG signal instead of only the QRS, at the cost of much larger computational demands. The results can be further validated against relevant clinical outcomes in the ECGs of real MI patients. Consequently, the developed models and techniques will enable further research in cardiac digital twins.

#### Acknowledgement.
This research has been conducted using the UK Biobank Resource under Application Number '40161'. The authors declare no conflict of interest. This work was funded by the CompBioMed 2 Centre of Excellence in Computational Biomedicine (European Commission Horizon 2020 research and innovation programme, grant agreement No. 823712). L. Li was partially supported by the SJTU 2021 Outstanding Doctoral Graduate Development Scholarship. A. Banerjee is a Royal Society University Research Fellow and is supported by Royal Society Grant No. URF\R1\21314. The work of A. Banerjee and V. Grau was partially supported by the British Heart Foundation (BHF) Project under Grant PG/20/21/35082.
2303.15744
Cohomology and deformations of crossed homomorphisms between Lie-Yamaguti algebras
In this paper, we introduce the notion of crossed homomorphisms between Lie-Yamaguti algebras and establish the cohomology theory of crossed homomorphisms via the Yamaguti cohomology. Consequently, we use this cohomology to characterize linear deformations of crossed homomorphisms between Lie-Yamaguti algebras. We show that if two linear or formal deformations of a crossed homomorphism are equivalent, then their infinitesimals are in the same cohomology class in the first cohomology group. Moreover, we show that an order $n$ deformation of a crossed homomorphism can be extended to an order $n+1$ deformation if and only if the obstruction class in the second cohomology group is trivial.
Jia Zhao, Yu Qiao, Senrong Xu
2023-03-28T05:58:50Z
http://arxiv.org/abs/2303.15744v1
# Cohomology and deformations of crossed homomorphisms between Lie-Yamaguti algebras

###### Abstract.

In this paper, we introduce the notion of crossed homomorphisms between Lie-Yamaguti algebras and establish the cohomology theory of crossed homomorphisms via the Yamaguti cohomology. Consequently, we use this cohomology to characterize linear deformations of crossed homomorphisms between Lie-Yamaguti algebras. We show that if two linear or formal deformations of a crossed homomorphism are equivalent, then their infinitesimals are in the same cohomology class in the first cohomology group. Moreover, we show that an order \(n\) deformation of a crossed homomorphism can be extended to an order \(n+1\) deformation if and only if the obstruction class in the second cohomology group is trivial.

_Keywords_: Lie-Yamaguti algebra, crossed homomorphism, cohomology, deformation

_Mathematics Subject Classification_ (2020): 17A36, 17B55, 17B60

###### Contents

* 1 Introduction
* 1.1 Lie-Yamaguti algebras
* 1.2 Crossed homomorphisms
* 1.3 Outline of the paper
* 2 Preliminaries: Lie-Yamaguti algebras, representations and cohomology
* 3 Crossed homomorphisms between Lie-Yamaguti algebras
* 4 Cohomology of crossed homomorphisms between Lie-Yamaguti algebras
* 5 Deformations of crossed homomorphisms between Lie-Yamaguti algebras
* 5.1 Linear deformations of crossed homomorphisms between Lie-Yamaguti algebras
* 5.2 Formal deformations of crossed homomorphisms between Lie-Yamaguti algebras
* 5.3 Order \(n\) deformations of crossed homomorphisms between Lie-Yamaguti algebras

## 1. Introduction

Mathematical physics interacts with many branches of mathematics and has applications in Lie theory and representation theory [15, 19]. Deformation theory plays an important role in mathematics and mathematical physics. Roughly speaking, a deformation of a mathematical object is a perturbation by a parameter that preserves the original structure. In physics, deformation theory arose from the quantization of classical mechanics, and this idea stimulated the study of quantum groups in mathematics [6, 13]. Recently, deformation quantization has produced many elegant works in the context of mathematical physics. Building on the work of Kodaira and Spencer in complex analysis [17], deformation theory was extended to algebra [13]. The deformation theory of algebras dates back to Gerstenhaber's work on associative algebras [8, 9, 10, 11, 12]. Later, Nijenhuis and Richardson studied deformations of Lie algebras [23], and Balavoine generalized deformation theory to operads [1]. In the context of algebras, deformation theory has a close connection with cohomology: a suitable cohomology can be used to characterize deformations. In particular, a linear deformation of a Lie algebra is controlled by the second cohomology group; an order \(n\) deformation can be extended to an order \(n+1\) deformation if and only if its obstruction class is trivial; and a trivial deformation gives rise to a Nijenhuis operator [7], which is key to deformation theory and has applications to integrability through the construction of bi-Hamiltonian systems [7]. Sheng and his collaborators have a series of works on the deformation theory of (3-)Lie algebras. For example, they studied deformations of 3-Lie algebras and even \(n\)-Lie algebras [20] and examined product and complex structures on 3-Lie algebras using Nijenhuis operators [26].
Moreover, they constructed controlling algebras that characterize deformations of relative Rota-Baxter operators (also called \(\mathcal{O}\)-operators) on Lie algebras, on 3-Lie algebras, and on Leibniz algebras respectively [29, 30, 31]. Recently, Pei and his colleagues studied crossed homomorphisms on Lie algebras via the same methods and generalized constructions of many kinds of Lie algebras by using bifunctors [25]. Besides, the first two authors investigated the cohomology and linear deformations of LieYRep pairs and explored several properties of relative Rota-Baxter-Nijenhuis structures on LieYRep pairs in [37], and the cohomology and deformations of relative Rota-Baxter operators on Lie-Yamaguti algebras in [38].

### Lie-Yamaguti algebras

A Lie-Yamaguti algebra is a generalization of both a Lie algebra and a Lie triple system, and can be traced back to Nomizu's work on invariant affine connections on homogeneous spaces in the 1950s [24]. Later, in the 1960s, Yamaguti introduced this algebraic structure and called it a general Lie triple system or a Lie triple algebra [32, 33, 34]. Kinyon and Weinstein first called this object a Lie-Yamaguti algebra when studying Courant algebroids in the early 21st century [16]. Since then, Lie-Yamaguti algebras have attracted much attention and have been widely investigated. For instance, Benito and his collaborators deeply explored irreducible Lie-Yamaguti algebras and their relations with orthogonal Lie algebras [2, 3, 4, 5]. Deformations and extensions of Lie-Yamaguti algebras were examined in [18, 22, 35, 36]. Sheng, the first author, and Zhou analyzed product structures and complex structures on Lie-Yamaguti algebras by means of Nijenhuis operators in [27]. Takahashi studied modules over quandles using representations of Lie-Yamaguti algebras in [28].

### Crossed homomorphisms

The notion of crossed homomorphisms on Lie algebras was introduced in the study of nonabelian extensions of Lie algebras [21]. An example of a crossed homomorphism is a differential operator of weight \(1\); a flat connection \(1\)-form of a trivial principal bundle is also a crossed homomorphism. In [25], the authors used crossed homomorphisms to show that the category of weak representations (resp. admissible representations) of Lie-Rinehart algebras (resp. Leibniz pairs) is a left module category over the monoidal category of representations of Lie algebras. Later, the cohomology and deformations of crossed homomorphisms on 3-Lie algebras were also studied in [14]. Thus it is natural to consider the cohomology and deformations of crossed homomorphisms between Lie-Yamaguti algebras. More precisely, for a crossed homomorphism \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) from a Lie-Yamaguti algebra \(\mathfrak{g}\) to another Lie-Yamaguti algebra \(\mathfrak{h}\) with respect to an action \((\rho,\mu)\), the most important step is to establish the cohomology theory of \(H\). Our strategy is as follows: first we introduce linear maps \(\rho_{H}:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})\) and \(\mu_{H}:\otimes^{2}\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})\) via \(H\), and prove that \((\mathfrak{h};\rho_{H},\mu_{H})\) is a representation of \(\mathfrak{g}\) on the vector space \(\mathfrak{h}\). Consequently, we obtain the corresponding Yamaguti cohomology of the Lie-Yamaguti algebra \(\mathfrak{g}\) with coefficients in the representation \((\mathfrak{h};\rho_{H},\mu_{H})\). Note that the Yamaguti cohomology starts from 1-cochains.
Thus the second step is to construct 0-cochains and the corresponding coboundary map, which is the main difficulty to overcome. Once the cohomology theory is established, we are able to explore the relationship between cohomology and deformations of crossed homomorphisms. For this purpose, we investigate three kinds of deformations: linear, formal, and higher order deformations. Note that a Lie-Yamaguti algebra reduces to a Lie triple system when the binary bracket is trivial; thus the notion of crossed homomorphisms between Lie triple systems, together with their cohomology and deformation theory, can be obtained directly from the present paper.

### Outline of the paper

The paper is structured as follows. In Section 2, we recall some basic notions such as Lie-Yamaguti algebras, representations, and cohomology. In Section 3, we introduce the notion of crossed homomorphisms between Lie-Yamaguti algebras, and show that there is a one-to-one correspondence between crossed homomorphisms and certain Lie-Yamaguti algebra homomorphisms. In Section 4, we establish the cohomology of crossed homomorphisms between Lie-Yamaguti algebras, and examine a functorial property of this cohomology theory. Finally, in Section 5, we explore three kinds of deformations and show that the infinitesimals of linear and formal deformations are governed by the cohomology, and that the extension of a higher order deformation is characterized by a special cohomology class; this is why we call it the obstruction class.

In this paper, all vector spaces are assumed to be over a field \(\mathbb{K}\) of characteristic 0 and finite-dimensional.

## 2. Preliminaries: Lie-Yamaguti algebras, representations and cohomology

In this section, we recall some basic notions such as Lie-Yamaguti algebras, representations and their cohomology theories. The notion of Lie-Yamaguti algebras was introduced by Yamaguti in [32].

**Definition 2.1**.: [16] A **Lie-Yamaguti algebra** is a vector space \(\mathfrak{g}\) equipped with a bilinear bracket \([\cdot,\cdot]:\wedge^{2}\mathfrak{g}\to\mathfrak{g}\) and a trilinear bracket \([\![\cdot,\cdot,\cdot]\!]:\wedge^{2}\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\) that satisfy the following conditions: for all \(x,y,z,w,t\in\mathfrak{g}\),

\[[[x,y],z]+[[y,z],x]+[[z,x],y]+[\![x,y,z]\!]+[\![y,z,x]\!]+[\![z,x,y]\!]=0, \tag{1}\]
\[[\![[x,y],z,w]\!]+[\![[y,z],x,w]\!]+[\![[z,x],y,w]\!]=0, \tag{2}\]
\[[\![x,y,[z,w]]\!]=[[\![x,y,z]\!],w]+[z,[\![x,y,w]\!]], \tag{3}\]
\[[\![x,y,[\![z,w,t]\!]]\!]=[\![[\![x,y,z]\!],w,t]\!]+[\![z,[\![x,y,w]\!],t]\!]+[\![z,w,[\![x,y,t]\!]]\!]. \tag{4}\]

In the sequel, we denote a Lie-Yamaguti algebra by \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\).

**Example 2.2**.: _Let \((\mathfrak{g},[\cdot,\cdot])\) be a Lie algebra. Define a trilinear bracket_

\[[\![\cdot,\cdot,\cdot]\!]:\wedge^{2}\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}\]

_by_

\[[\![x,y,z]\!]:=[[x,y],z],\quad\forall x,y,z\in\mathfrak{g}.\]

_Then a direct computation shows that \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) forms a Lie-Yamaguti algebra._

The following example is even more interesting.

**Example 2.3**.: _Let \(M\) be a closed manifold with an affine connection, and denote by \(\mathfrak{X}(M)\) the set of vector fields on \(M\). For all \(x,y,z\in\mathfrak{X}(M)\), set_

\[[x,y]:=-T(x,y),\qquad[\![x,y,z]\!]:=-R(x,y)z,\]

_where \(T\) and \(R\) are the torsion tensor and the curvature tensor respectively. It turns out that the triple \((\mathfrak{X}(M),[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) forms a Lie-Yamaguti algebra. See [24] for more details._
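The defining identities (1)-(4) are easy to verify numerically in a concrete instance. The following Python sketch, which is our illustration and not part of the original text, checks them for the Lie-Yamaguti algebra of Example 2.2 built from the Lie algebra \((\mathbb{R}^{3},\times)\) of the cross product.

```python
import numpy as np

rng = np.random.default_rng(0)

def b2(x, y):
    # binary bracket: the cross product on R^3 (a Lie algebra)
    return np.cross(x, y)

def b3(x, y, z):
    # ternary bracket of Example 2.2: [[x, y, z]] = [[x, y], z]
    return b2(b2(x, y), z)

def check_axioms(trials=100, tol=1e-10):
    for _ in range(trials):
        x, y, z, w, t = rng.normal(size=(5, 3))
        ax1 = (b2(b2(x, y), z) + b2(b2(y, z), x) + b2(b2(z, x), y)
               + b3(x, y, z) + b3(y, z, x) + b3(z, x, y))                    # (1)
        ax2 = b3(b2(x, y), z, w) + b3(b2(y, z), x, w) + b3(b2(z, x), y, w)   # (2)
        ax3 = b3(x, y, b2(z, w)) - b2(b3(x, y, z), w) - b2(z, b3(x, y, w))   # (3)
        ax4 = (b3(x, y, b3(z, w, t)) - b3(b3(x, y, z), w, t)
               - b3(z, b3(x, y, w), t) - b3(z, w, b3(x, y, t)))              # (4)
        assert max(np.abs(v).max() for v in (ax1, ax2, ax3, ax4)) < tol

check_axioms()
```

In this instance, (1) amounts to two copies of the Jacobi identity, while (3) and (4) express that each \(\operatorname{ad}_{[x,y]}\) acts as a derivation of both brackets.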
Next, we recall the notion of representations of Lie-Yamaguti algebras.

**Definition 2.4**.: [33] Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. A **representation** of \(\mathfrak{g}\) is a vector space \(V\) equipped with a linear map \(\rho:\mathfrak{g}\to\mathfrak{gl}(V)\) and a bilinear map \(\mu:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V)\) that satisfy the following conditions: for all \(x,y,z,w\in\mathfrak{g}\),

\[\mu([x,y],z)-\mu(x,z)\rho(y)+\mu(y,z)\rho(x)=0, \tag{5}\]
\[\mu(x,[y,z])-\rho(y)\mu(x,z)+\rho(z)\mu(x,y)=0, \tag{6}\]
\[\rho([\![x,y,z]\!])=[D_{\rho,\mu}(x,y),\rho(z)], \tag{7}\]
\[\mu(z,w)\mu(x,y)-\mu(y,w)\mu(x,z)-\mu(x,[\![y,z,w]\!])+D_{\rho,\mu}(y,z)\mu(x,w)=0, \tag{8}\]
\[\mu([\![x,y,z]\!],w)+\mu(z,[\![x,y,w]\!])=[D_{\rho,\mu}(x,y),\mu(z,w)], \tag{9}\]

where the bilinear map \(D_{\rho,\mu}:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(V)\) is given by

\[D_{\rho,\mu}(x,y):=\mu(y,x)-\mu(x,y)+[\rho(x),\rho(y)]-\rho([x,y]),\quad\forall x,y\in\mathfrak{g}. \tag{10}\]

It is obvious that \(D_{\rho,\mu}\) is skew-symmetric, and we simply write \(D\) in the sequel when there is no ambiguity. We denote a representation of \(\mathfrak{g}\) by \((V;\rho,\mu)\).

**Remark 2.5**.: Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \((V;\rho,\mu)\) a representation of \(\mathfrak{g}\). If \(\rho=0\) and the Lie-Yamaguti algebra \(\mathfrak{g}\) reduces to a Lie triple system \((\mathfrak{g},[\![\cdot,\cdot,\cdot]\!])\), then the representation reduces to a representation \((V;\mu)\) of the Lie triple system \((\mathfrak{g},[\![\cdot,\cdot,\cdot]\!])\). If \(\mu=0\), \(D=0\) and the Lie-Yamaguti algebra \(\mathfrak{g}\) reduces to a Lie algebra \((\mathfrak{g},[\cdot,\cdot])\), then the representation reduces to a representation \((V;\rho)\) of the Lie algebra \((\mathfrak{g},[\cdot,\cdot])\). Hence a representation of a Lie-Yamaguti algebra is a natural generalization of that of a Lie algebra or of a Lie triple system.

By a direct computation, we have the following lemma.

**Lemma 2.6**.: _Suppose that \((V;\rho,\mu)\) is a representation of a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\). Then the following equalities are satisfied:_

\[D([x,y],z)+D([y,z],x)+D([z,x],y)=0;\]
\[D([\![x,y,z]\!],w)+D(z,[\![x,y,w]\!])=[D(x,y),D(z,w)];\]
\[\mu([\![x,y,z]\!],w)=\mu(x,w)\mu(z,y)-\mu(y,w)\mu(z,x)-\mu(z,w)D(x,y).\]

**Example 2.7**.: _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra. We define linear maps \(\operatorname{ad}:\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})\) and \(\mathfrak{R}:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})\) by \(x\mapsto\operatorname{ad}_{x}\) and \((x,y)\mapsto\mathfrak{R}_{x,y}\) respectively, where \(\operatorname{ad}_{x}z=[x,z]\) and \(\mathfrak{R}_{x,y}z=[\![z,x,y]\!]\) for all \(z\in\mathfrak{g}\). Then \((\operatorname{ad},\mathfrak{R})\) forms a representation of \(\mathfrak{g}\) on itself, where \(\mathfrak{L}:=D_{\operatorname{ad},\mathfrak{R}}\) is given by_

\[\mathfrak{L}_{x,y}=\mathfrak{R}_{y,x}-\mathfrak{R}_{x,y}+[\operatorname{ad}_{x},\operatorname{ad}_{y}]-\operatorname{ad}_{[x,y]},\quad\forall x,y\in\mathfrak{g}.\]

_By (1), we have_

\[\mathfrak{L}_{x,y}z=[\![x,y,z]\!],\quad\forall z\in\mathfrak{g}.\]

_In this case, \((\mathfrak{g};\operatorname{ad},\mathfrak{R})\) is called the_ **adjoint representation** _of \(\mathfrak{g}\)._
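As a numerical sanity check on Definition 2.4, the sketch below (again ours, reusing `b2`, `b3`, and `rng` from the previous snippet) builds the matrices of \(\operatorname{ad}\), \(\mathfrak{R}\), and \(D\) for the cross-product example and tests conditions (5) and (7) on random elements.

```python
def op_matrix(f, dim=3):
    """Matrix of a linear map from its action on the standard basis of R^dim."""
    I = np.eye(dim)
    return np.column_stack([f(I[:, i]) for i in range(dim)])

def ad(x):          # ad_x : z -> [x, z]
    return op_matrix(lambda z: b2(x, z))

def R_op(x, y):     # R_{x,y} : z -> [[z, x, y]]
    return op_matrix(lambda z: b3(z, x, y))

def D_op(x, y):     # (10): D(x,y) = mu(y,x) - mu(x,y) + [rho(x), rho(y)] - rho([x,y])
    return (R_op(y, x) - R_op(x, y)
            + ad(x) @ ad(y) - ad(y) @ ad(x) - ad(b2(x, y)))

x, y, z = rng.normal(size=(3, 3))
# (5): mu([x,y], z) - mu(x,z) rho(y) + mu(y,z) rho(x) = 0
assert np.allclose(R_op(b2(x, y), z) - R_op(x, z) @ ad(y) + R_op(y, z) @ ad(x), 0)
# (7): rho([[x,y,z]]) = [D(x,y), rho(z)]
assert np.allclose(ad(b3(x, y, z)), D_op(x, y) @ ad(z) - ad(z) @ D_op(x, y))
```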
Let us recall the cohomology theory of Lie-Yamaguti algebras given in [33]. Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \((V;\rho,\mu)\) a representation of \(\mathfrak{g}\). We denote the set of \(p\)-cochains by \(C^{p}_{\operatorname{LieY}}(\mathfrak{g},V)\) \((p\geqslant 1)\), where

\[C^{n+1}_{\operatorname{LieY}}(\mathfrak{g},V)\triangleq\begin{cases}\operatorname{Hom}(\underbrace{\wedge^{2}\mathfrak{g}\otimes\cdots\otimes\wedge^{2}\mathfrak{g}}_{n},V)\times\operatorname{Hom}(\underbrace{\wedge^{2}\mathfrak{g}\otimes\cdots\otimes\wedge^{2}\mathfrak{g}}_{n}\otimes\mathfrak{g},V),&\forall n\geqslant 1,\\ \operatorname{Hom}(\mathfrak{g},V),&n=0.\end{cases}\]

In the sequel, we recall the coboundary map on \(p\)-cochains:

* If \(n\geqslant 1\), for any \((f,g)\in C^{n+1}_{\operatorname{LieY}}(\mathfrak{g},V)\), the coboundary map

\[\delta=(\delta_{\mathrm{I}},\delta_{\mathrm{II}}):C^{n+1}_{\operatorname{LieY}}(\mathfrak{g},V)\to C^{n+2}_{\operatorname{LieY}}(\mathfrak{g},V),\qquad(f,g)\mapsto(\delta_{\mathrm{I}}(f,g),\delta_{\mathrm{II}}(f,g)),\]

is given as follows:

\[\big(\delta_{\mathrm{I}}(f,g)\big)(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n+1})=(-1)^{n}\Big(\rho(x_{n+1})g(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},y_{n+1})-\rho(y_{n+1})g(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},x_{n+1})-g(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},[x_{n+1},y_{n+1}])\Big)+\sum_{k=1}^{n}(-1)^{k+1}D(\mathfrak{X}_{k})f(\mathfrak{X}_{1},\cdots,\widehat{\mathfrak{X}_{k}},\cdots,\mathfrak{X}_{n+1})+\sum_{1\leqslant k<l\leqslant n+1}(-1)^{k}f(\mathfrak{X}_{1},\cdots,\widehat{\mathfrak{X}_{k}},\cdots,\mathfrak{X}_{k}\circ\mathfrak{X}_{l},\cdots,\mathfrak{X}_{n+1}), \tag{11}\]

\[\big(\delta_{\mathrm{II}}(f,g)\big)(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n+1},z)=(-1)^{n}\Big(\mu(y_{n+1},z)g(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},x_{n+1})-\mu(x_{n+1},z)g(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},y_{n+1})\Big)+\sum_{k=1}^{n+1}(-1)^{k+1}D(\mathfrak{X}_{k})g(\mathfrak{X}_{1},\cdots,\widehat{\mathfrak{X}_{k}},\cdots,\mathfrak{X}_{n+1},z)+\sum_{1\leqslant k<l\leqslant n+1}(-1)^{k}g(\mathfrak{X}_{1},\cdots,\widehat{\mathfrak{X}_{k}},\cdots,\mathfrak{X}_{k}\circ\mathfrak{X}_{l},\cdots,\mathfrak{X}_{n+1},z)+\sum_{k=1}^{n+1}(-1)^{k}g(\mathfrak{X}_{1},\cdots,\widehat{\mathfrak{X}_{k}},\cdots,\mathfrak{X}_{n+1},[\![x_{k},y_{k},z]\!]), \tag{12}\]

where \(\mathfrak{X}_{i}=x_{i}\wedge y_{i}\in\wedge^{2}\mathfrak{g}\) \((i=1,\cdots,n+1)\), \(z\in\mathfrak{g}\), the hat \(\widehat{\phantom{x}}\) denotes omission of the corresponding argument, and \(\mathfrak{X}_{k}\circ\mathfrak{X}_{l}:=[\![x_{k},y_{k},x_{l}]\!]\wedge y_{l}+x_{l}\wedge[\![x_{k},y_{k},y_{l}]\!]\).
* If \(n=0\), for any \(f\in C^{1}_{\operatorname{LieY}}(\mathfrak{g},V)\), the coboundary map

\[\delta:C^{1}_{\operatorname{LieY}}(\mathfrak{g},V)\to C^{2}_{\operatorname{LieY}}(\mathfrak{g},V),\qquad f\mapsto(\delta_{\mathrm{I}}(f),\delta_{\mathrm{II}}(f)),\]

is defined to be

\[\big(\delta_{\mathrm{I}}(f)\big)(x,y)=\rho(x)f(y)-\rho(y)f(x)-f([x,y]), \tag{13}\]
\[\big(\delta_{\mathrm{II}}(f)\big)(x,y,z)=D(x,y)f(z)+\mu(y,z)f(x)-\mu(x,z)f(y)-f([\![x,y,z]\!]),\quad\forall x,y,z\in\mathfrak{g}. \tag{14}\]

Yamaguti showed the following fact.

**Proposition 2.8**.: _[33] With the notations above, for any \(f\in C^{1}_{\mathrm{LieY}}(\mathfrak{g},V)\), we have_

\[\delta_{\mathrm{I}}\Big(\delta_{\mathrm{I}}(f),\delta_{\mathrm{II}}(f)\Big)=0\quad\mathrm{and}\quad\delta_{\mathrm{II}}\Big(\delta_{\mathrm{I}}(f),\delta_{\mathrm{II}}(f)\Big)=0.\]

_Moreover, for all \((f,g)\in C^{p}_{\mathrm{LieY}}(\mathfrak{g},V)\ (p\geqslant 2)\), we have_

\[\delta_{\mathrm{I}}\Big(\delta_{\mathrm{I}}(f,g),\delta_{\mathrm{II}}(f,g)\Big)=0\quad\mathrm{and}\quad\delta_{\mathrm{II}}\Big(\delta_{\mathrm{I}}(f,g),\delta_{\mathrm{II}}(f,g)\Big)=0.\]

_Thus the cochain complex \((C^{\bullet}_{\mathrm{LieY}}(\mathfrak{g},V)=\bigoplus\limits_{p=1}^{\infty}C^{p}_{\mathrm{LieY}}(\mathfrak{g},V),\delta)\) is well defined. For convenience, we call this cohomology the_ **Yamaguti cohomology** _in this paper._

**Definition 2.9**.: With the above notations, let \((f,g)\in C^{p}_{\mathrm{LieY}}(\mathfrak{g},V)\) (resp. \(f\in C^{1}_{\mathrm{LieY}}(\mathfrak{g},V)\) for \(p=1\)) be a \(p\)-cochain. If it satisfies \(\delta(f,g)=0\) (resp. \(\delta(f)=0\)), then it is called a \(p\)-cocycle. If there exists \((h,s)\in C^{p-1}_{\mathrm{LieY}}(\mathfrak{g},V)\) (resp. \(t\in C^{1}_{\mathrm{LieY}}(\mathfrak{g},V)\) if \(p=2\)) such that \((f,g)=\delta(h,s)\) (resp. \((f,g)=\delta(t)\)), then it is called a \(p\)-coboundary (\(p\geqslant 2\)). The sets of \(p\)-cocycles and of \(p\)-coboundaries are denoted by \(Z^{p}_{\mathrm{LieY}}(\mathfrak{g},V)\) and \(B^{p}_{\mathrm{LieY}}(\mathfrak{g},V)\) respectively. The resulting \(p\)-th cohomology group is defined to be the quotient space

\[H^{p}_{\mathrm{LieY}}(\mathfrak{g},V)=Z^{p}_{\mathrm{LieY}}(\mathfrak{g},V)/B^{p}_{\mathrm{LieY}}(\mathfrak{g},V).\]

In particular, we have

\[H^{1}_{\mathrm{LieY}}(\mathfrak{g},V)=\{f\in C^{1}_{\mathrm{LieY}}(\mathfrak{g},V):\delta(f)=0\}.\]
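The low-degree coboundary maps (13)-(14) translate directly into code. The sketch below is our illustration under the matrix conventions of the earlier snippets: a 1-cochain \(f\) is given as a matrix from \(\mathfrak{g}\) to \(V\) in fixed bases, and `rho`, `mu`, `D` return the matrices of the representation maps (for instance, the functions `ad`, `R_op`, `D_op` above for the adjoint representation).

```python
def delta_I(f, rho, b2g, x, y):
    # (13): (delta_I f)(x, y) = rho(x) f(y) - rho(y) f(x) - f([x, y])
    return rho(x) @ (f @ y) - rho(y) @ (f @ x) - f @ b2g(x, y)

def delta_II(f, D, mu, b3g, x, y, z):
    # (14): (delta_II f)(x,y,z) = D(x,y) f(z) + mu(y,z) f(x) - mu(x,z) f(y) - f([[x,y,z]])
    return (D(x, y) @ (f @ z) + mu(y, z) @ (f @ x)
            - mu(x, z) @ (f @ y) - f @ b3g(x, y, z))
```

A 1-cochain \(f\) is a 1-cocycle precisely when both functions vanish identically.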
## 3. Crossed homomorphisms between Lie-Yamaguti algebras

In this section, we introduce the notion of crossed homomorphisms between Lie-Yamaguti algebras, and show that a crossed homomorphism can be seen as a homomorphism of Lie-Yamaguti algebras. Moreover, a crossed homomorphism corresponds to a relative Rota-Baxter operator of weight \(1\). Before this, we introduce the notion of the center of a Lie-Yamaguti algebra. Let \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) be a Lie-Yamaguti algebra. Denote the **center** of \(\mathfrak{g}\) by

\[C(\mathfrak{g}):=\{x\in\mathfrak{g}\,|\,[x,y]=0,\ \forall y\in\mathfrak{g}\}\cap\Big(\{x\in\mathfrak{g}\,|\,[\![x,y,z]\!]=0,\ \forall y,z\in\mathfrak{g}\}\cup\{x\in\mathfrak{g}\,|\,[\![y,x,z]\!]=0,\ \forall y,z\in\mathfrak{g}\}\Big).\]

**Definition 3.1**.: Let \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) and \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) be two Lie-Yamaguti algebras. Let \((\mathfrak{h};\rho,\mu)\) be a representation of \(\mathfrak{g}\) on the vector space \(\mathfrak{h}\), i.e., linear maps \(\rho:\mathfrak{g}\to\mathfrak{gl}(\mathfrak{h})\) and \(\mu:\otimes^{2}\mathfrak{g}\to\mathfrak{gl}(\mathfrak{h})\) satisfying (5)-(9), where \(D:\wedge^{2}\mathfrak{g}\to\mathfrak{gl}(\mathfrak{h})\) is given by Eq. (10). If for all \(x,y\in\mathfrak{g},\ u,v,w\in\mathfrak{h}\), the following conditions are satisfied

\[\rho(x)u,\ \mu(x,y)u\in C(\mathfrak{h}),\]
\[\rho(x)[u,v]_{\mathfrak{h}}=\mu(x,y)[u,v]_{\mathfrak{h}}=0,\]
\[\rho(x)[\![u,v,w]\!]_{\mathfrak{h}}=\mu(x,y)[\![u,v,w]\!]_{\mathfrak{h}}=0,\]

then we say that \((\rho,\mu)\) is an **action** of \(\mathfrak{g}\) on \(\mathfrak{h}\).

Let \((\rho,\mu)\) be an action of \(\mathfrak{g}\) on \(\mathfrak{h}\). By (10), we deduce that

\[D(x,y)u\in C(\mathfrak{h}),\quad D(x,y)[u,v]_{\mathfrak{h}}=D(x,y)[\![u,v,w]\!]_{\mathfrak{h}}=0,\quad\forall x,y\in\mathfrak{g},\ u,v,w\in\mathfrak{h}.\]

The following proposition shows that an action of Lie-Yamaguti algebras can be used to construct a semidirect product Lie-Yamaguti algebra.

**Proposition 3.2**.: _Let \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) and \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) be two Lie-Yamaguti algebras and \((\rho,\mu)\) an action of \(\mathfrak{g}\) on \(\mathfrak{h}\). Then there is a Lie-Yamaguti algebra structure on the direct sum \(\mathfrak{g}\oplus\mathfrak{h}\) defined by_

\[[x+u,y+v]_{\rho,\mu}=[x,y]_{\mathfrak{g}}+\rho(x)v-\rho(y)u+[u,v]_{\mathfrak{h}},\]
\[[\![x+u,y+v,z+w]\!]_{\rho,\mu}=[\![x,y,z]\!]_{\mathfrak{g}}+D(x,y)w+\mu(y,z)u-\mu(x,z)v+[\![u,v,w]\!]_{\mathfrak{h}},\]

_for all \(x,y,z\in\mathfrak{g}\) and \(u,v,w\in\mathfrak{h}\). This Lie-Yamaguti algebra is called the_ **semidirect product Lie-Yamaguti algebra** _with respect to the action \((\rho,\mu)\), and is denoted by \(\mathfrak{g}\ltimes_{\rho,\mu}\mathfrak{h}\)._

Proof.: It is a direct computation, and we omit the details.

The following definition is standard.

**Definition 3.3**.: [27, 28] _Suppose that \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) and \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) are two Lie-Yamaguti algebras. A_ **homomorphism** _from \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) to \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) is a linear map \(\phi:\mathfrak{g}\to\mathfrak{h}\) that preserves the Lie-Yamaguti algebra structures, that is, for all \(x,y,z\in\mathfrak{g}\),_

\[\phi([x,y]_{\mathfrak{g}})=[\phi(x),\phi(y)]_{\mathfrak{h}},\qquad\phi([\![x,y,z]\!]_{\mathfrak{g}})=[\![\phi(x),\phi(y),\phi(z)]\!]_{\mathfrak{h}}.\]

_If, moreover, \(\phi\) is a bijection, it is then called an_ **isomorphism**_._

Now we are ready to introduce the notion of crossed homomorphisms between Lie-Yamaguti algebras.

**Definition 3.4**.: Let \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) and \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) be two Lie-Yamaguti algebras, and let \((\rho,\mu)\) be an action of \(\mathfrak{g}\) on \(\mathfrak{h}\).
A linear map \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) is called a **crossed homomorphism** from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to \((\rho,\mu)\) if

\[H[x,y]_{\mathfrak{g}}=\rho(x)H(y)-\rho(y)H(x)+[Hx,Hy]_{\mathfrak{h}}, \tag{15}\]
\[H[\![x,y,z]\!]_{\mathfrak{g}}=D(x,y)H(z)+\mu(y,z)H(x)-\mu(x,z)H(y)+[\![Hx,Hy,Hz]\!]_{\mathfrak{h}},\quad\forall x,y,z\in\mathfrak{g}. \tag{16}\]

**Remark 3.5**.: If the action of \(\mathfrak{g}\) on \(\mathfrak{h}\) is trivial, then any crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) is a Lie-Yamaguti algebra homomorphism as in Definition 3.3; if \(\mathfrak{h}\) is commutative, then any crossed homomorphism is a derivation from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to the representation \((\mathfrak{h};\rho,\mu)\).

**Example 3.6**.: _Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a \(4\)-dimensional Lie-Yamaguti algebra with basis \(\{e_{1},e_{2},e_{3},e_{4}\}\), whose nonzero brackets are given by_

\[[e_{1},e_{2}]=2e_{4},\qquad[\![e_{1},e_{2},e_{1}]\!]=e_{4}.\]

_It is obvious that the center of \(\mathfrak{g}\) is spanned by \(\{e_{3},e_{4}\}\), and that the adjoint representation \((\mathfrak{g};\operatorname{ad},\mathfrak{R})\) is an action of \(\mathfrak{g}\) on itself. Then_

\[H=\begin{pmatrix}O&A\\ B&C\end{pmatrix}\]

_is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{g}\), where \(A\) and \(C\) have the form_

\[\begin{pmatrix}\lambda_{1}&0\\ \lambda_{2}&0\end{pmatrix}.\]

The following theorem shows that a crossed homomorphism can be seen as a homomorphism between Lie-Yamaguti algebras.

**Theorem 3.7**.: _Let \((\rho,\mu)\) be an action of a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) on another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\). Then a linear map \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) if and only if the linear map \(\phi_{H}:\mathfrak{g}\longrightarrow\mathfrak{g}\ltimes_{\rho,\mu}\mathfrak{h}\) is a Lie-Yamaguti algebra homomorphism, where_

\[\phi_{H}(x):=(x,Hx),\quad\forall x\in\mathfrak{g}.\]

Proof.: For all \(x,y,z\in\mathfrak{g}\), we have

\[\phi_{H}\big([x,y]_{\mathfrak{g}}\big)=\Big([x,y]_{\mathfrak{g}},H[x,y]_{\mathfrak{g}}\Big),\qquad[\phi_{H}(x),\phi_{H}(y)]_{\rho,\mu}=[(x,Hx),(y,Hy)]_{\rho,\mu}=\Big([x,y]_{\mathfrak{g}},\rho(x)Hy-\rho(y)Hx+[Hx,Hy]_{\mathfrak{h}}\Big).\]

Similarly, we also have

\[\phi_{H}\Big([\![x,y,z]\!]_{\mathfrak{g}}\Big)=\Big([\![x,y,z]\!]_{\mathfrak{g}},H[\![x,y,z]\!]_{\mathfrak{g}}\Big),\qquad[\![\phi_{H}(x),\phi_{H}(y),\phi_{H}(z)]\!]_{\rho,\mu}=\Big([\![x,y,z]\!]_{\mathfrak{g}},D(x,y)H(z)+\mu(y,z)H(x)-\mu(x,z)H(y)+[\![Hx,Hy,Hz]\!]_{\mathfrak{h}}\Big).\]

Thus, the linear map \(\phi_{H}:\mathfrak{g}\longrightarrow\mathfrak{g}\ltimes_{\rho,\mu}\mathfrak{h}\) is a Lie-Yamaguti algebra homomorphism if and only if the following two equalities hold:

\[H[x,y]_{\mathfrak{g}}=\rho(x)Hy-\rho(y)Hx+[Hx,Hy]_{\mathfrak{h}},\qquad H[\![x,y,z]\!]_{\mathfrak{g}}=D(x,y)H(z)+\mu(y,z)H(x)-\mu(x,z)H(y)+[\![Hx,Hy,Hz]\!]_{\mathfrak{h}},\quad\forall x,y,z\in\mathfrak{g},\]

that is, if and only if the linear map \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\). This completes the proof.
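Conditions (15)-(16) also lend themselves to direct numerical testing. The following Python sketch is our illustration under the same matrix conventions as before: the brackets of \(\mathfrak{g}\) and \(\mathfrak{h}\) and the action maps are supplied as callables on coordinate vectors, \(H\) is a matrix, and both identities are checked on random samples.

```python
import numpy as np

def is_crossed_hom(H, b2g, b3g, b2h, b3h, rho, mu, D, dim_g,
                   trials=50, tol=1e-9, seed=1):
    """Numerically test the crossed-homomorphism conditions (15)-(16)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y, z = rng.normal(size=(3, dim_g))
        lhs15 = H @ b2g(x, y)
        rhs15 = rho(x) @ (H @ y) - rho(y) @ (H @ x) + b2h(H @ x, H @ y)
        lhs16 = H @ b3g(x, y, z)
        rhs16 = (D(x, y) @ (H @ z) + mu(y, z) @ (H @ x)
                 - mu(x, z) @ (H @ y) + b3h(H @ x, H @ y, H @ z))
        if max(np.abs(lhs15 - rhs15).max(), np.abs(lhs16 - rhs16).max()) > tol:
            return False
    return True
```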
**Remark 3.8**.: In fact, a crossed homomorphism corresponds to a split nonabelian extension of Lie-Yamaguti algebras. More precisely, consider the nonabelian extension of Lie-Yamaguti algebras given by the semidirect product \(\mathfrak{g}\ltimes_{\rho,\mu}\mathfrak{h}\) of Proposition 3.2. A section \(s:\mathfrak{g}\longrightarrow\mathfrak{g}\oplus\mathfrak{h}\) is given by \(s(x)=(x,Hx),\ x\in\mathfrak{g}\). Theorem 3.7 says that \(s\) is a Lie-Yamaguti algebra homomorphism if and only if \(H\) is a crossed homomorphism. Such an extension is called a split nonabelian extension. See [35] for more details on extensions of Lie-Yamaguti algebras.

Next, we introduce the notion of homomorphisms of crossed homomorphisms.

**Definition 3.9**.: Let \(H\) and \(H^{\prime}\) be two crossed homomorphisms from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). A **homomorphism** from \(H^{\prime}\) to \(H\) is a pair \((\psi_{\mathfrak{g}},\psi_{\mathfrak{h}})\), where \(\psi_{\mathfrak{g}}:\mathfrak{g}\longrightarrow\mathfrak{g}\) and \(\psi_{\mathfrak{h}}:\mathfrak{h}\longrightarrow\mathfrak{h}\) are two Lie-Yamaguti algebra homomorphisms such that

\[\psi_{\mathfrak{h}}\circ H^{\prime}=H\circ\psi_{\mathfrak{g}}, \tag{17}\]
\[\psi_{\mathfrak{h}}\big(\rho(x)u\big)=\rho\big(\psi_{\mathfrak{g}}(x)\big)\psi_{\mathfrak{h}}(u), \tag{18}\]
\[\psi_{\mathfrak{h}}\big(\mu(x,y)u\big)=\mu\big(\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{g}}(y)\big)\psi_{\mathfrak{h}}(u),\quad\forall x,y\in\mathfrak{g},\ u\in\mathfrak{h}. \tag{19}\]

In particular, if both \(\psi_{\mathfrak{g}}\) and \(\psi_{\mathfrak{h}}\) are invertible, then \((\psi_{\mathfrak{g}},\psi_{\mathfrak{h}})\) is called an **isomorphism** from \(H^{\prime}\) to \(H\).

By Eqs. (18) and (19) and a direct computation, we have the following proposition.

**Proposition 3.10**.: _Let \(H\) and \(H^{\prime}\) be two crossed homomorphisms from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Suppose that \((\psi_{\mathfrak{g}},\psi_{\mathfrak{h}})\) is a homomorphism from \(H^{\prime}\) to \(H\). Then we have_

\[\psi_{\mathfrak{h}}\big(D(x,y)u\big)=D\big(\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{g}}(y)\big)\psi_{\mathfrak{h}}(u),\quad\forall x,y\in\mathfrak{g},\ u\in\mathfrak{h}. \tag{20}\]

At the end of this section, we reveal the relationship between crossed homomorphisms between Lie-Yamaguti algebras and relative Rota-Baxter operators of weight \(1\) on Lie-Yamaguti algebras. We first give the notion of relative Rota-Baxter operators of weight \(\lambda\) on Lie-Yamaguti algebras.

**Definition 3.11**.: Let \((\mathfrak{g},[\cdot,\cdot],[\![\cdot,\cdot,\cdot]\!])\) be a Lie-Yamaguti algebra and \((V;\rho,\mu)\) its representation. A linear map \(T:V\longrightarrow\mathfrak{g}\) is called **a relative Rota-Baxter operator of weight \(\lambda\)** if the following equalities hold:

\[[Tu,Tv]=T\Big(\rho(Tu)v-\rho(Tv)u+\lambda[u,v]\Big),\]
\[[\![Tu,Tv,Tw]\!]=T\Big(D(Tu,Tv)w+\mu(Tv,Tw)u-\mu(Tu,Tw)v+\lambda[\![u,v,w]\!]\Big),\quad\forall u,v,w\in V.\]
Relative Rota-Baxter operators of nonzero weight on Lie algebras stem from the classical Yang-Baxter equation and have many applications in mathematical physics. Here, we introduce the notion of relative Rota-Baxter operators of nonzero weight on Lie-Yamaguti algebras and explore its relation to crossed homomorphisms.

**Proposition 3.12**.: _Let \((\rho,\mu)\) be an action of a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) on another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\). An invertible linear map \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to \((\rho,\mu)\) if and only if \(H^{-1}:\mathfrak{h}\longrightarrow\mathfrak{g}\) is a relative Rota-Baxter operator of weight \(1\) on \(\mathfrak{g}\) with respect to the representation \((\mathfrak{h};\rho,\mu)\)._

Proof.: Assume that the invertible linear map \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) is a crossed homomorphism. Then for all \(u,v,w\in\mathfrak{h}\), by (15), we have

\[[H^{-1}(u),H^{-1}(v)]_{\mathfrak{g}}=H^{-1}\Big(H[H^{-1}(u),H^{-1}(v)]_{\mathfrak{g}}\Big)=H^{-1}\Big(\rho(H^{-1}(u))v-\rho(H^{-1}(v))u+[u,v]_{\mathfrak{h}}\Big).\]

Similarly, by (16), we have

\[[\![H^{-1}(u),H^{-1}(v),H^{-1}(w)]\!]_{\mathfrak{g}}=H^{-1}\Big(H[\![H^{-1}(u),H^{-1}(v),H^{-1}(w)]\!]_{\mathfrak{g}}\Big)=H^{-1}\Big(D(H^{-1}(u),H^{-1}(v))w+\mu(H^{-1}(v),H^{-1}(w))u-\mu(H^{-1}(u),H^{-1}(w))v+[\![u,v,w]\!]_{\mathfrak{h}}\Big).\]

Thus \(H^{-1}\) is a relative Rota-Baxter operator of weight \(1\). Conversely, suppose that \(H^{-1}\) is a relative Rota-Baxter operator of weight \(1\). For all \(x,y,z\in\mathfrak{g}\), there exist \(u,v,w\in\mathfrak{h}\) such that \(x=H^{-1}(u)\), \(y=H^{-1}(v)\), and \(z=H^{-1}(w)\). Then we have

\[H[x,y]_{\mathfrak{g}}=H[H^{-1}(u),H^{-1}(v)]_{\mathfrak{g}}=H\circ H^{-1}\Big(\rho(H^{-1}(u))v-\rho(H^{-1}(v))u+[u,v]_{\mathfrak{h}}\Big)=\rho(x)H(y)-\rho(y)H(x)+[Hx,Hy]_{\mathfrak{h}},\]

and

\[H[\![x,y,z]\!]_{\mathfrak{g}}=H[\![H^{-1}(u),H^{-1}(v),H^{-1}(w)]\!]_{\mathfrak{g}}=H\circ H^{-1}\Big(D(H^{-1}(u),H^{-1}(v))w+\mu(H^{-1}(v),H^{-1}(w))u-\mu(H^{-1}(u),H^{-1}(w))v+[\![u,v,w]\!]_{\mathfrak{h}}\Big)=D(x,y)H(z)+\mu(y,z)H(x)-\mu(x,z)H(y)+[\![Hx,Hy,Hz]\!]_{\mathfrak{h}},\]

which implies that \(H\) is a crossed homomorphism.

## 4. Cohomology of crossed homomorphisms between Lie-Yamaguti algebras

In this section, we build the cohomology of crossed homomorphisms between Lie-Yamaguti algebras. First of all, we construct a representation of a Lie-Yamaguti algebra from a given action. Let \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) and \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) be two Lie-Yamaguti algebras, and \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) a crossed homomorphism with respect to an action \((\rho,\mu)\). Define \(\rho_{H}:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})\) and \(\mu_{H}:\otimes^{2}\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})\) by

\[\rho_{H}(x)u:=[Hx,u]_{\mathfrak{h}}+\rho(x)u, \tag{21}\]
\[\mu_{H}(x,y)u:=[\![u,Hx,Hy]\!]_{\mathfrak{h}}+\mu(x,y)u,\quad\forall x,y\in\mathfrak{g},\ u\in\mathfrak{h}. \tag{22}\]
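In coordinates, the induced maps (21)-(22) are immediate to implement. The fragment below is a sketch under our earlier conventions (`op_matrix` as above; `H` a matrix; `b2h`, `b3h` the brackets of \(\mathfrak{h}\); `rho_act`, `mu_act` the matrices of the action; `dim_h` the dimension of \(\mathfrak{h}\); all of these are assumed inputs), and can be combined with Definition 2.4 to verify Proposition 4.2 numerically.

```python
def rho_H(x):
    # (21): rho_H(x)u = [Hx, u]_h + rho(x)u
    return op_matrix(lambda u: b2h(H @ x, u), dim=dim_h) + rho_act(x)

def mu_H(x, y):
    # (22): mu_H(x,y)u = [[u, Hx, Hy]]_h + mu(x,y)u
    return op_matrix(lambda u: b3h(u, H @ x, H @ y), dim=dim_h) + mu_act(x, y)
```

The map \(D_{H}\) of Lemma 4.1 below can be implemented in exactly the same way from (23).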
**Lemma 4.1**.: _With the assumptions above, define \(D_{H}:\wedge^{2}\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})\) to be_

\[D_{H}(x,y)u:=[\![Hx,Hy,u]\!]_{\mathfrak{h}}+D(x,y)u,\quad\forall x,y\in\mathfrak{g},\ u\in\mathfrak{h}. \tag{23}\]

_Then \(D_{H}=D_{\rho_{H},\mu_{H}}\)._

Proof.: For all \(x,y\in\mathfrak{g},\ u\in\mathfrak{h}\), we have

\[D_{\rho_{H},\mu_{H}}(x,y)u=\mu_{H}(y,x)u-\mu_{H}(x,y)u+[\rho_{H}(x),\rho_{H}(y)](u)-\rho_{H}([x,y]_{\mathfrak{g}})u\]
\[=[\![u,Hy,Hx]\!]_{\mathfrak{h}}+\mu(y,x)u-[\![u,Hx,Hy]\!]_{\mathfrak{h}}-\mu(x,y)u+[Hx,[Hy,u]_{\mathfrak{h}}]_{\mathfrak{h}}+\rho(x)[Hy,u]_{\mathfrak{h}}+[Hx,\rho(y)u]_{\mathfrak{h}}+\rho(x)\rho(y)u-[Hy,[Hx,u]_{\mathfrak{h}}]_{\mathfrak{h}}-\rho(y)[Hx,u]_{\mathfrak{h}}-[Hy,\rho(x)u]_{\mathfrak{h}}-\rho(y)\rho(x)u-[H[x,y]_{\mathfrak{g}},u]_{\mathfrak{h}}-\rho([x,y]_{\mathfrak{g}})u\]
\[=[\![u,Hy,Hx]\!]_{\mathfrak{h}}-[\![u,Hx,Hy]\!]_{\mathfrak{h}}+[Hx,[Hy,u]_{\mathfrak{h}}]_{\mathfrak{h}}-[Hy,[Hx,u]_{\mathfrak{h}}]_{\mathfrak{h}}-[[Hx,Hy]_{\mathfrak{h}},u]_{\mathfrak{h}}+D(x,y)u\]
\[\stackrel{(1),(10)}{=}[\![Hx,Hy,u]\!]_{\mathfrak{h}}+D(x,y)u=D_{H}(x,y)u.\]

This completes the proof.

**Proposition 4.2**.: _With the assumptions above, \((\mathfrak{h};\rho_{H},\mu_{H})\) is a representation of \(\mathfrak{g}\), where \(\rho_{H}\), \(\mu_{H}\), and \(D_{H}\) are given by (21)-(23) respectively._

Proof.: For all \(x,y,z\in\mathfrak{g},\ u\in\mathfrak{h}\), we have

\[\mu_{H}([x,y]_{\mathfrak{g}},z)u-\mu_{H}(x,z)\rho_{H}(y)u+\mu_{H}(y,z)\rho_{H}(x)u\]
\[=[\![u,H[x,y]_{\mathfrak{g}},Hz]\!]_{\mathfrak{h}}+\mu([x,y],z)u-[\![[Hy,u]_{\mathfrak{h}},Hx,Hz]\!]_{\mathfrak{h}}-\mu(x,z)[Hy,u]_{\mathfrak{h}}-[\![\rho(y)u,Hx,Hz]\!]_{\mathfrak{h}}-\mu(x,z)\rho(y)u+[\![[Hx,u]_{\mathfrak{h}},Hy,Hz]\!]_{\mathfrak{h}}+\mu(y,z)[Hx,u]_{\mathfrak{h}}+[\![\rho(x)u,Hy,Hz]\!]_{\mathfrak{h}}+\mu(y,z)\rho(x)u\]
\[\stackrel{(5)}{=}[\![u,[Hx,Hy]_{\mathfrak{h}},Hz]\!]_{\mathfrak{h}}-[\![[Hy,u]_{\mathfrak{h}},Hx,Hz]\!]_{\mathfrak{h}}+[\![[Hx,u]_{\mathfrak{h}},Hy,Hz]\!]_{\mathfrak{h}}\]
Define \(\delta:\wedge^{{}_{\rm g}}{{}_{\rm g}}\longrightarrow{\rm Hom}({{}_{\rm g}},{ _{\rm b}})\) to be \[\Big{(}\delta(x,y)\Big{)}z:=\mu(y,z)(Hx)-\mu(x,z)(Hy)+\llbracket Hx,Hy,Hz \rrbracket_{{}_{\rm b}},\quad\forall x,y,z\in{{}_{\rm g}}. \tag{24}\] **Proposition 4.3**.: _With the notations above, \(\delta(x,y)\) defined by (24) is a \(1\)-cocycle of the Lie-Yamaguti algebra \(({{}_{\rm g}},\llbracket\cdot,\cdot\rrbracket_{{}_{\rm g}},\llbracket\cdot, \cdot\rrbracket_{{}_{\rm g}})\) with coefficients in the representation \(({{}_{\rm b}};\rho_{H},\mu_{H})\)._ Proof.: For all \(x_{1},x_{2},x_{3}\in{{}_{\rm g}}\), we have \[\begin{array}{ll}&\delta_{1}\big{(}\delta(x,y)\big{)}(x_{1},x_{2})\\ =&\rho_{H}(x_{1})\delta(x,y)x_{2}-\rho_{H}(x_{2})\delta(x,y)x_{1}-\delta(x,y) \big{(}[x_{1},x_{2}]_{{}_{\rm g}}\big{)}\\ =&\rho_{H}(x_{1})\Big{(}\mu(y,x_{2})(Hx)-\mu(x,x_{2})(Hy)+\llbracket Hx,Hy,Hz \rrbracket_{{}_{\rm b}}\Big{)}\\ &-\rho_{H}(x_{2})\Big{(}\mu(y,x_{1})(Hx)-\mu(x,x_{1})(Hy)+\llbracket Hx,Hy,Hx_ {1}\rrbracket_{{}_{\rm b}}\Big{)}\\ &-\mu(y,[x_{1},x_{2}]_{{}_{\rm g}})(Hx)+\mu(x,[x_{1},x_{2}]_{{}_{\rm g}})(Hy)+ \llbracket Hx,Hy,H[x_{1},x_{2}]_{{}_{\rm g}}\rrbracket_{{}_{\rm b}}\\ =&[Hx_{1},\mu(y,x_{2})(Hx)]_{{}_{\rm b}}+\rho(x_{1})\mu(y,x_{2})(Hx)-[Hx_{1}, \mu(x,x_{2})(Hy)]_{{}_{\rm b}}\\ &-\rho(x_{1})\mu(x,x_{2})(Hy)+[Hx_{1},\llbracket Hx,Hy,Hx_{2}\rrbracket_{{}_{ \rm b}}]_{{}_{\rm b}}+\rho(x_{1})\llbracket Hx,Hy,Hx_{2}\rrbracket_{{}_{\rm b }}\\ &-[Hx_{2},\mu(y,x_{1})(Hx)]_{{}_{\rm b}}-\rho(x_{2})\mu(y,x_{1})(Hx)+[Hx_{2}, \mu(x,x_{1})(Hy)]_{{}_{\rm b}}\\ &+\rho(x_{2})\mu(x,x_{1})(Hy)-[Hx_{2},\llbracket Hx,Hy,Hx_{1}\rrbracket_{{}_{ \rm b}}]_{{}_{\rm b}}-\rho(x_{2})\llbracket Hx,Hy,Hx_{1}\rrbracket_{{}_{\rm b }}\\ &-\mu(y,[x_{1},x_{2}]_{{}_{\rm g}})(Hx)+\mu(x,[x_{1},x_{2}]_{{}_{\rm b}})(Hy)+ \llbracket Hx,Hy,[Hx_{1},Hx_{2}]_{{}_{\rm b}}\rrbracket_{{}_{\rm b}}\\ =&0,\end{array}\] and \[\begin{array}{ll}&\delta_{\rm II}(\delta(x,y))(x_{1},x_{2},x_{3})\\ =&D_{H}(x_{1},x_{2})\delta(x,y)x_{3}+\mu_{H}(x_{2},x_{3})\delta(x,y)x_{1}-\mu_ {H}(x_{1},x_{3})\delta(x,y)x_{2}-\delta(x,y)\llbracket x_{1},x_{2},x_{3} \rrbracket_{{}_{\rm g}}\\ =&D_{H}(x_{1},x_{2})\Big{(}\mu(y,x_{3})(Hx)-\mu(x,x_{3})(Hy)+\llbracket Hx,Hy, Hx_{3}\rrbracket_{{}_{\rm b}}\Big{)}\\ &+\mu_{H}(x_{2},x_{3})\Big{(}\mu(y,x_{1})(Hx)-\mu(x,x_{1})(Hy)+\llbracket Hx,Hy, Hx_{1}\rrbracket_{{}_{\rm b}}\Big{)}\\ &-\mu_{H}(x_{1},x_{3})\Big{(}\mu(y,x_{2})(Hx)-\mu(x,x_{2})(Hy)+\llbracket Hx,Hy, Hx_{2}\rrbracket_{{}_{\rm b}}\Big{)}\\ &-\mu(y,\llbracket x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}})(Hx)+\mu(x, \llbracket x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}})(Hy)-\llbracket Hx,Hy,H \llbracket x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}}\rrbracket_{{}_{\rm b}}\\ \end{array}\] \[\begin{array}{ll}&\delta_{\rm II}(\delta(x,y))(x_{1},x_{2},x_{3})\\ =&D_{H}(x_{1},x_{2})\delta(x,y)x_{3}+\mu_{H}(x_{2},x_{3})\delta(x,y)x_{1}-\mu_ {H}(x_{1},x_{3})\delta(x,y)x_{2}-\delta(x,y)\llbracket x_{1},x_{2},x_{3} \rrbracket_{{}_{\rm g}}\\ =&D_{H}(x_{1},x_{2})\Big{(}\mu(y,x_{3})(Hx)-\mu(x,x_{3})(Hy)+\llbracket Hx,Hy, Hx_{3}\rrbracket_{{}_{\rm b}}\Big{)}\\ &+\mu_{H}(x_{2},x_{3})\Big{(}\mu(y,x_{1})(Hx)-\mu(x,x_{1})(Hy)+\llbracket Hx,Hy, Hx_{1}\rrbracket_{{}_{\rm b}}\Big{)}\\ &-\mu_{H}(x_{1},x_{3})\Big{(}\mu(y,x_{2})(Hx)-\mu(x,x_{2})(Hy)+\llbracket Hx,Hy, Hx_{2}\rrbracket_{{}_{\rm b}}\Big{)}\\ &-\mu(y,\llbracket x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}})(Hx)+\mu(x, \llbracket x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}})(Hy)-\llbracket Hx,Hy,H \llbracket 
x_{1},x_{2},x_{3}\rrbracket_{{}_{\rm b}}\rrbracket_{{}_{\rm b}} \end{array}\] \[= \llbracket Hx_{1},Hx_{2},\mu(y,x_{3})(Hx)\rrbracket_{0}+D(x_{1},x_{2}) \mu(y,x_{3})(Hx)-\llbracket Hx_{1},Hx_{2},\mu(x,x_{3})(Hy)\rrbracket_{0}\] \[-D(x_{1},x_{2})\mu(x,x_{3})(Hy)+\llbracket Hx_{1},Hx_{2},\llbracket Hx,Hy,Hx_{3}\rrbracket_{0}\rrbracket_{0}+D(x_{1},x_{2})\llbracket Hx,Hy,Hx_{3} \rrbracket_{0}\] \[+\llbracket\mu(y,x_{1})(Hx),Hx_{2},Hx_{3}\rrbracket_{0}+\mu(x_{2},x_{3})\mu(y,x_{1})(Hx)-\llbracket\mu(x,x_{1})(Hy),Hx_{2},Hx_{3}\rrbracket_{0}\] \[-\mu(x_{2},x_{3})\mu(x,x_{1})(Hy)+\llbracket\llbracket Hx,Hy,Hx_{1} \rrbracket_{0},Hx_{2},Hx_{3}\rrbracket_{0}+\mu(x_{2},x_{3})\llbracket Hx,Hy,Hx _{1}\rrbracket_{0}\] \[-\llbracket\mu(y,x_{2})(Hx),Hx_{1},Hx_{3}\rrbracket_{0}-\mu(x_{1 },x_{3})\mu(y,x_{2})(Hx)+\llbracket\mu(x,x_{2})(Hy),Hx_{1},Hx_{3}\rrbracket_{0}\] \[+\mu(x_{1},x_{3})\mu(x,x_{2})(Hy)-\llbracket\llbracket Hx,Hy,Hx_{2 }\rrbracket_{0},Hx_{1},Hx_{3}\rrbracket_{0}-\mu(x_{1},x_{3})\llbracket Hx,Hy,Hx _{2}\rrbracket_{0}\] \[-\mu(y,\llbracket x_{1},x_{2},x_{3}\rrbracket_{0})(Hx)+\mu(x, \llbracket x_{1},x_{2},x_{3}\rrbracket_{0})(Hy)-\llbracket Hx,Hy,\llbracket Hx _{1},Hx_{2},Hx_{3}\rrbracket_{0}\rrbracket_{0}\] \[= 0,\] which implies that \(\delta(x,y)\) is a \(1\)-cocycle. This finishes the proof. By now, we can establish the cohomology of crossed homomorphisms between Lie-Yamaguti algebra as follows. Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket_{\mathfrak{g}},\llbracket \cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},\llbracket\cdot,\cdot\rrbracket_{\mathfrak{h}},\llbracket \cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Define the set of \(n\)-cochains to be \[\mathfrak{C}^{p}_{H}(\mathfrak{g},\mathfrak{h})=\begin{cases}C^{p}_{Lie}( \mathfrak{g},\mathfrak{h}),&p\geq 1,\\ \wedge^{2}\mathfrak{g},&p=0.\end{cases}\] Define \(\partial:\mathfrak{C}^{p}_{H}(\mathfrak{g},\mathfrak{h})\longrightarrow \mathfrak{C}^{p+1}_{H}(\mathfrak{g},\mathfrak{h})\) to be \[\partial=\begin{cases}\delta^{H},&p\geq 1,\\ \delta,&p=0,\end{cases}\] where the map \(\delta^{H}\) is the corresponding coboundary map given by (11)-(14) with coefficients in the representation \((\mathfrak{h};\rho_{H},\mu_{H})\). Then combining with Proposition 4.3, we obtain that \(\big{(}\bigoplus_{n=0}^{\infty}\mathfrak{C}^{n}_{H}(\mathfrak{g},\mathfrak{ h}),\partial\big{)}\) is a complex. Denote the set of \(n\)-cochains by \(\mathcal{Z}^{n}_{H}(\mathfrak{g},\mathfrak{h})\), and denote the set of \(n\)-cobonudaries by \(\mathcal{B}^{n}_{H}(\mathfrak{g},\mathfrak{h})\). The resulting \(n\)-th cohomology group is given by \[\mathcal{H}^{n}_{H}(\mathfrak{g},\mathfrak{h}):=\mathcal{Z}^{n}_{H}(\mathfrak{ g},\mathfrak{h})/\mathcal{B}^{n}_{H}(\mathfrak{g},\mathfrak{h}),n\geq 0.\] **Definition 4.4**.: The cohomology of the cochian complex \(\big{(}\bigoplus_{n=0}^{\infty}\mathfrak{C}^{n}_{H}(\mathfrak{g},\mathfrak{h}),\partial\big{)}\) is called the **cohomology** of the crossed homomorphism \(H\). At the end of this section, we show that a certain homomorphism between two crossed homomorphisms induces a homomorphism between the corresponding cohomology groups. 
Let \(H\) and \(H^{\prime}\) be two crossed homomorphisms from a Lie-Yamaguti algebra \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket_{\mathfrak{h}},\llbracket \cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},\llbracket\cdot,\cdot\rrbracket_{\mathfrak{h}},\llbracket \cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Let \((\psi_{\mathfrak{g}},\psi_{\mathfrak{h}})\) be a homomorphism from \(H^{\prime}\) to \(H\), where \(\psi_{\mathfrak{g}}\) is invertible. For \(n\geqslant 2\), define a linear map \(p:\mathfrak{C}^{n}_{H^{\prime}}(\mathfrak{g},\mathfrak{h})\longrightarrow \mathfrak{C}^{n}_{H}(\mathfrak{g},\mathfrak{h})\) to be \[p_{\Pi}(f)(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n}) = \psi_{\mathfrak{h}}\Big{(}f(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_ {1}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}))\Big{)},\] \[p_{\Pi}(g)(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n},x) = \psi_{\mathfrak{h}}\Big{(}g(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_ {1}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}),\psi_{\mathfrak{g}}^{- 1}(x))\Big{)},\quad\forall(f,g)\in\mathfrak{C}^{n}_{H^{\prime}}(\mathfrak{g}, \mathfrak{h}).\] Here \(\mathfrak{X}_{k}=x_{k}\wedge y_{k}\in\wedge^{2}\mathfrak{g}\), \(k=1,2,\cdots,n,\ x\in\mathfrak{g}\), and we use a notation \(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{k})=\psi_{\mathfrak{g}}^{-1}(x_{k}) \wedge\psi_{\mathfrak{g}}^{-1}(y_{k}),\ k=1,2,\cdots,n.\) In particular, for \(n=1\), \(p:\mathfrak{C}_{H^{\prime}}^{1}(\mathfrak{g},\mathfrak{h})\longrightarrow \mathfrak{C}_{H}^{1}(\mathfrak{g},\mathfrak{h})\) is defined to be \[p(f)(x)=\psi_{\mathfrak{h}}\Big{(}f(\psi_{\mathfrak{g}}^{-1}(x))\Big{)},\ \ \ \ \forall x\in\mathfrak{g},\ f\in\mathfrak{C}_{H^{\prime}}^{1}(\mathfrak{g}, \mathfrak{h}).\] **Theorem 4.5**.: _With the notations above, \(p\) is a cochain map from a cochain \((\oplus_{n=1}^{\infty}\mathfrak{C}_{H^{\prime}}^{n}(\mathfrak{g},\mathfrak{h} ),\delta^{H^{\prime}})\) to \((\oplus_{n=1}^{\infty}\mathfrak{C}_{H}^{n}(\mathfrak{g},\mathfrak{h}),\delta ^{H})\). 
Consequently, \(p\) induces a homomorphism \(p_{*}:\mathcal{H}_{H^{\prime}}^{n}(\mathfrak{g},\mathfrak{h})\longrightarrow \mathcal{H}_{H}^{n}(\mathfrak{g},\mathfrak{h})\) between cohomology groups._ Proof.: For all \((f,g)\in\mathfrak{C}_{H^{\prime}}^{n}(\mathfrak{g},\mathfrak{h})\) (\(n\geq 2\)), and for all \(\mathfrak{X}_{k}=x_{k}\wedge y_{k}\in\wedge^{2}\mathfrak{g},\ k=1,2,\cdots,n,\ x\in \mathfrak{g}\), we have \[\delta_{\Pi}^{H}\Big{(}p_{1}(f),\,p_{\Pi}(g)\Big{)}(\mathfrak{X}_ {1},\cdots,\mathfrak{X}_{n},x)\] \[= (-1)^{n-1}\Big{(}\mu_{H}(y_{n},x)p_{\Pi}(g)(\mathfrak{X}_{1}, \cdots,\mathfrak{X}_{n-1},x_{n})-\mu_{H}(x_{n},x)p_{\Pi}(g)(\mathfrak{X}_{1}, \cdots,\mathfrak{X}_{n-1},y_{n})\Big{)}\] \[+\sum_{k=1}^{n}(-1)^{k+1}D_{H}(\mathfrak{X}_{k})p_{\Pi}(g)( \mathfrak{X}_{1},\cdots,\mathfrak{X}_{k},\cdots,\mathfrak{X}_{n},x)\] \[+\sum_{k<l}^{n}(-1)^{k}p_{\Pi}(g)(\mathfrak{X}_{1},\cdots, \mathfrak{X}_{k},\cdots,\mathfrak{X}_{k}\circ\mathfrak{X}_{l},\cdots, \mathfrak{X}_{n},x)\] \[+\sum_{k=1}^{n}(-1)^{k}p_{\Pi}(g)(\mathfrak{X}_{1},\cdots, \mathfrak{X}_{k},\cdots,\mathfrak{X}_{n},\llbracket\mathfrak{X},x\rrbracket_{ \mathfrak{g}}\rangle\] \[= (-1)^{n-1}\Big{(}\mu_{H}(y_{n},x)\psi_{\mathfrak{h}}(g(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}( \mathfrak{X}_{n-1}),\psi_{\mathfrak{g}}^{-1}(x_{n}))\] \[-\mu_{H}(x_{n},x)\psi_{\mathfrak{h}}(g(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}( \mathfrak{X}_{n-1}),\psi_{\mathfrak{g}}^{-1}(y_{n}))\Big{)}\] \[+\sum_{k=1}^{n}(-1)^{k+1}D_{H}(\mathfrak{X}_{k})\psi_{\mathfrak{h} }\big{(}g(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g }}^{-1}(\mathfrak{X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}), \psi_{\mathfrak{g}}^{-1}(x)\big{)}\] \[+\sum_{k<l}(-1)^{k}\psi_{\mathfrak{h}}\big{(}g(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}( \mathfrak{X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{k})\circ\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{l}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{ X}_{n}),\psi_{\mathfrak{g}}^{-1}(x)\big{)}\] \[+\sum_{k=1}^{n}(-1)^{k}\psi_{\mathfrak{h}}\big{(}g(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{ X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}),\psi_{\mathfrak{g}}^{-1}( \mathfrak{X}_{n}),\psi_{\mathfrak{g}}^{-1}(x)\big{)}\] \[= \psi_{\mathfrak{h}}\Big{(}(-1)^{n-1}\Big{(}\mu_{H^{\prime}}(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{Y}_{n}),\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}) \mathfrak{g}(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{n-1}),\psi_{\mathfrak{g}}^{-1}(x_{n}))\] \[-\mu_{H^{\prime}}(\psi_{\mathfrak{g}}^{-1}(x_{n}),\psi_{ \mathfrak{g}}^{-1}(x))g(\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_ {\mathfrak{g}}^{-1}(\mathfrak{X}_{n}),\psi_{\mathfrak{g}}^{-1}(y_{n}))\Big{)}\] \[+\psi_{\mathfrak{h}}\Big{(}\sum_{k=1}^{n}(-1)^{k+1}D_{H^{\prime}} (\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{k}))g(\psi_{\mathfrak{g}}^{-1}(\mathfrak{ X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}( \mathfrak{X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}),\psi_{ \mathfrak{g}}^{-1}(x))\Big{)}\] \[+\psi_{\mathfrak{h}}\Big{(}\sum_{k=1}^{n}(-1)^{k}g(\psi_{ \mathfrak{g}}^{-1}(\mathfrak{X}_{1}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{ X}_{k}),\cdots,\psi_{\mathfrak{g}}^{-1}(\mathfrak{X}_{n}),\psi_{\mathfrak{g}}^{-1}(x) \big{)}\Big{)}\] \[= 
p_{\Pi}\Big{(}\delta_{\Pi}^{H^{\prime}}(f,g)\big{)}(\mathfrak{X} _{1},\cdots,\mathfrak{X}_{n},x).\] Note that the third equality holds since for all \(x,y_{n}\in\mathfrak{g},\ u\in\mathfrak{h}\), by using \((\psi_{\mathfrak{g}},\psi_{\mathfrak{h}})\) is a homomorphism from crossed homomorphism \(H^{\prime}\) to crossed homomorphism \(H\), we have \[\mu_{H}(y_{n},x)\psi_{\mathfrak{h}}(u) = \llbracket\psi_{\mathfrak{h}}(u),Hy_{n},Hx\rrbracket_{\mathfrak{h} }+\mu(y_{n},x)\psi_{\mathfrak{h}}(u)\] \[= \llbracket\psi_{\mathfrak{h}}(u),\psi_{\mathfrak{h}}\circ H^{ \prime}\circ\psi_{\mathfrak{g}}^{-1}(y_{n}),\psi_{\mathfrak{h}}\circ H^{ \prime}\circ\psi_{\mathfrak{g}}^{-1}(x)\rrbracket_{\mathfrak{h}}+\psi_{ \mathfrak{h}}\mu(\psi_{\mathfrak{g}}^{-1}(y_{n}),\psi_{\mathfrak{g}}^{-1}(x))u\] \[= \psi_{\mathfrak{h}}\Big{(}\llbracket u,H^{\prime}\circ\psi_{ \mathfrak{g}}^{-1}(y_{n}),H^{\prime}\circ\psi_{\mathfrak{g}}^{-1}(x)\rrbracket_{ \mathfrak{h}}+\mu(\psi_{\mathfrak{g}}^{-1}(y_{n}),\psi_{\mathfrak{g}}^{-1}(x)) u\Big{)}\] \[= \psi_{\mathfrak{h}}\Big{(}\mu_{H^{\prime}}(\psi_{\mathfrak{g}}^{ -1}(y_{n}),\psi_{\mathfrak{g}}^{-1}(x))\Big{)}.\] Thus we obtain that \(p_{\Pi}\Big{(}\delta_{\Pi}^{H^{\prime}}(f,g)\Big{)}=\delta_{\Pi}^{H}\big{(}p_ {1}(f),p_{\Pi}(g)\Big{)}\) for all \((f,g)\in\mathfrak{C}_{H^{\prime}}^{n}(\mathfrak{g},\mathfrak{h})\) (\(n\geq 2\)). Similarly, we can show that \(p_{\Pi}\Big{(}\delta_{1}^{H^{\prime}}(f,g)\Big{)}=\delta_{1}^{H}\big{(}p_{1}(f ),p_{\Pi}(g)\Big{)}\) for all \((f,g)\in\mathfrak{C}_{H^{\prime}}^{n}(\mathfrak{g},\mathfrak{h})\) (\(n\geq 2\)). And moreover, it is easy to see that the case of \(n=1\) is still valid. This finishes the proof. ## 5. Deformations of crossed homomorphisms between Lie-Yamaguti algebras In this section, we use the cohomology theory established in the former section to characterize deformations of crossed homomorphisms between Lie-Yamaguti algebras. ### Linear deformations of crossed homomorphisms between Lie-Yamaguti algebras In this subsection, we use the cohomology constructed in the former section to characterize the linear deformations of crossed homomorphisms between Lie-Yamaguti algebras. **Definition 5.1**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot \rrbracket_{\mathfrak{h}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot \rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\). Let \(\mathfrak{H}:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a linear map. If \(H_{t}:=H+t\mathfrak{H}\) is still a crossed homomorphism for all \(t\), then we say that \(\mathfrak{H}\) generates a **linear deformation** of the crossed homomorphism \(H\). It is easy to see that \(\mathfrak{H}\) generates a linear deformation of the crossed homomorphism \(H\), then for all \(x,y,z\in\mathfrak{g}\), there hold that \[\mathfrak{H}[x,y]_{\mathfrak{g}} = \rho(x)(\mathfrak{H}y)-\rho(y)(\mathfrak{H}x)+[Hx,\mathfrak{H} y]_{\mathfrak{h}}+[\mathfrak{H}x,Hy]_{\mathfrak{h}},\] \[\mathfrak{H}[x,y,z]_{\mathfrak{h}} = D(x,y)(\mathfrak{H}z)+\mu(y,z)(\mathfrak{H}x)-\mu(x,z)(\mathfrak{H }y)+\llbracket\mathfrak{H}x,Hy,Hz\rrbracket_{\mathfrak{h}}\] \[+\llbracket Hx,\mathfrak{H}y,Hz\rrbracket_{\mathfrak{h}}+ \llbracket Hx,Hy,\mathfrak{H}z\rrbracket_{\mathfrak{h}}\,\] which means that \(\mathfrak{H}\) is a \(2\)-cocycle of the crossed homomorphism \(H\). 
**Definition 5.2**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},[\![\cdot,\cdot,\cdot]\!]_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\).

1. Two linear deformations \(H_{t}^{1}=H+t\mathfrak{H}_{1}\) and \(H_{t}^{2}=H+t\mathfrak{H}_{2}\) are called **equivalent** if there exists an element \(\mathfrak{X}\in\wedge^{2}\mathfrak{g}\) such that \((\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{L}(\mathfrak{X}),\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X}))\) is a homomorphism from \(H_{t}^{2}\) to \(H_{t}^{1}\).
2. A linear deformation \(H_{t}=H+t\mathfrak{H}\) of the crossed homomorphism \(H\) is called **trivial** if it is equivalent to \(H\).

If two linear deformations \(H_{t}^{2}\) and \(H_{t}^{1}\) are equivalent, then \(\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{L}(\mathfrak{X})\) is a homomorphism of the Lie-Yamaguti algebra \(\mathfrak{g}\) for every value of \(t\). Since \(\mathfrak{L}(\mathfrak{X})\) is a derivation of both brackets by (3) and (4), comparing coefficients of the higher powers of \(t\) yields

\[[\![\mathfrak{L}(\mathfrak{X})y,\mathfrak{L}(\mathfrak{X})z,t]\!]_{\mathfrak{g}}+[\![\mathfrak{L}(\mathfrak{X})y,z,\mathfrak{L}(\mathfrak{X})t]\!]_{\mathfrak{g}}+[\![y,\mathfrak{L}(\mathfrak{X})z,\mathfrak{L}(\mathfrak{X})t]\!]_{\mathfrak{g}}=0, \tag{25}\]
\[[\mathfrak{L}(\mathfrak{X})y,\mathfrak{L}(\mathfrak{X})z]_{\mathfrak{g}}=0, \tag{26}\]
\[[\![\mathfrak{L}(\mathfrak{X})y,\mathfrak{L}(\mathfrak{X})z,\mathfrak{L}(\mathfrak{X})t]\!]_{\mathfrak{g}}=0 \tag{27}\]

for all \(y,z,t\in\mathfrak{g}\). Note that by Eqs. (18) and (19), we obtain that for all \(y,z\in\mathfrak{g}\),

\[\rho(\mathfrak{L}(\mathfrak{X})y)D(\mathfrak{X})=0, \tag{28}\]
\[\big(\mu(\mathfrak{L}(\mathfrak{X})y,z)+\mu(y,\mathfrak{L}(\mathfrak{X})z)\big)D(\mathfrak{X})+\mu(\mathfrak{L}(\mathfrak{X})y,\mathfrak{L}(\mathfrak{X})z)=0, \tag{29}\]
\[\mu(\mathfrak{L}(\mathfrak{X})y,\mathfrak{L}(\mathfrak{X})z)D(\mathfrak{X})=0. \tag{30}\]

Moreover, writing \(\mathfrak{X}=x\wedge y\), Eq. (17) yields that for all \(z\in\mathfrak{g}\),

\[\Big(\mathrm{Id}_{\mathfrak{h}}+tD(x,y)\Big)\Big(H+t\mathfrak{H}_{2}\Big)z=\Big(H+t\mathfrak{H}_{1}\Big)\Big(\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{L}(x,y)\Big)z,\]

which implies that

\[\mathfrak{H}_{2}z-\mathfrak{H}_{1}z=\mu(y,z)(Hx)-\mu(x,z)(Hy)+[\![Hx,Hy,Hz]\!]_{\mathfrak{h}}=\partial(\mathfrak{X})z. \tag{31}\]
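Numerically, the right-hand side of (31) is exactly the coboundary of the 0-cochain \(\mathfrak{X}=x\wedge y\) from (24); in the matrix conventions of our earlier sketches (with `H`, `mu_act`, and `b3h` assumed as before) it reads:

```python
def partial_X(x, y, z):
    # (31)/(24): (partial X)(z) = mu(y,z)(Hx) - mu(x,z)(Hy) + [[Hx, Hy, Hz]]_h
    return (mu_act(y, z) @ (H @ x) - mu_act(x, z) @ (H @ y)
            + b3h(H @ x, H @ y, H @ z))
```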
If two linear deformations \(H^{1}_{t}=H+t\mathfrak{H}_{1}\) and \(H^{2}_{t}=H+t\mathfrak{H}_{2}\) are equivalent, then \(\mathfrak{H}_{1}\) and \(\mathfrak{H}_{2}\) are in the same cohomology class in \(\mathcal{H}^{2}_{H}(\mathfrak{g},\mathfrak{h})\)._ **Definition 5.4**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). If an element \(\mathfrak{X}\in\wedge^{2}\mathfrak{g}\) satisfies Eqs. (25)-(30) and the following equality \[D(\mathfrak{X})\Big{(}D(\mathfrak{X})(Hy)-H\llbracket\mathfrak{X},y\rrbracket_{\mathfrak{g}}\Big{)}=0,\quad\forall y\in\mathfrak{g},\] then \(\mathfrak{X}\) is called a **Nijenhuis element** associated to \(H\). Denote by \(\mathsf{Nij}(H)\) the set of Nijenhuis elements associated to \(H\). It is easy to see that a trivial deformation of a crossed homomorphism between Lie-Yamaguti algebras gives rise to a Nijenhuis element. In fact, the converse is also true. **Theorem 5.5**.: _Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Then for any \(\mathfrak{X}\in\mathsf{Nij}(H)\), \(H_{t}:=H+t\mathfrak{H}\) with \(\mathfrak{H}=\partial(\mathfrak{X})\) is a linear deformation of \(H\). Moreover, this deformation is trivial._ We need the following lemma to prove the above theorem. **Lemma 5.6**.: _Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to an action \((\rho,\mu)\). Let \(\psi_{\mathfrak{g}}:\mathfrak{g}\longrightarrow\mathfrak{g}\) and \(\psi_{\mathfrak{h}}:\mathfrak{h}\longrightarrow\mathfrak{h}\) be two Lie-Yamaguti algebra homomorphisms such that (18) and (19) hold. 
Then the linear map \(\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to the action \((\rho,\mu)\)._ Proof.: For all \(x,y,z\in\mathfrak{g}\), we have \[(\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}})\Big{(}[x,y]_{\mathfrak{g}}\Big{)}\] \[= \psi_{\mathfrak{h}}^{-1}\Big{(}\rho(\psi_{\mathfrak{g}}(x))H(\psi_{\mathfrak{g}}(y))-\rho(\psi_{\mathfrak{g}}(y))H(\psi_{\mathfrak{g}}(x))+[H\circ\psi_{\mathfrak{g}}(x),H\circ\psi_{\mathfrak{g}}(y)]_{\mathfrak{h}}\Big{)}\] \[= \rho(x)\Big{(}\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(y)\Big{)}-\rho(y)\Big{(}\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(x)\Big{)}+[\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(y)]_{\mathfrak{h}},\] and \[(\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}})\Big{(}\llbracket x,y,z\rrbracket_{\mathfrak{g}}\Big{)}\] \[= \psi_{\mathfrak{h}}^{-1}\Big{(}D(\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{g}}(y))H(\psi_{\mathfrak{g}}(z))+\mu(\psi_{\mathfrak{g}}(y),\psi_{\mathfrak{g}}(z))H(\psi_{\mathfrak{g}}(x))-\mu(\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{g}}(z))H(\psi_{\mathfrak{g}}(y))\] \[+\llbracket H\circ\psi_{\mathfrak{g}}(x),H\circ\psi_{\mathfrak{g}}(y),H\circ\psi_{\mathfrak{g}}(z)\rrbracket_{\mathfrak{h}}\Big{)}\] \[= D(x,y)\Big{(}\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(z)\Big{)}+\mu(y,z)\Big{(}\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(x)\Big{)}-\mu(x,z)\Big{(}\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(y)\Big{)}\] \[+\llbracket\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(x),\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(y),\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}(z)\rrbracket_{\mathfrak{h}},\] which implies that the linear map \(\psi_{\mathfrak{h}}^{-1}\circ H\circ\psi_{\mathfrak{g}}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) with respect to the action \((\rho,\mu)\). _The proof of Theorem 5.5:_ For any Nijenhuis element \(\mathfrak{X}\in\mathsf{Nij}(H)\), we define \[\mathfrak{H}:=\partial(\mathfrak{X}).\] By Definition 5.4, for any \(t\), \(H_{t}=H+t\mathfrak{H}\) satisfies \[H\circ\Big{(}\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\Big{)} = \Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)}\circ H_{t},\] \[\Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)}\circ\rho(y) = \rho\Big{(}\big{(}\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\big{)}(y)\Big{)}\circ\Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)},\] \[\Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)}\circ\mu(y,z) = \mu\Big{(}\big{(}\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\big{)}(y),\big{(}\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\big{)}(z)\Big{)}\circ\Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)},\quad\forall y,z\in\mathfrak{g}.\] For \(t\) sufficiently small, we see that \(\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\) and \(\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\) are Lie-Yamaguti algebra homomorphisms. Thus, we have \[H_{t}=\Big{(}\mathrm{Id}_{\mathfrak{h}}+tD(\mathfrak{X})\Big{)}^{-1}\circ H\circ\Big{(}\mathrm{Id}_{\mathfrak{g}}+t\mathfrak{D}(\mathfrak{X})\Big{)}.\] By Lemma 5.6, we deduce that \(H_{t}\) is a crossed homomorphism from \(\mathfrak{g}\) to \(\mathfrak{h}\) for \(t\) sufficiently small. 
Thus \(\mathfrak{H}=\partial(\mathfrak{X})\) generates a linear deformation of \(H\). It is easy to see that this deformation is trivial. This completes the proof. ### Formal deformations of crossed homomorphisms between Lie-Yamaguti algebras In this subsection, we study formal deformations of crossed homomorphisms between Lie-Yamaguti algebras. Let \(\mathbb{K}[[t]]\) be the ring of formal power series in one variable \(t\). If \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) is a Lie-Yamaguti algebra, then there is a Lie-Yamaguti algebra structure over the ring \(\mathbb{K}[[t]]\) on \(\mathfrak{g}[[t]]\) given by \[\Big{[}\sum_{i=0}^{\infty}x_{i}t^{i},\sum_{j=0}^{\infty}y_{j}t^{j}\Big{]}_{\mathfrak{g}} = \sum_{s=0}^{\infty}\sum_{i+j=s}[x_{i},y_{j}]_{\mathfrak{g}}t^{s},\] \[\Big{\llbracket}\sum_{i=0}^{\infty}x_{i}t^{i},\sum_{j=0}^{\infty}y_{j}t^{j},\sum_{k=0}^{\infty}z_{k}t^{k}\Big{\rrbracket}_{\mathfrak{g}} = \sum_{s=0}^{\infty}\sum_{i+j+k=s}\llbracket x_{i},y_{j},z_{k}\rrbracket_{\mathfrak{g}}t^{s},\quad\forall x_{i},y_{j},z_{k}\in\mathfrak{g}.\] For any action \((\rho,\mu)\) of a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) on another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\), there is a natural action of the Lie-Yamaguti algebra \(\mathfrak{g}[[t]]\) on the \(\mathbb{K}[[t]]\)-Lie-Yamaguti algebra \(\mathfrak{h}[[t]]\) given by \[\rho\Big{(}\sum_{i=0}^{\infty}x_{i}t^{i}\Big{)}\Big{(}\sum_{k=0}^{\infty}v_{k}t^{k}\Big{)} = \sum_{s=0}^{\infty}\sum_{i+k=s}\rho(x_{i})v_{k}t^{s},\] \[\mu\Big{(}\sum_{i=0}^{\infty}x_{i}t^{i},\sum_{j=0}^{\infty}y_{j}t^{j}\Big{)}\Big{(}\sum_{k=0}^{\infty}v_{k}t^{k}\Big{)} = \sum_{s=0}^{\infty}\sum_{i+j+k=s}\mu(x_{i},y_{j})v_{k}t^{s},\quad\forall x_{i},y_{j}\in\mathfrak{g},\ v_{k}\in\mathfrak{h}.\] Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Consider the power series \[H_{t}=\sum_{i=0}^{\infty}\mathfrak{H}_{i}t^{i},\quad\mathfrak{H}_{i}\in\operatorname{Hom}(\mathfrak{g},\mathfrak{h}), \tag{32}\] that is, \(H_{t}\in\operatorname{Hom}_{\mathbb{K}}(\mathfrak{g},\mathfrak{h})[[t]]=\operatorname{Hom}_{\mathbb{K}}(\mathfrak{g},\mathfrak{h}[[t]])\). **Definition 5.7**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Suppose that \(H_{t}\) is given by (32) with \(\mathfrak{H}_{0}=H\), and that \(H_{t}\) also satisfies \[H_{t}[x,y]_{\mathfrak{g}} = \rho(x)(H_{t}y)-\rho(y)(H_{t}x)+[H_{t}x,H_{t}y]_{\mathfrak{h}}, \tag{33}\] \[H_{t}\llbracket x,y,z\rrbracket_{\mathfrak{g}} = D(x,y)(H_{t}z)+\mu(y,z)(H_{t}x)-\mu(x,z)(H_{t}y)+\llbracket H_{t}x,H_{t}y,H_{t}z\rrbracket_{\mathfrak{h}}, \tag{34}\] for all \(x,y,z\in\mathfrak{g}\). We say that \(H_{t}\) is a **formal deformation** of \(H\). Substituting Eq. (32) into Eqs. 
(33) and (34) and comparing the coefficients of \(t^{s}\) (\(\forall s\geqslant 0\)), we have, for all \(x,y,z\in\mathfrak{g}\), \[\mathfrak{H}_{s}[x,y]_{\mathfrak{g}} = \rho(x)(\mathfrak{H}_{s}y)-\rho(y)(\mathfrak{H}_{s}x)+\sum_{i+j=s}[\mathfrak{H}_{i}x,\mathfrak{H}_{j}y]_{\mathfrak{h}}, \tag{35}\] \[\mathfrak{H}_{s}\llbracket x,y,z\rrbracket_{\mathfrak{g}} = D(x,y)(\mathfrak{H}_{s}z)+\mu(y,z)(\mathfrak{H}_{s}x)-\mu(x,z)(\mathfrak{H}_{s}y)+\sum_{i+j+k=s}\llbracket\mathfrak{H}_{i}x,\mathfrak{H}_{j}y,\mathfrak{H}_{k}z\rrbracket_{\mathfrak{h}}. \tag{36}\] In particular, taking \(s=1\) shows that \(\mathfrak{H}_{1}\) satisfies exactly the \(2\)-cocycle conditions displayed after Definition 5.1; we call \(\mathfrak{H}_{1}\) the **infinitesimal** of the formal deformation \(H_{t}\). **Definition 5.10**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Two formal deformations \(\bar{H}_{t}=\sum_{i=0}^{\infty}\bar{\mathfrak{H}}_{i}t^{i}\) and \(H_{t}=\sum_{i=0}^{\infty}\mathfrak{H}_{i}t^{i}\), where \(\bar{\mathfrak{H}}_{0}=\mathfrak{H}_{0}=H\), are said to be **equivalent** if there exist \(\tilde{\mathfrak{X}}\in\wedge^{2}\mathfrak{g}\), \(\phi_{i}\in\mathfrak{gl}(\mathfrak{g})\) and \(\varphi_{i}\in\mathfrak{gl}(\mathfrak{h})\), \(i\geqslant 2\), such that for \[\phi_{t}=\operatorname{Id}_{\mathfrak{g}}+t\mathfrak{D}(\tilde{\mathfrak{X}})+\sum_{i=2}^{\infty}\phi_{i}t^{i},\ \ \ \ \varphi_{t}=\operatorname{Id}_{\mathfrak{h}}+tD(\tilde{\mathfrak{X}})+\sum_{i=2}^{\infty}\varphi_{i}t^{i}, \tag{37}\] the following conditions are satisfied: \[[\phi_{t}(x),\phi_{t}(y)]_{\mathfrak{g}}=\phi_{t}[x,y]_{\mathfrak{g}},\ \ \ \ \llbracket\phi_{t}(x),\phi_{t}(y),\phi_{t}(z)\rrbracket_{\mathfrak{g}}=\phi_{t}\llbracket x,y,z\rrbracket_{\mathfrak{g}},\ \ \ \ \forall x,y,z\in\mathfrak{g}, \tag{38}\] \[[\varphi_{t}(u),\varphi_{t}(v)]_{\mathfrak{h}}=\varphi_{t}[u,v]_{\mathfrak{h}},\ \ \ \ \llbracket\varphi_{t}(u),\varphi_{t}(v),\varphi_{t}(w)\rrbracket_{\mathfrak{h}}=\varphi_{t}\llbracket u,v,w\rrbracket_{\mathfrak{h}},\ \ \ \ \forall u,v,w\in\mathfrak{h}, \tag{39}\] \[\varphi_{t}\rho(x)u=\rho(\phi_{t}(x))(\varphi_{t}(u)),\ \ \ \varphi_{t}\mu(x,y)u=\mu(\phi_{t}(x),\phi_{t}(y))(\varphi_{t}(u)),\ \ \ \ \forall x,y\in\mathfrak{g},\ u\in\mathfrak{h}, \tag{40}\] and \[\varphi_{t}\circ\bar{H}_{t}=H_{t}\circ\phi_{t} \tag{41}\] as \(\mathbb{K}[[t]]\)-module maps. The following theorem is the second key conclusion in this section. 
**Theorem 5.11**.: _Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\). If two formal deformations \(\bar{H}_{t}=\sum_{i=0}^{\infty}\bar{\mathfrak{H}}_{i}t^{i}\) and \(H_{t}=\sum_{i=0}^{\infty}\mathfrak{H}_{i}t^{i}\) are equivalent, then their infinitesimals are in the same cohomology class._ Proof.: Let \((\phi_{t},\varphi_{t})\) be the pair of maps defined by (37) which makes the two deformations \(\bar{H}_{t}=\sum_{i=0}^{\infty}\bar{\mathfrak{H}}_{i}t^{i}\) and \(H_{t}=\sum_{i=0}^{\infty}\mathfrak{H}_{i}t^{i}\) equivalent. By (41), comparing the coefficients of \(t\) and writing \(\tilde{\mathfrak{X}}=x\wedge y\), we have \[\bar{\mathfrak{H}}_{1}z=\mathfrak{H}_{1}z+\mu(y,z)(Hx)-\mu(x,z)(Hy)+\llbracket Hx,Hy,Hz\rrbracket_{\mathfrak{h}}=\mathfrak{H}_{1}z+\partial(\tilde{\mathfrak{X}})(z),\quad\forall z\in\mathfrak{g},\] which implies that \(\bar{\mathfrak{H}}_{1}\) and \(\mathfrak{H}_{1}\) are in the same cohomology class. ### Order \(n\) deformations of crossed homomorphisms between Lie-Yamaguti algebras In this subsection, we introduce a special cohomology class associated to an order \(n\) deformation of a crossed homomorphism, and show that a deformation of order \(n\) of a crossed homomorphism is extendable if and only if this cohomology class in the second cohomology group is trivial. This is why we call this special cohomology class the obstruction class of a deformation of order \(n\) being extendable. **Definition 5.12**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\). If \(H_{t}=\sum_{i=0}^{n}\mathfrak{H}_{i}t^{i}\) with \(\mathfrak{H}_{0}=H\), \(\mathfrak{H}_{i}\in\operatorname{Hom}_{\mathbb{K}}(\mathfrak{g},\mathfrak{h})\), \(i=1,2,\cdots,n\), defines a \(\mathbb{K}[t]/(t^{n+1})\)-module map from \(\mathfrak{g}[t]/(t^{n+1})\) to the Lie-Yamaguti algebra \(\mathfrak{h}[t]/(t^{n+1})\) satisfying \[H_{t}[x,y]_{\mathfrak{g}} = \rho(x)(H_{t}y)-\rho(y)(H_{t}x)+[H_{t}x,H_{t}y]_{\mathfrak{h}}, \tag{42}\] \[H_{t}\llbracket x,y,z\rrbracket_{\mathfrak{g}} = D(x,y)(H_{t}z)+\mu(y,z)(H_{t}x)-\mu(x,z)(H_{t}y)+\llbracket H_{t}x,H_{t}y,H_{t}z\rrbracket_{\mathfrak{h}}, \tag{43}\] for all \(x,y,z\in\mathfrak{g}\), then we say that \(H_{t}\) is an **order \(n\) deformation** of \(H\). **Remark 5.13**.: In Eqs. (42) and (43), the brackets on the left hand sides are computed in the Lie-Yamaguti algebra \(\mathfrak{g}[t]/(t^{n+1})\), and the right hand sides make sense since \(H_{t}\) is a \(\mathbb{K}[t]/(t^{n+1})\)-module map. **Definition 5.14**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\rho,\mu)\). Let \(H_{t}=\sum_{i=0}^{n}\mathfrak{H}_{i}t^{i}\) be an order \(n\) deformation of \(H\). 
If there exists a \(1\)-cochain \(\mathfrak{H}_{n+1}\in\operatorname{Hom}_{\mathbb{K}}(\mathfrak{g},\mathfrak{h})\) such that \(\widetilde{H}_{t}=H_{t}+\mathfrak{H}_{n+1}t^{n+1}\) is an order \(n+1\) deformation of \(H\), then we say that \(H_{t}\) is **extendable**. The following theorem is the third key conclusion in this section. **Theorem 5.15**.: _Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\). Let \(H_{t}=\sum_{i=0}^{n}\mathfrak{H}_{i}t^{i}\) be an order \(n\) deformation of \(H\). Then \(H_{t}\) is extendable if and only if the cohomology class \([\mathsf{Ob}^{H}]\in\mathcal{H}^{2}_{H}(\mathfrak{g},\mathfrak{h})\) is trivial, where \(\mathsf{Ob}^{H}=(\mathsf{Ob}^{H}_{\rm I},\mathsf{Ob}^{H}_{\rm II})\in\mathfrak{C}^{2}_{H}(\mathfrak{g},\mathfrak{h})\) is defined by_ \[\mathsf{Ob}^{H}_{\rm I}(x,y) = \sum_{{i+j=n+1}\atop{0\leq i,j\leq n}}[\mathfrak{H}_{i}x,\mathfrak{H}_{j}y]_{\mathfrak{h}},\] \[\mathsf{Ob}^{H}_{\rm II}(x,y,z) = \sum_{{i+j+k=n+1}\atop{0\leq i,j,k\leq n}}\llbracket\mathfrak{H}_{i}x,\mathfrak{H}_{j}y,\mathfrak{H}_{k}z\rrbracket_{\mathfrak{h}},\quad\forall x,y,z\in\mathfrak{g}.\] Proof.: Let \(\widetilde{H}_{t}=\sum_{i=0}^{n+1}\mathfrak{H}_{i}t^{i}\) be an extension of \(H_{t}\); then for all \(x,y,z\in\mathfrak{g}\), \[\widetilde{H}_{t}[x,y]_{\mathfrak{g}} = \rho(x)(\widetilde{H}_{t}y)-\rho(y)(\widetilde{H}_{t}x)+[\widetilde{H}_{t}x,\widetilde{H}_{t}y]_{\mathfrak{h}}, \tag{44}\] \[\widetilde{H}_{t}\llbracket x,y,z\rrbracket_{\mathfrak{g}} = D(x,y)(\widetilde{H}_{t}z)+\mu(y,z)(\widetilde{H}_{t}x)-\mu(x,z)(\widetilde{H}_{t}y)+\llbracket\widetilde{H}_{t}x,\widetilde{H}_{t}y,\widetilde{H}_{t}z\rrbracket_{\mathfrak{h}}. \tag{45}\] Expanding Eq. (44) and comparing the coefficients of \(t^{n+1}\) yields that \[\mathfrak{H}_{n+1}[x,y]_{\mathfrak{g}}=\rho(x)(\mathfrak{H}_{n+1}y)-\rho(y)(\mathfrak{H}_{n+1}x)+\sum_{i+j=n+1}[\mathfrak{H}_{i}x,\mathfrak{H}_{j}y]_{\mathfrak{h}},\] which is equivalent to \[\rho(x)(\mathfrak{H}_{n+1}y)-\rho(y)(\mathfrak{H}_{n+1}x)+[\mathfrak{H}_{n+1}x,Hy]_{\mathfrak{h}}+[Hx,\mathfrak{H}_{n+1}y]_{\mathfrak{h}}-\mathfrak{H}_{n+1}[x,y]_{\mathfrak{g}}+\sum_{{i+j=n+1}\atop{0\leq i,j\leq n}}[\mathfrak{H}_{i}x,\mathfrak{H}_{j}y]_{\mathfrak{h}}=0,\] i.e., \[\mathsf{Ob}^{H}_{\rm I}+\delta^{H}_{\rm I}(\mathfrak{H}_{n+1})=0. \tag{46}\] Similarly, expanding Eq. (45) and comparing the coefficients of \(t^{n+1}\) yields that \[\mathsf{Ob}^{H}_{\rm II}+\delta^{H}_{\rm II}(\mathfrak{H}_{n+1})=0. \tag{47}\] From (46) and (47), we get \[\mathsf{Ob}^{H}=\delta^{H}(-\mathfrak{H}_{n+1}).\] Thus, the cohomology class \([\mathsf{Ob}^{H}]\) is trivial. Conversely, suppose that the cohomology class \([\mathsf{Ob}^{H}]\) is trivial; then there exists \(\mathfrak{H}_{n+1}\in\mathfrak{C}^{1}_{H}(\mathfrak{g},\mathfrak{h})\) such that \(\mathsf{Ob}^{H}=-\delta^{H}(\mathfrak{H}_{n+1})\). Set \(\widetilde{H}_{t}=H_{t}+\mathfrak{H}_{n+1}t^{n+1}\). 
Then for all \(0\leq s\leq n+1\) and for all \(x,y,z\in\mathfrak{g}\), \(\widetilde{H}_{t}\) satisfies \[\mathfrak{H}_{s}[x,y]_{\mathfrak{g}}-\rho(x)(\mathfrak{H}_{s}y)+\rho(y)(\mathfrak{H}_{s}x)-\sum_{i+j=s}[\mathfrak{H}_{i}x,\mathfrak{H}_{j}y]_{\mathfrak{h}}=0,\] \[\mathfrak{H}_{s}\llbracket x,y,z\rrbracket_{\mathfrak{g}}-D(x,y)(\mathfrak{H}_{s}z)-\mu(y,z)(\mathfrak{H}_{s}x)+\mu(x,z)(\mathfrak{H}_{s}y)-\sum_{i+j+k=s}\llbracket\mathfrak{H}_{i}x,\mathfrak{H}_{j}y,\mathfrak{H}_{k}z\rrbracket_{\mathfrak{h}}=0,\] which implies that \(\widetilde{H}_{t}\) is an order \(n+1\) deformation of \(H\). Hence it is an extension of \(H_{t}\). **Definition 5.16**.: Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\), and let \(H_{t}=\sum_{i=0}^{n}\mathfrak{H}_{i}t^{i}\) be an order \(n\) deformation of \(H\). Then the cohomology class \([\mathsf{Ob}^{H}]\in\mathcal{H}^{2}_{H}(\mathfrak{g},\mathfrak{h})\) defined in Theorem 5.15 is called the **obstruction class** of \(H_{t}\) being extendable. **Corollary 5.17**.: _Let \(H:\mathfrak{g}\longrightarrow\mathfrak{h}\) be a crossed homomorphism from a Lie-Yamaguti algebra \((\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{g}})\) to another Lie-Yamaguti algebra \((\mathfrak{h},[\cdot,\cdot]_{\mathfrak{h}},\llbracket\cdot,\cdot,\cdot\rrbracket_{\mathfrak{h}})\) with respect to an action \((\mathfrak{h};\rho,\mu)\). If \(\mathcal{H}^{2}_{H}(\mathfrak{g},\mathfrak{h})=0\), then every \(1\)-cocycle in \(\mathcal{Z}^{1}_{H}(\mathfrak{g},\mathfrak{h})\) is the infinitesimal of some formal deformation of \(H\)._ **Acknowledgements:** Qiao was partially supported by NSFC grant 11971282. Xu was partially supported by NSFC grant 12201253 and Natural Science Foundation of Jiangsu Province BK20220510.
2303.02238
Extreme sensitivity of the magnetic ground-state to halide composition in FeCl$_{3-x}$Br$_x$
Mixed halide chemistry has recently been utilized to tune the intrinsic magnetic properties of transition-metal halides $-$ one of the largest families of magnetic van der Waals materials. Prior studies have shown that the strength of exchange interactions, hence the critical temperature, can be tuned smoothly with halide composition for a given ground-state. Here we show that the ground-state itself can be altered by a small change of halide composition leading to a quantum phase transition in FeCl$_{3-x}$Br$_x$. Specifically, we find a three-fold jump in the N\'{e}el temperature and a sign change in the Weiss temperature at $x= 0.08$ corresponding to only $3\%$ bromine doping. Using neutron scattering, we reveal a change of the ground-state from spiral order in FeCl$_3$ to A-type antiferromagnetic order in FeBr$_3$. Using first-principles calculations, we show that a delicate balance between nearest and next-nearest neighbor interactions is responsible for such a transition. These results support the proximity of FeCl$_3$ to a spiral spin liquid state, in which competing interactions and nearly degenerate magnetic $k$-vectors may cause large changes in response to small perturbations.
Andrew Cole, Alenna Streeter, Adolfo O. Fumega, Xiaohan Yao, Zhi-Cheng Wang, Erxi Feng, Huibo Cao, Jose L. Lado, Stephen E. Nagler, Fazel Tafti
2023-03-03T22:04:30Z
http://arxiv.org/abs/2303.02238v1
Extreme sensitivity of the magnetic ground-state to halide composition in FeCl\({}_{3-x}\)Br\({}_{x}\) ###### Abstract Mixed halide chemistry has recently been utilized to tune the intrinsic magnetic properties of transition-metal halides - one of the largest families of magnetic van der Waals materials. Prior studies have shown that the strength of exchange interactions, hence the critical temperature, can be tuned smoothly with halide composition for a given ground-state. Here we show that the ground-state itself can be altered by a small change of halide composition leading to a quantum phase transition in FeCl\({}_{3-x}\)Br\({}_{x}\). Specifically, we find a three-fold jump in the Neel temperature and a sign change in the Weiss temperature at \(x=0.08\) corresponding to only 3% bromine doping. Using neutron scattering, we reveal a change of the ground-state from spiral order in FeCl\({}_{3}\) to A-type antiferromagnetic order in FeBr\({}_{3}\). Using first-principles calculations, we show that a delicate balance between nearest and next-nearest neighbor interactions is responsible for such a transition. These results support the proximity of FeCl\({}_{3}\) to a spiral spin liquid state, in which competing interactions and nearly degenerate magnetic \(k\)-vectors may cause large changes in response to small perturbations. ## I Introduction Magnetic frustration provides a fascinating playground for the realization of exotic quantum states [1]. A curious example of frustrated magnet is the spiral spin-liquid (SSL) phase produced by competing interactions in a bipartite lattice such as diamond (3D) and honeycomb (2D) structures. The SSL is characterized by degenerate spin spirals with \(\mathbf{k}\)-vectors lying on a surface in momentum space [2]. Weak thermal fluctuations can entropically lift this degeneracy and establish order by disorder [3]. Spinel materials such as MnSc\({}_{2}\)S\({}_{4}\) and CoAl\({}_{2}\)O\({}_{4}\) are candidates of SSL in the 3D diamond lattice described by a frustrated \(J_{1}\)-\(J_{2}\) Heisenberg model [4; 5; 6; 7; 8]. In 2D, however, the experimental realization of such effects in the honeycomb lattice [9; 10], prevalent in van der Waals (VdW) materials, has remained elusive. Recent neutron scattering experiments on the VdW magnet FeCl\({}_{3}\) with a honeycomb lattice have shown a ring of degenerate \(\mathbf{k}\)-vectors just above \(T_{\mathrm{N}}\)= 8.5 K, indicating a 2D SSL phase [11]. Below \(T_{\mathrm{N}}\), a spiral order with \(\mathbf{k}=(\frac{4}{15},\frac{1}{15},\frac{3}{2})\) is established [12; 13] indicating the entropic selection of this propagation vector by spin fluctuations, i.e. order by disorder. In this letter, we reveal the extreme sensitivity of the ground-state of FeCl\({}_{3}\) to tiny amounts of disorder by characterizing a series of FeCl\({}_{3-x}\)Br\({}_{x}\) crystals (Fig. 1a-c). We find a three-fold jump in the Neel temperature (\(T_{\mathrm{N}}\)) and a sign change in the Weiss temperature (\(\Theta_{\mathrm{W}}\)) between \(x=0\) and 0.08 corresponding to only 3% bromine doping. Such dramatic enhancement of the order due to a tiny amount of disorder suggests that FeCl\({}_{3}\) is a frustrated magnet on the verge of a quantum phase transition (QPT) between the SSL phase and a different ordered state (Fig. 1d). For \(x\geq 0.08\), we identify the ordered state to be A-type AFM which is ferromagnetic (FM) within the layers and AFM between them. 
This is similar to the ground-state of FeBr\({}_{3}\) and different from the spiral order in FeCl\({}_{3}\). Our experimental results are corroborated by first-principles calculations on a 2D \(J_{1}\)-\(J_{2}\) Heisenberg model that predict a QPT between the SSL and FM states in FeCl\({}_{3-x}\)Br\({}_{x}\) as observed experimentally. We discuss the importance of including \(p\)-orbital correlations, in addition to \(d\)-orbital correlations, for obtaining the correct magnetic ground-state in density functional theory (DFT). To our knowledge, the impact of \(p\)-orbital correlations in DFT calculations for VdW materials has not been discussed in the literature before. Details of crystal growth, neutron scattering, scanning electron microscopy (SEM), magnetization measurements, and DFT calculations are explained in the Supplementary Information (SI). ## II Results and Discussion Among different classes of VdW magnets, transition-metal halides offer a special opportunity in that their intrinsic properties can be tuned by mixing the halide species (Cl, Br, and I) [14]. With increasing halide size, the orbital overlaps and ligand spin-orbit coupling (SOC) are enhanced, which in turn tune all magneto-optical properties as reported previously in CrCl\({}_{3-x}\)Br\({}_{x}\), CrBr\({}_{3-y}\)I\({}_{y}\), and CrCl\({}_{3-x-y}\)Br\({}_{x}\)I\({}_{y}\) alloys [14; 15]. Following the recent report of a SSL ground-state and order by disorder in FeCl\({}_{3}\) [11], we grew the heteroanionic crystals of FeCl\({}_{3-x}\)Br\({}_{x}\) with the goal of tuning the SSL ground-state. Since both FeCl\({}_{3}\) and FeBr\({}_{3}\) crystallize in the rhombohedral space group \(R\bar{3}m\) with a layered honeycomb structure, a solid solution of FeCl\({}_{3-x}\)Br\({}_{x}\) crystals can also be grown with the same structure (Figs. 1a,b). The composition and uniform distribution of elements were confirmed using energy dispersive x-ray spectroscopy (EDX). For example, Fig. 1c shows the distribution of Fe \(K\)-edge (yellow), Cl \(K\)-edge (green), and Br \(L\)-edge (red) absorption intensities in FeCl\({}_{1.33}\)Br\({}_{1.67}\). The super-exchange interactions between Fe\({}^{3+}\) ions are mediated by Cl and Br ligands within the honeycomb layers of each FeCl\({}_{3-x}\)Br\({}_{x}\) sample. The nearest neighbor (NN) and next-nearest neighbor (NNN) exchange paths are denoted by \(J_{1}\) and \(J_{2}\) in Fig. 1a, respectively. Although \(J_{2}\) is weaker than \(J_{1}\), there are six NNNs and three NNs, leading to a close competition between the \(J_{1}\) and \(J_{2}\) terms in the model Hamiltonian \[H=J_{1}\sum_{\langle ij\rangle}\vec{S}_{i}\cdot\vec{S}_{j}+J_{2}\sum_{\langle\langle ij\rangle\rangle}\vec{S}_{i}\cdot\vec{S}_{j} \tag{1}\] where \(S=5/2\) in the high-spin state of Fe\({}^{3+}\) (\({}^{6}S_{5/2}\)). The competition between different magnetic exchange paths (direct and super-exchange) leads to effective \(J_{1}<0\) (FM) and \(J_{2}>0\) (AFM) for the 2D spin model of Eq. 1. The ratio between \(J_{1}\) and \(J_{2}\), \(|J_{1}/J_{2}|\), determines the magnetic ground-state of the honeycomb lattice as well as the degenerate \((q,q,0)\) wave vector of the SSL phase above \(T_{\rm N}\), which is shown schematically in Fig. 1d. It is known from prior neutron diffraction experiments [12] that a spin spiral is the ground-state of FeCl\({}_{3}\). 
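To illustrate how the ratio \(|J_{1}/J_{2}|\) selects the ordering wave vector of Eq. 1, the following minimal Python sketch evaluates the classical (Luttinger-Tisza) band minimum of the \(J_{1}\)-\(J_{2}\) honeycomb model; the couplings, grid resolution, and degeneracy tolerance are illustrative choices for this sketch, not the fitted values for FeCl\({}_{3-x}\)Br\({}_{x}\).

```python
import numpy as np

# Honeycomb lattice: two sublattices A, B; NN distance set to 1/sqrt(3).
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
nn = [np.array([0.0, 1 / np.sqrt(3)]),      # three A->B vectors (J1 bonds)
      np.array([0.5, -0.5 / np.sqrt(3)]),
      np.array([-0.5, -0.5 / np.sqrt(3)])]
nnn = [a1, -a1, a2, -a2, a1 - a2, a2 - a1]  # six A->A / B->B vectors (J2 bonds)

def lowest_band(kx, ky, J1, J2):
    """Lower eigenvalue of the 2x2 Fourier matrix [[J2*g, J1*f], [J1*f*, J2*g]]."""
    k = np.array([kx, ky])
    f = sum(np.exp(1j * (k @ d)) for d in nn)   # inter-sublattice structure factor
    g = sum(np.cos(k @ d) for d in nnn)         # intra-sublattice (real)
    return J2 * g - abs(J1) * abs(f)            # minus band is selected for J1 < 0

ks = np.linspace(-2 * np.pi, 2 * np.pi, 241)    # coarse k-grid (covers > 1st BZ)
for ratio in (8.0, 4.0):                        # above / below the critical 2Z = 6
    E = np.array([[lowest_band(kx, ky, -ratio, 1.0) for kx in ks] for ky in ks])
    i, j = np.unravel_index(E.argmin(), E.shape)
    n_min = int(np.isclose(E, E.min(), rtol=1e-4).sum())
    print(f"|J1/J2| = {ratio}: minimum at k = ({ks[j]:+.2f}, {ks[i]:+.2f}), "
          f"{n_min} grid points within 0.01% of the minimum")
```

For \(|J_{1}/J_{2}|\) above 6 the minimum sits at the zone center (in-plane FM order), while below 6 it spreads onto a closed contour of nearly degenerate \(k\)-vectors — the fingerprint of the spiral spin-liquid regime.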
We will show here that the Br substitution modifies the \(|J_{1}/J_{2}|\) ratio and drives the system to an A-type AFM state (which is FM in 2D) where FeBr\({}_{3}\) is located in Fig. 1d. ### Magnetization Measurements The experimental evidence of competing FM and AFM interactions appears in the magnetic susceptibility (\(\chi\)) and magnetization (\(M\)) data. The raw data for all samples with \(H\|c\) and \(H\perp c\) are shown in Figs. 2a-d, and the analyzed results are presented in Figs. 2e-h. The empty and full circles in Figs. 2a,b correspond to zero-field-cooled (ZFC) and field-cooled (FC) measurements, respectively. In Fig. 2a, the peaks in \(\chi(T)\) curves with \(H\|c\) and the minimal difference between the ZFC and FC curves are characteristics of AFM transitions. Unlike \(H\|c\), the \(H\perp c\) curves in Fig. 2b do not go to zero when \(T\to 0\), indicating a finite FM component. A combination of FM and AFM correlations also exists in CrCl\({}_{3}\), which undergoes an A-type AFM order (FM within the layers and AFM between them) [16]. In CrCl\({}_{3}\), only the t\({}_{\rm 2g}\) manifold of Cr\({}^{3+}\) is at half-filling, whereas both t\({}_{\rm 2g}\) and e\({}_{\rm g}\) levels are at half-filling in FeCl\({}_{3}\), maximizing the competition between FM and AFM correlations according to Goodenough-Kanamori rules [17; 18]. We determined \(T_{\rm N}\) from \(d\chi/dT\) curves (Supplementary Fig. S1) and plotted it as a function of bromine content (\(x\) in FeCl\({}_{3-x}\)Br\({}_{x}\)) for both field directions in Fig. 2e. Using a Curie-Weiss (CW) analysis (Supplementary Fig. S1), we extracted \(\Theta_{\rm W}\), which is a rough measure of the magnetic correlations, and plotted it as a function of \(x\) in Fig. 2f. The central observation in Figs. 2e,f is a jump in both \(T_{\rm N}\) and \(\Theta_{\rm W}\) when a tiny amount of Br is added to FeCl\({}_{3}\), i.e. at \(x=0.08\) in FeCl\({}_{3-x}\)Br\({}_{x}\) corresponding to only 3% Br doping. The effect is dramatic, with \(T_{\rm N}\) showing a three-fold jump and \(\Theta_{\rm W}\) changing sign, indicating a change of magnetic ground-state at \(x=0.08\). From the CW analysis, we found the effective magnetic moment of all FeCl\({}_{3-x}\)Br\({}_{x}\) samples to be close to 5.9 \(\mu_{B}\) within experimental errors (Fig. 2g), as expected for Fe\({}^{3+}\) in the high-spin state. After the initial three-fold jump of \(T_{\rm N}\) from 8.5(2) K at \(x=0\) to 23.1(1) K at \(x=0.08\), it is suppressed linearly to 14.5(5) K at \(x=3\) (Fig. 2e). This behavior is the same for both field directions. Similarly, after the initial jump of \(\Theta_{\rm W}\) from \(-13\) K at \(x=0\) to \(+21\) K at \(x=0.08\), it is suppressed linearly to 8 K at \(x=3\) for both field directions (Fig. 2f). Two conclusions can be drawn from these observations. (i) The initial jump in \(T_{\rm N}\) and sign change in \(\Theta_{\rm W}\) suggest an abrupt change of the magnetic ground-state of FeCl\({}_{3}\) by a tiny amount of Br doping, indicating its proximity to a QPT. (ii) The linear decrease of both \(T_{\rm N}\) and \(\Theta_{\rm W}\) from \(x=0.08\) to 3 suggests that the ground-states of all FeCl\({}_{3-x}\)Br\({}_{x}\) samples with \(x\geq 0.08\) are similar to the ground-state of FeBr\({}_{3}\) and different from that of FeCl\({}_{3}\). Further evidence of the change of magnetic ground-state at \(x=0.08\) comes from the field dependence of magnetization. The \(M(H)\) curves of FeCl\({}_{3}\) in Figs. 
2c,d are nearly linear in both field directions, consistent with the spiral AFM order reported in prior neutron diffraction studies [12]. The subtle kinks in the \(M(H\perp c)\) curves Figure 1: (a) Schematic illustration of the honeycomb \(ab\)-planes in the mixed halide system FeCl\({}_{1.5}\)Br\({}_{1.5}\). The \(J_{1}\) and \(J_{2}\) exchange paths are highlighted in orange and blue colors, respectively. (b) Layered (VdW) structure of FeCl\({}_{1.5}\)Br\({}_{1.5}\) viewed from the [210] direction. (c) SEM image and EDX color maps confirming the homogeneous distribution of halides in an FeCl\({}_{1.33}\)Br\({}_{1.67}\) crystal. (d) Phase diagram of the spiral and FM states with FeCl\({}_{3}\) and FeBr\({}_{3}\) across the QPT. Here, the \((q,q,0)\)\(y\)-label is the degenerate magnetic wave vector for the SSL state, not the final spiral ground-state [11]. FeBr\({}_{3}\) is not in the SSL phase, and its order is FM in 2D. at \(H=0\) and 4 T are due to the alignment of spiral domains with the field. Unlike FeCl\({}_{3}\), the FeCl\({}_{3-x}\)Br\({}_{x}\) samples with \(x\geq 0.08\) exhibit field-induced metamagnetic (MM) transitions when \(H\perp c\) and linear \(M(H)\) when \(H\|c\) (Figs. 2c,d). Tracing the critical field (\(H_{c}\)) of the MM transition as a function of \(x\) (Fig. 2h) reveals an initial jump at \(x=0.08\) followed by a smooth increase of \(H_{c}\) for \(x\geq 0.08\). The \(H_{c}\) values in Fig. 2h correspond to the peak fields in the \(dM/dH\) curves shown in the inset. Similar to the behavior of \(T_{\rm N}\) and \(\Theta_{\rm W}\) (Figs. 2e,f), the initial jump of \(H_{c}\) at \(x=0.08\) in Fig. 2h indicates a change of the magnetic ground-state, and its subsequent smooth increase indicates that the ground-states of FeCl\({}_{3-x}\)Br\({}_{x}\) samples with \(x\geq 0.08\) are similar to that of FeBr\({}_{3}\) and different from FeCl\({}_{3}\). ### Neutron Diffraction. Single crystal neutron diffraction has been performed on FeCl\({}_{3}\) previously, and the ground-state was determined as AFM with a spiral modulation parallel to the [145] direction with a wavelength of 15 (145) \(d\) spacing [12]. We performed single-crystal neutron diffraction on FeBr\({}_{3}\) at 4.8 K and zero field to probe its magnetic ground-state. Since all FeCl\({}_{3-x}\)Br\({}_{x}\) samples with \(x\neq 0\) have similar \(\chi(T)\) and \(M(H)\) behaviors as FeBr\({}_{3}\) (Fig. 2), we assume that their magnetic ground-states are similar to that of FeBr\({}_{3}\). The observed Bragg peaks in Figs. 3a,b at \({\bf Q}=(2,\bar{1},L)\) consist of structural (nuclear) peaks at \(L=3\) and 6 as well as magnetic peaks at \(L=\frac{3}{2}\), \(3+\frac{3}{2}\), and \(6+\frac{3}{2}\), hence the magnetic propagation vector \({\bf k}=(0,0,1.5)\). The systematic absence of the nuclear peaks other than \(L=3n\) (\(n\in\) integer) in Figs. 3a,b is expected in the space group \(R\bar{3}\) (#148) of FeBr\({}_{3}\). A second scan along \({\bf Q}=(1,\bar{1},L)\) in Figs. 3c,d confirms the propagation vector \({\bf k}=(0,0,1.5)\). The absence of magnetic reflections at \({\bf Q}=(0,0,4.5)\) and \((0,0,7.5)\) in Fig. 3e suggests that the ordered magnetic moments lie along the \(c\)-axis because neutrons only probe the moment perpendicular to scattering vector (\({\bf M}\perp{\bf Q}\)). Thus, we identify FeBr\({}_{3}\) as an out-of-plane Ising system with \(M_{z}\neq 0\) and \(M_{x,y}=0\). We construct an order parameter plot in Fig. 
3f by tracing the intensity of the \({\bf Q}=(2,\bar{1},4.5)\) peak as a function of temperature. From a power-law fit, we extract \(T_{\rm N}=14.0(5)\) K consistent with 14.5(5) K from the magnetization measurements, and \(\beta=0.324(15)\) consistent with a 3D Ising system. A magnetic symmetry analysis based on the ordering wave-vector \({\bf k}=(0,0,1.5)\) for \(S=5/2\) Fe\({}^{3+}\) ions in the structural space group \(R\bar{3}\) of FeBr\({}_{3}\) identifies six possible ground-states. These states are labeled in Figs. 3g-l by their magnetic subgroups and irreducible represen Figure 2: (a) Magnetic susceptibility as a function of temperature measured under ZFC (open circles) and FC (full circles) conditions with \(H\|c\). (b) Same as in (a) but with \(H\perp c\). (c) Magnetization curves with \(H\|c\) showing metamagnetic (MM) transitions in FeCl\({}_{3-x}\)Br\({}_{x}\) samples with \(x\geq 0.08\). (d) The MM transitions are absent when \(H\perp c\). (e) \(T_{\rm N}\) as a function of Br content (\(x\)) showing an initial jump followed by a linear decrease. (f) \(\Theta_{\rm W}\) as a function of \(x\) showing an initial sign change followed by a linear decrease. (g) \(\mu_{\rm eff}\) estimated from the Curie-Weiss analysis (Fig. S1). Error bars in panels (e,f,g) are mainly due to the uncertainty in evaluating the mass of thin VdW crystals. (h) The critical field of MM transitions as a function of \(x\) showing an initial jump followed by a smooth increase. Error bars reflect the width of the transition (inset). tations (irreps) as \(R_{l}\bar{3}\) (irrep: \(mT_{1-}\)), \(R_{l}\bar{3}\) (\(mT_{1+}\)), \(R_{l}3\) (\(mT_{1-}\)), \(P_{s}\bar{1}\) (\(mT_{1-}\)), \(P_{s}\bar{1}\) (\(mT_{1+}\)), and \(P_{s}1\) (\(mT_{1-}\)). The first two candidates in Figs. 3g and 3h represent maximal symmetry subgroups corresponding to out-of-plane Ising spins (\(M_{z}\neq 0\) and \(M_{x,y}=0\)) with A-type and C-type AFM ordering, respectively. The lower symmetry \(R_{l}3\) similarly allows only \(M_{z}\) components but with two different moment sizes (Fig. 3i). The later three subgroups (Figs. 3j,k,l) are primitive (\(P\)) instead of rhombohedral (\(R\)). Having a lower symmetry, they allow for each of the previous cases in Figs. 3g,h,i to have moments canted toward the \(ab\)-plane, i.e. \(M_{x,y}\neq 0\). We identify the ground-state of FeBr\({}_{3}\) as A-type AFM order (Fig. 3g) for the following reasons. (i) The lack of magnetic reflections along the \(00L\) direction in Fig. 3e indicates that the allowed magnetic moments lie along the \(c\)-axis, which disqualifies the configurations in Figs. 3j,k,l with allowed moments in the \(ab\)-plane. (ii) Our neutron scattering refinement of the magnetic moment size (SI) gives 4.90(99) \(\mu_{B}\) on each Fe\({}^{3+}\) site, consistent with the results of magnetization measurements and slightly larger than 4.3 \(\mu_{B}\) reported for FeCl\({}_{3}\) in prior studies [12; 13]. Thus, the configuration in Fig. 3i with different moment sizes on different Fe-sites is also disqualified. (iii) The remaining candidates in Figs. 3g and 3h belong to the maximal subgroup \(R_{l}\bar{3}\) corresponding to the A-type and C-type AFM ordering, respectively. These two structures can be distinguished according to the intensity of magnetic diffraction peaks. We simulated the structure factor (\(F_{sim}^{2}\)) for several nuclear and magnetic Bragg peaks in both A-type and C-type structures assuming \(\mu=4.9\)\(\mu_{B}\), and summarized them in the Supplementary Table I (SI). 
According to Table I, \(F_{sim}^{2}\) values for two magnetic Bragg peaks are vanishingly small in the C-type AFM structure and considerably large in the A-type AFM structure. Thus, the diffraction pattern is best described by A-type AFM ordering (Fig. 3g). An A-type AFM ground-state is also consistent with the field-scale of the MM transition in Fig. 2c which is about 3 T in FeBr\({}_{3}\) corresponding to 2 K, considerably smaller than \(T_{\rm N}=14\) K. Thus, the MM transitions are likely due to a spin flip between the layers instead of within the layers, so the ground-state must be A-type instead of C-type AFM. ### First-Principles Calculations. Our experimental results suggest that FeCl\({}_{3}\) must be very close to a QPT since the magnetic ground-state changes from the spiral order in FeCl\({}_{3}\) to an A-type AFM order upon 3% bromine doping (\(x=0.08\) in FeCl\({}_{3-x}\)Br\({}_{x}\)). To understand the nature of the transition, we performed all-electron first-principles calculations based on DFT in FeCl\({}_{3}\) and FeBr\({}_{3}\) to extract \(J_{1}\) and \(J_{2}\) exchange couplings in the model Hamiltonian Eq. 1 for a 2D honeycomb lat Figure 3: (a) The 2D neutron diffraction scan at 4.8 K and zero-field in the \(H\)-\(L\) plane at \(K=\bar{1}\), showing strong nuclear reflections at integer \(L\) and weak magnetic reflections at half-integer \(L\). (b) A 1D cut through the data in panel (a) showing nuclear (strong) and magnetic (weak) Bragg peaks that identify the magnetic propagation vector \(\mathbf{k}=(0,0,1.5)\). (c) 2D scan in the (\(H\bar{1}L\)) plane showing another set of nuclear and magnetic reflections. (d) 1D cut through the data in panel (c) showing Bragg peaks along (\(1\bar{1}L\)) direction. (e) 2D scan in the (\(H0L\)) plane showing the absence of magnetic reflections along (\(00L\)) direction. (f) Temperature dependence of the intensity of the magnetic reflection \(\mathbf{Q}=(2,\bar{1},4.5)\) with a power-law fit to extract \(T_{\rm N}\) and the critical exponent \(\beta\). (g,h,i) Candidate magnetic ground-states of FeBr\({}_{3}\) with \(R_{l}\) symmetry. (j,k,l) Candidate ground-states with \(P_{s}\) symmetry. The ground-state is determined as A-type AFM shown in panel (g). tice of \(S=5/2\) spins (Fig. 1a). We extracted \(J_{1}\) and \(J_{2}\) as a function of the in-plane lattice parameters interpolating between the experimental values of FeCl\({}_{3}\) and FeBr\({}_{3}\), \(a=6.05\) and \(6.40\) A, respectively. As a benchmark to our methodology, we have obtained \(J_{1}\) and \(J_{2}\) values in good agreement with those reported for FeCl\({}_{3}\) via inelastic neutron scattering [11] giving rise to a spiral spin liquid state with \(|J_{1}/J_{2}|\approx 4\). To reproduce the high-spin configuration and \(J_{1}/J_{2}\) found experimentally in FeCl\({}_{3}\) and FeBr\({}_{3}\), onsite Coulomb interactions must be included in DFT. We found that \(U_{dd}=7.0\), \(U_{pp}=0.5\), and \(U_{pp}=3.5\) eV were required in the Fe, Cl, and Br atoms, respectively, to obtain the SSL state in FeCl\({}_{3}\) and in-plane FM state in FeBr\({}_{3}\) (Supplementary Figs. S2,3). We verified that this trend was robust against changes in the local interactions, including spin-orbit coupling effects (SI). The larger value of \(p\)-orbital potential in FeBr\({}_{3}\) was necessary for establishing the experimentally observed in-plane FM state. 
Smaller values of \(U_{pp}\) would lead to a much larger \(J_{2}\) and establish an AFM (Neel) state within the honeycomb planes of FeBr\({}_{3}\) due to the more covalent nature of Fe-Br bonds compared to Fe-Cl bonds. Note that the in-plane correlations are FM within an A-type AFM order (Fig. 3g) consistent with the positive \(\Theta_{\rm W}\) observed in FeBr\({}_{3}\) (Fig. 2f). Our calculations are performed on a 2D lattice without considering the inter-layer coupling \(J_{c}\) that eventually establishes the 3D A-type AFM order in FeBr\({}_{3}\) (SI). Figures 4a-d show the evolution of \(J_{1}\), \(J_{2}\) and \(|J_{1}/J_{2}|\) as a function of the lattice parameter for FeCl\({}_{3}\) and FeBr\({}_{3}\). We observe that for stoichiometric FeCl\({}_{3}\), an increase in the lattice parameter dramatically impacts \(|J_{1}|\) unlike \(|J_{2}|\) (Fig. 4a), while the system remains in the spiral state (Fig. 4b). In contrast, the stoichiometric FeBr\({}_{3}\) remains in the FM state for all lattice parameter values due to a much larger \(|J_{1}|\) (Figs. 4c,d). The dashed lines in Figs. 4b,d mark the critical value \(|J_{1}/J_{2}|=2Z=6\) for the theoretical transition from the SSL to FM/Neel state [6; 7; 8]. To account for the alloys FeCl\({}_{3-x}\)Br\({}_{x}\), we use a virtual crystal approximation (SI) to estimate \(J_{1}\) and \(J_{2}\) as a function of \(x\) based on the alloy-dependent lattice constant (Figs. 4e,f). Within this approximation, doping FeCl\({}_{3}\) with Br drives the 2D system from the spiral to FM phase, as observed experimentally. The theoretical phase diagram in Fig. 4f shows a QPT between the spiral and FM states at \(x=0.8\) in 2D layers of FeCl\({}_{3-x}\)Br\({}_{x}\). This is consistent with the experimental data in Figs. 2e,f,h that show a jump in \(T_{\rm N}\), sign change in \(\Theta_{\rm W}\), and MM transition at \(x>0.08\). However, the theoretically predicted critical doping \(x_{c}=0.8\) differs from the experimentally observed \(x_{c}=0.08\). Such a difference likely stems from limitations of the DFT methodology that neglects the impact of disorder on exchange interactions. Spin liquid phases (e.g. SSL) are particularly sensitive to disorder [7; 8; 19; 20] and thus, it is conceivable that the alloying procedure would change the critical doping via disorder effects that are not captured in the pristine-limit DFT calculations. ## III Conclusion To summarize, we have demonstrated a QPT by tuning the halide composition in the frustrated VdW system FeCl\({}_{3-x}\)Br\({}_{x}\). Our results demonstrate the application of halide engineering in tuning the \(J_{1}/J_{2}\) ratio of the underlying frustrated honeycomb spin model, in particular crossing the critical point of the spin spiral liquid at \(x_{c}=0.08\). The transition driven by the halide composition from a spiral to FM state was demonstrated with magnetometry measurements, and the ground-states were determined by neutron scattering results presented here for FeBr\({}_{3}\) and elsewhere [12] for FeCl\({}_{3}\). Our theoretical calculations further show that the transition from the spiral to FM state is driven by competing magnetic exchanges with a sizable contribution from the \(p\)-orbital correlations of halides. Our results establish a new strategy for Figure 4: (a) The NN (\(J_{1}\)) and NNN (\(J_{2}\)) coupling constants computed from first-principles and plotted as a function of in-plane lattice parameter \(a\) in stoichiometric FeCl\({}_{3}\). 
(b) The ratio \(|J_{1}/J_{2}|\) plotted as a function of \(a\). (c,d) Same as in (a,b) but for the stoichiometric FeBr\({}_{3}\). (e) \(J_{1}\) and \(J_{2}\) traced as a function of bromine content \(x\) in FeCl\({}_{3-x}\)Br\({}_{x}\) using virtual lattice approximation. (f) \(|J_{1}/J_{2}|\) as a function of \(x\). engineering frustrated VdW magnetic materials by exploiting a continuous parameter realized by mixed halide chemistry. ###### Acknowledgements. The work at Boston College was supported by the National Science Foundation under the grant number DMR-2203512. J.L.L. and A.F. acknowledge the computational resources provided by the Aalto Science-IT project, and the financial support from the Academy of Finland Projects No. 331342, No. 336243 and No 349696, and the Jane andatos Erkko Foundation. A portion of this research used resources at the High Flux Isotope Reactor, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
2305.11874
User Experience Considered Harmful (for the Planet)
Great user experience is killing us (more or less)! My argument in this provocation is that the excessive focus on user experience (UX) by the tech industry and academic community has a negative impact on the sustainability of ICT devices. I will argue based on two examples, that we need new metrics or extend current UX metrics to also include third order effects and sustainability perspectives. Lastly, I would like us - the (Sustainable) HCI community - to increase our focus on solving the problems that result from our very own creations.
Markus Löchtefeld
2023-05-03T17:37:51Z
http://arxiv.org/abs/2305.11874v1
# User Experience Considered Harmful (for the Planet) ###### Abstract. Great user experience is killing us (more or less)! My argument in this provocation is that the excessive focus on user experience (UX) by the tech industry and academic community has a negative impact on the sustainability of ICT devices. I will argue based on two examples, that we need new metrics or extend current UX metrics to also include third order effects and sustainability perspectives. Lastly, I would like us - the (Sustainable) HCI community - to increase our focus on solving the problems that result from our very own creations. Sustainability, Sustainable HCI, User Experience, Repair, Sustainable User Interfaces
## 1. Introduction Even if the use phase (energy consumption during use and maintenance) would be covered by renewable energy sources, the ICT sector could still be responsible for around 1% of world-wide GHG emissions. The Sustainable HCI (SHCI) community has been actively engaged for nearly 15 years (Bremer et al., 2017; Bremer et al., 2018) with mitigating issues that impact ecological and social sustainability. In this provocation, however, I will only focus on ecological sustainability, which does not mean that I would argue for prioritizing one over the other. We have seen a lot of work focusing on ecological sustainability to reduce GHG emissions. Hansson et al. revealed that the Sustainable Development Goal (SDG) 12 "Responsible Production and Consumption" has been a major focus of attention (Bremer et al., 2017), for example using persuasive technologies and eco-feedback (Hansson et al., 2017) to support users in saving energy (Hansson et al., 2017). And while this and many other examples demonstrate the potential positive impact of the SHCI community, there has also been critique (Bremer et al., 2017; Bremer et al., 2017; Bremer et al., 2018). One particular point of criticism is that SHCI should collaborate more widely across fields and disciplines (Hansson et al., 2017; Bremer et al., 2018) and include more community organisations (Bremer et al., 2018) or macrostructures (e.g., municipalities) (Bremer et al., 2018). For example, Silberman et al. argued that "the processes that give rise to the issues indexed by the term sustainability are larger in time, space, organizational scale, ontological diversity, and complexity than the scales and scopes addressed by traditional HCI design, evaluation, and fieldwork" (Bremer et al., 2018), which implies that the SHCI community needs to broaden its perspective. Bremer et al. analysed how the SHCI community has responded to such critiques and found that "SHCI has shifted from second-wave approaches towards those anchored in the third wave, and has found creative ways to capture the complexity of (un)sustainability and broaden the SHCI agenda" (Bremer et al., 2018). However, this move to engage with sustainability in a broader manner also meant that the SHCI community quickly left "core" HCI territory and neglected efforts that lie at the heart of HCI. There are several first- and second-wave HCI (Bremer et al., 2018; Bremer et al., 2018) problems connected to the UX of ICT products that have significant effects, for example on the longevity and energy consumption of these products, and that have so far been neglected.
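To make the stakes of longevity concrete, consider a back-of-the-envelope sketch: when most of a device's footprint is embodied in production, extending its service life directly lowers the annualized footprint. Every figure below is an illustrative placeholder, not a measured value.

```python
# Illustrative amortization of a device's carbon footprint over its lifetime.
# All figures are made-up placeholders for the sake of the argument.
EMBODIED_KG_CO2E = 60.0      # production + transport (dominant for smartphones)
USE_PHASE_KG_CO2E_YR = 5.0   # charging + network share per year of use

def annual_footprint(lifetime_years: float) -> float:
    """Average kg CO2e per year when the embodied share is amortized."""
    return EMBODIED_KG_CO2E / lifetime_years + USE_PHASE_KG_CO2E_YR

for years in (2, 3, 5):
    print(f"{years} years of use -> {annual_footprint(years):.1f} kg CO2e per year")
# 2 years: 35.0, 3 years: 25.0, 5 years: 17.0 -- repair and longevity pay off.
```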
While I acknowledge the SHCI community's effort and the success we have seen in third- (and fourth-) wave HCI contexts, in this provocation I will argue that there is still a lot of work to be done in our own backyard that can have significant positive effects on the sustainability of ICT. I argue that we (the HCI community) need to focus more on our own creations and their direct impact. Specifically, our obsession with UX as a measure could indirectly be one of the main contributors to the increase of ICT's GHG emissions. However, the goal of this provocation is not to criticize but rather, through two examples, to highlight future research directions that are at the core of HCI and that hold potential for more environmentally sustainable ICT products. ## 2. User Experience - the Root of All Evil? The term User Experience has been discussed and defined in many ways. The ISO standard 9241-210:2010 "Ergonomics of Human-System Interaction--Part 210: Human-Centred Design for Interactive Systems", released in 2010, defines it as "A person's perceptions and responses resulting from the use and/or anticipated use of a product, system or service." (in chapter 2, terms and definitions) (Hannson et al., 2017). Very similar, even though perhaps a bit more vague, is the definition by the Nielsen Norman Group: "User experience encompasses all aspects of the end-user's interaction with the company, its services, and its products"1. These definitions imply that UX designers need to adopt a holistic perspective on the design of all elements involved in the interaction, to create products that will be successful. Footnote 1: [https://www.nngroup.com/articles/definition-user-experience/](https://www.nngroup.com/articles/definition-user-experience/) In the SHCI community, Mankoff et al. proposed two ways to classify work, namely _sustainability in design_ - referring to approaches that focus on mitigating effects stemming from the material design of a product - and _sustainability through design_ - referring to approaches that aim at creating more sustainable lifestyles empowered through technology (Bremer et al., 2018). These classes are also sometimes referred to as _sustainability by form_ and _sustainability by function_ (Steintein and Krenner, 2007). However, with respect to ICT products, such as smartphones, tablets, laptops or other smart products, this differentiation can be problematic. As discussed above, UX adopts a holistic perspective, and the UX design and evaluation of these products is usually a result of the combination of hardware and software. I argue that this artificial division of the SHCI community - while helpful for classifying novel third-wave HCI approaches - can have detrimental effects on the sustainability of ICT devices. As we use UX as a measuring stick for most of our endeavours, we should use a similarly holistic perspective when assessing the sustainability of the developed artefacts. While I agree that great UX is important for any product to succeed, problems can arise from over-prioritizing it. Currently, UX is mostly measured by the first- and second-order effects created by the use of the product (Steintein and Krenner, 2007). First-order effects are the impact and opportunities created by the existence of the product, and second-order effects refer to its ongoing use. However, optimizing for short-term gains can often run counter to long-term success (Steintein and Krenner, 2007).
As HCI researchers we often neglect third-order effects, meaning the aggregated impact created by a large number of users using the product over a long time (Steintein and Krenner, 2007). The reason is that these are harder to predict, and at this point we have few or no tools for assessing them. They can, however, have massive detrimental consequences for the environmental impact of ICT products. Here we can see a first working point: UX evaluation methods that also include a sustainability lens. I am not the first to argue for this; Thomas et al. already questioned the anthropocentric nature of the ISO standard 9241-210:2010 (Steintein and Krenner, 2007). Remy et al. investigated the current limitations of evaluation possibilities (Steintein and Krenner, 2007) and identified five key elements that can provide guidance for identifying evaluation methods (Steintein and Krenner, 2007). Dourish framed UX in general as a legitimacy trap preventing HCI from its original goal of nurturing and sustaining human dignity and flourishing (Dillahunt et al., 2010). While these suggestions are mainly meant to evaluate SHCI research, they would also help in all HCI and UX research work. And while integrating Life Cycle Assessments (LCA) as suggested in (Steintein and Krenner, 2007; Stein and Krenner, 2007) would help, for example, to quantify embodied GHG emissions and first- and second-order effects quite well, LCA is less suited for third-order effects. Even Life Cycle Cost (LCC) or whole-life cost analysis is quite hard to predict in cases of exponential user growth. At this point very few practical guidelines exist, and the closest we have to applicable guidelines are from Dillahunt et al. (2010), which do not directly consider third-order effects either. To this extent, we need to work on more holistic evaluation metrics that allow assessing UX and environmental sustainability with respect to third-order effects. One particular example in user interface (UI) design is when excess data is downloaded in the background but never seen, interacted with, or needed on the user side. Considering exponential user growth, this can, as a third-order effect, lead to quite drastic excess data and thereby excess GHG emissions. ### Hidden Data Traffic in UIs Hidden data traffic - a system downloading data without the user explicitly telling it to do so - occurs in many scenarios. Most of these can be considered a direct result of increasing the UX of a specific UI. Probably the most common one is the endless scrolling feature on social media sites like Facebook, Instagram or TikTok, where, before the user scrolls to the bottom, more posts are pre-loaded so that no visible loading time occurs. This endless scrolling dark pattern (Steintein and Krenner, 2007; Stein and Krenner, 2007) is often considered to lead to mindless scrolling that steals the user's time; however, I do not want to engage with the ethical dimension of this but rather focus on the environmental impact. One might argue that in most cases there is only a little data overhead of a few MB, which is most likely the case; however, given that there are between 2 and 3 billion monthly active social media users, this can result in a large amount of wasted data.
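The next paragraph works through this arithmetic for Instagram; as a quick sanity check, the back-of-envelope calculation can be sketched as follows (all figures are illustrative assumptions taken from that example):

```python
# Back-of-envelope estimate of how per-user hidden data traffic aggregates.
# All inputs are illustrative assumptions from the Instagram example below.
daily_active_users = 1.386e9      # assumed daily active Instagram users
excess_bytes_per_user = 173e3     # one needlessly pre-loaded image of ~173 kB

excess_bytes = daily_active_users * excess_bytes_per_user
print(f"{excess_bytes / 1e15:.3f} PB of excess data per day")  # ~0.240 PB
```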
An example: if we assume that for every daily active Instagram user (1.386 billion) one image is loaded in excess (in the author's short test using the web client of Instagram, on average 3 images were loaded that were never visible, and the image sizes varied between 31 kB and 475 kB with an average of 173 kB over 50 images), this would result in 0.239 petabytes of excess data per day. At this point I abstain from calculating energy consumption or GHG emissions, as the numbers vary widely in the literature [24]. However, I hope that this estimate of 0.239 petabytes just for Instagram demonstrates that the problem should not be ignored. Here we can see a third-order effect, turning a tiny negative element into a massive problem. Widdicks et al. highlighted a similar issue in their analysis of streaming services [58]: where HCI promotes excellence in UX, SHCI would demand more conscious utilisation of resources. In their work they discuss, for example, reducing video quality to lower data usage when streaming, or including the option of not streaming the video at all if only the audio is wanted. A way to reduce such hidden data traffic could be to introduce design friction - elements that are less effortless and more taxing for the user to carry out [5, 15]. Before the proliferation of 3G, most mobile social media apps did not automatically download more content in the background, and the user had to press a button to get more content. Such simple frictions could potentially save a lot of data traffic [25] and thereby a lot of energy. However, there are multiple problems at this point. First, we do not have a good understanding of the theoretical workings of design frictions, although the topic has recently gained more attention [27]. Second, such frictions would counter the business models of big social media companies. While the first issue can be overcome through more foundational research, the second probably cannot. When simply using the websites of the social media companies, browser plugins might allow users to add this friction and generate more awareness. Furthermore, there are many more cases (not only endless scrolling) where data is pre-loaded for potential interactions that never happen, all in the name of UX. This is a good example of a second-/third-wave problem that would benefit from more thorough investigation by the (S)HCI community. Many cases of hidden data usage for the sake of UX exist that could potentially be solved by friction. While there are most likely cases with bigger potential gains, this example highlights a missing assessment of UX and potentially unsustainable third-order effects. ### Repair & Longevity The second example I want to discuss here is the effect that UX has on the embodied emissions of the ICT sector. Our current linear way of producing, consuming, and discarding ICT products is not sustainable. To overcome this, several different pathways have been suggested in the past, for example Slow Tech. This promotes a sustainable, ethical way of technology production and consumption [47, 48], achievable by slowing down the production rate, consumption, and disposal pace [36]. One particularly important element to reduce the disposal pace is repair. This has also been discussed as one of the key sub-strategies for sustainable production systems in the circular economy (CE) [37]. SHCI has been focusing on repair as one of the key strategies to sustainability from the very beginning [6, 31], and multiple approaches have been explored [32, 46, 52].
Generally, users replace their devices too early compared to their optimal lifetime [38], and Brusselaers et al. also found that most users do not repair their broken products, even when it is possible and economically beneficial [10]. Prior research (particularly stemming from industrial and product design) shows that a user's decision whether to repair a product or not depends on the one hand on whether the product is designed for repair, and on the other hand on the user's knowledge and skills [1, 13, 33]. Especially the high level of integration in current ICT devices is quite problematic, as it obfuscates the inner layers and makes it hard to identify faults in the product [2]. While the tight integration, which e.g. decreases size and weight, and the usage of more energy-demanding materials such as aluminium and glass are beneficial for the hedonic qualities, aesthetics and feel of ICT products, they are detrimental to their repairability. For example, a phone with a glass back can "feel" sleeker and more desirable; however, a glass back usually requires some kind of glue instead of screws to be held in place and is also much more fragile. However, solving these aspects should be left to industrial and product design experts. There are, however, elements that the SHCI community can focus on, and here I will postulate the hypothesis that if users have a structural mental model (Srivastava et al., 2017) of an ICT product, they will be able to make better repair decisions and prolong the product's lifetime. Arcos et al. already highlight the importance of the design in making repair decisions (Arcos et al., 2018). However, if we look at ICT technology such as smartphones and laptops (here as the prime examples), we can see that these devices do not require the user to understand how they work internally in order to be used. Twenty years ago this was very different: users had to, e.g., install operating systems and wrangle with hardware drivers, thereby gaining a deeper understanding of the inner workings. This is similar to current users who build their own (gaming) PCs. Their level of knowledge of how an ICT device works is usually much higher, so they are much more likely to try to repair issues and replace components (Arcos et al., 2018). Following this argument, we see that good UX - here reducing the friction and making the technology easily accessible for non-experts - is advancing the technology's unsustainability. Obviously, I will not argue that we should make the entry into using a computer harder again, but it would be worthwhile to understand how UI design and timed nudges during usage could create a structural mental model that would allow users to make better repair choices. Similar to the previous example, to support repair activities, devices might also require some more friction in the interface to make users reflect on the inner workings of the devices they are using (Kraemer et al., 2017). Devices like the Fairphone3 or the Framework Laptop4 are designed to be repairable and upgradeable; it is part of the brand and communicated in a way that it will be part of the mental model the customer gets when they purchase the product. Framework even has a DIY edition, where the user has to assemble some of the parts before being able to use it5. This means that the user also immediately gets an understanding of what is possible to repair or exchange. These are the kinds of interactions that I envision could be worthwhile on a hardware level.
On an Operating System level, Linux distributions (while having worked very hard to increase the UX and reduce friction) could be modified to allow users to get a better understanding of, and enable deeper interaction with, the inner workings of the computer for different experimental settings. Footnote 3: [https://www.fairphone.com/](https://www.fairphone.com/) Footnote 4: [https://frame.work/](https://frame.work/) Footnote 5: [https://frame.work/products/laptop-diy-12-gen-intel](https://frame.work/products/laptop-diy-12-gen-intel) The case of repair is another first-/second-wave HCI problem where the SHCI community can have a significant impact. While it is a much more complex problem with a variety of different layers that will require huge efforts to properly investigate and solve, it is a very promising area with potentially significant impact. ## 3. Conclusion My aim with this provocation is to highlight that before taking on more, we as the (S)HCI community should focus on the effects of our own doing first. My answer to Bremer et al. (Bremer et al., 2017) would be: yes, the community is taking on too much. It might be what Knowles et al. call the digital exceptionalism treatment of ICT (Knowles et al., 2017) that drives the community to solve a lot of other problems before we focus on ICT itself. An important first move should be to establish, inside the HCI community, that UX as the ultimate metric will harm the sustainability of ICT devices, and that we need to take on more extensive and further-reaching metrics. Designers must recognize that humans do not typically occupy a central role, but rather are part of a complex network of human and non-human actors that are socially, economically, and ecologically interdependent (Kraemer et al., 2017). This also needs to be reflected in our tools for designing and analysing interfaces. What do we have to do to decrease the GHG emissions of ICT devices? Multiple pathways are possible if we ignore energy-efficiency increases through new hardware. On the one hand, we can decrease the energy consumption of the use phase through software (e.g., reducing data traffic (Kraemer et al., 2017) or creating more energy-efficient software by using low-level programming languages (Kraemer et al., 2017)), and on the other hand, we can reduce the impact of embodied emissions by extending the longevity of the hardware (so that less new hardware is needed per hour of use (Arcos et al., 2018)). I do not want this provocation to be understood as a critique of the (S)HCI or UX community. Engagement with outside communities and experts is essential for enabling the sustainable lifestyles that we desperately need in the future. However, I would like to remind (S)HCI researchers not to forget the problems that result from their own creations.
2301.12652
REPLUG: Retrieval-Augmented Black-Box Language Models
We introduce REPLUG, a retrieval-augmented language modeling framework that treats the language model (LM) as a black box and augments it with a tuneable retrieval model. Unlike prior retrieval-augmented LMs that train language models with special cross attention mechanisms to encode the retrieved text, REPLUG simply prepends retrieved documents to the input for the frozen black-box LM. This simple design can be easily applied to any existing retrieval and language models. Furthermore, we show that the LM can be used to supervise the retrieval model, which can then find documents that help the LM make better predictions. Our experiments demonstrate that REPLUG with the tuned retriever significantly improves the performance of GPT-3 (175B) on language modeling by 6.3%, as well as the performance of Codex on five-shot MMLU by 5.1%.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
2023-01-30T04:18:09Z
http://arxiv.org/abs/2301.12652v4
# RePlug: Retrieval-Augmented Black-Box Language Models ###### Abstract We introduce RePlug, a retrieval-augmented language modeling framework that treats the language model (LM) as a black box and augments it with a tuneable retrieval model. Unlike prior retrieval-augmented LMs that train language models with special cross attention mechanisms to encode the retrieved text, RePlug simply prepends retrieved documents to the input for the frozen black-box LM. This simple design can be easily applied to any existing retrieval and language models. Furthermore, we show that the LM can be used to supervise the retrieval model, which can then find documents that help the LM make better predictions. Our experiments demonstrate that RePlug with the tuned retriever significantly improves the performance of GPT-3 (175B) on language modeling by 6.3%, as well as the performance of Codex on five-shot MMLU by 5.1%. ## 1 Introduction Large language models (LLMs) such as GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) have demonstrated impressive performance on a wide range of language tasks. These models are typically trained on very large datasets and store a substantial amount of world or domain knowledge implicitly in their parameters. However, they are also prone to hallucination and cannot represent the full long tail of knowledge from the training corpus. Retrieval-augmented language models (Khandelwal et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022; Yasunaga et al., 2022), in contrast, can retrieve knowledge from an external datastore when needed, potentially reducing hallucination and increasing coverage. Previous approaches to retrieval-augmented language models require access to the internal LM representations (e.g., to train the model (Borgeaud et al., 2022; Izacard et al., 2022) or to index the datastore (Khandelwal et al., 2020)), and are thus difficult to apply to very large LMs. In addition, many best-in-class LLMs can only be accessed through APIs. Internal representations of such models are not exposed and fine-tuning is not supported. In this work, we introduce RePlug (**R**etrieve and **Plug**), a new retrieval-augmented LM framework where the language model is viewed as a black box and the retrieval component is added as a tuneable plug-and-play module. Given an input context, RePlug first retrieves relevant documents from an external corpus using an _off-the-shelf_ retrieval model. The retrieved documents are prepended to the input context and fed into the black-box LM to make the final prediction. Because the LM context length limits the number of documents that can be prepended, we also introduce a new ensemble scheme that encodes the retrieved documents in parallel with the same black-box LM, allowing us to easily trade compute for accuracy. As shown in Figure 1, RePlug is extremely flexible and can be used with any existing black-box LM and retrieval model. Figure 1: Different from previous retrieval-augmented approaches (Borgeaud et al., 2022) that enhance a language model with retrieval by updating the LM's parameters, RePlug treats the language model as a black box and augments it with a frozen or tunable retriever. This black-box assumption makes RePlug applicable to large LMs (i.e., >100B parameters), which are often served via APIs.
We also introduce RePlug LSR (RePlug with LM-Supervised Retrieval), a training scheme that can further improve the initial retrieval model in RePlug with supervision signals from a black-box language model. The key idea is to adapt the retriever to the LM, which is in contrast to prior work (Borgeaud et al., 2022) that adapts language models to the retriever. We use a training objective which prefers retrieving documents that improve language model perplexity, while treating the LM as a frozen, black-box scoring function. Our experiments show that RePlug can improve the performance of diverse black-box LMs on both language modeling and downstream tasks, including MMLU (Hendrycks et al., 2021) and open-domain QA (Kwiatkowski et al., 2019; Joshi et al., 2017). For instance, RePlug can improve Codex (175B) performance on MMLU by 4.5%, achieving comparable results to the 540B, instruction-finetuned Flan-PaLM. Furthermore, tuning the retriever with our training scheme (i.e., RePlug LSR) leads to additional improvements, including up to a 6.3% increase in GPT-3 175B language modeling. To the best of our knowledge, our work is the first to show the benefits of retrieval to large LMs (>100B model parameters), for both reducing LM perplexity and improving in-context learning performance. We summarize our contributions as follows: * We introduce RePlug (§3), the first retrieval-augmented language modeling framework for enhancing large black-box language models with retrieval. * We propose a training scheme (§4) to further adapt an off-the-shelf retrieval model to the LM, using the language modeling scores as supervision signals, resulting in improved retrieval quality. * Evaluations on language modeling (§6), open-domain QA and MMLU demonstrate that RePlug can improve the performance of various language models such as GPT, OPT and BLOOM, including very large models with up to 175B parameters. ## 2 Background and Related Work **Black-box Language Models.** Large language models (i.e., >100B), such as GPT-3 (Brown et al., 2020), Codex (Chen et al., 2021), and Yuan 1.0 (Wu et al., 2021), are not open-sourced due to commercial considerations and are only available as black-box APIs, through which users can send queries and receive responses. On the other hand, even open-sourced language models such as OPT-175B (Zhang et al., 2022) and BLOOM-176B (Scao et al., 2022) require significant computational resources to run and finetune locally. For example, finetuning BLOOM-176B requires 72 A100 GPUs (80GB memory, $15k each (Younes Belkada, 2022)), making them inaccessible to researchers and developers with limited resources. Traditionally, retrieval-augmented model frameworks (Khandelwal et al., 2020; Borgeaud et al., 2022; Yu, 2022; Izacard et al., 2022; Goyal et al., 2022) have focused on the white-box setting, where language models are fine-tuned to incorporate retrieved documents. However, the increasing scale and black-box nature of large language models makes this approach infeasible. To address the challenges posed by large language models, we investigate retrieval-augmentation in the **black-box setting**, where users only have access to the model predictions and cannot access or modify its parameters.
**Retrieval-augmented Models.** Augmenting language models with relevant information retrieved from various knowledge stores has shown to be effective in improving performance on various NLP tasks, including language modeling (Min et al., 2022; Borgeaud et al., 2022; Khandelwal et al., 2020) and open-domain question answering (Lewis et al., 2020; Izacard et al., 2022; Hu et al., 2022). Specifically, using the input as query, (1) a retriever first retrieves a set of documents (i.e., sequences of tokens) from a corpus and then (2) a language model incorporates the retrieved documents as additional information to make a final prediction. This style of retrieval can be added to both _encoder-decoder_ (Yu, 2022; Izacard et al., 2022) and _decoder-only_ models (Khandelwal et al., 2020; Borgeaud et al., 2022; Shi et al., 2022; Rubin et al., 2022). For example, Atlas (Izacard et al., 2022) finetunes an _encoder-decoder_ model jointly with the retriever by modeling documents as latent variables, while RETRO (Borgeaud et al., 2022) changes the _decoder-only_ architecture to incorporate retrieved texts and pretrains the language model from scratch. Both methods require updating the model parameters through gradient descent, which cannot be applied to black-box LMs. Another line of retrieval-augmented LMs such as kNN-LM (Khandelwal et al., 2020; Zhong et al., 2022) retrieves a set of tokens and interpolates between the LM's next token distribution and kNN distributions computed from the retrieved tokens at inference. Although kNN-LM does not require additional training, it requires access to internal LM representations to compute the kNN distribution, which are not always available for large LMs such as GPT-3. In this work, we investigate ways to improve large black-box language models with retrieval. While concurrent work (Mallen et al., 2022; Si et al., 2023; Yu et al., 2023; Khattab et al., 2022) has demonstrated that using a frozen retriever can improve GPT-3 performance on open-domain question answering, we approach the problem in a more general setting, including language modeling and understanding tasks. We also propose an ensemble method to incorporate more documents and a training scheme to further adapt the retriever to large LMs. ## 3 RePlug We introduce RePlug (**R**etrieve and **Plug**), a new retrieval-augmented LM paradigm where the language model is treated as a black box and the retrieval component is added as a potentially tuneable module. As shown in Figure 2, given an input context, RePlug first retrieves a small set of relevant documents from an external corpus using a retriever (§3.1). Then we pass the concatenation of each retrieved document with the input context through the LM in parallel, and ensemble the predicted probabilities (§3.2). ### Document Retrieval Given an input context \(x\), the retriever aims to retrieve a small set of documents from a corpus \(\mathcal{D}=\{d_{1}...d_{m}\}\) that are relevant to \(x\). Following prior work (Qu et al., 2021; Izacard and Grave, 2021; Ni et al., 2021), we use a dense retriever based on the dual encoder architecture, where an encoder is used to encode both the input context \(x\) and the document \(d\). Specifically, the encoder maps each document \(d\in D\) to an embedding \(\mathbf{E}(d)\) by taking the mean pooling of the last hidden representation over the tokens in \(d\). At query time, the same encoder is applied to the input context \(x\) to obtain a query embedding \(\mathbf{E}(x)\).
The similarity between the query embedding and the document embedding is computed by their cosine similarity: \[s(d,x)=\cos(\mathbf{E}(d),\mathbf{E}(x)) \tag{1}\] The top-\(k\) documents that have the highest similarity scores when compared with the input \(x\) are retrieved in this step. For efficient retrieval, we precompute the embedding of each document \(d\in D\) and construct a FAISS index (Johnson et al., 2019) over these embeddings. ### Input Reformulation The retrieved top-\(k\) documents provide rich information about the original input context \(x\) and can potentially help the LM to make a better prediction. One simple way to incorporate the retrieved documents as part of the input to the LM is to prepend \(x\) with all \(k\) documents. However, this simple scheme is fundamentally restricted by the number of documents (i.e., \(k\)) we can include, given the language model's context window size. To address this limitation, we adopt an ensemble strategy described as follows. Assume \(\mathcal{D}^{\prime}\subset\mathcal{D}\) consists of the \(k\) most relevant documents to \(x\), according to the scoring function in Eq. (1). We prepend each document \(d\in\mathcal{D}^{\prime}\) to \(x\), pass this concatenation to the LM separately, and then ensemble output probabilities from all \(k\) passes. Formally, given the input context \(x\) and its top-\(k\) relevant documents \(\mathcal{D}^{\prime}\), the output probability of the next token \(y\) is computed as a weighted average ensemble: \[p(y\mid x,\mathcal{D}^{\prime})=\sum_{d\in\mathcal{D}^{\prime}}p(y\mid d\circ x)\cdot\lambda(d,x),\] where \(\circ\) denotes the concatenation of two sequences and the weight \(\lambda(d,x)\) is based on the similarity score between the document \(d\) and the input context \(x\): \[\lambda(d,x)=\frac{e^{s(d,x)}}{\sum_{d\in\mathcal{D}^{\prime}}e^{s(d,x)}}\] Although our ensemble method requires running the LM \(k\) times, the cross attention is performed between each retrieved document and the input context. Therefore, compared with the method of prepending all the retrieved documents, our ensemble method does not incur additional computational cost overhead. Figure 2: **RePlug at inference** (§3). Given an input context, RePlug first retrieves a small set of relevant documents from an external corpus using a retriever (§3.1 _Document Retrieval_). Then it prepends each document separately to the input context and ensembles output probabilities from different passes (§3.2 _Input Reformulation_). ## 4 RePlug LSR: Training the Dense Retriever Instead of relying only on existing neural dense retrieval models (Karpukhin et al., 2020; Izacard et al., 2022; Su et al., 2022), we further propose RePlug LSR (RePlug with LM-Supervised Retrieval), which _adapts_ the retriever in RePlug by using the LM itself to provide supervision about which documents should be retrieved. Inspired by Sachan et al. (2022), our approach can be seen as adjusting the probabilities of the retrieved documents to match the probabilities of the output sequence perplexities of the language model. In other words, we would like the retriever to find documents that result in lower perplexity scores.
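Before turning to how the retriever is trained, the inference pipeline of §3.1-§3.2 can be summarized in a short sketch. This is a minimal illustration, not the authors' code: `lm_next_token_probs` stands in for a black-box LM call, and a brute-force argsort replaces the FAISS index the paper uses over precomputed embeddings.

```python
# Minimal sketch of RePlug inference (Sec. 3.1-3.2): score documents by cosine
# similarity (Eq. 1), keep the top-k, and ensemble the per-document next-token
# distributions with the softmax weights lambda(d, x).
import numpy as np

def replug_next_token_probs(query_emb, doc_embs, docs, context, lm_next_token_probs, k=10):
    # query_emb: (dim,); doc_embs: (n_docs, dim); lm_next_token_probs is an
    # assumed helper returning a (vocab,)-shaped distribution from the black-box LM.
    sims = doc_embs @ query_emb / (
        np.linalg.norm(doc_embs, axis=1) * np.linalg.norm(query_emb))  # s(d, x)
    top = np.argsort(-sims)[:k]
    weights = np.exp(sims[top] - sims[top].max())
    weights /= weights.sum()                                           # lambda(d, x)
    # One LM pass per retrieved document; each document is prepended to the context.
    return sum(w * lm_next_token_probs(docs[i] + "\n" + context)
               for i, w in zip(top, weights))
```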
As shown in Figure 3, our training algorithm consists of four steps: (1) retrieving documents and computing the retrieval likelihood (§4.1), (2) scoring the retrieved documents by the language model (§4.2), (3) updating the retrieval model parameters by minimizing the KL divergence between the retrieval likelihood and the LM's score distribution (§4.3), and (4) asynchronous update of the datastore index (§4.4). ### Computing Retrieval Likelihood We retrieve \(k\) documents \(\mathcal{D}^{\prime}\subset\mathcal{D}\) with the highest similarity scores from a corpus \(\mathcal{D}\) given an input context \(x\), as described in §3.1. We then compute the retrieval likelihood of each retrieved document \(d\): \[P_{R}(d\mid x)=\frac{e^{s(d,x)/\gamma}}{\sum_{d\in\mathcal{D}^{\prime}}e^{s(d,x)/\gamma}}\] where \(\gamma\) is a hyperparameter that controls the temperature of the softmax. Ideally, the retrieval likelihood is computed by marginalizing over all the documents in the corpus \(\mathcal{D}\), which is intractable in practice. Therefore, we approximate the retrieval likelihood by only marginalizing over the retrieved documents \(\mathcal{D}^{\prime}\). ### Computing LM Likelihood We use the LM as a scoring function to measure how much each document could improve the LM perplexity. Specifically, we first compute \(P_{LM}(y\mid d,x)\), the LM probability of the ground truth output \(y\) given the input context \(x\) and a document \(d\). The higher the probability, the better the document \(d\) is at improving the LM's perplexity. We then compute the LM likelihood of each document \(d\) as follows: \[Q(d\mid x,y)=\frac{e^{P_{LM}(y\mid d,x)/\beta}}{\sum_{d\in\mathcal{D}^{\prime}}e^{P_{LM}(y\mid d,x)/\beta}}\] where \(\beta\) is another hyperparameter. ### Loss Function Given the input context \(x\) and the corresponding ground truth continuation \(y\), we compute the retrieval likelihood and the language model likelihood. The dense retriever is trained by minimizing the KL divergence between these two distributions: \[\mathcal{L}=\frac{1}{|\mathcal{B}|}\sum_{x\in\mathcal{B}}KL\Big{(}P_{R}(d\mid x)\parallel Q_{\text{LM}}(d\mid x,y)\Big{)},\] where \(\mathcal{B}\) is a set of input contexts. When minimizing the loss, we can only update the retrieval model parameters. The LM parameters are fixed due to our black-box assumption. ### Asynchronous Update of the Datastore Index Because the parameters in the retriever are updated during the training process, the previously computed document embeddings are no longer up to date. Therefore, following Guu et al. (2020), we recompute the document embeddings and rebuild the efficient search index using the new embeddings every \(T\) training steps. Then we use the new document embeddings and index for retrieval, and repeat the training procedure. ## 5 Training Setup In this section, we describe the details of our training procedure. We first describe the model setting in RePlug (§5.1) and then describe the procedure for training the retriever in RePlug LSR (§5.2). ### RePlug In theory, any type of retriever, either dense (Karpukhin et al., 2020; Ni et al., 2021) or sparse (Robertson et al., 2009), could be used for RePlug. Following prior work (Izacard et al., 2022), we use the Contriever (Izacard et al., 2022) as the retrieval model for RePlug, as it has demonstrated strong performance. ### RePlug LSR For RePlug LSR, we initialize the retriever with the Contriever model (Izacard et al., 2022).
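The objective of §4.1-§4.3 is compact enough to sketch directly. The following is a minimal illustration under stated assumptions: the similarity scores come from the trainable retriever, the per-document LM scores come from the frozen black-box LM and carry no gradient, and the batched tensor shapes are assumed.

```python
# Minimal sketch of the RePlug LSR objective (Sec. 4.1-4.3).
import torch

def lsr_loss(sim_scores, lm_scores, gamma=0.1, beta=0.1):
    # sim_scores: (batch, k) retriever similarities s(d, x) -- differentiable.
    # lm_scores:  (batch, k) per-document LM scores of the continuation y,
    #             obtained from the frozen black-box LM (no gradient).
    p_r = torch.softmax(sim_scores / gamma, dim=-1)               # P_R(d | x), Sec. 4.1
    log_q = torch.log_softmax(lm_scores.detach() / beta, dim=-1)  # Q(d | x, y), Sec. 4.2
    # KL(P_R || Q), averaged over the batch; only the retriever receives gradients.
    kl = (p_r * (torch.log(p_r + 1e-12) - log_q)).sum(dim=-1)
    return kl.mean()
```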
We use GPT-3 Curie (Brown et al., 2020) as the supervision LM to compute the LM likelihood. **Training data.** We use 800K sequences of 256 tokens each, sampled from the Pile training data (Gao et al., 2020), as our training queries. Each query is split into two parts: the first 128 tokens are used as the input context \(x\), and the last 128 tokens are used as the ground truth continuation \(y\). For the external corpus \(D\), we sample 36M documents of 128 tokens from the Pile training data. To avoid trivial retrieval, we ensure that the external corpus documents do not overlap with the documents from which the training queries are sampled. **Training details.** To make the training process more efficient, we pre-compute the document embeddings of the external corpus \(D\) and create a FAISS index (Johnson et al., 2019) for fast similarity search. Given a query \(x\), we retrieve the top 20 documents from the FAISS index and compute the retrieval likelihood and the LM likelihood with a temperature of 0.1. We train the retriever using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-5, a batch size of 64, and a warmup ratio of 0.1. We re-compute the document embeddings every 3k steps and fine-tune the retriever for a total of 25k steps. ## 6 Experiments We perform evaluations on both language modeling (§6.1) and downstream tasks such as MMLU (§6.2) and open-domain QA (§6.3). In all settings, RePlug improves the performance of various black-box language models, showing the effectiveness and generality of our approach. ### Language Modeling **Datasets.** The Pile (Gao et al., 2020) is a language modeling benchmark that consists of text sources from diverse domains such as web pages, code and academic papers. Following prior work, we report bits per UTF-8 encoded byte (BPB) as the metric on each subset domain. **Baselines.** We consider the GPT-3 and GPT-2 family language models as the baselines. The four models from GPT-3 (Davinci, Curie, Babbage and Ada) are black-box models that are only accessible through APIs. **Our model.** We add RePlug and RePlug LSR to the baselines. We randomly subsample Pile training data (367M documents of 128 tokens) and use them as the retrieval corpus for all models. As the Pile dataset has made efforts to deduplicate documents across train, validation and test splits (Gao et al., 2020), we did not do additional filtering. For both RePlug and RePlug LSR, we use a length of 128-token context to do retrieval and adopt the ensemble method (Section 3.2) to incorporate the top 10 retrieved documents during inference. **Results.** Table 1 reports the results of the original baselines, baselines augmented with RePlug, and baselines augmented with RePlug LSR. We observe that both RePlug and RePlug LSR significantly outperform the baselines. This demonstrates that simply adding a retrieval module to a frozen language model (i.e., the black-box setting) is effective at improving the performance of different sized language models on language modeling tasks. Furthermore, RePlug LSR consistently performs better than RePlug by a large margin. Specifically, RePlug LSR results in a 7.7% improvement over baselines compared to the 4.7% improvement of RePlug averaged over the 8 models. This indicates that further adapting the retriever to the target LM is beneficial. ### MMLU **Datasets.** Massive Multi-task Language Understanding (MMLU (Hendrycks et al., 2021)) is a multiple-choice QA dataset that covers exam questions from 57 tasks including mathematics, computer science, law, US history, etc.
The 57 tasks are grouped into 4 categories: humanities, STEM, social sciences and other. Following Chung et al. (2022a), we evaluate RePlug in the 5-shot in-context learning setting. Figure 3: **RePlug LSR training process (§4).** The retriever is trained using the output of a frozen language model as supervision signals. **Baselines.** We consider two groups of strong previous models as baselines for comparison. The first group of baselines is the state-of-the-art LLMs including Codex1 (Chen et al., 2021), PaLM (Chowdhery et al., 2022), and Flan-PaLM (Chung et al., 2022b). According to Chung et al. (2022b), these three models rank top-3 on the leaderboard of MMLU. The second group of baselines consists of retrieval-augmented language models. We only include Atlas (Izacard et al., 2022b) in this group, as no other retrieval-augmented LMs have been evaluated on the MMLU dataset. Atlas trains both the retriever and the language model, which we consider a white-box retrieval LM setting. Footnote 1: Code-Davinci-002 **Our model.** We add RePlug and RePlug LSR only to Codex because other models such as PaLM and Flan-PaLM are not accessible to the public. We use the test question as the query to retrieve 10 relevant documents from Wikipedia (2018, December) and prepend each retrieved document to the test question, resulting in 10 separate inputs. These inputs are then separately fed into the language model, and the output probabilities are ensembled together. **Results.** Table 2 presents the results from the baselines, RePlug, and RePlug LSR on the MMLU dataset. We observe that both RePlug and RePlug LSR improve the original Codex model by 4.5% and 5.1%, respectively. In addition, RePlug LSR largely outperforms the previous retrieval-augmented language model, Atlas, demonstrating the effectiveness of our black-box retrieval language model setting. Although our models slightly underperform Flan-PaLM, this is still a strong result because Flan-PaLM has three times more parameters. We would expect that RePlug LSR could further improve Flan-PaLM, if we had access to the model. Another interesting observation is that RePlug LSR outperforms the original model by 1.9% even in the STEM category. This suggests that retrieval may improve a language model's problem-solving abilities. ### Open Domain QA Lastly, we conduct evaluation on two open-domain QA datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017).
**Datasets.** NQ and TriviaQA are two open-domain QA datasets consisting of questions and answers collected from Wikipedia and the Web. Following prior work (Izacard and Grave, 2021; Si et al., 2022), we report results for the filtered set of TriviaQA. For evaluation, we consider the few-shot setting, where the model is only given a few training examples, and the full-data setting, where the model is given all the training examples. **Baselines.** We compare our model with several state-of-the-art baselines, both in a few-shot setting and with full training data. The first group of models consists of powerful large language models, including Chinchilla (Hoffmann et al., 2022), PaLM (Chowdhery et al., 2022), and Codex. These models are all evaluated using in-context learning under the few-shot setting, with Chinchilla and PaLM evaluated using 64 shots, and Codex using 16 shots. The second group of models for comparison includes retrieval-augmented language models such as RETRO (Borgeaud et al., 2021), R2-D2 (Fajcik et al., 2021), and Atlas (Izacard et al., 2022). All of these retrieval-augmented models are finetuned on the training data, either in a few-shot setting or with full training data. Specifically, Atlas is finetuned on 64 examples in the few-shot setting. **Our model.** We add RePlug and RePlug LSR to Codex with Wikipedia (2018, December) as the retrieval corpus to evaluate the model in a 16-shot in-context learning setting. Similar to the setting in language modeling and MMLU, we incorporate the top-10 retrieved documents using our proposed ensemble method. \begin{table} \begin{tabular}{l l c|c c c|c c} \hline \hline Model & \multicolumn{2}{c|}{\# Parameters} & \multicolumn{1}{c|}{Original} & \multicolumn{1}{c|}{+ RePlug} & \multicolumn{1}{c|}{Gain \%} & \multicolumn{1}{c}{+ RePlug LSR} & \multicolumn{1}{c}{Gain \%} \\ \hline GPT-2 & Small & 117M & 1.33 & 1.26 & 5.3 & 1.21 & 9.0 \\ & Medium & 345M & 1.20 & 1.14 & 5.0 & 1.11 & 7.5 \\ & Large & 774M & 1.19 & 1.15 & 3.4 & 1.09 & 8.4 \\ & XL & 1.5B & 1.16 & 1.09 & 6.0 & 1.07 & 7.8 \\ \hline GPT-3 & Ada & 350M & 1.05 & 0.98 & 6.7 & 0.96 & 8.6 \\ (black-box) & Babbage & 1.3B & 0.95 & 0.90 & 5.3 & 0.88 & 7.4 \\ & Curie & 6.7B & 0.88 & 0.85 & 3.4 & 0.82 & 6.8 \\ & Davinci & 175B & 0.80 & 0.77 & 3.8 & 0.75 & 6.3 \\ \hline \hline \end{tabular} \end{table} Table 1: **Both RePlug and RePlug LSR consistently enhanced the performance of different language models.** Bits per byte (BPB) of the Pile using GPT-3 and GPT-2 family models (Original) and their retrieval-augmented versions (+RePlug and +RePlug LSR). The gain % shows the relative improvement of our models compared to the original language model. \begin{table} \begin{tabular}{l c c c c c|c} \hline \hline Model & \# Parameters & Humanities & Social. & STEM & Other & All \\ \hline Codex & 175B & 74.2 & 76.9 & 57.8 & 70.1 & 68.3 \\ PaLM & 540B & 77.0 & 81.0 & 55.6 & 69.6 & 69.3 \\ Flan-PaLM & 540B & - & - & - & - & 72.2 \\ \hline Atlas & 11B & 46.1 & 54.6 & 38.8 & 52.8 & 47.9 \\ \hline Codex + RePlug & 175B & 76.0 & 79.7 & 58.8 & 72.1 & 71.4 \\ Codex + RePlug LSR & 175B & 76.5 & 79.9 & 58.9 & 73.2 & 71.8 \\ \hline \hline \end{tabular} \end{table} Table 2: **RePlug and RePlug LSR improve Codex by 4.5% and 5.1% respectively.** Performance on MMLU broken down into 4 categories. The last column averages the performance over these categories. All models are evaluated based on 5-shot in-context learning with direct prompting.
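For the multiple-choice and QA evaluations above, the same ensemble applies at the level of answer scoring. A minimal sketch follows; the `lm_prob` helper, which returns the black-box LM's probability of an answer string given a prompt, is an assumption of this illustration, not an actual API.

```python
# Sketch: score each answer option by the retrieval-weighted ensemble of
# per-document LM probabilities, then pick the best-scoring option.
def score_options(question, options, retrieved_docs, weights, lm_prob):
    # weights: softmax-normalized retrieval similarities, one per document.
    scores = [sum(w * lm_prob(prompt=doc + "\n" + question, answer=opt)
                  for doc, w in zip(retrieved_docs, weights))
              for opt in options]
    return max(range(len(options)), key=scores.__getitem__)  # index of best option
```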
**Results.** As shown in Table 3, RePlug LSR significantly improves the performance of the original Codex by 12.0% on NQ and 5.0% on TQA. It outperforms the previous best model, Atlas, which was fine-tuned with 64 training examples, achieving a new state-of-the-art in the few-shot setting. However, this result still lags behind the performance of retrieval-augmented language models fine-tuned on the full training data. This is likely due to the presence of near-duplicate test questions in the training set (e.g., Lewis et al. (2021) found that 32.5% of test questions overlap with the training sets in NQ). ## 7 Analysis ### RePlug performance gain does not simply come from the ensembling effect The core of our method design is the use of an ensemble method that combines output probabilities of different passes, in which each retrieved document is prepended separately to the input and fed into a language model. To study whether the gains come solely from the ensemble method, we compare our method to ensembling random documents. For this, we randomly sample several documents, concatenate each random document with the input, and ensemble the outputs of different runs (referred to as "random"). As shown in Figure 4, we evaluated the performance of GPT-3 Curie on the Pile when augmented with random documents, documents retrieved by RePlug, and documents retrieved by RePlug LSR. We observed that ensembling random documents leads to worse performance, indicating that the performance gains of RePlug do not solely come from the ensembling effect. Instead, ensembling the **relevant** documents is crucial for the success of RePlug. Additionally, as more documents were ensembled, the performance of RePlug and RePlug LSR improved monotonically. However, a small number of documents (e.g., 10) was sufficient to achieve large performance gains. Figure 4: **Ensembling random documents does not result in improved performance.** BPB of Curie augmented with different methods (random, RePlug and RePlug LSR) when varying the number of documents (i.e., the number of ensemble times). \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{NQ} & \multicolumn{2}{c}{TQA} \\ \cline{2-5} Model & Few-shot & Full & Few-shot & Full \\ \hline Chinchilla & 35.5 & - & 64.6 & - \\ PaLM & 39.6 & - & - & - \\ Codex & 40.6 & - & 73.6 & - \\ \hline RETRO\({}^{\dagger}\) & - & 45.5 & - & - \\ R2-D2\({}^{\dagger}\) & - & 55.9 & - & 69.9 \\ Atlas\({}^{\dagger}\) & 42.4 & **60.4** & 74.5 & **79.8** \\ Codex + Contriever & 44.2 & - & 76.0 & - \\ \hline Codex + RePlug & 44.7 & - & 76.8 & - \\ Codex + RePlug LSR & **45.5** & - & **77.3** & - \\ \hline \hline \end{tabular} \end{table} Table 3: Performance on NQ and TQA. We report results for both few-shot (64 shots for Chinchilla, PaLM, and Atlas; 16 shots for Codex-based models) and full training data settings. RePlug LSR improves Codex by 12.0% on NQ and 5.0% on TQA, making it the best-performing model in the few-shot setting. Note that models with \(\dagger\) are finetuned using training examples, while other models use in-context learning. ### RePlug is applicable to diverse language models Here we further study whether RePlug could enhance _diverse_ language model families that have been pre-trained using different data and methods.
Specifically, we focus on three groups of language models with varying sizes: GPT-2 (117M, 345M, 774M, 1.5B parameters) (Radford et al., 2019), OPT (125M, 350M, 1.3B, 2.7B, 6.7B, 13B, 30B, 66B) (Zhang et al., 2022) and BLOOM (560M, 1.1B, 1.7B, 3B and 7B) (Scao et al., 2022). We evaluate each model on the Wikitext-103 (Merity et al., 2017) test data and report its perplexity. For comparison, we augment each model with RePlug, which adopts the ensemble method to incorporate the top 10 retrieved documents. Following prior work (Khandelwal et al., 2020), we use the Wikitext-103 training data as the retrieval corpus. Figure 5 shows the performance of different-sized language models with and without RePlug. We observe that the performance gain brought by RePlug stays consistent across model sizes. For example, OPT with 125M parameters achieves a 6.9% perplexity improvement, while OPT with 66B parameters achieves a 5.6% perplexity improvement. Additionally, RePlug improves the perplexity of all the model families. This indicates that RePlug is applicable to diverse language models with different sizes. ### Qualitative Analysis: rare entities benefit from retrieval To understand why RePlug improves language modeling performance, we conducted a manual analysis of examples in which RePlug results in a decrease in perplexity. We find that RePlug is more helpful when texts contain rare entities. Figure 6 shows a test context and its continuation from the Wikitext-103 test set. For RePlug, we use the test context as a query to retrieve a relevant document from the Wikitext-103 training data. We then compute the perplexity of the continuation using the original GPT-2 1.5B and its RePlug-enhanced version. After incorporating the retrieved document, the perplexity of the continuation improves by 11%. Among all tokens in the continuation, we found that RePlug is most helpful for the rare entity name "Li Bai". This is likely because the original LM does not have sufficient information about this rare entity name. However, by incorporating the retrieved document, RePlug was able to match the name with the relevant information in the retrieved document, resulting in better performance. ## 8 Conclusion We introduce RePlug, a retrieval-augmented language modeling paradigm that treats the language model as a black box and augments it with a tuneable retrieval model. Our evaluation shows that RePlug can be integrated with any existing language model to improve its performance on language modeling or downstream tasks. This work opens up new possibilities for integrating retrieval into large-scale black-box language models and demonstrates that even state-of-the-art large-scale LMs could benefit from retrieval. However, RePlug lacks interpretability as it is unclear when the model relies on retrieved knowledge or parametric knowledge. Future research could focus on developing more interpretable retrieval-augmented language models. Figure 5: **GPT-2, BLOOM and OPT models of varying sizes consistently benefit from RePlug.** The x-axis indicates the size of the language model and the y-axis is its perplexity on Wikitext-103. Figure 6: **Rare entities benefit from retrieval.** After incorporating the retrieved document during inference, the entity "_Li Bai_" and the token "_greatest_" in the continuation show the most improvement in perplexity (15% for "_Li Bai_" and 5% for "_greatest_"). Other tokens' perplexity changes are within 5%.
2306.08875
Cosmic birefringence tomography with polarized Sunyaev Zel'dovich effect
We consider the polarized Sunyaev-Zel'dovich (pSZ) effect for a tomographic probe of cosmic birefringence, including all relevant terms of the pSZ effect in the cosmic microwave background (CMB) observables, some of which were ignored in the previous works. The pSZ effect produces late-time polarization signals from the scattering of the local temperature quadrupole seen by an electron. We forecast the expected constraints on cosmic birefringence at the late time of the universe with the pSZ effect. We find that the birefringence angles at $2\lesssim z\lesssim 5$ are constrained at a subdegree level by the cross-correlations between CMB $E$- and $B$-modes or between CMB $B$-modes and remote quadrupole $E$-modes using data from LiteBIRD, CMB-S4, and LSST. In particular, the cross-correlation between large-scale CMB $B$-modes and remote-quadrupole $E$-modes has a much smaller bias from the Galactic foregrounds and is useful to cross-check the results from the $EB$ power spectrum.
Toshiya Namikawa, Ippei Obata
2023-06-15T06:16:43Z
http://arxiv.org/abs/2306.08875v2
# Cosmic birefringence tomography with polarized Sunyaev Zel'dovich effect ###### Abstract We consider the polarized Sunyaev-Zel'dovich (pSZ) effect for a tomographic probe of cosmic birefringence, including all relevant terms of the pSZ effect in the CMB observables, some of which were ignored in previous works. The pSZ effect produces late-time polarization signals from the scattering of the local temperature quadrupole seen by an electron. We forecast the expected constraints on cosmic birefringence at the late time of the universe with the pSZ effect. We find that the birefringence angles at \(2\lesssim z\lesssim 5\) are constrained at a sub-degree level by the cross-correlations between CMB \(E\)- and \(B\)-modes or between CMB \(B\)-modes and remote quadrupole \(E\)-modes using data from LiteBIRD, CMB-S4, and LSST. In particular, the cross-correlation between large-scale CMB \(B\)-modes and remote-quadrupole \(E\)-modes has a much smaller bias from the Galactic foregrounds and is useful to cross-check the results from the \(EB\) power spectrum. ## I Introduction Cosmic birefringence -- a rotation of the linear polarization plane of the cosmic microwave background (CMB) as it travels through space1 -- is now a key observable to search for parity-violating physics in cosmology [2]. Recent measurements of the cross-correlation between the even-parity \(E\)-modes and odd-parity \(B\)-modes in the polarization map suggest a tantalizing hint of cosmic birefringence [3; 4; 5; 6; 7]. Cosmic birefringence can be induced by a pseudoscalar field, such as axionlike particles (ALPs), coupled with electromagnetic fields via the so-called Chern-Simons term, \(\mathcal{L}\supset-g_{\phi\gamma}\phi F^{\mu\nu}\bar{F}_{\mu\nu}/4\), where \(g_{\phi\gamma}\) is the ALP-photon coupling constant, \(\phi\) is an ALP field, \(F^{\mu\nu}\) is the electromagnetic field tensor, and \(\bar{F}_{\mu\nu}\) is its dual. Cosmic birefringence can be caused by the ALP field of dark energy [8; 9; 10; 11; 12; 13; 14; 15], early dark energy [16; 17; 11], dark matter [18; 19; 20], and by topological defects [21; 22; 23; 24], as well as by possible signatures of quantum gravity [25; 26]. Upcoming CMB experiments, including BICEP [27; 28], Simons Array [29], Simons Observatory [30], CMB-S4 [31], and LiteBIRD [32], with which the polarization noise will be reduced significantly, are expected to improve cosmic birefringence measurements. Footnote 1: The nomenclature of this rotation effect is discussed in [1]. Multiple studies have shown that the shape of the \(EB\) power spectrum depends on the dynamics of the ALP fields during reionization and recombination [33; 18; 34; 35], including early dark energy [16; 17; 36], dark energy [37; 38], and other phenomenological models [39]. Hence, measuring the spectral shape of the power spectrum will provide tomographic information on such scenarios. This method, _cosmic birefringence tomography_, can avoid degeneracies with the instrumental miscalibration angle [40; 41; 42; 43; 44] and half-wave plate non-idealities [45]. This paper considers a new tomographic source -- the polarized Sunyaev-Zel'dovich (pSZ) effect, which generates linear polarization through Thomson scattering of the CMB temperature quadrupole by free electrons in clusters or intergalactic space in the late time of the universe [46; 47; 48; 49; 50]. Measuring the polarization signal from the pSZ effect provides information on cosmic birefringence in the late-time universe.
The polarization signal is decomposed into \(E\)- and \(B\)-modes, \(q^{E}\) and \(q^{B}\) (hereafter, the remote quadrupole). Reference [51] provides an estimator to reconstruct \(q^{E}\) and \(q^{B}\) by cross-correlating observed CMB \(E\)- or \(B\)-modes with large-scale structure tracers, such as galaxy number density fluctuations. Future CMB experiments, such as CMB-S4 [31] and CMB-HD [52], together with future galaxy surveys, such as the Vera Rubin Observatory Legacy Survey of Space and Time (LSST) [53], would be able to detect the remote quadrupole [54]. Multiple studies have discussed applications of the remote quadrupole for cosmology, including the large-scale CMB anomalies [55], the integrated Sachs-Wolfe effect [56; 57], the CMB optical depth [58], and inflationary gravitational waves [59]. Recently, Hotinli et al. [60] and Lee et al. [61] have considered the birefringence effect on \(q^{E}\) and \(q^{B}\) to constrain cosmic birefringence in the late-time universe. The remote quadrupole is tiny, however, and the expected constraints on the birefringence angle from even next-generation CMB experiments and galaxy surveys are at the level of degrees to 10 degrees. In this paper, we further consider the pSZ-induced polarization in the observed CMB \(E\)- and \(B\)-modes and explore how the constraints on the birefringence angle improve by including these new contributions in conjunction with \(q^{E}\) and \(q^{B}\). This paper is organized as follows. Sec. II reviews the pSZ effect and formulates the pSZ effect in the presence of cosmic birefringence. Sec. III shows the expected constraint on cosmic birefringence by combining large-scale CMB polarization and the remote quadrupole. Sec. IV is devoted to the conclusion. Throughout this paper, we define the spherical harmonic decomposition of a spin-0 quantity, \(x\), as \[x_{\ell m}=\int\!\mathrm{d}^{2}\hat{\mathbf{n}}\ Y^{*}_{\ell m}(\hat{\mathbf{n}})x(\hat{\mathbf{n}})\,, \tag{1}\] where \(Y_{\ell m}\) is the spherical harmonics. We also define the \(E\)- and \(B\)-modes from the Stokes \(Q\) and \(U\) parameters [62; 63]: \[E_{\ell m}\pm iB_{\ell m}=-\int\!\mathrm{d}^{2}\hat{\mathbf{n}}\ (Y^{\pm 2}_{\ell m}(\hat{\mathbf{n}}))^{*}P^{\pm}(\hat{\mathbf{n}})\,, \tag{2}\] where \(P^{\pm}=Q\pm iU\) and \(Y^{\pm 2}_{\ell m}\) is the spin-2 spherical harmonics. We assume the flat \(\Lambda\)CDM cosmology obtained from Planck [64]. The experimental configuration for CMB used in this paper is summarized in Table 1. ## II Polarized SZ effect In this section, we briefly review the pSZ effect by following [65] and discuss the cosmic birefringence effect on the polarization signals generated by the pSZ effect. ### Remote quadrupole In CMB observations, we measure the Stokes \(Q\) and \(U\) maps along the line-of-sight direction, \(\hat{\mathbf{n}}\). The Stokes \(Q\) and \(U\) maps are given by [65] \[P^{\pm}(\hat{\mathbf{n}})=-\int_{0}^{\chi_{*}}\!\mathrm{d}\chi\ g_{\mathrm{vis}}(\chi)\frac{\sqrt{6}}{10}q^{\pm}(\chi,\hat{\mathbf{n}})\,. \tag{3}\] Here, \(\chi_{*}\) is the comoving distance from an observer to the last scattering surface of the CMB, and we define the visibility function as \[g_{\mathrm{vis}}(\chi)=\frac{\mathrm{d}\tau}{\mathrm{d}\chi}\,\mathrm{e}^{-\tau(\chi)}=\sigma_{\mathrm{T}}a(\chi)n_{\mathrm{e}}(\chi)\,\mathrm{e}^{-\tau(\chi)}\,, \tag{4}\] where \(\sigma_{\mathrm{T}}\) is the cross-section of the Thomson scattering, \(a\) is the scale factor, and \(n_{\mathrm{e}}\) is the electron number density.
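Numerically, Eq. (4), together with the optical depth \(\tau(\chi)\) defined in Eq. (5) just below, can be tabulated on a comoving-distance grid. The following is a toy sketch only; the linear \(z(\chi)\) mapping and the fully ionized electron density are illustrative assumptions, not the paper's model:

```python
# Toy tabulation of tau(chi) and the visibility function g_vis(chi), Eqs. (4)-(5).
import numpy as np

sigma_T = 6.6524587e-29                      # Thomson cross-section [m^2]
n_e0 = 0.21                                  # assumed free-electron density today [1/m^3]
Mpc = 3.0857e22                              # metres per Mpc

chi = np.linspace(0.0, 7000.0, 4096) * Mpc   # comoving-distance grid out to 7 Gpc
z = 5.0 * chi / chi[-1]                      # crude linear chi -> z map (assumption)
a = 1.0 / (1.0 + z)
n_e = n_e0 * (1.0 + z) ** 3                  # proper electron density, fully ionized
dtau_dchi = sigma_T * a * n_e                # integrand of Eq. (5)
tau = np.cumsum(dtau_dchi) * (chi[1] - chi[0])   # crude Riemann sum for tau(chi)
g_vis = dtau_dchi * np.exp(-tau)             # Eq. (4)
print(f"tau out to z ~ 5: {tau[-1]:.3f}")    # ~0.04, a reasonable ballpark
```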
The CMB optical depth, \(\tau\), is defined as \[\tau(\chi)=\int_{0}^{\chi}\!\mathrm{d}\chi^{\prime}\ \sigma_{\mathrm{T}}a(\chi^{\prime})n_{\mathrm{e}}(\chi^{\prime})\,. \tag{5}\] The remote quadrupole fields, \(q^{\pm}(\chi,\hat{\mathbf{n}})\), are decomposed into the local \(E\)- and \(B\)-modes, \(q^{E}_{\ell m}(\chi)\) and \(q^{B}_{\ell m}(\chi)\), using Eq. (2). If we consider only the linear density perturbations, \(q^{B}_{\ell m}(\chi)\) vanishes [66]. On the other hand, \(q^{E}_{\ell m}(\chi)\) is related to the primordial gravitational potential as [65] \[q^{E}_{\ell m}(\chi)=4\pi\int\!\frac{\mathrm{d}^{3}\mathbf{k}}{(2\pi)^{3}}\,\Delta_{\ell}^{q^{E}}(k,\chi)\Psi_{i}(\mathbf{k})Y^{*}_{\ell m}(\hat{k})\,, \tag{6}\] with \[\Delta_{\ell}^{q^{E}}(k,\chi)=5i^{\ell}\sqrt{\frac{3}{8}\frac{(\ell+2)!}{(\ell-2)!}}\frac{j_{\ell}(k\chi)}{(k\chi)^{2}}T(k)\times\sum_{\mathrm{X=SW,ISW,Doppler}}\mathcal{G}_{\mathrm{X}}(k,\chi)\,. \tag{7}\] Here, \(\Psi_{i}\) is the primordial gravitational potential, \(T(k)\) is the transfer function, and \[\mathcal{G}_{\mathrm{SW}} =-\left(2D_{\Psi}(\chi_{*})-\frac{3}{2}\right)j_{2}(k(\chi_{*}-\chi))\,, \tag{8}\] \[\mathcal{G}_{\mathrm{ISW}} =-2\int_{a_{*}}^{a_{\mathrm{e}}}\!\mathrm{d}a\ \frac{\mathrm{d}D_{\Psi}}{\mathrm{d}a}\,j_{2}(k(\chi(a)-\chi))\,, \tag{9}\] \[\mathcal{G}_{\mathrm{Doppler}} =\frac{k}{5}D_{v}(\chi_{*})[3j_{3}(k(\chi_{*}-\chi))-2j_{1}(k(\chi_{*}-\chi))]\,. \tag{10}\] \(D_{\Psi}\) is the growth function of the gravitational potential computed with the analytic formula of [67]. \(D_{v}\) is the velocity growth factor and is given by \[D_{v}(\chi)\equiv\frac{2a^{2}H(\chi)}{H_{0}^{2}\Omega_{\mathrm{m}}}\,\frac{y}{4+3y}\left(D_{\Psi}+\frac{\mathrm{d}D_{\Psi}}{\mathrm{d}\ln a}\right)\,, \tag{11}\] where \(H_{0}\) is the expansion rate at present, \(\Omega_{\mathrm{m}}\) is the fractional energy density of the matter component at present, \(H(\chi)=H_{0}\sqrt{\Omega_{\mathrm{m}}a^{-3}+1-\Omega_{\mathrm{m}}}\), and \(y=a/a_{\mathrm{eq}}\), with \(a_{\mathrm{eq}}\) being the scale factor at radiation-matter equality. We set \(a=1\) at the present epoch. The angular power spectrum of the remote-quadrupole \(E\)-modes is given by \[C_{\ell}^{q^{E}q^{E}}(\chi,\chi^{\prime})=4\pi\int\!\mathrm{d}\ln k\ \mathcal{P}_{\Psi}(k)\Delta_{\ell}^{q^{E}}(k,\chi)\Delta_{\ell}^{q^{E}}(k,\chi^{\prime})\,, \tag{12}\] where \(\mathcal{P}_{\Psi}(k)\) is the dimensionless power spectrum of \(\Psi_{i}\). In CMB observations, the observed polarization contains contributions from the late-time remote quadrupole. The \(E\)-mode contribution from the remote quadrupole is given by \[E_{\ell m}^{\mathrm{pSZ}}=-\frac{\sqrt{6}}{10}\int_{0}^{\chi_{*}}\!\mathrm{d}\chi\ g_{\mathrm{vis}}(\chi)q^{E}_{\ell m}(\chi)\,. \tag{13}\] \begin{table} \begin{tabular}{l|c c c} Experiment & \(\sigma_{\mathrm{P}}\) & \(\theta_{\mathrm{FWHM}}\) & \(A_{\mathrm{lens}}\) \\ & \([\mu\)K-arcmin] & \([\)arcmin\(]\) & \\ \hline \hline LiteBIRD & 2 & 30 & \\ \hline S4 & 1 & 1.4 & 0.2 \\ HD & 0.4 & 0.2 & 0.1 \\ \end{tabular} \end{table} Table 1: Setup for LiteBIRD-like (LiteBIRD), CMB-S4-like (S4), and CMB-HD-like (HD) experiments. \(\sigma_{\mathrm{P}}\) is the map noise level in \(\mu\)K-arcmin, \(\theta_{\mathrm{FWHM}}\) is the FWHM of the Gaussian beam in arcmin, and \(A_{\mathrm{lens}}\) is the fraction of the residual lensing \(B\)-mode spectrum after delensing with that experiment. LiteBIRD measures large-scale CMB polarization, and S4/HD reconstructs the remote quadrupole.
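As a concrete illustration of Eqs. (4)-(5), the following Python sketch evaluates the optical depth and the visibility function on a comoving-distance grid. The flat-\(\Lambda\)CDM parameters and the fully ionized electron history below are illustrative assumptions, not this paper's inputs; a Boltzmann code such as CAMB (used later in this paper) would be employed for quantitative results.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Numerical sketch of Eqs. (4)-(5): the optical depth tau(chi) and the
# visibility function g_vis(chi) = (dtau/dchi) e^{-tau}. Toy cosmology and
# a fully ionized universe at z < 10 are assumed for illustration only.

c_kms = 2.998e5        # speed of light [km/s]
H0    = 67.4           # Hubble constant [km/s/Mpc]
Om    = 0.315          # matter density parameter
sig_T = 6.6524587e-29  # Thomson cross-section [m^2]
ne0   = 0.21           # present-day free-electron density [m^-3] (toy value)
Mpc_m = 3.0857e22      # metres per Mpc

z   = np.linspace(0.0, 10.0, 2001)
Hz  = H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)
chi = cumulative_trapezoid(c_kms / Hz, z, initial=0.0)  # comoving distance [Mpc]

# dtau/dchi = sigma_T * a * n_e(proper) = sigma_T * ne0 * (1+z)^2, per Mpc
dtau_dchi = sig_T * ne0 * (1 + z) ** 2 * Mpc_m
tau   = cumulative_trapezoid(dtau_dchi, chi, initial=0.0)  # Eq. (5)
g_vis = dtau_dchi * np.exp(-tau)                           # Eq. (4)

print(f"tau(z=10) ~ {tau[-1]:.3f}")  # a few 1e-2, comparable to reionization
```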
For LiteBIRD, we assume delensing with a reconstructed lensing map from ground-based experiments and choose \(A_{\mathrm{lens}}=0.2\) with S4 and \(0.1\) with HD. The \(E\)-mode angular power spectrum from the local \(E\)-modes is given by \[C_{\ell}^{EE}=\frac{6}{100}\int_{0}^{\chi_{*}}\!\mathrm{d}\chi\int_{0}^{\chi_{*}}\!\mathrm{d}\chi^{\prime}\ g_{\mathrm{vis}}(\chi)g_{\mathrm{vis}}(\chi^{\prime})C_{\ell}^{q^{E}q^{E}}(\chi,\chi^{\prime})\,. \tag{14}\] ### Reconstruction of the remote quadrupole Next, we review the reconstruction of the remote quadrupole by combining CMB experiments and galaxy surveys, following [54]. The key idea of the reconstruction is that the fluctuations of the electron number density modulate the remote quadrupole fields, and this modulation traces the underlying matter density fluctuations. Thus, the remote quadrupole fields are reconstructed from a correlation between this modulation and a large-scale structure tracer. If the electron number density has fluctuations, the CMB polarization from the pSZ effect is distorted as \[\delta P^{\pm}(\hat{\mathbf{n}})=-\int_{0}^{\chi_{*}}\!\mathrm{d}\chi\ \bar{g}_{\mathrm{vis}}(\chi)\delta_{\mathrm{e}}(\chi,\hat{\mathbf{n}})\frac{\sqrt{6}}{10}q^{\pm}(\chi,\hat{\mathbf{n}})\,. \tag{15}\] Here, \(\delta_{\mathrm{e}}\) denotes the fluctuations of the electron number density. We ignore the fluctuations of the screening, \(\,\mathrm{e}^{-\tau(\chi)}\), which are much smaller than the fluctuations of \(n_{\mathrm{e}}\) well after reionization [65]. For a given interval of the comoving distance, corresponding to a redshift bin in practice, we define the average components of the remote quadrupole and the optical depth in each bin as follows: \[q^{\pm,i}(\hat{\mathbf{n}})=\frac{1}{\Delta\chi_{i}}\int_{\chi_{i-1}}^{\chi_{i}}\!\mathrm{d}\chi\ q^{\pm}(\chi,\hat{\mathbf{n}})\,, \tag{16}\] \[\delta\tau^{i}(\hat{\mathbf{n}})=\int_{\chi_{i-1}}^{\chi_{i}}\!\mathrm{d}\chi\ \bar{g}_{\mathrm{vis}}(\chi)\delta_{\mathrm{e}}(\chi,\hat{\mathbf{n}})\,, \tag{17}\] where \(\Delta\chi_{i}\) is the bin width in comoving distance at the \(i\)th bin. The distortion to the observed CMB polarization is then given as [54] \[\delta P^{\pm}(\hat{\mathbf{n}})\simeq-\sum_{i}\frac{\sqrt{6}}{10}\delta\tau^{i}(\hat{\mathbf{n}})q^{\pm,i}(\hat{\mathbf{n}})\,. \tag{18}\] Defining the averaged remote-quadrupole \(E\)- and \(B\)-modes, \(q^{E,i}_{\ell m}\) and \(q^{B,i}_{\ell m}\), using \(q^{\pm,i}(\hat{\mathbf{n}})\), the observed CMB \(E\)- and \(B\)-modes involving \(\delta_{\mathrm{e}}\) are then given by [54] \[(\delta X_{\ell m})^{*}=-\frac{\sqrt{6}}{10}\sum_{i}\sum_{Y=E,B}\sum_{\ell_{1}m_{1}\ell_{2}m_{2}}\begin{pmatrix}\ell&\ell_{1}&\ell_{2}\\ m&m_{1}&m_{2}\end{pmatrix}\gamma_{\ell\ell_{1}\ell_{2}}\begin{pmatrix}\ell&\ell_{1}&\ell_{2}\\ 2&-2&0\end{pmatrix}w^{q^{Y}X}_{\ell\ell_{1}\ell_{2}}\,q^{Y,i}_{\ell_{1}m_{1}}\delta\tau^{i}_{\ell_{2}m_{2}}\,. \tag{19}\]
Here, the large parentheses denote the Wigner-\(3j\) symbols, and we define \[\gamma_{\ell\ell_{1}\ell_{2}}=\sqrt{\frac{(2\ell+1)(2\ell_{1}+1)(2\ell_{2}+1)}{4\pi}}\,, \tag{20}\] \[w^{q^{E}E}_{\ell\ell_{1}\ell_{2}}=\wp^{+}_{\ell\ell_{1}\ell_{2}}\,, \tag{21}\] \[w^{q^{E}B}_{\ell\ell_{1}\ell_{2}}=i\wp^{-}_{\ell\ell_{1}\ell_{2}}\,, \tag{22}\] \[w^{q^{B}E}_{\ell\ell_{1}\ell_{2}}=-i\wp^{-}_{\ell\ell_{1}\ell_{2}}\,, \tag{23}\] \[w^{q^{B}B}_{\ell\ell_{1}\ell_{2}}=\wp^{+}_{\ell\ell_{1}\ell_{2}}\,, \tag{24}\] where \(\wp^{\pm}_{\ell\ell_{1}\ell_{2}}=[1\pm(-1)^{\ell+\ell_{1}+\ell_{2}}]/2\). From Eq. (19), we can construct estimators for the remote-quadrupole \(E\)- and \(B\)-modes, \(q^{E}\) and \(q^{B}\), from measurements of the CMB \(E\)- or \(B\)-modes and a tracer of the density perturbations which correlates with \(\delta\tau^{i}\). The estimator is described as (e.g., [54; 51]) 2 Footnote 2: The minimum-variance estimator of [54; 51] can be expressed as a linear combination of even- and odd-parity contributions. The even- and odd-parity terms are not correlated, and the minimum-variance estimator is given by the inverse-variance sum of these two estimators. \[(\tilde{q}^{X,i}_{\ell m})^{*}=N_{\ell}^{q^{X,i}}\sum_{Y=E,B}\sum_{\ell_{1}m_{1}\ell_{2}m_{2}}\begin{pmatrix}\ell&\ell_{1}&\ell_{2}\\ m&m_{1}&m_{2}\end{pmatrix}f^{q^{X,i}Y}_{\ell\ell_{1}\ell_{2}}\frac{Y_{\ell_{1}m_{1}}}{\tilde{C}^{YY}_{\ell_{1}}}\frac{(x^{i}_{\ell_{2}m_{2}})^{*}}{\tilde{C}^{x^{i}x^{i}}_{\ell_{2}}}\,, \tag{25}\] where \(\tilde{C}^{YY}_{\ell}\) is the observed power spectrum of \(Y\) and \(\tilde{C}^{x^{i}x^{i}}_{\ell}\) is that of the mass tracer at the \(i\)th bin, \(x^{i}\). The weight function is defined as (e.g., [66]) \[f^{q^{X,i}Y}_{\ell\ell_{1}\ell_{2}}=-\frac{\sqrt{6}}{10}\gamma_{\ell\ell_{1}\ell_{2}}\begin{pmatrix}\ell&\ell_{1}&\ell_{2}\\ 2&-2&0\end{pmatrix}C^{\delta\tau^{i}x^{i}}_{\ell_{2}}w^{q^{X}Y}_{\ell\ell_{1}\ell_{2}}\,. \tag{26}\] The estimator normalization is defined as \[\frac{1}{N_{\ell}^{q^{X,i}}}=\frac{1}{2\ell+1}\sum_{Y}\sum_{\ell_{1}\ell_{2}}\frac{|f^{q^{X,i}Y}_{\ell\ell_{1}\ell_{2}}|^{2}}{\tilde{C}^{YY}_{\ell_{1}}\tilde{C}^{x^{i}x^{i}}_{\ell_{2}}}\,. \tag{27}\] The noise spectrum of the reconstructed remote quadrupole corresponds to the estimator normalization, and we use the above equation to compute the noise spectrum. The reconstruction noise spectra are computed with a public code of [66]; we first compute the power spectra, \(C^{\delta\tau^{i}x^{i}}_{\ell}\) and \(\tilde{C}^{x^{i}x^{i}}_{\ell}\), in the Limber approximation, since we only use small-scale multipoles for the reconstruction. We assume an LSST-like galaxy survey [53] with the same redshift distribution of galaxies, galaxy bias, and number density of galaxies as those used in the previous works [61; 66]. We use the multipoles between 100 and 5000 to compute the noise spectrum. We choose six top-hat redshift bins whose widths are equal in comoving distance. Fig. 1 shows the \(q^{E}\) reconstruction noise spectra at the third and sixth bins for the S4 and HD cases (see Table 1 for the experimental setup). Note that the \(q^{B}\) reconstruction noise spectrum is close to that of \(q^{E}\). The reconstruction noise power spectrum at the third bin is much larger than the remote quadrupole signals. At the sixth bin, the noise power spectrum is less than the signal power spectrum at \(\ell\lesssim 4\); we can thus only use the large-scale remote quadrupole to constrain cosmology.
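To make the normalization of Eq. (27) concrete, the following Python sketch accumulates \(1/N_{\ell}^{q^{E,i}}\) for a single redshift bin. All input spectra are placeholder power laws (not this paper's spectra), the multipole range is kept deliberately narrow so the exact Wigner-\(3j\) evaluation stays fast, and a production computation would instead use the public code of [66].

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

# Toy evaluation of the estimator normalization, Eq. (27), for the
# remote-quadrupole E-mode at one redshift bin (placeholder spectra).

lmin_hi, lmax_hi = 100, 140
ells  = np.arange(lmin_hi, lmax_hi + 1)
cl_EE = 1e-4 / ells ** 2   # observed EE (placeholder)
cl_BB = 1e-5 / ells ** 2   # observed BB (placeholder)
cl_xx = 1e-6 / ells ** 2   # tracer auto-spectrum (placeholder)
cl_tx = 1e-7 / ells ** 2   # C^{delta-tau^i x^i} (placeholder)

def parity(L, l1, l2, sign):
    """wp^{+/-}_{L l1 l2} = [1 +/- (-1)^{L+l1+l2}] / 2, defined below Eq. (24)."""
    s = (-1) ** (L + l1 + l2)
    return (1 + s) // 2 if sign > 0 else (1 - s) // 2

def noise_qE(L):
    """N_L^{q^{E,i}} from Eq. (27); |w|^2 is wp^+ for Y=E and wp^- for Y=B."""
    inv_N = 0.0
    for i1, l1 in enumerate(ells):
        for i2, l2 in enumerate(ells):
            if not (abs(l1 - l2) <= L <= l1 + l2):   # triangle condition
                continue
            w3j2 = float(wigner_3j(L, l1, l2, 2, -2, 0)) ** 2
            gam2 = (2 * L + 1) * (2 * l1 + 1) * (2 * l2 + 1) / (4 * np.pi)
            f2 = (6 / 100) * gam2 * w3j2 * cl_tx[i2] ** 2    # |f|^2 of Eq. (26)
            for cl_obs, sign in ((cl_EE[i1], +1), (cl_BB[i1], -1)):
                inv_N += parity(L, l1, l2, sign) * f2 / (cl_obs * cl_xx[i2])
    return (2 * L + 1) / inv_N

print(f"N_2 ~ {noise_qE(2):.3e}")
```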
### Cosmic birefringence and pSZ The cosmic birefringence converts part of the remote-quadrupole \(E\)-modes to \(B\)-modes. At a comoving distance \(\chi\), the remote-quadrupole \(B\)-modes are given in the small-angle limit (\(|\beta|\ll 1\)) as \[q^{B}_{\ell m}(\chi)\simeq 2\beta(\chi)q^{E}_{\ell m}(\chi)\,, \tag{28}\] where the birefringence angle is given by [68; 69; 70] \[\beta(\chi)=\frac{g_{\phi\gamma}}{2}[\phi(0)-\phi(\chi)]\,. \tag{29}\] Here, \(\phi(\chi)\) is the ALP field at comoving distance \(\chi\). These remote-quadrupole \(B\)-modes can be measured by the reconstruction presented in the previous section. The remote-quadrupole \(B\)-modes also contribute to the total observed CMB \(B\)-modes: \[B^{\rm pSZ}_{\ell m} =-\frac{\sqrt{6}}{10}\int_{0}^{\chi_{*}}\!{\rm d}\chi\;g_{\rm vis}(\chi)2\beta(\chi)q^{E}_{\ell m}(\chi) \tag{30}\] \[\simeq-\frac{\sqrt{6}}{10}\sum_{i}2\beta_{i}\int_{\chi_{i-1}}^{\chi_{i}}\!{\rm d}\chi\;g_{\rm vis}(\chi)q^{E}_{\ell m}(\chi)\equiv\sum_{i}2\beta_{i}E^{i}_{\ell m}\,, \tag{31}\] where we denote by \(\beta_{i}\) the representative birefringence angle at the \(i\)th bin and introduce the CMB \(E\)-modes generated during \(\chi_{i-1}\leq\chi\leq\chi_{i}\) as \(E^{i}_{\ell m}\). Let us derive the auto- and cross-angular power spectra between large-scale CMB \(E\)-modes (\(E_{\ell m}\)), CMB \(B\)-modes (\(B_{\ell m}\)), remote-quadrupole \(E\)-modes (\(q^{E}_{\ell m}\)), and \(B\)-modes (\(q^{B}_{\ell m}\)). In the small-angle limit, the CMB \(E\)-modes and remote-quadrupole \(E\)-modes are unchanged by the cosmic birefringence. The auto- and cross-angular power spectra between the CMB \(E\)- and \(B\)-modes in the presence of cosmic birefringence are then given by \[C^{E^{\prime}E^{\prime}}_{\ell} \simeq C^{EE}_{\ell}\,, \tag{32}\] \[C^{E^{\prime}B^{\prime}}_{\ell} \simeq 2\beta_{\rm rei}C^{EE,{\rm rei}}_{\ell}+\sum_{i}2\beta_{i}C^{EE^{i}}_{\ell}\,, \tag{33}\] \[C^{B^{\prime}B^{\prime}}_{\ell} \simeq\tilde{C}^{BB}_{\ell}\,, \tag{34}\] where \(\beta_{\rm rei}\) is the birefringence angle of polarization sourced at reionization, \(C^{EE,{\rm rei}}_{\ell}\) is the \(E\)-mode power spectrum generated during reionization, and \(\tilde{C}^{BB}_{\ell}\) is the lensing-induced CMB \(B\)-mode spectrum. Since the pSZ signals are significant only at low multipoles (\(\ell\lesssim 10\)), we ignore the recombination signals, which are the dominant contributions at high multipoles (\(\ell\gtrsim 10\)). We do not include the lensing effect except in \(C^{B^{\prime}B^{\prime}}_{\ell}\) since it does not change the power spectra at low \(\ell\) [71]. We use the Python version of CAMB [72] to compute \(C^{EE}_{\ell}\) and \(\tilde{C}^{BB}_{\ell}\). Similarly, the cross-angular power spectra between the CMB polarization and the reconstructed remote quadrupole in the presence of cosmic birefringence are given by \[C^{E^{\prime}q^{E^{\prime},i}}_{\ell} \simeq C^{Eq^{E,i}}_{\ell}\,, \tag{35}\] \[C^{B^{\prime}q^{E^{\prime},i}}_{\ell} \simeq 2\beta_{\rm rei}C^{Eq^{E,i}}_{\ell}+\sum_{j}2\beta_{j}C^{E^{j}q^{E,i}}_{\ell}\,, \tag{36}\] \[C^{E^{\prime}q^{B^{\prime},i}}_{\ell} \simeq 2\beta_{i}C^{Eq^{E,i}}_{\ell}\,, \tag{37}\] \[C^{B^{\prime}q^{B^{\prime},i}}_{\ell} \simeq 0\,. \tag{38}\]
The remote-quadrupole auto- and cross-angular power spectra are given by \[C^{q^{E^{\prime},i}q^{E^{\prime},j}}_{\ell} \simeq C^{q^{E,i}q^{E,j}}_{\ell}\,, \tag{39}\] \[C^{q^{E^{\prime},i}q^{B^{\prime},j}}_{\ell} \simeq 2\beta_{j}C^{q^{E,i}q^{E,j}}_{\ell}\,, \tag{40}\] \[C^{q^{B^{\prime},i}q^{B^{\prime},j}}_{\ell} \simeq 0\,. \tag{41}\]

Figure 1: Reconstruction noise power spectrum of the remote-quadrupole \(E\)-modes, \(N^{q^{E,i}q^{E,i}}_{\ell}\), for the third (dashed) and sixth (solid) redshift bins, using high-\(\ell\) CMB measurements from S4 (blue) / HD (orange) with LSST galaxies. The solid gray lines show the angular power spectra of the remote-quadrupole \(E\)-modes at each bin.

Measuring the above power spectra provides information on the ALP field values at each redshift bin. Therefore, the reconstructed remote quadrupole will be a new source for cosmic birefringence tomography. Note that the second terms of Eqs. (33) and (36), which are responsible for constraining low-\(z\) birefringence angles, do not appear in Ref. [61]. This is because they do not divide the contributions into \(z\)-bins as in Eq. (31). In the next section, we forecast how these terms improve the constraints on the birefringence angles. ## III Forecast In this section, following [61], we estimate the expected constraints on the birefringence angles with the Fisher matrix formalism. We assume that the fiducial values of the birefringence angles are zero. In this case, the small-angle limit, \(|\beta|\ll 1\), is implicitly assumed in the Fisher matrix formalism. We compute the Fisher information matrix as \[\{\mathbf{F}\}_{ij}=\sum_{\ell=2}^{\ell_{\text{max}}}\frac{2\ell+1}{2}f_{\text{sky}}\text{Tr}\left(\mathbf{C}_{\ell}^{-1}\frac{\partial\mathbf{C}_{\ell}}{\partial p_{i}}\mathbf{C}_{\ell}^{-1}\frac{\partial\mathbf{C}_{\ell}}{\partial p_{j}}\right)\bigg{|}_{\mathbf{p}=\mathbf{p}_{\text{fid}}}\,. \tag{42}\] Here, \(\mathbf{p}\) is a vector containing the birefringence angle parameters, \(\mathbf{p}_{\text{fid}}\) is its fiducial value, and \(f_{\text{sky}}\) is the sky coverage of the experimental data sets, which is set to \(0.4\) in our analysis since wide-field ground-based experiments plan to observe roughly \(40\%\) of the sky. We only need large angular scales to constrain late-time birefringence and set \(\ell_{\text{max}}=10\). \(\mathbf{C}_{\ell}\) is the covariance matrix of the observed data, and its \((X,Y)\) element is given by \[\{\mathbf{C}_{\ell}\}^{XY}=C_{\ell}^{X^{\prime}Y^{\prime}}+\delta^{XY}N_{\ell}^{XX}\,, \tag{43}\] where \(X\) and \(Y\) are either \(E\), \(B\), \(q^{E,i}\), or \(q^{B,i}\). We assume that the CMB \(E\)- and \(B\)-modes are obtained from LiteBIRD, and the remote quadrupole fields are reconstructed by combining S4 or HD with galaxies obtained from LSST. We use the experimental setup for CMB summarized in Table 1. The elements of the signal covariance matrix, \(C_{\ell}^{X^{\prime}Y^{\prime}}\), are computed from Eqs. (32) to (41). Note that, for the lensing \(B\)-mode spectrum, Eq. (34), we multiply a factor \(A_{\text{lens}}\) to account for the suppression of the lensing \(B\)-modes by delensing using a lensing map from S4 or HD. The noise spectra in the noise covariance, \(N_{\ell}^{EE}\) and \(N_{\ell}^{BB}\), are computed for LiteBIRD since we only use multipoles up to \(\ell=10\), which are hard to measure from ground-based experiments. In the LiteBIRD noise spectra, we add the residual Galactic foregrounds estimated by [73].
The noise spectra of the remote quadrupole fields, \(N_{\ell}^{q^{E,i}}\) and \(N_{\ell}^{q^{B,i}}\), are already computed in Sec. II. ### Odd-parity power spectra The odd-parity power spectra, i.e., \(C_{\ell}^{Eq^{B}}\), \(C_{\ell}^{q^{E}q^{B}}\), \(C_{\ell}^{Bq^{E}}\), and \(C_{\ell}^{EB}\), constrain the rotation angles of cosmic birefringence. Thus, a high signal-to-noise ratio in these spectra is essential to precisely constrain cosmic birefringence in the late-time universe. Fig. 2 shows these odd-parity power spectra for a rotation angle of \(0.34\deg\). We also show the observational statistical errors per multipole on each power spectrum, \(\sigma_{\ell}\), defined as \[(\sigma_{\ell}^{XY})^{-2}\equiv\frac{(2\ell+1)f_{\text{sky}}(C_{\ell}^{XY})^{2}}{(C_{\ell}^{XX}+N_{\ell}^{XX})(C_{\ell}^{YY}+N_{\ell}^{YY})}\,. \tag{44}\] Note that we ignore the cross-power spectrum in the denominator since that contribution is negligible if we assume a rotation angle of \(0.34\deg\). Compared to the odd-parity spectra with \(q^{B}\) (i.e., \(C_{\ell}^{Eq^{B}}\) and \(C_{\ell}^{q^{E}q^{B}}\)), the cross spectra with the CMB \(B\)-modes (\(C_{\ell}^{Bq^{E}}\) and \(C_{\ell}^{EB}\)) have larger signal-to-noise ratios, and their measurements provide better constraints on cosmic birefringence at late times. For \(C_{\ell}^{Bq^{E,i}}\) and \(C_{\ell}^{EB}\), the signal power spectra become more significant at higher \(z\)-bins due to the increase of the electron number density. Compared to the statistical error, high-\(z\) birefringence angles are well constrained by \(C_{\ell}^{EB}\). ### Constraints on birefringence angles We first compute the constraints on the rotation angle at each bin independently. Fig. 3 shows the \(1\,\sigma\) expected constraints on the cosmic birefringence angles at each redshift bin, i.e., \(\sigma(\beta_{i})\equiv 1/\sqrt{\{\mathbf{F}\}_{ii}}\). We show the cases with S4 and HD for reconstructing the remote quadrupole fields. We also plot the case in which we only use part of the odd-parity spectra as in [61], i.e., \(C_{\ell}^{Eq^{B,i}}\) and \(C_{\ell}^{q^{E,i}q^{B,j}}\). The constraints with all the relevant odd-parity power spectra improve by more than an order of magnitude at high redshift bins compared to the case with only part of the parity-odd power spectra. These results are consistent with the implications obtained from Fig. 2. Fig. 4 shows the fractional change of \(\sigma(\beta_{i})\) obtained with each power spectrum alone relative to the case with all power spectra. The case with \(C_{\ell}^{Eq^{B,i}}\) is excluded from the figure since its constraint is much worse than in the other cases. At high redshift bins, the constraint comes mostly from the \(EB\) power spectrum. At lower redshift bins, the \(Bq^{E}\) power spectrum dominates the constraint on the birefringence angle. Since the reconstruction noise of the remote quadrupole is much larger than the signal, as shown in Fig. 1, the \(q^{E}q^{B}\) cross-power spectrum cannot tightly constrain birefringence angles at any redshift. However, in the HD case, the remote quadrupole is reconstructed more precisely, and the \(q^{E}q^{B}\) power spectrum mildly contributes to constraining the birefringence angles at lower redshifts, where the other observables also do not constrain the birefringence angles tightly. The constraint from the \(EB\) power spectrum is \(\sim 0.3\,\mathrm{deg}\) at the highest bin. Even if we only use the \(Bq^{E}\) power spectrum, the constraint is \(\sim 0.5\,\mathrm{deg}\) at the highest bin for S4.
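For reference, the Fisher forecast of Eqs. (42)-(43) can be assembled in a few lines. The sketch below assumes a user-supplied function `cov(p, ell)` (a placeholder, not provided by this paper) that returns the covariance matrix of Eq. (43) built from the spectra in Eqs. (32)-(41) for a birefringence-angle vector `p`; the finite-difference step is also illustrative.

```python
import numpy as np

# Minimal sketch of the Fisher forecast, Eqs. (42)-(43). `cov(p, ell)` is an
# assumed callable returning the data covariance matrix; p_fid is the vector
# of fiducial birefringence angles (zeros in this paper's setup).

f_sky, lmax = 0.4, 10

def fisher(cov, p_fid, eps=1e-4):
    n = len(p_fid)
    F = np.zeros((n, n))
    for ell in range(2, lmax + 1):
        Cinv = np.linalg.inv(cov(p_fid, ell))
        dC = []
        for i in range(n):  # numerical derivative dC_ell / dp_i
            p_up, p_dn = p_fid.copy(), p_fid.copy()
            p_up[i] += eps
            p_dn[i] -= eps
            dC.append((cov(p_up, ell) - cov(p_dn, ell)) / (2 * eps))
        for i in range(n):
            for j in range(n):
                F[i, j] += 0.5 * (2 * ell + 1) * f_sky * np.trace(
                    Cinv @ dC[i] @ Cinv @ dC[j])
    return F

# Per-bin errors with others fixed:  sigma(beta_i) = 1 / sqrt(F[i, i]).
# Joint (marginalized) errors:       sqrt(np.diag(np.linalg.inv(F))).
```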
Next, we show the model-independent joint constraints on the birefringence angles. The parameters, \(\beta_{i}\), are not independent in terms of \(\phi\), and following [61], we introduce the following parameters: \[\Delta\beta_{i}=\beta_{i}-\beta_{i-1}\,, \tag{45}\] with \(i=2,3,\cdots,n\), \(\Delta\beta_{1}=\beta_{1}\), and \(\beta_{n}=\beta_{\mathrm{rei}}\). The above birefringence angle depends only on the evolution of the ALP fields in each redshift bin. We evaluate the expected \(1\,\sigma\) constraint as \(\sigma(\Delta\beta_{i})\equiv\{\mathbf{F}^{-1}\}_{ii}^{1/2}\). Fig. 5 shows the constraint on the reconstructed birefringence angles, \(\Delta\beta_{i}\), for the cases with S4 and HD to reconstruct the remote quadrupole fields. We also show the case if we ignore the contributions of \(Bq^{E}\) and \(EB\) cross-power spectra sourced by the pSZ effect. If we use only part of the power spectra, the constraints become very weak at high redshifts. ### Discussion The cosmic birefringence tomography with the pSZ effect is a useful probe of ALP models producing a large birefringence signal in the late-time universe, especially a scenario predicting \(|\beta_{i}|>|\beta_{\mathrm{rec}}|\). While a single-field ALP model does not realize such a scenario [11], this could happen if multiple ALPs exist and each ALP rotates the CMB linear polarization plane, and hence the net birefringence angle we observe is the sum of these angles. To demonstrate this, we consider the following simple model: two ALP fields \(\phi_{1}\) and \(\phi_{2}\) have periodic potentials generated by the instanton effects \[m_{\phi_{1}}^{2}f_{\phi_{1}}^{2}\left[1-\cos\left(\frac{\phi_{1}}{f_{\phi_{1}}} \right)\right]+m_{\phi_{2}}^{2}f_{\phi_{2}}^{2}\left[1-\cos\left(\frac{\phi_{2}} {f_{\phi_{2}}}\right)\right]\, \tag{46}\] where \(m_{\phi_{1,2}}\) and \(f_{\phi_{1,2}}\) are the ALP's mass and decay constant. Then, introducing the ALP couplings to photon \[-\frac{1}{4}\left(g_{\phi_{1}\gamma}\phi_{1}+g_{\phi_{2}\gamma}\phi_{2}\right)F _{\mu\nu}\tilde{F}^{\mu\nu}\, \tag{47}\] the total birefringence angle is given by \(\beta=\beta_{\phi_{1}}+\beta_{\phi_{2}}\). To find \(\beta\), we solve the background dynamics of ALPs. We take the equation of motion for a homogeneous ALP field as a usual Klein-Gordon equation in cosmology: \[\ddot{\phi}_{i}+3H\dot{\phi}_{i}+V_{\phi_{i}}=0\quad(i=1,2). \tag{48}\] Regarding the initial field values for ALP fields, we denote them as \[\phi_{i,\text{ini}}=\theta_{\phi_{i}}f_{\phi_{i}}\quad(i=1,2) \tag{49}\] with vacuum misalignment angles \(\theta_{\phi_{1,2}}\). The field starts oscillating at a time when the Hubble parameter becomes comparable with ALP mass. We define \(\chi_{i,\text{osc}}\) at which \(H(\chi_{i,\text{osc}})=m_{\phi_{i}}\). For ALP with mass \(H_{0}\ll m_{\phi_{i}}\ll H_{\text{rec}}\), the current field value \(\phi_{i}(0)\) is much smaller than the value before the oscillation. Namely, \(\beta_{\phi_{i}}\) is approximately given by \[\beta_{\phi_{i}}(\chi_{\text{e}}\gtrsim\chi_{i,\text{osc}})\simeq-\frac{g_{ \phi_{i}\gamma}}{2}\phi_{i,\text{ini}}\quad(i=1,2)\,. \tag{50}\] Then, representing \(g_{\phi_{i}\gamma}\) in terms of [74] \[g_{\phi_{i}\gamma}=\frac{\alpha}{2\pi}\frac{c_{\phi_{i}\gamma}}{f_{\phi_{i}}} \quad(i=1,2)\, \tag{51}\] where \(\alpha\simeq 1/137\) is QED fine structure constant and \(c_{\phi_{i}\gamma}\) is dimensionless anomaly coefficient, Eq. 
(50) is reduced to \[\beta_{\phi_{i}}(\chi_{\text{e}}\gtrsim\chi_{i,\text{osc}})\simeq-\frac{\alpha}{4\pi}c_{\phi_{i}\gamma}\theta_{\phi_{i}}\quad(i=1,2)\,. \tag{52}\] Therefore, \(\beta\) is determined by the combination of the anomaly coefficients and misalignment angles but is independent of the decay constants. For our phenomenological interest, we assume that the axion masses have a hierarchy \(H_{0}\ll m_{\phi_{1}}\ll H_{\text{rei}}\) and \(H_{\text{rei}}\ll m_{\phi_{2}}\ll H_{\text{rec}}\). In this case, from Eq. (50), we evaluate \(\beta_{\text{rec}}\) at the recombination epoch as \[\beta_{\text{rec}}\simeq-\frac{\alpha}{4\pi}(c_{\phi_{1}\gamma}\theta_{\phi_{1}}+c_{\phi_{2}\gamma}\theta_{\phi_{2}})\,. \tag{53}\] We assume that the anomaly coefficients, generically given by the number of charged fermion loops, are of the same order: \(c_{\phi_{1}\gamma}\simeq c_{\phi_{2}\gamma}=\mathcal{O}(1)\) [75].

Figure 3: \(1\,\sigma\) constraint on the rotation angles at each redshift bin, including all relevant terms (solid) and ignoring the late-time birefringence effect in the observed \(B\)-modes (dashed). We assume that the CMB polarization is obtained from a LiteBIRD-like experiment. The pSZ signal is reconstructed from a ground-based S4-like (blue) or HD-like (orange) experiment with an LSST-like galaxy survey.

Figure 4: Same as Fig. 3 but for the fractional change using only large-scale \(B\)-modes and pSZ \(E\)-modes (\(Bq^{E}\)), large-scale \(E\)- and \(B\)-modes (\(EB\)), and pSZ \(E\)- and \(B\)-modes (\(q^{E}q^{B}\)).

Figure 5: Same as Fig. 3 but for the \(1\,\sigma\) constraint on the reconstructed rotation angles, \(\Delta\beta_{i}\). We regard the reionization birefringence angle as the rotation angle at \(z=6\).

Hence, if \(\theta_{\phi_{1}}\) and \(\theta_{\phi_{2}}\) are of the same order but have opposite signs, \(|\beta_{\rm rec}|\) becomes small due to the cancellation in Eq. (53). On the other hand, \(\beta_{i}\) at or after the reionization epoch is approximately given by \[\beta_{i}\simeq-\frac{\alpha}{4\pi}c_{\phi_{1}\gamma}\theta_{\phi_{1}}\,, \tag{54}\] where the contribution from \(\phi_{2}\) in Eq. (54) is negligible because it has already decayed away due to the damped oscillation: \(m_{\phi_{2}}\gg H_{\rm rei}\). Therefore, we can obtain the condition \(|\beta_{i}|>|\beta_{\rm rec}|\) in this model. One can also extend this model to an \(N\)-field scenario and derive the probability distribution of \(|\beta_{i}|>|\beta_{\rm rec}|\), which is preferable for the pSZ tomography. We leave it for future work. ## IV Conclusion We have discussed cosmic birefringence tomography by combining observations of the CMB polarization and remote quadrupole fields. Among the observables we considered, the \(EB\) power spectrum most tightly constrains the late-time birefringence angles at high redshifts (\(z\gtrsim 2\)). The \(1\,\sigma\) constraints from the \(Bq^{E}\) power spectrum are \(20\%\) (\(80\%\)) worse than those from the \(EB\) power spectrum at the fifth (sixth) bin. However, the large-scale \(EB\) power spectrum might suffer from Galactic foregrounds, and the \(Bq^{E}\) power spectrum provides a useful cross-check for constraining the high-redshift birefringence angles. The remote quadrupole is more sensitive to the low-\(z\) birefringence than the \(EB\) power spectrum and is a unique probe of the low-\(z\) birefringence sources. Precision measurements of the birefringence angles are crucial to gain insight into the origin of cosmic birefringence in the late-time universe.
###### Acknowledgements. We thank Selim Hotinli, Eiichiro Komatsu, Nanoom Lee, Fumihiro Naokawa, and Hideki Tanimura for their useful comments and discussion. This work is supported in part by JSPS KAKENHI Grant No. JP20H05859 and No. JP22K03682 (T.N.) and No. 19K14702 (I.O.). Part of this work uses the resources of the National Energy Research Scientific Computing Center (NERSC). The Kavli IPMU is supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
2308.00985
Evaluate and Guard the Wisdom of Crowds: Zero Knowledge Proofs for Crowdsourcing Truth Inference
Crowdsourcing has emerged as a prevalent method for mitigating the risks of correctness and security in outsourced cloud computing. This process involves an aggregator distributing tasks, collecting responses, and aggregating outcomes from multiple data sources. Such an approach harnesses the wisdom of crowds to accomplish complex tasks, enhancing the accuracy of task completion while diminishing the risks associated with the malicious actions of any single entity. However, a critical question arises: How can we ensure that the aggregator performs its role honestly and each contributor's input is fairly evaluated? In response to this challenge, we introduce a novel protocol termed $\mathsf{zkTI}$. This scheme guarantees both the honest execution of the aggregation process by the aggregator and the fair evaluation of each data source. It innovatively integrates a cryptographic construct known as zero-knowledge proof with a category of truth inference algorithms for the first time. Under this protocol, the aggregation operates with both correctness and verifiability, while ensuring fair assessment of data source reliability. Experimental results demonstrate the protocol's efficiency and robustness, making it a viable and effective solution in crowdsourcing and cloud computing.
Xuanming Liu, Xinpeng Yang, Yinghao Wang, Xun Zhang, Xiaohu Yang
2023-08-02T07:36:40Z
http://arxiv.org/abs/2308.00985v2
# Evaluate and Guard the Wisdom of Crowds: Zero Knowledge Proofs for Crowdsourcing Truth Inference ###### Abstract Due to the risks of correctness and security in outsourced cloud computing, we consider a new paradigm called crowdsourcing: distribute tasks, receive answers, and aggregate the results from multiple entities. Through this approach, we can aggregate the wisdom of the crowd to complete tasks, ensuring the accuracy of task completion while reducing the risks posed by the malicious acts of a single entity. However, the ensuing question is: how can we ensure that the aggregator has done its work honestly and that each contributor's work has been evaluated fairly? In this paper, we propose a new scheme called \(\mathsf{zkTI}\). This scheme ensures that the aggregator has honestly completed the aggregation and that each data source is fairly evaluated. We combine a cryptographic primitive called _zero-knowledge proof_ with a class of _truth inference algorithms_ which is widely studied in AI/ML scenarios. Under this scheme, various complex outsourced tasks can be solved with efficiency and accuracy. To build our scheme, a novel method to prove the precise computation of floating-point numbers is proposed, which is nearly optimal and well-compatible with existing argument systems. This may become an independent point of interest. Thus our work can prove the process of aggregation and inference without loss of precision. We fully implement and evaluate our ideas. Compared with recent works, our scheme achieves a \(2-4\times\) efficiency improvement and is robust enough to be widely applied. ## 1 Introduction Many users, finding themselves without the necessary resources to solve problems, must seek help from entities with access to more robust computational capabilities. Outsourcing is a common solution for this issue. However, ensuring that the service provider correctly fulfills its obligations and effectively solves the problem becomes a significant concern. A large body of literature [1, 2, 3, 4, 5] has discussed the security and efficiency issues related to "verifiable computing" and "verifiable outsourcing." However, these methods cannot yet be practically applied to very complex tasks. We propose a different approach: instead of "outsourcing" the problem to a single entity, we "crowdsource" it to multiple entities. Each entity provides its own answers, and an aggregator then determines the truth of the problem, drawing from this collective wisdom. Crowdsourcing [6, 7] is a popular paradigm for harnessing knowledge from a large number of workers. Its advantage lies in the fact that workers do not need to prove the correctness of their computation to the requester (which previously made up the major cost). As long as they truthfully complete their tasks and contribute their knowledge, their work will be included in an overall consideration (the aggregation). Furthermore, the collective wisdom of multiple servers can improve the accuracy of answers and avoid errors caused by single-server failures. Despite the participation of potentially malicious data providers, we can still assume that in real scenarios the majority are honest individuals who willingly fulfill their duties and provide truthful solutions and answers in order to receive payment. Hence, it is possible to extract the true solution from a multitude of answers. This situation presents another objective, which is to fairly evaluate and safeguard the contribution of each data provider.
Consequently, our main concern becomes: _How can we ensure that the aggregator has completed the aggregation process correctly and truthfully?_ In this context, we define the aggregation task as being _correctly_ executed if the aggregator, after receiving the answers from various data sources, can accurately infer the true answer to the problem from the wisdom of the crowds and evaluate each data source's contribution and quality fairly. Data sources that do not work earnestly or that provide malicious data should be detected, while those who fulfill their duties should receive a higher evaluation. We define _truthfully_ as the aggregator executing the process with integrity, neither providing incorrect answers nor overestimating some data sources' contributions for its own benefit. Additionally, we consider the concept of _privacy_, meaning that sensitive information (such as the truth of a question) should not be revealed to irrelevant participants. ### _Our contributions_ To extract the real answer to a problem from a multitude of data provided by various data sources, we must design different algorithms for the aggregation of various types of tasks. However, the fundamental approach to solving the problem remains consistent. First, a technique called _truth inference_ (see section 2 for background knowledge) is introduced for the above issue. It is a class of algorithms that can extract the real truth of a problem from a set of candidate answers. Furthermore, these algorithms can consider factors such as the quality of each data source and the difficulty of the problem, thereby improving accuracy. Second, to address the security issues mentioned above, these algorithms are combined with a cryptographic primitive called _zero-knowledge proof_ (ZKP) for the first time. For ZKPs, there are two main notions: _verifiability_ and _privacy preservation_. In our context, the first notion ensures that the aggregator performs the process with integrity, while the second ensures that the entire process does not leak sensitive information. By combining these two techniques, we arrive at the "straw-man" concept, which uses ZKP to prove that the aggregator has honestly completed the truth inference algorithm: aggregating the results, inferring the truth, and evaluating the contributions of each data source. An overview of our work is given in section 3. To improve the precision of our model, a new framework for high-precision floating-point computation is designed as a building block. Under this framework, the issue of using ZKPs to prove the correctness of floating-point computation (see section 5) is addressed. This technique can be widely applied in scenarios requiring high-precision computations, such as machine learning. We also discuss how to combine _numerical analysis methods_ and the precise calculation of the _sigmoid function_ with ZKPs. Finally, we present a general scheme zkTI for the above problems and instantiate it with two practical algorithms for evaluation (see section 6). We find that the problems we are studying have a wide range of application scenarios, such as data annotation in machine learning and oracles in blockchain (see section 1.2 for details). The problem of excessive overhead in verifiable computation is thus addressed from a new perspective. The contributions can be summarized as follows: * **Zero-knowledge proof for truth inference algorithms.** We propose a novel scheme, zkTI, which applies zero-knowledge proof techniques to truth inference algorithms.
By fostering multi-party cooperation, the scheme allows for inferring the ground truth of tasks in outsourced computations and determining each data source's quality with high precision. Our scheme effectively prevents losses caused by a malicious aggregator, thereby safeguarding the collective wisdom. It is highly applicable in numerous real-world scenarios. We have instantiated this scheme with various existing algorithms, demonstrating both high accuracy and efficiency. * **zk-SNARK for floating-point computations.** In the process of addressing the above problems, we construct a building block that can prove the correctness of precise computations on \(w\)-bit floating-point numbers with a circuit overhead of \(O(w)\). Therefore, compared to previous security schemes, our truth inference process does not suffer from precision loss. By leveraging existing ZKP technology, this process becomes non-interactive, succinct, and zero-knowledge. This result appears to be optimal, as the essential check that a number is \(w\)-bit already imposes a lower bound of \(O(w)\) decomposition overhead. This technique can enhance precision in various existing works on MPC and ZKPs and may be an independent point of interest. * **Implementation and evaluations.** We have implemented and evaluated our scheme across different dataset scales. The outcomes demonstrate that our scheme's efficiency is \(2-4\times\) higher than that of recent work [8]. For a dataset comprising a large volume of labels, the algorithm's result can be proven within 20 seconds, incurring minimal verification and communication overhead. Moreover, our scheme is flexible, capable of adapting to various algorithms and scenarios. We have fully open-sourced our code (see Sec. 7). ### _Example applications_ Many real-world scenarios align with our proposed model. A prototypical example of such a system is StackOverflow [9], a question-and-answer platform where users seek answers from crowds. The aggregator in this instance can filter and present the most accurate response from the collected responses. In this section, we delve into other distinct scenarios where our scheme can be effectively applied. **Data annotation in AI/ML.** Data serves as a fundamental element for machine learning (ML) and artificial intelligence (AI). Crowdsourcing data annotation to obtain labeled datasets for model training is a prevalent strategy. Take Scale.AI [10], a distinguished data annotation firm, as an example. It operates two prevalent modes of data annotation. The first approach [11] entails the company hosting the datasets for annotation, distributing these to numerous annotators, and then aggregating the annotated data to return to the user. The second approach [12] allows users to assemble their own annotation team, distribute tasks to them, and remunerate them accordingly. Regardless of the method, the goal remains the same: accurate data annotation and fair evaluation and remuneration of the workers. Our scheme fits perfectly in this context. In the first method, the company can employ our scheme to aggregate responses from the annotators and furnish the results to the user, who can then verify their correctness using the company-provided proof. In the second method, the user can use our scheme to aggregate responses directly and present the results along with the inferred contribution of each worker. We discuss this scenario further in section 3.
**Blockchain oracles.** Blockchain oracles [13] serve as interfaces that fetch off-chain data for on-chain smart contracts. The data could range from stock prices to weather forecasts or football match outcomes, which the smart contracts utilize for specific operations. A typical method of implementing a blockchain oracle involves a server collecting and directly publishing data to a smart contract. However, such centralized solutions risk failure due to single-point vulnerabilities, and an individual server might even release erroneous data, causing losses to applications [14]. Our scheme comes in handy for implementing decentralized oracles. We can engage multiple data sources, including authoritative institutions or independent individuals. When a smart contract requires data, a server retrieves potential answers from these data sources, aggregates them using our method, and presents the results along with a proof to the smart contract. The smart contract can act as a verifier in a ZKP system to validate the proof. If the verification is successful, it signifies honest data aggregation by the server. Additionally, the server can remunerate various data providers based on their contributions as inferred by the algorithm. ### _Related work_ **Security and privacy of truth inference.** Truth inference algorithms are well-studied in the field of AI [15, 16]. The research mainly focuses on the accuracy of the algorithms and on expanding the scenarios in which they can be applied. In recent years, many works have begun to focus on the privacy and security aspects of the algorithms [8, 17, 18, 19, 20, 21, 22]. Some works are dedicated to ensuring high accuracy of the algorithms while not revealing the workers' privacy (i.e., their answers), which is called _privacy-preserving_. They utilize cryptographic techniques such as garbled circuits [19] and differential privacy [18, 22]. Their work is orthogonal to ours, and these methods usually entail substantial overhead. In section 6.3 we also discuss how to incorporate some privacy-focused methods into our framework. Among existing works, [8] is the closest to our objective. They use a pairing-based method to convert the necessary computations into a verifiable process. However, we found that our method is more efficient than theirs and also has more application scenarios. **Zero-knowledge proofs.** The concept of ZKP was first proposed by Goldwasser et al. in [23] and has seen widespread development in recent years. The most widely used are circuit-based systems [1, 24, 25, 26]. Their designs offer different trade-offs among proving time, verification time, and proof size. In our work, we utilize existing ZKP systems [24, 27]. Moreover, since our circuit design is independent of the system used, theoretically, we can use any existing system as our backend. **Floating-points in ZKPs and MPC.** The computation of floating-point numbers has always been a topic of research in zero-knowledge proofs and other fields of _multi-party computation_ (MPC). There is a demand for accurately expressing calculations of real numbers using arithmetic circuits, so that ZKPs and other MPC techniques can be applied in scenarios requiring high-precision computations (like AI/ML). In the existing literature, [28] proposed a method for quickly and simply calculating fixed-point number multiplication.
However, it is well known that fixed-point numbers have lower computational precision and are not suitable for scenarios requiring high-precision calculations. Floating-point numbers, on the other hand, provide a more precise calculation method. Works such as [29] have proposed methods to support fixed-point and floating-point computation in MPC systems; their methods need nearly \(O(w^{2})\) gates for computations on \(w\)-bit real numbers. [30] proposes a method to support floating-point computation in ZKP systems by converting the computation over two floating-point numbers into a binary circuit following the IEEE-754 standard [31]. This is the first method that can fully express high-precision floating-point calculations. However, their method is not efficient enough: to prove a multiplication of two 32-bit numbers, it takes nearly 8000 gates, which makes it hard to apply in practice. In recent years, [32] introduced a method that can prove such computations in a batch with the properties of _succinctness_ and _zero-knowledge_, and it is said to have a small prover overhead. The downside of this method is that there are multiple rounds of interaction between the prover and the verifier. Also, this method cannot be naturally applied in existing proof-system backends (that is, by representing all the work in a constraint system) but requires additional adaptation work. Additionally, the verifier incurs an extra cost to achieve sub-linear verification. What's more, they did not provide a concrete implementation of their ideas to be evaluated. To address the aforementioned issues, in this paper we inherit their idea and make several improvements, making the method more practical and well-compatible with existing ZKP systems. In addition, we also provide a concrete implementation. ## 2 Background: crowdsourcing truth inference We formally introduce the concept of truth inference in crowdsourcing and the classification of the problem targets. As previously mentioned, in crowdsourcing tasks and other issues requiring the aggregation of information from multiple data sources, there are low-quality or even malicious workers/data sources. Therefore, we need a server to aggregate data from multiple sources and infer the correct answer to the problem. We say the process of aggregating data and inferring the correct answer (truth) is the _truth inference_ process, and the algorithm applied to infer the truth is the _truth inference algorithm_. This fundamental problem has been widely studied in the fields of databases (DB) and artificial intelligence (AI). Refer to [15] for a very detailed review. **Truth inference.** We formally define the problem here. For the sake of brevity, we say the problems to be solved form a _task_ set of \(n\) tasks: \(\mathcal{T}=\{t_{1},t_{2},...,t_{n}\}\). Each task \(t_{i}\) has a _ground truth_ \(v_{i}^{*}\), which is the real answer to this problem. Those who answer the problems with their efforts (like a data source or a worker) are formally called _workers_, a set of \(m\) workers: \(\mathcal{W}=\{w_{1},w_{2},...,w_{m}\}\). For each task \(t_{i}\), each worker gives an _answer_; we use \(v_{i}^{j}\) to denote the answer given by \(w_{j}\) for task \(t_{i}\). Given the workers' answer set \(\mathcal{V}=\{v_{i}^{j}\}\), the main goal of a truth inference algorithm is to infer the correct answer for all tasks. Some algorithms also consider the _quality_ of each worker.
We use \(q_{j}\) to denote the quality of worker \(w_{j}\). **Example.** If the task is _what's the height of the Empire State Building_, the ground truth is the real height of the building. The workers are the people/industries who answer the question. The answers are the heights of the building given by each worker. The result of truth inference will be the output of the applied algorithm after inferring from the answers. Here a simple _average algorithm_ may be used. See Table 1 for a more detailed description. **Task classification.** In [15], the authors categorize truth inference tasks into three types: _decision-making tasks, choice tasks, and numerical tasks_. Decision-making tasks are those tasks that require a binary answer, like _is Argentina the champion of the World Cup?_. Choice tasks are those tasks that require a choice from a set of options, like _which of the following figures is a cat, given 4 figures of A, B, C and D?_. These tasks are more like asking workers to classify the given objects. If a task has \(l\) possible classes, we denote them as \(\mathcal{C}=\{c_{1},...,c_{l}\}\). One can easily find that if the choices given are two classes, a choice task degenerates into a decision-making task. Numerical tasks are those tasks that require a numerical answer, like _what's the height of the Empire State Building?_ (the above example). ## 3 Protocol overview In this section, we give an overview of the settings and threat model of our protocol. Then a straw-man idea applying the simple MV algorithm is given as an example. Finally, we discuss the problems with and solutions for this idea. **Protocol settings and goals.** For ease of description, we consider a scenario where a data owner \(D\) possesses some problems that require addressing. He may distribute these tasks to many workers for crowdsourcing. In this process, firstly, the data owner aims to acquire the truth/result of each task as accurately as possible. Secondly, the fairness of this process should be guaranteed as well: the quality of each worker should be evaluated fairly (e.g., workers should receive fair payment commensurate with their efforts). An aggregator will collect each entity's answer and perform the aggregation process. Note that \(D\) could act as the aggregator himself, or he could delegate this role to a middle entity. Our analysis identifies two major types of processes (see Fig. 1):

(a) \(D\) outsources the problem to a middle entity \(\mathcal{A}\) (the aggregator) first. Then \(\mathcal{A}\) assigns the tasks to employed workers. After receiving the answers, \(\mathcal{A}\) runs the aggregation process on behalf of the data owner. To achieve fairness, \(\mathcal{A}\) needs to convince \(D\) that he has indeed used the workers' answers and aggregated them to obtain the result with integrity and validity, in order to receive a certain compensation.

(b) The data owner \(D\) outsources the tasks to workers directly. Tasks are assigned to each worker in the same way. In this case, \(D\) is actually the aggregator \(\mathcal{A}\). After receiving all of the answers, \(D\) is supposed to run the aggregation process. In this mode, we mainly define fairness as \(D\) paying the workers based on their inferred qualities. In this case, we also care about privacy. That is, when \(\mathcal{A}\) executes the protocol with a worker, the worker should not gain any useful information beyond his own answer.
Both modes can be practically applied in real-life applications (we give examples in section 1.2). **Threat model.** We consider a malicious aggregator: after receiving the workers' answers, he might run the aggregation process incorrectly, producing the truth and the workers' qualities wrongly to favor his own interests. In this way, he would fail to pay each worker the compensation they deserve, which harms the workers' interests, and he may give the data owner an incorrect result as well. Specifically, we assume that in both modes the aggregator will not collude with the workers. This is a reasonable assumption, as if the two parties were to collude, the result of the data annotation would be meaningless. Additionally, we aim to ensure that no additional private information is leaked during the execution of the protocol. For example, in setting (b), when \(D\) is running the protocol with one worker, the worker may be curious about the ground truth of the tasks, along with the answers provided by other workers. **A straw-man protocol using Majority Vote.** Here we first give a straw-man protocol applying a simple Majority Vote (MV) algorithm: for each task, the truth is inferred as the most frequent answer given by the workers. We consider Fig. 1 (a) as an example. In this protocol, the aggregator \(\mathcal{A}\) distributes the tasks to the workers \(\mathcal{W}\) and receives their answers directly. Formally, given the answers \(\mathcal{V}\) from the workers, and assuming each task has a possible domain of \(\mathcal{C}\), \(\mathcal{A}\) runs the MV algorithm to get the aggregation result as follows. For each task \(t_{i}\in\mathcal{T}\), we infer the truth \(v_{i}^{*}\) as: \[v_{i}^{*}=\operatorname*{arg\,max}_{c\in\mathcal{C}}\sum_{w_{j}\in\mathcal{W}}\mathbb{1}(v_{i}^{j},c)\] Here \(\mathbb{1}(\cdot,\cdot)\) is an indicator function.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\mathcal{W}\): workers & \(\mathcal{V}\): answers & output & ground truth \\ \hline \(w_{1}\): Alice & \(v^{1}\): 381m & & \\ \hline \(w_{2}\): Bob & \(v^{2}\): 383m & 378m & 381m \\ \hline \(w_{3}\): Carl & \(v^{3}\): 370m & & \\ \hline \end{tabular} \end{table} Table 1: Example: the height of the Empire State Building

Figure 1: Two cases of crowdsourcing. In (a), \(D\) crowdsources his problems through a middle entity \(\mathcal{A}\), and \(\mathcal{A}\) is responsible for the aggregation process. In (b), \(D\) crowdsources the problems to workers directly.

As stated in our threat model, to ensure that the aggregator honestly completes the work, we consider that \(\mathcal{A}\) should convince the data owner \(D\) and the workers that he has indeed executed the aggregation process effectively and that the inferred truth is valid. On the other hand, the workers should not learn anything beyond their own qualities. To achieve this, \(\mathcal{A}\) can convert the above relation into an _arithmetic circuit_ and apply a zero-knowledge proof system to it. \(\mathcal{A}\) first executes the MV algorithm to get the inferred truth of each task. Then he converts the process into a circuit and runs the argument system with the verifier (a specific worker or the data owner). The ZKPs applied can also guarantee that no information leaks beyond the worker's own answers.
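For concreteness, a minimal Python sketch of this straw-man Majority Vote aggregation is given below. The list-of-lists input layout is an illustrative assumption; in the actual protocol, the same arg-max relation would be expressed as circuit constraints rather than executed in the clear.

```python
from collections import Counter

# Minimal sketch of the straw-man Majority Vote aggregation described above.
# Assumed layout: answers[j][i] is worker j's answer v_i^j to task t_i.

def majority_vote(answers):
    """Return the inferred truth v_i^* for each task, by plurality."""
    n_tasks = len(answers[0])
    truths = []
    for i in range(n_tasks):
        votes = Counter(worker[i] for worker in answers)
        truths.append(votes.most_common(1)[0][0])  # arg max over classes C
    return truths

# Example: three workers, two decision-making tasks with domain {0, 1}.
print(majority_vote([[1, 0], [1, 1], [0, 1]]))  # -> [1, 1]
```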
Many scenarios require the computation and processing of continuous data, such as numerical tasks, where the MV algorithm does not currently support. Additionally, a significant drawback of this aggregation strategy is the equal treatment of all workers, regardless of the quality of information provided, which can differ considerably among workers. Some workers may even intentionally provide incorrect data. As a result, the MV algorithm does not account for the quality of workers. We aim to develop a method that does consider worker quality, assessing it during the execution process. Several algorithms already exist that adjust the impact of results based on worker quality [33, 34, 35, 36]. Therefore, we aim to enhance our protocol's adaptability to a broader range of truth inference algorithms, accommodating varying scenarios and requirements. Secondly, the current protocol construction cannot ensure consistency between the algorithm's input and the data submitted by workers. A malicious data owner could use alternative data to run the algorithm for their own benefit. To overcome these issues, we initially consider incorporating more algorithms into our protocol. However, expanding the protocol to include more truth inference algorithms is a non-trivial thing. Most truth inference algorithms require several iterations and precise floating-point computations. Some more complex algorithms even require _numerical optimization_ methods for resolution [35], posing a significant challenge for existing ZKP systems. The introduction of fixed-point numbers is a temporary solution, but it falls short in scenarios requiring high-precision computation. To navigate this barrier, we design a new framework that applies _zk-SNARK_ to floating-point computations for generality. Our design starts on the _relative error model_ introduced by [32]. We made several improvement for it and the result is the circuit representation of floating-point number computation with high precision. This design enables the expression of addition, multiplication, and division within a circuit. Thereby the computation result can be proved in a succinct and non-interactive way with high efficiency combined with ZKP systems. Further, we discuss how to represent the relation of numerical methods and some special function like _sigmoid_ in a circuit. See section 5 for further details and other extensions. For the second issue, we utilize a commitment scheme to ensure consistency between the algorithm's input and the submitted data. Based on this scheme and our previous designs, we propose a general protocol zkTl for crowdsourcing truth inference using zero-knowledge proofs. This protocol ensures the verifiability of the inferred truth of problems and the quality of crowds, while also protecting the privacy of the utilized data. See section 6.1 for our formal definition and further details. ## 4 Preliminaries We have inherited most of the symbols involved in truth inference algorithms from [15]. We use \(:=\) representing in a constraint the value on both side of the operand must be equal. We use \(\circ\) to denote a Hadamard product while use \(\cdot\) to denote a matrix product for matrices and a normal product for integers. "PPT" stands for probabilistic polynomial time. ### _Zero-knowledge arguments_ Given a known NP relationship \(\mathcal{R}\) and some common inputs \(x\), an argument system allows a prover P to convince a verifier V that there exists a witness \(w\) that satisfies the relation \(\mathcal{R}(x,w)=1\). 
We say the system is an _argument of knowledge_ if the prover convinces the verifier not only of the statement but also of the fact that P actually knows the witness \(w\). **NIZK.** In this work we consider a family of _zero-knowledge non-interactive argument of knowledge_ (NIZK) systems, which allow the prover to convince the verifier of some statement in one round, without revealing the sensitive information. The system consists of a tuple of algorithms \(\Pi_{\text{ZK}}=(\mathsf{Setup},\mathsf{Prove},\mathsf{Verify})\). In the Setup phase it generates public parameters pp. In the Prove phase P generates a proof \(\pi\) using pp and the witness \(w\). Finally, in the Verify phase V checks whether the proof is valid and outputs 0/1, indicating whether it accepts the proof. The system should have the following properties: * **Completeness**. For every pp, valid input \(x\) and witness \(w\), the verifier accepts the proof generated by the prover with probability 1. * **Knowledge soundness**. For any PPT prover \(\mathsf{P}^{*}\), there exists a PPT extractor \(\mathcal{E}\) that extracts the witness \(w^{*}\), that is: \(\mathcal{E}^{\mathsf{P}^{*}}(\mathsf{pp},x,\pi^{*})\to w^{*}\). The following probability is \(\mathsf{negl}(\lambda)\): \[\Pr[\mathcal{R}(x,w^{*})\neq 1\wedge\mathsf{Verify}(x,\pi^{*},\mathsf{pp})=1]\] * **Zero-knowledge**. There exists a PPT simulator \(\mathcal{S}\) such that for any PPT algorithm \(\mathsf{V}^{*}\), the output of the simulator is indistinguishable from the real proof. The following relation holds: \[\mathsf{View}(\mathsf{V}^{*}(\mathsf{pp},x))\approx\mathcal{S}^{\mathsf{V}^{*}}(x)\] **zk-SNARK.** If the proof size \(|\pi|\) is \(\mathsf{poly}(\lambda,|x|,\log|w|)\), we say the NIZK system is _succinct_, and we thereby obtain a _zero-knowledge Succinct Non-interactive ARgument of Knowledge_ (zk-SNARK). The above definition is commonly used in existing zk-SNARK constructions. **Rank-1 constraint systems.** In order to apply zk-SNARKs to practical problems, we need an appropriate way to express the relations arising in real-world scenarios. A common choice for argument systems is to target the problem of rank-1 constraint systems (R1CS). Since many practical problems are represented using arithmetic circuits (circuits consisting of addition and multiplication gates), R1CS is a natural choice to generalize such circuits. Given a finite field \(\mathbb{F}\), an R1CS instance is a tuple \(\mathbb{R}=(\mathbb{F},A,B,C,io,m,n)\) where \(io\) is the public input of the instance (relation), and \(A,B,C\in\mathbb{F}^{n\times m}\), where \(m\geq|io|+1\) denotes the number of variables in the instance and \(n\) is the number of non-zero entries in each matrix. The instance is satisfiable if there exists a witness \(w\in\mathbb{F}^{m-|io|-1}\) such that \((A\cdot z)\circ(B\cdot z)=(C\cdot z)\) for \(z=(1,io,w)\in\mathbb{F}^{m}\). Here the witness \(w\) is called the solution to the instance.
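To make the R1CS definition concrete, here is a minimal numpy sketch (our own toy example, not the paper's implementation) checking satisfiability of a single constraint that encodes \(x_{1}\cdot x_{2}=x_{3}\) over a small prime field:

```python
import numpy as np

p = 97  # a small prime standing in for the SNARK field

# z = (1, io, w): public input io = (x3,), witness w = (x1, x2)
z = np.array([1, 6, 2, 3])  # claims 2 * 3 = 6

# One constraint row per matrix: (A z) o (B z) = (C z) selects x1 * x2 = x3
A = np.array([[0, 0, 1, 0]])  # picks x1
B = np.array([[0, 0, 0, 1]])  # picks x2
C = np.array([[0, 1, 0, 0]])  # picks x3

lhs = (A @ z) * (B @ z) % p   # Hadamard product of Az and Bz
rhs = (C @ z) % p
print(bool(np.all(lhs == rhs)))  # True: the instance is satisfied
```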
**Our zero-knowledge proof backend.** Zero-knowledge proof (ZKP) systems come with trade-offs in terms of prover time, verification time, and proof size. For our construction, we need a zero-knowledge proof system that generates proofs as efficiently as possible. Furthermore, in the application scenarios we proposed, many situations require the integration of _blockchain_ and _smart contracts_. This necessitates our system to be non-interactive and to have a relatively small proof size. Considering the above factors, we have chosen Groth16 [24] and Spartan [27] as the backends for our ZKP system in the implementation and evaluation. The former is the most widely used zk-SNARK system in industry, with a constant proof size and optimal verification cost, while the latter has nearly the fastest open-sourced ZKP prover. ### _Commitment schemes_ A commitment scheme allows one to commit to a value without revealing any additional information about it. The commitment can be opened later. The scheme consists of a series of algorithms that satisfy the following properties: _binding_ and _hiding_. The former means that a commitment cannot be opened to two different values; the latter means that the commitment reveals no information about the value it opens to. The scheme is of further interest if it has an additional property called _succinctness_, meaning that the size of the commitment is much smaller than the size of the value it opens to. In this paper we mainly consider the two algorithms \(\mathsf{KGen}\) and \(\mathsf{Commit}\): * \(\mathsf{KGen}(1^{\lambda})\rightarrow(\mathsf{ck})\) generates a commitment key \(\mathsf{ck}\). * \(\mathsf{Commit}(\mathsf{ck},x;r)\to c\) takes as input a value \(x\) and generates the commitment \(c\) upon some randomness \(r\). **Commit-and-Prove SNARK.** [37] introduces a popular framework that allows one to construct a SNARK in a modular, compositional way. The framework is called _Commit-and-Prove SNARK_ (CP-SNARK). The idea is: given a commitment scheme \(\mathsf{C}\), one can first commit to the witness of the NP relation and then use the commitment as an input to the SNARK. The SNARK then proves that the commitment opens to a valid witness. The framework is secure if both the commitment scheme and the underlying SNARK are secure. ## 5 Building block: zk-SNARK for high precision floating-point computation In this section, a new design is introduced, taking \(O(w)\) gates for a single floating-point operation in a ZKP system. Here \(w\) is the precision of the floating-point number. **Motivation.** To explain our motivation, let us first consider a method [33] of aggregating answers to infer the truth of the problem as follows. Given the weight \(q_{j}\) of each worker, the ground truth \(v^{*}\) of each object can be updated as below: \[v_{i}^{*}=\frac{\sum_{j=1}^{m}q_{j}*v_{i}^{j}}{\sum_{j=1}^{m}q_{j}} \tag{1}\] This method takes worker quality into consideration: the truth of the problem is obtained by aggregating the answers of each worker with the qualities as weights. There are other algorithms with more complex computation processes than this instance (see 6.2). However, one will quickly notice that if the values involved in the computation are all floating-point numbers, the above computation cannot be directly expressed in an arithmetic circuit. This is because ZKP systems are usually built on certain finite fields, which requires the wires used in the circuit to be integers, not real numbers. Additionally, arithmetic circuits do not directly support division operations. Some works use a "scale" method to solve the problem, a common trick in the literature [8, 20]: the values in the circuit are scaled by a large enough _scale factor_ \(L\), so that the real numbers can be represented as integers. This is a special case of the _fixed-point number_. However, fixed-point numbers have inferior precision, which is unsuitable for our work. Particularly, fixed-point numbers struggle with continuous multiplication operations. In a nutshell, the above method lacks generality.
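A small sketch (our own toy numbers) of why a fixed scale factor breaks down under repeated multiplication: every product must be rescaled by \(L\), and each rescaling floors away a little precision:

```python
L = 1000  # scale factor: a real x is stored as the integer round(x * L)

def fx_mul(a, b):
    return (a * b) // L   # every product must be rescaled, flooring once

exact, fixed = 1.1, round(1.1 * L)
for _ in range(30):       # a chain of 30 further multiplications by 1.1
    exact *= 1.1
    fixed = fx_mul(fixed, round(1.1 * L))
print(exact, fixed / L)   # the fixed-point result drifts from the exact product
```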
As for other works with similar objectives, we give a brief review and comparison in 1.3. In addition, floating-point computation is also crucial for other fields such as finance, auditing, mathematics, and AI/ML. To apply verifiable computing techniques to these applications, a generic method to support floating-point computation in ZKP systems is essential. ### _Our methods_ Our methods are based on the work of [32], yet we have made several enhancements to render it a _non-interactive_ system. This modified system can be seamlessly integrated with any existing ZKP backend, while still having comparable efficiency. In addition, we extend the scheme to support the division operation (which is not implemented in the former work). Finally, we fully implement our methods. **Floating-point number.** A floating-point number is represented as \(v=s*2^{e-w}\), where \(w\) is the _precision_, \(e\) is the _exponent_ and \(|s|\in[2^{w-1},2^{w})\) is a \(w\)-bit integer. One thing worth noting is that the _most-significant_ bit of \(s\) is always \(1\) due to the _normalization_ operation. For example, in the IEEE-754 standard, for a 32-bit floating-point number, \(w=23\) and \(e\) is an 8-bit integer. For simplicity, in this paper we assume \(s\) is a positive number and we omit the fixed \(w\) in the exponent during the description. So a floating-point number can be represented as a pair \((s,e)\) (the same as in [32]). **Method from relative error model.** We first provide a brief summary of [32]'s design. Unlike the _deterministic_ methods [30], which prove that the computation is actually performed according to the IEEE-754 standard (which introduces the main overhead), [32] proposes a _relative error_ model. The idea is that, to prove a computation on two values \(a,b\), instead of proving the computation \(c=g(a,b)\) directly, it asks the prover to provide a value \(c\) that satisfies the following relation: \[|g(a,b)-c|\leq\delta|g(a,b)| \tag{2}\] Here, \(g(a,b)\) is the theoretical computation result, e.g., \(g(a,b)=a\cdot b\) for a multiplication, and \(\delta\) is a common input called the _relative error_ (or precision). The verifier will accept \(c\) as the computation result if the above relation holds. Taking a multiplication as an example, for \(w\)-bit floating-point numbers, (2) can be represented as: \[|s_{a}\cdot 2^{e_{a}}\cdot s_{b}\cdot 2^{e_{b}}-s_{c}\cdot 2^{e_{c}}|\leq\delta|s_{a}\cdot 2^{e_{a}}\cdot s_{b}\cdot 2^{e_{b}}| \tag{3}\] Here we replace \(a,b,c\) with their corresponding floating-point representations. To prove that the above relation holds, [32] first rewrites the relation as: \[|s_{a}\cdot s_{b}-s_{c}\cdot 2^{e_{c}-e_{a}-e_{b}}|\leq\delta|s_{a}\cdot s_{b}| \tag{4}\] Then the prover needs to calculate the auxiliary inputs: \[\lambda =e_{c}-e_{a}-e_{b}, \tag{5}\] \[x =\delta^{-1}\cdot(s_{a}\cdot s_{b}-s_{c}\cdot 2^{\lambda}),\] \[y =s_{a}\cdot s_{b},\] \[z =(y-x)(x+y)\] Now the prover only needs to give a range proof of: \(z>0\iff(y-x)(x+y)>0\iff|x|<|y|\). This approach sidesteps the need to perform a standard computation in the circuit for each bit of a floating-point number (like [30]), thereby significantly reducing the overhead. To give this range proof, they use a _three-square theorem_, proving that \(z\) can be represented as the sum of three squares. This introduces additional overhead for finding three integers \(\{r_{i}\}_{3}\) satisfying \(z=r_{1}^{2}+r_{2}^{2}+r_{3}^{2}\), which can reportedly be done in \(O(w^{2})\) time, where \(w\) is the bit-length of the number. The approach can obtain the properties of succinctness and zero-knowledge with additional effort.
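To fix ideas, here is a small sketch (our own toy illustration) of the \((s,e)\) pair representation with a normalized \(w\)-bit significand:

```python
import math

W = 23  # significand precision, as in IEEE-754 single precision

def encode(v):
    """Represent v > 0 as (s, e) with v ~= s * 2**e and
    2**(W-1) <= s < 2**W (the most-significant bit of s is 1)."""
    e = math.floor(math.log2(v)) + 1 - W
    s = math.floor(v * 2.0 ** (-e))
    return s, e

def decode(s, e):
    return s * 2.0 ** e

s, e = encode(3.14)
print(s, e)          # 6585057 -21
print(decode(s, e))  # ~3.14, with relative error below 2**-(W-1)
```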
However, we found that their methods are difficult to integrate with existing ZKP backends. First, their system requires a two-round interactive process, so it cannot be considered a SNARK construction. Secondly, their protocol demands a substantial communication overhead (approximately double that of regular protocols), and extra work is required to achieve sub-linear verification efficiency for the verifier. Most importantly, we found that the process of finding the three-square decomposition of a number actually introduces additional overhead for the prover. If the optimization method for the _primality test_ is not introduced, it becomes the main bottleneck when dealing with large-scale numbers (e.g., \(w=40\)). Finally, their scheme cannot promise that the output of a computation is still a number conforming to the IEEE-754 standard (with a 23-bit \(w\)). Therefore, a new method should be considered that follows the standard and can be more easily integrated with existing ZKP systems while maintaining acceptable efficiency. **Our design.** We start our design from the model (2) introduced by [32] as well. Our observation is: if we want to make sure that \(c=(s_{c},e_{c})\) (which is a non-deterministic input) is actually a valid answer for \(g(a,b)\), we can ask the prover to prove that \(c\) is calculated according to some correct process. For example, to calculate the multiplication \(a\cdot b\) in plain math, we can first calculate \(\hat{s_{c}}=s_{a}\cdot s_{b}\) and then normalize the result as \(s_{c}=\hat{s_{c}}*2^{-\theta}\), where \(\theta\) is the normalization factor (the number of right shifts). The exponents are then calculated as \(e_{c}=e_{a}+e_{b}-\theta\). However, \(s_{c}\) computed this way may not be an integer, so we need a process called _rounding_. Here we round \(s_{c}\) down to the floor integer \(\lfloor s_{c}\rfloor\). To convince the verifier that the rounding is valid, we apply the relative error model (2). Now we need to prove: \[s_{a}\cdot s_{b}\cdot 2^{-\theta}-s_{c} \leq\delta(s_{a}\cdot s_{b}\cdot 2^{-\theta})\iff \tag{6}\] \[s_{a}\cdot s_{b}-s_{c}\cdot 2^{\theta} \leq\delta(s_{a}\cdot s_{b}),\] \[e_{c}+\theta =e_{a}+e_{b}\] The above constraints can be added to an R1CS instance and argued by P and V through a zk-SNARK backend later. It can be seen that we have obtained (4) in a simpler form by another route. Addition (subtraction) and division can be handled in a similar way. For the division of \(a,b\), we first scale \(s_{a}\) by \(2^{w}\) so that the division result \(\hat{s_{c}}=\frac{s_{a}\cdot 2^{w}}{s_{b}}\) holds in a finite field. After that we apply normalization to \(s_{c}\) as above. The exponents should satisfy \(e_{c}+\theta=e_{a}+w-e_{b}\). So the instance should include the following constraints: \[s_{a}\cdot 2^{w}-s_{c}\cdot s_{b}\cdot 2^{\theta} \leq\delta(s_{a}\cdot 2^{w}), \tag{7}\] \[e_{c}+\theta =e_{a}+w-e_{b}\]
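A minimal sketch (our own illustration) of the prover-side computation in the multiplication design above, producing \((s_{c},e_{c},\theta)\) and checking the constraints of (6):

```python
W = 23
DELTA_INV = 2 ** (W - 1)  # delta = 2**-(W-1)

def mul_witness(sa, ea, sb, eb):
    """Honest prover: multiply significands, normalize by a right shift
    of theta bits, floor-round, and track exponents as in (6)."""
    s_hat = sa * sb                  # in [2**(2W-2), 2**(2W))
    theta = s_hat.bit_length() - W   # theta is W-1 or W (Lemma 5.1)
    sc = s_hat >> theta              # floor(s_hat * 2**-theta)
    ec = ea + eb - theta             # exponent bookkeeping of (6)
    return sc, ec, theta

def check_mul(sa, ea, sb, eb, sc, ec, theta):
    # (6): s_a*s_b - s_c*2**theta <= delta*(s_a*s_b), e_c+theta = e_a+e_b
    ok_round = DELTA_INV * (sa * sb - sc * 2 ** theta) <= sa * sb
    return ok_round and ec + theta == ea + eb

sa, ea = 6585057, -21   # ~3.14 encoded as in the earlier sketch
sb, eb = 5683281, -21   # ~2.71
print(check_mul(sa, ea, sb, eb, *mul_witness(sa, ea, sb, eb)))  # True
```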
For additions (subtractions), we first scale the operands to the same exponent, then add (subtract) them together, and finally round the result. Taking the addition of \(a,b\) as an example, suppose \(e_{a}\geq e_{b}\); we first scale \(s_{a}\) by \(2^{\lambda}\) where \(\lambda=e_{a}-e_{b}\), and then we can calculate \(\hat{s_{c}}=s_{a}\cdot 2^{\lambda}+s_{b}\) in a finite field. After that we apply normalization to \(s_{c}\) in the same way. Here the exponents should satisfy \(e_{c}+\theta=e_{b}\). The corresponding instance should include the following constraints: \[\begin{split} s_{a}\cdot 2^{\lambda}+s_{b}-s_{c}\cdot 2^{\theta}&\leq\delta(s_{a}\cdot 2^{\lambda}+s_{b}),\\ e_{c}+\theta&=e_{b},\\ e_{b}+\lambda&=e_{a}\end{split} \tag{8}\] To see that the \(\leq\) relation in the above constraints holds, we introduce a \(\mathsf{compare}\) gadget: to prove that \(x\leq y\) where \(x,y\) are \(w\)-bit integers, we can calculate \(m=x-y+2^{w}\). Then we bit-decompose \(m\) into \(w+1\) bits following the techniques in the literature [1]. If the most significant bit is \(0\), then \(x\leq y\) holds; otherwise, \(x>y\) holds. The above relation can be restricted by the following constraints: \(m=\sum_{i=0}^{w}2^{i}\cdot m_{i}\) and \((1-m_{i})\cdot m_{i}=0\) for each \(m_{i}\). In this way we avoid the overhead of finding the three-square decomposition and the subsequent actions, and the whole relation can be enforced into an R1CS directly, which can easily be applied in existing zk-SNARK backends. ### Circuits for floating-point computation Now we formally describe the circuits for common operations in floating-point computation. We first set some concrete parameters. Given the floating-point numbers \(a,b,c\), if \(s_{a},s_{b},s_{c}\) are all \(w\)-bit integers, we can set \(\delta=2^{-(w-1)}\). For example, in a 32-bit floating-point computation, we can set \(w=23\) and \(\delta=2^{-22}\). This precision is enough for most applications. **Optimization for circuits.** It can be seen that in relations (6), (7), (8), there is a need to restrict the exponential relations between \((\theta,2^{\theta})\) and \((\lambda,2^{\lambda})\), and it takes a lot of constraints to enforce such a relation. We optimize the circuits by an observation similar to that of [32]; this property comes from the normalization performed in each operation. We first take the multiplication gate as an example. **Lemma 5.1**.: _In relation (6), if \(s_{a},s_{b},s_{c}\) are all \(w\)-bit integers, then \(\theta\in\{w-1,w\}\) always holds._ The same idea applies to Lemmas 5.2 and 5.3, and similar observations apply to the other gates as well; we leave the proofs to appendix A. With these observations, we can enforce the constants \(w,w-1\) and \(2^{w},2^{w-1}\) into the circuit directly. Then we restrict the value of \(\theta\) by adding a selector \((\theta-w)(\theta-(w-1))=0\). This saves the overhead of restricting the exponential relation between \((\theta,2^{\theta})\). Additionally, we need to restrict the range of \(s_{c}\) to \([2^{w-1},2^{w})\), which can be done with a \(w\)-bit decomposition again. Thus we obtain the optimized circuit for the multiplication gate from relation (6). We leave the construction of the gadgets used to appendix B. In step 3 we restrict the value of \(\mathsf{mid}\) using an interpolation method. We can derive the circuits for division gates in a similar way.
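The \(\mathsf{compare}\) gadget's logic can be sanity-checked in plain Python (our own sketch; the circuit enforces the same logic via the booleanity constraints \((1-m_{i})\cdot m_{i}=0\)):

```python
W = 23

def compare(x, y):
    """The compare gadget for W-bit integers: bit-decompose
    m = x - y + 2**W into W+1 bits; the MSB tells the order."""
    m = x - y + 2 ** W
    bits = [(m >> i) & 1 for i in range(W + 1)]          # the m_i wires
    assert m == sum(b << i for i, b in enumerate(bits))  # decomposition
    return 1 - bits[W]   # MSB of m distinguishes the two cases

print(compare(5, 9), compare(9, 5))  # 1 0
```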
**Lemma 5.2**.: _In relation (7), if \(s_{a},s_{b},s_{c}\) are all \(w\)-bit integers, then \(\theta\in\{0,1\}\) always holds._ \[\begin{split}\mathsf{Mul}_{w}(a:(s_{a},e_{a}),b:(s_{b},e_{b}),c:(s_{c},e_{c}))\text{:}\\ 1.\;e_{c}+\theta:=e_{a}+e_{b}\\ 2.\;(\theta-w)(\theta-(w-1)):=0\\ 3.\;\mathsf{mid}:=(\theta-(w-1))\cdot 2^{w}-(\theta-w)\cdot 2^{w-1}\\ 4.\;x:=s_{a}\cdot s_{b},\quad y:=s_{c}\cdot\mathsf{mid},\quad z:=\delta^{-1}\cdot(x-y)\\ 5.\;1:=\mathsf{compare}_{2w}(x,z)\\ 6.\;1:=\mathsf{bit\_decompose}_{w}(s_{c},\{s_{ci}\})\end{split}\] We write \(\mathsf{mid}\) for \(2^{\theta}\). The \(\mathsf{compare}\) and \(\mathsf{bit\_decompose}\) gadgets take \(O(w)\) gates, and the whole circuit contains approximately \(3w\) constraints. \[\begin{split}&\mathsf{Div}_{w}(a:(s_{a},e_{a}),b:(s_{b},e_{b}),c:(s_{c},e_{c}))\text{:}\\ 1.\;e_{c}+\theta:=e_{a}+w-e_{b}\\ 2.\;\theta\cdot(\theta-1):=0\\ 3.\;\mathsf{mid}:=\theta+1\\ 4.\;x:=s_{a}\cdot 2^{w},\quad y:=s_{c}\cdot s_{b}\cdot\mathsf{mid},\quad z:=\delta^{-1}\cdot(x-y)\\ 5.\;1:=\mathsf{compare}_{2w}(x,z)\\ 6.\;1:=\mathsf{bit\_decompose}_{w}(s_{c},\{s_{ci}\})\end{split}\] The whole circuit contains approximately \(3w\) constraints. **Lemma 5.3**.: _In relation (8), if \(s_{a},s_{b},s_{c}\) are all \(w\)-bit integers, then \(\theta\in\{\lambda,\lambda+1\}\) always holds._ For addition gates, we first obtain Lemma 5.3. However, we still have to handle the relation between \((\lambda,2^{\lambda})\). Because the gap between the two exponents can be arbitrarily large, we cannot hard-wire \(\lambda\) into the circuit this time. Thus we use a repeated-squaring method to compute the value of \(2^{\lambda}\) from the given difference \(\lambda=e_{a}-e_{b}\). Given \(\lambda\), we can compute \(2^{\lambda}\) with \(O(\log\lambda)\) multiplications. In appendix B we give this construction. Lastly, we supplement with another optimization. In practical computation, we cannot ensure that \(e_{a}\geq e_{b}\). Hence, we let \((s_{a},e_{a})\) and \((s_{b},e_{b})\) pass through a permutation circuit to ensure the above relationship holds. This is inspired by [38]. Putting things together, we obtain the addition circuit. To further reduce the overhead of the repeated-squaring method, we can restrict \(\lambda\) to a certain range. That is, if \(\lambda\) exceeds this range, we let \(c=\max\left(a,b\right)\) directly. This makes sense because a larger \(\lambda\) means the magnitude difference between the two operands is bigger; when the difference is too large, we can ignore the smaller operand. Further, we can demonstrate that once \(\lambda\) surpasses a specified threshold \(\epsilon\), the accuracy of the addition operation meets our precision requirements. For example, if we need a precision \(\delta=2^{-(w-1)}\), then we can set \(c=\max\left(a,b\right)\) for all \(a,b\) satisfying \(e_{a}-e_{b}>w\). Thus we decrease the overhead from \(O(\log\lambda)\) to \(O(w)\) (see appendix B for details). **Analysis.** Our circuits take \(O(w)\) gates for a single operation. We instantiate and implement the above circuits for IEEE-754 32-bit floating-point numbers (\(w=23\)). It takes 131 constraints to implement the addition circuit; the multiplication circuit takes 82 constraints and the division circuit takes 83 constraints. This improves the overhead by nearly \(75\times\) compared with [30], and it is comparable to [32] (estimated at roughly 89 constraints for addition and 35 constraints for multiplication under a finite field where \(p=2^{384}\)).
However, they also incur an additional \(O(w^{2})\) overhead for the subsequent check and cannot be conveniently integrated with existing systems. Our floating-point computation is secure if \(\delta=2^{-(w-1)}\), and this continues to hold when integrated with a ZKP system as long as the field is large enough. By combining the design with an existing zk-SNARK system, we obtain the properties of completeness, soundness and zero-knowledge. ## 6 Zero-knowledge truth inference At this point, we are able to fully express the constraints required for floating-point computations through the R1CS system. In this section, we detail the construction of zero-knowledge truth inference and instantiate it for a variety of existing truth inference methods. ### _Definition_ **A classical framework of truth inference.** Recent research has highlighted a phenomenon: the quality of workers significantly impacts the accuracy of truth inference algorithms. This means that we should place more trust in the answers provided by high-quality workers, which would allow us to infer the real answers to questions more accurately.

```
0: workers' answers \(\mathcal{V}\) and prior factors \(\mathcal{Q}\), ...
0: inferred truth \(\mathcal{V}^{*}\) and updated factors \(\mathcal{Q}\), ...
1: iter := 0
2: while true do
3:   iter := iter + 1
4:   // Update the truth \(\mathcal{V}^{*}\)
5:   \(\mathcal{V}^{*}\) := update_truth(\(\mathcal{V}\), \(\mathcal{Q}\), ...)
6:   // Update the factors (e.g. worker's quality)
7:   \(\mathcal{Q}\), ... := update_factors(\(\mathcal{V}\), \(\mathcal{V}^{*}\))
     // Halt if the algorithm converges or reaches the maximum iteration
8:   if Converged or iter = max_iter then
9:     break
10:  end if
11: end while
12: return \(\mathcal{V}^{*}\), \(\mathcal{Q}\), ...
```
**Algorithm 1** Truth inference framework

However, a subsequent challenge arises: the quality of the workers is also an unknown variable. How can we depict its impact on the algorithm? One solution is to assess the quality of the workers through prior knowledge and continually correct these values during the execution of the algorithm. Based on the aforementioned understanding, most existing works typically employ a two-stage framework (depicted as Alg. 1). The framework works as follows: first, we initialize the quality of the workers and other factors (such as the difficulties of the problems) based on the _prior knowledge_. Then, we iteratively update the quality of the workers and the truth of the tasks until the algorithm converges: * **update truth:** Initially, we fix all considered factors, then infer the true values of the problems. The process of inference may involve different formulas and methods for different problems. The actual formulas used can be quite complex, involving many numerical computations. * **update factors:** Next, we update the factors based on the inferred truth values and the answers given by each worker. For example, we can adjust the quality of the workers based on the correctness of their answers; workers with a higher accuracy rate are updated to have a higher quality. The convergence condition is that the inferred truth of the tasks does not change or the maximum number of iterations is reached.
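A minimal Python sketch of this two-stage framework (our own illustration; `update_truth` and `update_factors` are placeholders for an algorithm-specific pair such as the CRH rules instantiated later):

```python
def truth_inference(answers, quality, update_truth, update_factors,
                    max_iter=100):
    """Iterate Algorithm 1: alternately re-estimate truths and factors."""
    truths = None
    for _ in range(max_iter):
        new_truths = update_truth(answers, quality)    # factors fixed
        quality = update_factors(answers, new_truths)  # truths fixed
        if new_truths == truths:                       # converged
            break
        truths = new_truths
    return truths, quality
```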
**Our scheme.** Given the above framework, we propose the zkTI scheme. Formally speaking, let \(\mathbb{F}\) be a finite field. Let \(\mathcal{T}\) be the task set requested for annotation, and let \(\mathcal{V}\) be the answers given by the workers \(\mathcal{W}\). \(\mathcal{Q}=\{q_{j}\}\) represents the quality of each worker. \(x\) is a part of \(\mathcal{V}\) representing the answer(s) from one or more workers, and \(x\) may be \(\emptyset\). \(f\) is one round of a given truth inference algorithm; it takes as input \(\mathcal{V}\) and the other factors it cares about, outputs the truth of each task, and may update the factors used (see Alg. 1 for details). Let \(y\) be the output of the algorithm on the input answers, i.e., \(y=f(\mathcal{V},\mathcal{Q},...)\). The output \(y\) contains the inferred truth for each task, \(\mathcal{V}^{*}=\{v_{i}^{*}\}\). For those algorithms considering workers' quality and other factors, \(y\) may also contain the updated factors, i.e., \(y:=\{\mathcal{V}^{*},\mathcal{Q},...\}\). A zero-knowledge truth inference (zkTI) scheme consists of the following algorithms: * \(\mathsf{pp}\leftarrow\texttt{zkTI.Setup}(1^{\lambda})\): given the security parameter, generate the public parameters. * \(\mathsf{com}_{\mathcal{V}}\leftarrow\texttt{zkTI.Commit}(\mathcal{V},\mathsf{pp})\): commit to the answers \(\mathcal{V}\) as a commitment \(\mathsf{com}_{\mathcal{V}}\). We omit the randomness needed here. * \((y,\pi)\leftarrow\texttt{zkTI.Prove}(\mathcal{V},f,\mathsf{pp})\): the prover runs the truth inference algorithm and generates the output \(y\). He then generates a proof \(\pi\) to prove that \(y\) is the output of \(f\) on \(\mathcal{V}\) and the other factors. * \(\{0,1\}\leftarrow\texttt{zkTI.Verify}(\mathsf{com}_{\mathcal{V}},x,y,\pi,\mathsf{pp})\): the verifier validates the proof \(\pi\) of the output \(y\). Here the verifier may hold a part \(x\) of the original answers. This scheme should have the properties of completeness, soundness and zero-knowledge, as for traditional zero-knowledge proofs. We omit the formal definition here. ### _Algorithms Representation_ **Construction.** To apply our scheme to existing algorithms, we first instantiate a commitment scheme, such as one based on a collision-resistant hash function [39], and commit to the answers \(\mathcal{V}\) as a commitment \(\mathsf{com}_{\mathcal{V}}\). Then, after receiving all the answers, the aggregator \(\mathcal{A}\) runs the truth inference algorithm \(f\) and generates the output \(y\). For simplicity, in this section we only consider the impact of workers' quality and reliability on the algorithm, so the result \(y\) can be rewritten as \(y:=\{\mathcal{V}^{*},\mathcal{Q}\}=f(\mathcal{V},\mathcal{Q})\). Both parties first agree on the number of rounds the algorithm needs to run to converge. To validate the correctness of the computational result in one round, the aggregator \(\mathcal{A}\) now needs to invoke a zero-knowledge proof. In each round of the algorithm, we treat the relation he needs to prove as \(\mathcal{R}\), where: \[\mathcal{R}(x,y,\mathsf{com}_{\mathcal{V}};\mathcal{V})=1 \iff\] \[y=f(\mathcal{V},\mathcal{Q})\wedge\mathsf{Commit}(\mathcal{V})=\mathsf{com}_{\mathcal{V}}\wedge x\subseteq\mathcal{V}\] In the first round, the input quality \(\mathcal{Q}\) is initialized to an agreed-upon value. For all floating-point computations involved in the algorithm (addition, multiplication and division), the aggregator \(\mathcal{A}\) can replace the gate with the floating-point computation circuits designed in section 5. After the transformation from algorithms to arithmetic circuits, the verifier and the aggregator (the prover) can invoke the zero-knowledge proof protocol to verify the relation without leaking the information of the non-public \(\mathcal{V}\).
We give our construction formally in Protocol 1. **Instantiation.** The majority of truth inference algorithms can be represented within the framework we have proposed. Here, we instantiate our scheme with two existing algorithms [33, 36].

```
Protocol 1 (Zero-Knowledge Truth Inference (zkTI)).
Let \(\lambda\) be a security parameter, \(\mathbb{F}\) be a finite field, \(\mathcal{V}\) be the answers from the data sources, and \(f\) be a truth inference algorithm converted into the circuit \(\mathcal{R}\). \(\mathsf{P},\mathsf{V}\) are the prover and the verifier. \(\mathsf{ZKP}\) is the underlying zero-knowledge proof system. \(\mathsf{C}\) is the underlying commitment scheme.
* \(\mathsf{pp}\leftarrow\texttt{zkTI.Setup}(1^{\lambda})\): let \(\mathsf{pp}_{1}=\mathsf{ZKP.Setup}(1^{\lambda})\), \(\mathsf{pp}_{2}=\mathsf{C.KGen}(1^{\lambda})\), \(\mathsf{pp}=\{\mathsf{pp}_{1},\mathsf{pp}_{2}\}\).
* \(\mathsf{com}_{\mathcal{V}}\leftarrow\texttt{zkTI.Commit}(\mathcal{V},\mathsf{pp})\): \(\mathsf{com}_{\mathcal{V}}=\mathsf{C.Commit}(\mathcal{V},\mathsf{pp}_{2})\).
* \((y,\pi)\leftarrow\texttt{zkTI.Prove}(\mathcal{V},f,\mathsf{pp})\): \(\mathsf{P}\) receives the answers \(\mathcal{V}\) and runs the algorithm \(f\) to get the inferred result along with the quality of each data source, \(y=f(\mathcal{V})\). Let \(w\) be the involved witness; \(\mathsf{com}_{w}=\mathsf{ZKP.Commit}(w,\mathsf{pp}_{1})\) and \(\mathsf{P}\) sends it to \(\mathsf{V}\). The floating-point computation is converted as part of the circuit \(\mathcal{R}\). After receiving the randomness for checking the permutation in addition circuits, \(\mathsf{P}\) invokes \(\mathsf{ZKP.Prove}(\mathcal{R},w,\mathsf{pp})\) to get the proof \(\pi\). \(\mathsf{P}\) sends \(\pi,y\) to \(\mathsf{V}\).
* \(\{0,1\}\leftarrow\texttt{zkTI.Verify}(\mathsf{com}_{\mathcal{V}},x,y,\pi,\mathsf{pp})\): \(\mathsf{V}\) accepts \(y,\pi\) if \(\mathsf{ZKP.Verify}(\mathcal{R},\mathsf{com}_{\mathcal{V}},\mathsf{com}_{w},x,y,\pi,\mathsf{pp})\) outputs 1; otherwise it rejects.
```

The first algorithm, CRH [33], is commonly used when discussing the security and privacy of truth inference [8, 15, 18]. After a simple adjustment to the algorithm, the process can be represented as: * **update truth:** Given the quality \(q_{j}\) of each worker, the inferred truth of each task is updated as in formula (1), in a weighted-average manner. * **update quality:** Given \(v_{i}^{*}\) as the inferred truth, \(\mathcal{Q}=\{q_{j}\}\) is updated as: \[q_{j}=\log(\frac{\sum_{j^{\prime}=1}^{m}\sum_{i=1}^{n}d(v_{i}^{j^{\prime}},v_{i}^{*})}{\sum_{i=1}^{n}d(v_{i}^{j},v_{i}^{*})})\] (9) Here \(d\) is a distance function for calculating the distance between a given answer and the inferred truth. We can leave the logarithm operation to the verifier, since it is a public function. As can be seen, every step in the entire process of the algorithm can be precisely replaced by our floating-point computation framework. To demonstrate that our work can be applied to more complex algorithms, such as those with many successive multiplications, we also instantiate it for a classic algorithm called ZC [36]. This algorithm employs a probability-based calculation method. For a choice-making task, in the update truth process of the algorithm, it first calculates the probability of each choice being the correct answer using the following formula: \[\Pr(v_{i}^{*}=c_{k})=\prod_{j=1}^{m}(q_{j})^{\mathbb{1}(v_{i}^{j},c_{k})}\cdot(1-q_{j})^{1-\mathbb{1}(v_{i}^{j},c_{k})} \tag{10}\] Subsequently, there is one more normalization step, and the inferred truth of the task is the choice with the maximum probability. It can be calculated that this process requires \(m\cdot l\) successive multiplications, and the total number of multiplications needed is \(O(n\cdot m\cdot l)\).
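A small sketch (our own illustration) of ZC's truth update for one task, showing the chain of multiplications that formula (10) requires:

```python
def zc_update_truth(votes, quality, choices):
    """votes[j] is worker j's choice; quality[j] ~ Pr(worker j is right).
    Returns the normalized probability of each choice (formula (10))."""
    probs = {}
    for c in choices:
        p = 1.0
        for v, q in zip(votes, quality):   # m successive multiplications
            p *= q if v == c else (1.0 - q)
        probs[c] = p
    total = sum(probs.values())            # normalization step
    return {c: p / total for c, p in probs.items()}

probs = zc_update_truth(["A", "A", "B"], [0.9, 0.8, 0.6], ["A", "B"])
print(max(probs, key=probs.get), probs)    # 'A' wins with probability 0.96
```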
Therefore the process is difficult to complete accurately with existing methods such as fixed-point-based techniques. However, we found that under our scheme, the aforementioned calculations can be directly converted into corresponding constraints and added to the circuit; thereby, the process can be proven efficiently. **Put everything together.** Since ZKP systems are generally conducted on finite fields of a specified size, to make our floating-point relations (6), (7), (8) hold on this field (so that the computation on this field does not wrap around), we need the field prime \(p>2^{3w+1}\). Combining everything together, we obtain a zkTI scheme that has the properties of completeness, soundness and zero-knowledge (Theorem 6.1). We leave the security proof of this theorem to appendix C. **Theorem 6.1**.: _If the underlying ZKP system is built on a finite field with \(p>2^{3w+1}\), the zero-knowledge truth inference protocol has the properties of completeness, soundness and zero-knowledge._ Roughly speaking, both of the above algorithms need to traverse the \(m\)-sized worker set and the \(n\)-sized task set, and update the truth of the tasks and the quality of the workers. So the complexity of the algorithms is approximately \(O(m\cdot n)\). The running time may vary depending on the actual construction of the algorithm. Converting the scheme into a circuit, the overall complexity of the circuit is \(O(m\cdot n\cdot w)\), combined with our floating-point computation model. ### Extensions We also explored some extensible aspects for integrating our scheme with more complex algorithms and applying them in broader scenarios. In this part, we consider how to convert algorithms that include numerical optimization methods into circuits, and how to integrate some privacy protection techniques to further safeguard workers' original data from leakage. **Extension for numerical methods.** There is a class of algorithms [34, 35] that uses the method of _maximum likelihood estimation_ to optimize various factors during inference (such as the quality and reliability of workers). In [35], the authors take the difficulty of tasks into consideration. They portray the probability of a worker giving the correct answer for a task as: \(\Pr(v_{i}^{*}=v_{i}^{j})=\frac{1}{1+e^{-q_{j}\beta_{i}}}\), where \(\beta_{i}\) denotes the difficulty of task \(i\). And in the process of updating various factors, the algorithm needs to find the parameters that minimize the value of a function \(G(\mathcal{Q},\beta)\) (refer to [35] for the complete formula), which requires the use of numerical optimization methods (such as _gradient descent_). This is a significant burden for the circuit. To address such problems, we use the idea mentioned in [40]. The idea is: to prove that the parameters \(\mathcal{Q},\beta\) we give are correct, we can prove that the certifying condition [41] holds: \(\nabla G(\mathcal{Q},\beta)=0\), instead of expressing the whole optimization process using a circuit. This method can significantly reduce the prover's cost. However, in our experiments, we found that due to the limitations of existing numerical methods, it is difficult to obtain precise numerical solutions, so the above relation does not always hold. We need to modify the above relationship to: \(|\nabla G(\mathcal{Q},\beta)|\leq\epsilon\). Here \(\epsilon\) is a given threshold (such as \(10^{-1}\)).
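A tiny sketch (our own illustration, with a stand-in objective and gradient) of this certificate idea: rather than re-running the optimizer, one only checks the first-order condition at the claimed solution:

```python
def grad_norm_inf(grad_G, params):
    """Max absolute entry of the gradient at the claimed optimum."""
    return max(abs(g) for g in grad_G(params))

# Stand-in objective G(p) = (p0 - 1)^2 + (p1 + 2)^2 with known gradient.
grad_G = lambda p: [2 * (p[0] - 1), 2 * (p[1] + 2)]

claimed = [0.999, -2.001]            # output of some numerical optimizer
eps = 1e-1                           # the certifying threshold
print(grad_norm_inf(grad_G, claimed) <= eps)  # True: certificate accepted
```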
Furthermore, we also discovered that when there are too many parameters, existing numerical calculation methods often cannot satisfy the relationship either. Improving this method may become an independent point of interest. **Extension for sigmoid function.** In the above algorithm, we use the sigmoid function \(\sigma(x)=\frac{1}{1+e^{-x}}\) to represent the probability of a worker giving the correct answer for a task. This sigmoid function is often used as an activation function in AI/ML work. However, due to its non-linearity, this function is difficult to represent in a circuit, and hence challenging to apply in MPC and ZKPs. To address this issue, we consider the _spline_ [42] idea introduced by [43]. To express this sigmoid function, we can replace it with an \((m+1)\)-_piece splined function_ (we omit the derivation of this function): \[\sigma(\widehat{x})=\begin{cases}0,&x<x_{1}\\ a_{1}x+b_{1},&x_{1}\leq x<x_{2}\\ \cdots&\\ a_{m-1}x+b_{m-1},&x_{m-1}\leq x<x_{m}\\ 1,&x_{m}\leq x\end{cases} \tag{11}\] Here we suppose the input \(x\) and all endpoints \(x_{i}\) are also integers. Through this method, we divide the sigmoid function into many segments, each of which is approximated by a degree-1 linear function. Theoretically, the accuracy of this method can approach the original function arbitrarily closely (though the corresponding cost will increase). Then the constraint can be represented as: \[\sigma(\widehat{x}) :=\mathsf{compare}(x_{1},x)\cdot(0-(a_{1}x+b_{1}))\] \[+\cdots\] \[+\mathsf{compare}(x_{m-1},x)\cdot((a_{m-2}x+b_{m-2})-(a_{m-1}x+b_{m-1}))\] \[+\mathsf{compare}(x_{m},x)\cdot((a_{m-1}x+b_{m-1})-1)+1\] Here \(\mathsf{compare}(a,b)\) outputs 1 for \(a>b\) and 0 otherwise. The completeness is straightforward. This method can be applied to other _zero-knowledge machine learning_ applications as well.
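A short sketch (our own toy spline with three interior segments fitted by chords; a real instantiation would use many more breakpoints) of the splined sigmoid (11):

```python
import math

xs = [-4.0, -1.0, 1.0, 4.0]   # assumed toy breakpoints

def chord(x0, x1):
    """Fit a segment a*x + b to the sigmoid between x0 and x1."""
    s = lambda t: 1.0 / (1.0 + math.exp(-t))
    a = (s(x1) - s(x0)) / (x1 - x0)
    return a, s(x0) - a * x0

segs = [chord(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

def spline_sigmoid(x):
    if x < xs[0]:
        return 0.0
    if x >= xs[-1]:
        return 1.0
    for (x0, x1), (a, b) in zip(zip(xs, xs[1:]), segs):
        if x0 <= x < x1:
            return a * x + b

print(spline_sigmoid(0.5), 1 / (1 + math.exp(-0.5)))  # ~0.616 vs ~0.622
```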
**Combine privacy-preserving.** There is a demand for workers to complete the truth inference process without disclosing specific response information, as overly specific responses could expose sensitive information. To address this issue, we can employ the "perturb" method [8, 18], achieved by adding small mask values to the original responses. These practices are compatible with our scheme because the aggregator does not need to care about the content of the workers' responses. We can also adopt more complex methods, such as DP, to further enhance the privacy of the protocol. ## 7 Implementation and evaluations We fully implemented our ideas. In this section we introduce our implementation details and evaluate the performance of our scheme. ### Experimental setup **Software implementation.** Following the description in this paper, we first wrote a new implementation of floating-point numbers. Then, based on the open-source ZKP frontend framework Libsnark [44], we converted the operations into arithmetic circuits and constraints. This framework provides a BN-128 curve, which satisfies the security needs of our floating-point computation. Afterward, we implemented the corresponding circuits for the algorithms in the paper. Next, we evaluated our circuits using different ZKP backends. We chose Groth16 [24] and Spartan [27] as our backends for evaluation. Groth16 is a pairing-based SNARK system with a negligible verification overhead but a relatively larger proving overhead. Spartan has a faster proof generation speed but incurs sub-linear verification overhead and proof size. These characteristics allow us to choose different backends under different scenarios. The Libsnark framework naturally supports the Groth16 system but does not support Spartan. To support our second backend, we implemented a pipeline that exports the constraints and variables generated by Libsnark and then imports them into the Spartan proof system. This process is somewhat similar to an RPC framework. We used a binary file standard called Zkinterface [45] as the standard for intermediate files. This could be a topic of independent interest. In summary, our implementation consists of about 3000 lines of C++ code, which implements the aforementioned design and the process of exporting the constraint system to a Zkinterface file. We implemented the process of importing Zkinterface files into Spartan by modifying an open-source library [46]. In addition, we implemented the truth inference algorithms mentioned in the paper with about 1000 lines of Python code, which can be used for comparison. We fully open-source our code at [47]. **Hardware.** As [8] is the work most closely related to our objectives, we have chosen it as our point of comparison (baseline). Since they did not provide a concrete code implementation, we decided to use similar hardware conditions as our evaluation standard. We ran our experiments on a server with 16GB of RAM and a 2.50GHz Intel(R) Xeon(R) Platinum 8269CY CPU. **Datasets.** To evaluate the efficiency of our schemes applied to existing algorithms, we need to select certain datasets as the input for the algorithms. In this evaluation, we assume that the experiment is carried out in a scenario of truth inference for a crowdsourcing data annotation task. We use a dataset from [15], a decision-making task set obtained by many workers performing data labeling on different images. This dataset includes 108 tasks and the responses from 39 workers to these tasks, totaling 4212 answers (in other words, there are no missing values). When an algorithm assigns weights to workers to account for worker quality, we randomly generate initial values for the quality. When applying the algorithms to our dataset, CRH and ZC both obtain a \(78.6\%\) accuracy, which exceeds the naive MV algorithm. It should be emphasized that our method is independent of the content of the tasks, so we only need to consider the impact of the size of the dataset on efficiency. Similarly, the accuracy of the algorithms is not our concern, because it varies across different datasets and algorithms [15] (our scheme, however, is guaranteed to preserve the original accuracy). ### Evaluation In this section, we report the evaluation results of our schemes. To measure the efficiency of our scheme, we conducted two sets of experiments. Firstly, we compared the efficiency of our method with the baseline using the same algorithm and dataset size. Secondly, we show the performance using different ZKP backends and algorithm instances. #### 7.2.1 Comparison with baseline work. We first present the performance of zkTI in comparison with the baseline work [8]. In [8], they implemented a verifiable computation technique based on _bilinear pairing_ called V-patd, a common but expensive way of implementing verifiable computing.
They applied this technique to CRH [33], thereby making the algorithm verifiable. We instantiate the algorithm using our scheme by converting the computation into circuits. For a decision-making task, the circuit takes the dataset as input, sets the initial worker quality, outputs the inferred truth after one round of processing, and updates the quality of each worker. The truth is given as the probability of each option being true. We primarily assess the efficiency of the different schemes from the perspective of the aggregator's computational overhead (prover time), which is determined by the scale of the circuit and is always the main bottleneck in actual applications. As can be seen from the complexity, the scale of the circuit is primarily determined by the product of the number of workers and the number of tasks. Therefore, we varied this value from 1000 to 4000 and evaluated the overhead under different scales. To evaluate the performance, we ran the circuit separately in the Groth16 and Spartan systems and obtained the results. Figure 2: Prover time (aggregator overhead) comparison with [8] applying CRH. Our schemes reduce the overhead by 2-4 times in both ZKP backends. Figure 2 shows the performance of each scheme. Our method is \(2.2\)-\(4.1\times\) faster than the original method. Under 32-bit precision floating-point computation, Groth16 generally achieves twice the efficiency of the baseline method across various scales, while having a negligible verification overhead. Spartan can save an additional 50% of the computational overhead. In our evaluation, Spartan is actually a NIZK system, thus introducing an additional verification overhead of \(1-2.5\) seconds and a communication overhead of \(0.2\)MB\(-0.6\)MB, which is acceptable in most applications. #### 7.2.2 Performance under different settings Next, we compared the computational overheads of instantiating and running different algorithms with different backends under various dataset sizes. We converted the other two algorithms, MV and ZC, into circuits following the same methodology. Under the same dataset, the former has a smaller circuit size and the latter a larger circuit size, compared with CRH. All of the algorithms obtain the same accuracy under our scheme as under plain computation. For comparison, we evaluated the performance of the three algorithms with a concrete dataset scale of \(100\times 30\). Table 2 presents the results. Spartan has the fastest proof generation speed for each algorithm, taking approximately 16s for a circuit with 1.75M constraints and 13.7s for a circuit with 1.3M constraints, while incurring additional verification overhead. Both of our backends exhibit superior efficiency compared to [8]'s. Finally, by adjusting various settings, we thoroughly tested the performance of the different algorithms on the various backends. Figure 3 presents the comprehensive performance. As the dataset size increases, the runtime required by both backends correspondingly increases. For Spartan, the verification overhead and communication overhead also grow accordingly. As Groth16 consumes a significant amount of memory, it is conceivable that it will fail due to exhausted memory when the circuit scale increases, while Spartan can handle larger-scale computations. The advantage of the former is that the verification overhead is minimal, so it can be used in some special scenarios (such as smart contracts).
We found that our circuits achieve the same accuracy as the original algorithms, reaching a relative precision of \(10^{-7}\)-\(10^{-8}\) for every calculation step. This suggests that our floating-point computation mode is sufficiently accurate. We also note that if we change the floating-point representation mode to 16-bit or 8-bit, we can obtain circuits with 60% and 30% of the size of the original one, respectively. By sacrificing a certain degree of precision, our method achieves a smaller circuit size, thereby resulting in lower computational overhead. This performance makes it reasonable to apply our scheme in practice. ## 8 Conclusion In this work we propose a novel approach, replacing outsourcing with crowdsourcing, to address the trust issues in outsourced computation. We primarily offer two contributions: making the crowdsourcing aggregation process verifiable without introducing significant overhead, and new techniques for expressing floating-point computations in circuits. Our work can be applied in various scenarios such as data annotation, question answering systems, and blockchain oracles, thereby establishing bridges of trust. The techniques for floating-point computation can be applied in other scenarios requiring high-precision computations, such as ZKML and DeFi. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & types & \(|\mathsf{C}|\) & \multicolumn{2}{c}{time budget} & \(|\pi|\) \\ \cline{4-5} & & & \(\mathsf{P}\) & \(\mathsf{V}\) & \\ \hline \multirow{2}{*}{MV} & Groth16 & \multirow{2}{*}{0.11M} & 2.95s & 1ms & 1KB \\ & Spartan & & 1.7s & 0.31s & 258KB \\ \multirow{2}{*}{ZC} & Groth16 & \multirow{2}{*}{1.75M} & 50.6s & 1ms & 1KB \\ & Spartan & & 21.0s & 3.12s & 658KB \\ \multirow{3}{*}{CRH} & Groth16 & \multirow{2}{*}{1.30M} & 34.3s & 1ms & 1KB \\ & Spartan & & 17.4s & 2.25s & 657KB \\ & V-patd & — & 75.0s\({}^{*}\) & — & — \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the algorithms under different backend types (with dataset size of \(100\times 30\)). Note: \(|\mathsf{C}|\) denotes circuit size and \(|\pi|\) denotes proof size. \(*\) means the value is estimated. Figure 3: Performance under different settings. (a) Groth16 for ZC (b) Spartan for ZC (c) Groth16 for CRH (d) Spartan for CRH.
2304.00488
Saddle-to-Saddle Dynamics in Diagonal Linear Networks
In this paper we fully describe the trajectory of gradient flow over diagonal linear networks in the limit of vanishing initialisation. We show that the limiting flow successively jumps from a saddle of the training loss to another until reaching the minimum $\ell_1$-norm solution. This saddle-to-saddle dynamics translates to an incremental learning process as each saddle corresponds to the minimiser of the loss constrained to an active set outside of which the coordinates must be zero. We explicitly characterise the visited saddles as well as the jumping times through a recursive algorithm reminiscent of the LARS algorithm used for computing the Lasso path. Our proof leverages a convenient arc-length time-reparametrisation which enables to keep track of the heteroclinic transitions between the jumps. Our analysis requires negligible assumptions on the data, applies to both under and overparametrised settings and covers complex cases where there is no monotonicity of the number of active coordinates. We provide numerical experiments to support our findings.
Scott Pesme, Nicolas Flammarion
2023-04-02T08:53:43Z
http://arxiv.org/abs/2304.00488v2
# Saddle-to-Saddle Dynamics in Diagonal Linear Networks ###### Abstract. In this paper we fully describe the trajectory of gradient flow over diagonal linear networks in the limit of vanishing initialisation. We show that the limiting flow successively jumps from a saddle of the training loss to another until reaching the minimum \(\ell_{1}\)-norm solution. This saddle-to-saddle dynamics translates to an incremental learning process as each saddle corresponds to the minimiser of the loss constrained to an active set outside of which the coordinates must be zero. We explicitly characterise the visited saddles as well as the jumping times through a recursive algorithm reminiscent of the Homotopy algorithm used for computing the Lasso path. Our proof leverages a convenient arc-length time-reparametrisation which enables us to keep track of the heteroclinic transitions between the jumps. Our analysis requires negligible assumptions on the data, applies to both under and overparametrised settings and covers complex cases where there is no monotonicity of the number of active coordinates. We provide numerical experiments to support our findings. ## 1. Introduction Strikingly simple algorithms such as gradient descent are driving forces for deep learning and have led to remarkable empirical results. Nonetheless, understanding the performance of such methods remains a challenging and exciting mystery: (i) their global convergence on highly non-convex losses is far from being trivial and (ii) the fact that they lead to solutions which generalise well [Zhang et al., 2017] is still not fully understood. To explain this second point, a major line of work has focused on the concept of implicit regularisation: amongst the infinite space of zero-loss solutions, the optimisation process must be implicitly biased towards solutions which have good generalisation properties for the considered real-world prediction task. Many papers have therefore shown that gradient methods have the fortunate property of asymptotically leading to solutions which have a well-behaved structure [Neyshabur, 2017, Gunasekar et al., 2017]. For example, convergence towards max-margin classifiers has been shown in various classification settings [Soudry et al., 2018, Lyu and Li, 2020, Chizat and Bach, 2020] and convergence towards low-norm solutions in regression settings [Woodworth et al., 2020, Boursier et al., 2022]. Aside from these results which mostly focus on characterising the asymptotic solution, a slightly different point of view has been to try to describe the full trajectory. Indeed it has been experimentally observed that gradient methods with small initialisations have the property of learning models of increasing complexity across the training of neural networks [Kalimeris et al., 2019]. This behaviour is usually referred to as _incremental learning_ or as a _saddle-to-saddle process_ and describes learning curves which are piecewise constant: the training process makes very little progress for some time, followed by a sharp transition where a new "feature" is suddenly learned.
Several settings exhibiting such dynamics for small initialisation have been considered: matrix and tensor factorisation [Razin et al., 2021, Jiang et al., 2022], simplified versions of diagonal linear networks [Gissin et al., 2020, Berthier, 2022], linear networks [Gidel et al., 2019, Saxe et al., 2019, Jacot et al., 2021], 2-layer neural networks with orthogonal inputs [Boursier et al., 2022] and matrix sensing [Arora et al., 2019, Li et al., 2021, Jin et al., 2023]. However, all these results require restrictive assumptions on the data, and obtaining a complete picture of the saddle-to-saddle process is mathematically challenging and still missing. We intend to fill this gap by considering diagonal linear networks, which are simplified neural networks that have received significant attention lately as they are ideal proxy models for gaining a deeper understanding of complex phenomena such as saddle-to-saddle dynamics. ### Main contribution and paper organisation In this paper, we provide a full description of the trajectory of gradient flow over diagonal linear networks in the limit of vanishing initialisation. **We show that the iterates successively jump from a saddle of the training loss to another and we fully characterise each visited saddle as well as the jumping times.** This result is informally presented here and illustrated in Figure 1. **Theorem 1**.: \([\mathrm{Main\ result},\mathrm{informal}.]\) _In the regression setting and in the limit of vanishing initialisation, the trajectory of gradient flow over diagonal linear networks converges towards a limiting process \((\tilde{\beta}^{\circ}_{t})_{t}\) which is piecewise constant and defined as follows:_ \[\textbf{(``Saddles")}\qquad\qquad\tilde{\beta}^{\circ}_{t}\equiv\beta_{k}\qquad\qquad\text{for $t\in(t_{k},t_{k+1})$ and $0\leq k\leq p$},\] _where the "saddles" \((\beta_{0}=\mathbf{0},\beta_{1},\ldots,\beta_{p-1},\beta_{p})\) and jump times \((t_{0}=0,t_{1},\ldots,t_{p},t_{p+1}=+\infty)\) can recursively and explicitly be computed by an algorithm (see Algorithm 1) reminiscent of the Homotopy algorithm for the Lasso. The final point \(\beta_{p}\) corresponds to the minimum \(\ell_{1}\)-norm solution._ The learning is said to be "incremental" as each saddle corresponds to the minimiser of the loss constrained to a set of coordinates which can be non-zero; the size of this set is typically (but not necessarily) increasing. **We make minimal assumptions on the data and our analysis holds for complex datasets** where there is no monotonicity of the number of active coordinates and where the successive active sets are highly non-trivial, as depicted in Figure 1. In Section 2 we introduce the regression setting and the diagonal network architecture. Our main result exhibiting the limiting saddle-to-saddle dynamics is provided in Section 3 and the sketch of proof based on the arc-length parametrisation is given in Section 4. ### Related works. **Diagonal linear networks.** Diagonal linear networks are simplified neural networks which have seen a surge of interest recently (Woodworth et al., 2020; Vaskevicius et al., 2019; HaoChen et al., 2021; Pesme et al., 2021; Even et al., 2023). For these networks, the scale of the initialisation determines the structure of the recovered solution. Large initialisations yield low \(\ell_{2}\)-norm solutions (referred to as either the neural tangent kernel regime (Jacot et al., 2018), or the lazy regime (Chizat et al., 2019)).
Small initialisations result in low \(\ell_{1}\)-norm solutions (known as the rich regime (Woodworth et al., 2020)). Despite their simplicity, diagonal linear networks reveal training characteristics observed in much more complex architectures. We point out that an even more drastic simplification is to consider a quadratic parametrisation where the regression predictor \(\beta\) is parametrised as \(\beta=u^{2}\), with element-wise multiplication (Amid and Warmuth, 2020; Vivien et al., 2022; Berthier, 2022). Figure 1. Regression problem (for the precise experimental setting, see Appendix A). _Left and right:_ The magnitudes of the coordinates of the accelerated iterates \(\tilde{\beta}^{\alpha}\) are plotted across time. For small initialisation scales \(\alpha\), a saddle-to-saddle process appears. Coordinates are not monotonic, and neither is the number of active coordinates, as several coordinates can deactivate at the same time. The piecewise constant process plotted in black is the limiting process \(\tilde{\beta}^{\circ}\) predicted by our theory. **Incremental learning / saddle-to-saddle dynamics.** In all the papers mentioned in the introduction, the analysed incremental learning phenomenon and the saddle-to-saddle process are complementary facets of the same idea. Indeed for gradient flows \(\mathrm{d}w_{t}=-\nabla F(w_{t})\mathrm{d}t\), fixed points of the dynamics correspond to critical points of the loss. Stages with little progress in learning and minimal movement of the iterates necessarily correspond to the iterates being in the vicinity of a critical point of the loss.
**Linear regression.** We study a linear regression problem with inputs \((x_{1},\ldots,x_{n})\in(\mathbb{R}^{d})^{n}\) and outputs \((y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\). We consider the following quadratic loss: \[L(\beta)=\frac{1}{2n}\sum_{i=1}^{n}(\langle\beta,x_{i}\rangle-y_{i})^{2}\,. \tag{1}\] We do not make any assumptions concerning the number of samples \(n\) or the dimension \(d\); however, we recall the two classical settings which are covered by our results. In the _underparametrised_ setting, where \(d\leq n\), there exists a unique solution \(\beta^{\star}=\operatorname*{arg\,min}_{\beta}L(\beta)\). In the _overparametrised_ setting, where \(d>n\), there exists an infinite number of minimisers \(\beta^{\star}\in\mathbb{R}^{d}\) which attain zero training loss and which correspond to linear predictors that perfectly fit the training data, _i.e._ \(y_{i}=\langle\beta^{\star},x_{i}\rangle\) for all \(1\leq i\leq n\). These predictors are referred to as _interpolators_. We assume throughout the paper that the inputs \((x_{1},\ldots,x_{n})\) are in _general position_. In order to state the assumption, let \(X\in\mathbb{R}^{n\times d}\) be the feature matrix whose \(i^{th}\) row is \(x_{i}\) and let us denote by \(\tilde{x}_{j}\in\mathbb{R}^{n}\) its \(j^{th}\) column for \(j\in[d]\). **Assumption 1** (General position).: _For any \(k\leq\min(n,d)\), the affine span of any \(k\) points \(\sigma_{1}\tilde{x}_{j_{1}},\ldots,\sigma_{k}\tilde{x}_{j_{k}}\), for arbitrary signs \(\sigma_{1},\cdots,\sigma_{k}\in\{-1,1\}\), does not contain any element of \(\{\pm\tilde{x}_{j},j\neq j_{1},\cdots,j_{k}\}\)._ This assumption is standard in the Lasso framework, as it ensures that the Lasso solution is unique for any regularisation parameter; in our case it similarly ensures that our saddle-to-saddle algorithm is well-defined, as shown in Proposition 2. Similarly, in the overparametrised setting, this assumption ensures that the minimum \(\ell_{1}\)-norm interpolator is unique [15, Theorem 2.2]; we can therefore define \(\beta^{\star}_{\ell_{1}}\coloneqq\operatorname*{arg\,min}_{y_{i}=\langle x_{i},\beta^{\star}\rangle,\forall i}\|\beta^{\star}\|_{1}\). Note that this assumption is not restrictive, as it holds almost surely when the data is drawn from a continuous probability distribution [12, Lemma 4]. ### 2-layer diagonal linear network. We represent the regression vector \(\beta\) as a function \(\beta_{w}\) of a trainable parameter \(w\in\mathbb{R}^{p}\). Despite the linearity of the final prediction function \(x\mapsto\langle\beta_{w},x\rangle\), the parametrisation significantly affects the training dynamics. In an effort to understand the training dynamics of neural networks, we consider a \(2\)-layer diagonal linear neural network given by: \[\beta_{w}=u\odot v\text{ where }w=(u,v)\in\mathbb{R}^{2d}\,. \tag{2}\] This parametrisation can be interpreted as a simple neural network \(x\mapsto\langle u,\sigma(\operatorname{diag}(v)x)\rangle\) where \(u\) are the output weights, the diagonal matrix \(\operatorname{diag}(v)\) represents the inner weights, and the activation \(\sigma\) is the identity function. We refer to \(w=(u,v)\in\mathbb{R}^{2d}\) as the _neurons_ and to \(\beta\coloneqq u\odot v\in\mathbb{R}^{d}\) as the _prediction parameter_. With the parametrisation (2), the loss function \(F\) over the parameters \(w=(u,v)\in\mathbb{R}^{2d}\) is defined as: \[F(w)\coloneqq L(u\odot v)=\frac{1}{2n}\sum_{i=1}^{n}(\langle u\odot v,x_{i}\rangle-y_{i})^{2}\,. \tag{3}\]
The optimisation problem is non-convex, and highly non-trivial training dynamics occur. **Gradient Flow.** We minimise the loss \(F\) using gradient flow: \[\mathrm{d}w_{t}=-\nabla F(w_{t})\mathrm{d}t\,, \tag{4}\] initialised at \(u_{0}=\sqrt{2}\alpha\mathbf{1}\in\mathbb{R}^{d}_{>0}\) with \(\alpha>0\), and \(v_{0}=\mathbf{0}\in\mathbb{R}^{d}\). This initialisation results in \(\beta_{0}=\mathbf{0}\in\mathbb{R}^{d}\) independently of the chosen neuron initialisation scale \(\alpha\). We denote by \(\beta_{t}^{\alpha}\coloneqq u_{t}^{\alpha}\odot v_{t}^{\alpha}\) the prediction iterates generated by the gradient flow, to highlight their dependency on the initialisation scale \(\alpha\)1. Footnote 1: We point out that the trajectory of \(\beta_{t}^{\alpha}\) exactly matches that of another common parametrisation \(\beta_{w}\coloneqq w_{+}^{2}-w_{-}^{2}\), with initialisation \(w_{+,0}=w_{-,0}=\alpha\mathbf{1}\). **Saddle points.** As seen in Figure 1 for gradient flow with a small initialisation scale, the iterates jump from one critical point of the loss, where they barely make any progress, to another. The following proposition shows that these critical points are saddle points (_i.e._ not local extrema) and that they correspond to points which have a very particular structure, as highlighted in Eq. (5). The proof is deferred to Appendix B. **Proposition 1**.: _All the critical points \(w_{c}=(u_{c},v_{c})\) of \(F\) which are not global minima, i.e. \(\nabla F(w_{c})=\mathbf{0}\) and \(F(w_{c})>\min_{w}F(w)\), are necessarily saddle points (i.e. not local extrema). Each critical point maps to a parameter \(\beta_{c}=u_{c}\odot v_{c}\) which satisfies \(|\beta_{c}|\odot\nabla L(\beta_{c})=\mathbf{0}\) and verifies:_ \[\beta_{c}\in\operatorname*{arg\,min}_{\beta_{i}=0\text{ for }i\notin\operatorname*{supp}(\beta_{c})}\,L(\beta), \tag{5}\] _where \(\operatorname*{supp}(\beta_{c})=\{i\in[d],\beta_{c}(i)\neq 0\}\) corresponds to the support of \(\beta_{c}\)._ **Necessity of "accelerating" time.** As the origin \(\mathbf{0}\in\mathbb{R}^{2d}\) is a critical point of the function \(F\), taking the initialisation \(\alpha\to 0\) arbitrarily slows down the dynamics. In fact, it can easily be shown that for any fixed time \(t\), \((u_{t}^{\alpha},v_{t}^{\alpha})\to\mathbf{0}\) as \(\alpha\to 0\). Therefore, if we restrict ourselves to a finite time analysis, there is no hope of exhibiting the observed saddle-to-saddle behaviour. To do so, we must find an appropriate bijection \(\tilde{t}_{\alpha}\) of \(\mathbb{R}_{\geq 0}\) which "accelerates" time (_i.e._ \(\tilde{t}_{\alpha}(t)\smash{\mathop{\longrightarrow}\limits_{\alpha\to 0}}+\infty\) for all \(t\)) and consider the accelerated iterates \(\beta_{\tilde{t}_{\alpha}(t)}^{\alpha}\). ### Leveraging the mirror flow structure. It is shown in Azulay et al. (2021) that the iterates \(\beta_{t}^{\alpha}\) follow a mirror flow with potential \(\phi_{\alpha}\) initialised at \(\beta_{t=0}^{\alpha}=\mathbf{0}\): \[\mathrm{d}\nabla\phi_{\alpha}(\beta_{t}^{\alpha})=-\nabla L(\beta_{t}^{\alpha})\mathrm{d}t, \tag{6}\] where \(\phi_{\alpha}\) is the hyperbolic entropy function (Ghai et al., 2020) defined as: \[\phi_{\alpha}(\beta)=\frac{1}{2}\sum_{i=1}^{d}\Big{(}\beta_{i}\mathrm{arcsinh}(\frac{\beta_{i}}{\alpha_{i}^{2}})-\sqrt{\beta_{i}^{2}+\alpha_{i}^{4}}+\alpha_{i}^{2}\Big{)}. \tag{7}\]
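As a quick numerical check (ours, not from the paper) of a fact used repeatedly below, the hyperbolic entropy just defined, once rescaled by \(\ln(1/\alpha)\), approaches the \(\ell_{1}\)-norm as \(\alpha\to 0\); here with uniform \(\alpha_{i}=\alpha\) and evaluated at a single coordinate:

```python
import numpy as np

# phi_alpha(beta) / ln(1/alpha) -> |beta| as alpha -> 0 (single coordinate)
def phi(beta, alpha):
    return 0.5 * (beta * np.arcsinh(beta / alpha**2)
                  - np.sqrt(beta**2 + alpha**4) + alpha**2)

for alpha in [1e-2, 1e-4, 1e-8]:
    print(alpha, phi(1.0, alpha) / np.log(1.0 / alpha))
# prints values approaching 1.0 = |beta|: ~0.967, ~0.983, ~0.991
```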
Unveiling the mirror flow structure enables us to leverage convex optimisation tools to prove convergence of the iterates to a global minimiser \(\beta_{\alpha}^{\star}\). For overparametrised problems, the mirror formulation provides a simple proof of the associated implicit regularisation result. As shown by Woodworth et al. (2020), the limit \(\beta_{\alpha}^{\star}\) of the _gradient flow_ is the solution of the following minimisation problem: \[\beta_{\alpha}^{\star}=\operatorname*{arg\,min}_{y_{i}=\langle x_{i},\beta\rangle,\forall i}\,\,\phi_{\alpha}(\beta). \tag{8}\] Given the fact that \(\phi_{\alpha}\) behaves as the \(\ell_{1}\)-norm as \(\alpha\) goes to \(0\), it is shown in (Woodworth et al., 2020, Theorem 2) that \(\beta_{\alpha}^{\star}\) converges to the minimum \(\ell_{1}\)-norm interpolator as \(\alpha\to 0\). We use the nice structure of Eq. (6) to bring to light the "saddle-to-saddle" dynamics which occur as we take the initialisation to \(0\). **Taking the initialisation scale to \(0\), mirror point of view.** As mentioned before, the iterates \(\beta_{t}^{\alpha}\) stay stuck at \(\mathbf{0}\) when \(\alpha\to 0\). The mirror flow point of view sheds new light on this observation. Differentiating the left side of Eq. (6) and computing the Hessian of \(\phi_{\alpha}\), the iterates \(\beta_{t}^{\alpha}\) are shown to follow \(\mathrm{d}\beta_{t}^{\alpha}=-\sqrt{(\beta_{t}^{\alpha})^{2}+\alpha^{4}}\odot\nabla L(\beta_{t}^{\alpha})\mathrm{d}t\). Taking \(\alpha\to 0\), this flow is informally equivalent to \(\dot{\beta}_{t}=-|\beta_{t}|\odot\nabla L(\beta_{t})\), for which \(\mathbf{0}\) is a fixed point, as are all vectors \(\beta_{c}=\beta_{w_{c}}\) where \(w_{c}\) is a saddle point of \(F\) (following Proposition 1). **Time-rescaled iterates.** Since for \(\beta\neq\mathbf{0}\), \(\|\nabla\phi_{\alpha}(\beta)\|\to\infty\) when \(\alpha\to 0\), the potential \(\phi_{\alpha}\) becomes degenerate for small \(\alpha\). The formulation of Eq. (6) is therefore not well defined in the limit \(\alpha\to 0\). We can nonetheless obtain a meaningful limit by considering the appropriate time acceleration \(\tilde{t}_{\alpha}(t)=\ln(1/\alpha)\cdot t\) and looking at the accelerated iterates \[\tilde{\beta}_{t}^{\alpha}\coloneqq\beta_{\tilde{t}_{\alpha}(t)}^{\alpha}=\beta_{\ln(1/\alpha)t}^{\alpha}. \tag{9}\] A simple chain rule leads to the "accelerated mirror flow" \(\mathrm{d}\nabla\phi_{\alpha}(\tilde{\beta}_{t}^{\alpha})=-\ln{(\frac{1}{\alpha})}\nabla L(\tilde{\beta}_{t}^{\alpha})\mathrm{d}t\), and the accelerated iterates \((\tilde{\beta}_{t}^{\alpha})_{t}\) therefore follow a mirror flow with a rescaled potential: \[\mathrm{d}\nabla\tilde{\phi}_{\alpha}(\tilde{\beta}_{t}^{\alpha})=-\nabla L(\tilde{\beta}_{t}^{\alpha})\mathrm{d}t,\qquad\mathrm{where}\qquad\tilde{\phi}_{\alpha}\coloneqq\frac{1}{\ln(1/\alpha)}\cdot\phi_{\alpha}, \tag{10}\] with \(\tilde{\beta}_{t=0}=\mathbf{0}\) and where \(\phi_{\alpha}\) is defined in Eq. (7). Also, in contrast with \(\phi_{\alpha}\), the rescaled potential \(\tilde{\phi}_{\alpha}\) is non-degenerate as \(\alpha\) approaches 0. Indeed, it satisfies \(\tilde{\phi}_{\alpha}(\beta)\to\|\beta\|_{1}\) as \(\alpha\to 0\) and its gradients satisfy \([\nabla\tilde{\phi}_{\alpha}(\beta)]_{i}\to\pm 1\) if \(\beta_{i}\gtrless 0\) (see Lemma 2 in Appendix E). As a result, it is tempting to examine the limiting equation obtained when taking the limit \(\alpha\to 0\) in Eq. (10).
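Before turning to the main result, here is a minimal numerical sketch (our own illustration; the synthetic data, step size, horizon and thresholds are arbitrary choices, not taken from the paper) of the phenomenon just described. It integrates, by explicit Euler, the accelerated dynamics \(\mathrm{d}\tilde{\beta}_{t}^{\alpha}=-\ln(1/\alpha)\sqrt{(\tilde{\beta}_{t}^{\alpha})^{2}+\alpha^{4}}\odot\nabla L(\tilde{\beta}_{t}^{\alpha})\mathrm{d}t\), obtained by combining the flow above with the time acceleration \(\tilde{t}_{\alpha}(t)=\ln(1/\alpha)t\); printing the active set across time makes the plateaus ("saddles") and the jumps of Figure 1 visible.

```python
import numpy as np

# Sketch: Euler integration of the accelerated flow
#   d(beta)/dt = -ln(1/alpha) * sqrt(beta^2 + alpha^4) * grad L(beta)
# on a random overparametrised regression instance (n < d).
rng = np.random.default_rng(0)
n, d = 4, 10
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[[0, 3]] = [2.0, -1.0]          # a sparse teacher, chosen arbitrarily
y = X @ beta_true

alpha, h, T = 1e-6, 1e-3, 10.0           # initialisation scale, step, horizon
speed = np.log(1.0 / alpha)              # the time acceleration ln(1/alpha)
beta = np.zeros(d)
for k in range(int(T / h)):
    grad = X.T @ (X @ beta - y) / n
    beta -= h * speed * np.sqrt(beta**2 + alpha**4) * grad
    if k % 1000 == 0:                    # plateaus show up as a frozen active set
        print(f"t = {k*h:5.2f}   active coordinates: "
              f"{np.flatnonzero(np.abs(beta) > 1e-3).tolist()}")
```

Decreasing \(\alpha\) sharpens the transitions between the plateaus, in line with the convergence to the piecewise constant process described next.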
**Notations and terminology.** In all generality we denote by \(\beta_{\ell_{1}}^{\star}\coloneqq\operatorname*{arg\,min}_{\beta^{\star}\in\operatorname*{arg\,min}_{\beta}L(\beta)}\|\beta^{\star}\|_{1}\) the minimum \(\ell_{1}\)-norm solution; we point out that it simplifies to \(\beta_{\ell_{1}}^{\star}=\operatorname*{arg\,min}_{\beta}L(\beta)\) in the underparametrised setting, and to \(\beta_{\ell_{1}}^{\star}=\operatorname*{arg\,min}_{y_{i}=\langle x_{i},\beta^{\star}\rangle,\forall i}\|\beta^{\star}\|_{1}\) in the overparametrised setting. For an integer \(d\), we denote by \([d]\) the set \(\{1,\ldots,d\}\) and for \(i\in[d]\) we denote by \(\{i\}^{C}\) the complementary set of \(\{i\}\) in \([d]\). We abuse the terminology of "saddle" when speaking of a point \(\beta_{c}\in\mathbb{R}^{d}\) which **is not** an actual saddle of the loss \(L\), which is convex. We use it because it corresponds to a point \(w_{c}\in\mathbb{R}^{2d}\) which is a saddle of the non-convex loss \(F\). ## 3 Main result: limiting process Our main result formalises the informal theorem given in the introduction. The limiting flow progresses by jumping from one saddle to another, where each saddle corresponds to the minimiser of the loss function under constraints on the set of active coordinates and their signs. The procedure for identifying and characterising these saddles as well as the jump times is given by a standalone algorithm outlined in Algorithm 1. We first verify that all the steps are well defined and that the algorithm terminates in a finite number of iterations in the following proposition. The proof is deferred to Appendix D.1. **Proposition 2**.: _Algorithm 1 is well defined: at each iteration (i) the attribution of \(\Delta\) is well defined as \(\Delta<+\infty\), (ii) the constrained minimisation problem has a unique solution and the attribution of the value of \(\beta\) is therefore well-founded. Furthermore, along the loops: the iterates \(\beta\) have at most \(n\) non-zero coordinates, the loss is strictly decreasing, and the algorithm terminates in at most \(\min{(2^{d},\sum_{k=0}^{n}\binom{d}{k})}\) steps by outputting the minimum \(\ell_{1}\)-norm solution \(\beta_{\ell_{1}}^{\star}\)._ Proposition 2 highlights that Algorithm 1 is, on its own, an algorithm of interest for finding the minimum \(\ell_{1}\)-norm solution in an overparametrised regression setting. We point out that the provided upper bound on the number of iterations is very crude and could certainly be improved. However, analysing Algorithm 1 is not the focus of our paper; a sketch of how its loop can be implemented is given below.
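Since the pseudo-code of Algorithm 1 is not reproduced in this excerpt, the following Python sketch records our reading of its loop, based on the intuitive construction of Section 3.1 below (the function name, the tolerances and the use of `scipy.optimize.lsq_linear` for the sign-constrained least-squares step are our own illustrative choices): maintain the dual variable \(s\), advance time until some coordinate of \(s\) hits \(\pm 1\), then solve the constrained minimisation defining the next saddle.

```python
import numpy as np
from scipy.optimize import lsq_linear

def saddle_to_saddle_path(X, y, max_iter=50, tol=1e-10):
    """Our reading of Algorithm 1 for L(beta) = ||X beta - y||^2 / (2n):
    returns the visited saddles beta_k and the jump times t_k."""
    n, d = X.shape
    beta, t, s = np.zeros(d), 0.0, np.zeros(d)
    saddles, times = [beta.copy()], [0.0]
    for _ in range(max_iter):
        g = X.T @ (X @ beta - y) / n         # gradient, frozen on the saddle
        if np.max(np.abs(g)) < tol:          # a global minimiser is reached
            break
        # s_i evolves as s_i - (t - t_k) g_i: time for each coordinate to hit +/-1
        with np.errstate(divide="ignore", invalid="ignore"):
            dt_hit = np.where(g < 0, (1.0 - s) / (-g), (1.0 + s) / g)
        dt_hit[np.abs(g) < tol] = np.inf
        dt = dt_hit.min()
        t, s = t + dt, s - dt * g
        idx = np.flatnonzero(np.abs(np.abs(s) - 1.0) < 1e-8)   # the set I_k
        # next saddle: minimise L with beta_i = 0 off I_k and the sign of
        # beta_i fixed by sign(s_i) on I_k (bound-constrained least squares)
        lb = np.where(s[idx] > 0, 0.0, -np.inf)
        ub = np.where(s[idx] > 0, np.inf, 0.0)
        beta = np.zeros(d)
        beta[idx] = lsq_linear(X[:, idx], y, bounds=(lb, ub)).x
        saddles.append(beta.copy())
        times.append(t)
    return saddles, times
```

Comparing the returned jump times with the plateaus produced by the simulation sketch above is an easy consistency check on small random instances.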
We can now state our main result. We show that at vanishing initialisation, the limiting flow progresses by jumping between different saddles, where each saddle corresponds to a minimiser of the loss function subject to specific sign constraints. **Theorem 2**.: _Let the "saddles" \((\beta_{0}=\mathbf{0},\beta_{1},\ldots,\beta_{p-1},\beta_{p}=\beta_{\ell_{1}}^{\star})\) and jump times \((t_{0}=0,t_{1},\ldots,t_{p})\) be the outputs of Algorithm 1 and let \((\tilde{\beta}_{t}^{\circ})_{t}\) be the piecewise constant process defined as follows:_ _("Saddles")_ \[\tilde{\beta}_{t}^{\circ}\equiv\beta_{k}\qquad\qquad\text{for $t\in(t_{k},t_{k+1})$ and $0\leq k\leq p,\;\;t_{p+1}=+\infty$}.\] _The accelerated flow \((\tilde{\beta}_{t}^{\alpha})_{t}\) defined in Eq. (9) uniformly converges towards the limiting process \((\tilde{\beta}_{t}^{\circ})_{t}\) on any compact subset of \(\mathbb{R}_{\geq 0}\backslash\{t_{1},\ldots,t_{p}\}\)._ **Behaviour of the saddle-to-saddle process.** The visited "saddles" are entirely provided in a recursive manner by Algorithm 1. Unlike all previous results on incremental learning, complex behaviours can occur when the feature matrix is ill-designed and the RIP property (Candes et al., 2006) is not satisfied. Several coordinates can activate and deactivate at the same time (in Figure 1 (Right), \(4\) coordinates deactivate at the same time; in Figure 2 (Right), two coordinates activate at the same time). Moreover, there is no monotonicity of the total number of non-zero coordinates, and each coordinate can increase, decrease and change sign many times. **Convergence result.** First, we point out that we could not expect uniform convergence of \((\tilde{\beta}_{t}^{\alpha})\) on intervals of the form \([0,T]\), given that the limit process is discontinuous. Hence the uniform convergence outside of the discontinuity times is one of the strongest results we could expect. In Corollary 1 in the following section we give an even stronger result by showing a graph convergence of the iterates. We also highlight that showing the convergence to a limiting process is in fact the toughest challenge from a theoretical point of view; this is done in Section 4. However, in Section 3.1, assuming its existence, we show that constructing \(\tilde{\beta}^{\circ}\) is rather intuitive and naturally leads to Algorithm 1. **Estimate for the iterates \(\beta_{t}^{\alpha}\).** We point out that our result provides no speed of convergence of \(\tilde{\beta}^{\alpha}\) towards \(\tilde{\beta}^{\circ}\). We believe that a non-asymptotic result is challenging and leave it as future work. We experimentally notice that the convergence rate quickly degrades after each saddle. Nonetheless, we can still write for the "non-accelerated" iterates that \(\beta_{t}^{\alpha}=\tilde{\beta}_{t/\ln(1/\alpha)}^{\alpha}\sim\tilde{\beta}_{t/\ln(1/\alpha)}^{\circ}\) when \(\alpha\to 0\). This approximation tells us, for \(\alpha\) small enough, that the iterates \(\beta_{t}^{\alpha}\) are roughly equal to \(0\) until time \(t_{1}\cdot\ln(1/\alpha)\) and that the minimum \(\ell_{1}\)-norm interpolator is reached at time \(t_{p}\cdot\ln(1/\alpha)\). Such a precise estimate of the global convergence time is remarkable and goes beyond classical Lyapunov analyses, which only lead to \(L(\beta_{t}^{\alpha})\lesssim\ln(1/\alpha)/t\) (see Proposition 4 in Appendix C). **"Neural" point of view.** We can map our result back to the accelerated flow \(\tilde{w}_{t}^{\alpha}\coloneqq w_{\tilde{t}_{\alpha}(t)}^{\alpha}\). Indeed, there is a bijective mapping between the mirror flow \(\tilde{\beta}_{t}^{\alpha}\) and the gradient flow, as shown in Lemma 1 in the appendix. From there we can show that \((\tilde{u}_{t}^{\alpha},\tilde{v}_{t}^{\alpha})\underset{\alpha\to 0}{\longrightarrow}(\sqrt{|\tilde{\beta}_{t}^{\circ}|},\mathrm{sign}(\tilde{\beta}_{t}^{\circ})\sqrt{|\tilde{\beta}_{t}^{\circ}|})\) uniformly on any compact subset of \(\mathbb{R}_{\geq 0}\backslash\{t_{1},\ldots,t_{p}\}\). **Justification of the saddle-to-saddle terminology.** We emphasise that the \(\beta_{k}\)'s which we refer to as "saddles" **are not** saddles of the convex loss \(L\).
We use this terminology because the mapping \((u_{k},v_{k})=(\sqrt{|\beta_{k}|},\mathrm{sign}(\beta_{k})\sqrt{|\beta_{k}|})\) results in actual saddle points of the non-convex loss function \(F\). ### Intuitive construction of the limiting process. In this section we provide some intuition on how the limiting flow \((\tilde{\beta}_{t}^{\circ})_{t}\) is constructed. To do so, the subdifferential of the \(\ell_{1}\)-norm turns out to be very useful. We recall its definition (coordinate-wise, for a scalar \(\tilde{\beta}\)): \[\partial\|\tilde{\beta}\|_{1}=\begin{cases}\{1\}&\text{if }\tilde{\beta}>0\\ \{-1\}&\text{if }\tilde{\beta}<0\\ [-1,1]&\text{if }\tilde{\beta}=0.\end{cases} \tag{11}\] Recall that the accelerated iterates of Eq. (9) satisfy Eq. (10), which can be integrated as: \[-\int_{0}^{t}\nabla L(\tilde{\beta}_{s}^{\alpha})\mathrm{d}s=\nabla\tilde{\phi}_{\alpha}(\tilde{\beta}_{t}^{\alpha}). \tag{12}\] From this equation, the complicated part is to show that the iterates \(\tilde{\beta}_{t}^{\alpha}\) indeed converge to some piecewise constant process. We show this convergence in the following section. The goal here is to provide some intuition on why Algorithm 1 describes the jump times and the visited saddles. Assuming that a limiting process indeed exists, _i.e._ \(\lim_{\alpha\to 0}\tilde{\beta}_{t}^{\alpha}=\tilde{\beta}_{t}\) exists for all \(t\), the dominated convergence theorem then yields that the process \((\tilde{\beta}_{t})_{t}\) must satisfy: \[-\int_{0}^{t}\nabla L(\tilde{\beta}_{s})\mathrm{d}s\in\partial\|\tilde{\beta}_{t}\|_{1}. \tag{13}\] Indeed, by the definition of \(\tilde{\phi}_{\alpha}\), \(\nabla\tilde{\phi}_{\alpha}(\tilde{\beta}_{t}^{\alpha})\) must converge to an element of \(\partial\|\tilde{\beta}_{t}\|_{1}\) (see Lemma 2 in the appendix). We start by providing a few comments concerning Eq. (13). **Links with the Lasso.** Notice that Eq. (13) closely resembles the optimality condition of the Lasso: \[\beta_{\lambda}^{\star}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{d}}\;L(\beta)+\lambda\|\beta\|_{1}. \tag{14}\] Indeed, the optimality condition of Eq. (14) reads \(-\nabla L(\beta_{\lambda}^{\star})\in\lambda\partial\|\beta_{\lambda}^{\star}\|_{1}\), which is almost the same as Eq. (13) with \(\lambda=1/t\). For the quadratic loss, the trajectory \((\beta_{\lambda}^{\star})_{\lambda\geq 0}\) is a piecewise linear path with a discrete number of vertices between \(\beta_{\lambda\to\infty}^{\star}=0\) and \(\beta_{\lambda\to 0}^{\star}=\beta_{\ell_{1}}^{\star}\). This path can be computed using the Homotopy method (see, e.g., Tibshirani, 2013, and references therein), which operates in an iterative fashion and finds each vertex starting from \(\mathbf{0}\). **Subdifferential equations and rate-independent systems.** Similar subdifferential inclusions of the form \(\nabla L(\beta_{t})\in\frac{\mathrm{d}}{\mathrm{d}t}\partial h(\beta_{t})\) for non-differentiable functions \(h\) have been studied by Attouch et al. (2004), but for strongly convex functions \(h\). In this case, the solutions are continuous and do not exhibit jumps, and we cannot leverage their tools. On the other hand, Efendiev and Mielke (2006); Mielke et al. (2009, 2012) consider rate-independent systems of the form \(\partial_{q}E(t,q_{t})\in\partial h(\dot{q}_{t})\) for \(1\)-homogeneous functions \(h\). Though this formulation is quite different from Eq. (12), its solutions can have jumps, and we leverage their tools to show our result. This is detailed in Section 4. ### Intuitive proof of Theorem 2.
We provide here an intuitive sketch of the proof of Theorem 2, assuming that the limiting process exists. Note that this **is not** a rigorous proof, but it has the advantage of preserving intuitive comprehension (the full proof is a consequence of Theorem 3 and can be found in Appendix D.3). The idea is similar to the Homotopy algorithm which is used to find the Lasso path: we start from \(\mathbf{0}\), successively look at the breaking times of Eq. (13), and determine the unique update of \(\tilde{\beta}_{t}\) that keeps the conditions satisfied. Let us denote \(s_{t}\coloneqq-\int_{0}^{t}\nabla L(\tilde{\beta}_{s})\mathrm{d}s\). The function \(s_{t}\) is therefore continuous and, as noted in Eq. (13), satisfies \(s_{t}\in\partial\|\tilde{\beta}_{t}\|_{1}\). **First saddle:** \([0,t_{1})\), \(\tilde{\beta}_{t}=\beta_{0}\equiv\mathbf{0}\). While \(\|s_{t}\|_{\infty}<1\), we must have \(\tilde{\beta}_{t}=\mathbf{0}\) from Eq. (13) and therefore \(s_{t}=-t\cdot\nabla L(\beta_{0}=\mathbf{0})\). Now notice that \(t_{1}\) defined in Algorithm 1 corresponds to the time such that \(\|s_{t_{1}}\|_{\infty}=1\). The iterates must then move: indeed, assume that \(\tilde{\beta}_{t}\) is still \(\mathbf{0}\) for \(t\in[t_{1},t_{1}+\varepsilon]\); then we would have \(\|s_{t}\|_{\infty}>1\), which contradicts Eq. (13). For simplicity, assume that there is a unique coordinate \(i_{1}\) such that \(|s_{t_{1}}(i_{1})|=1\), and without loss of generality assume that \(s_{t_{1}}(i_{1})=+1\). **Jump at time \(t_{1}\).** We show that there must be a discontinuous jump at time \(t_{1}\). To understand this claim, we consider \(\tilde{\beta}_{t}^{\alpha}\) for very small \(\alpha\). Differentiating the left side of Eq. (10) and computing the Hessian of \(\tilde{\phi}_{\alpha}\), we obtain that \(\mathrm{d}\tilde{\beta}_{t}^{\alpha}=-\ln(1/\alpha)\sqrt{(\tilde{\beta}_{t}^{\alpha})^{2}+\alpha^{4}}\odot\nabla L(\tilde{\beta}_{t}^{\alpha})\mathrm{d}t\sim-\ln(1/\alpha)|\tilde{\beta}_{t}^{\alpha}|\odot\nabla L(\tilde{\beta}_{t}^{\alpha})\mathrm{d}t\) (\(*\)). Since \(t_{1}\) corresponds to the time at which the iterates leave the first saddle, \(\tilde{\beta}_{t_{1}}^{\alpha}\) cannot converge to a fixed point of \(\dot{\beta}_{t}=-|\beta_{t}|\odot\nabla L(\beta_{t})\). Therefore (\(*\)) entails \(\|\dot{\tilde{\beta}}_{t}^{\alpha}\|\to+\infty\) as \(\alpha\to 0\), which translates into a jump. Following the jump, the velocity must return to a bounded state, and (\(*\)) implies that the iterates can only stick to another saddle, where \(|\beta|\odot\nabla L(\beta)=\mathbf{0}\). **Second saddle: \((t_{1},t_{2})\).** The iterates have jumped to another saddle point, which we denote \(\beta_{1}\); the question is now to determine which precise saddle it is. First note that, by continuity of \(s_{t}\) at time \(t_{1}\) and because \(\|s_{t_{1}}(\{i_{1}\}^{C})\|_{\infty}<1\), we must have \(\beta_{1}(\{i_{1}\}^{C})=\mathbf{0}\) from Eq. (13). Furthermore, since we assumed that \(s_{t_{1}}(i_{1})=1\), we must have \(\beta_{1}(i_{1})\geq 0\). Therefore, we necessarily have \[\beta_{1}=\operatorname*{arg\,min}_{\begin{subarray}{c}\beta(\{i_{1}\}^{C})=\mathbf{0}\\ \beta(i_{1})\geq 0\end{subarray}}L(\beta), \tag{15}\] which matches the first loop of Algorithm 1. Then \(\tilde{\beta}_{t}\) stays constant at \(\beta_{1}\) until the time \(t_{2}\) at which an inactive coordinate \(i_{2}\) is such that \(s_{t_{2}}(i_{2})=\pm 1\). **Following saddles.** We can recursively follow the previous logic to compute the jump times and the following saddles.
However, one must be careful not to forget the sign constraints. Indeed, a coordinate cannot jump from a strictly positive value to a strictly negative value, since crossing \(0\) always corresponds to a fixed point of the dynamics. This feature also appears in Eq. (13): the function \(s_{t}\), being continuous at the jump times, cannot pass from a value of \(+1\) to \(-1\) instantly. ### Additional comments. **Comparisons to the Homotopy and OMP algorithms.** The jump times as well as the visited saddles depend on all the previously visited saddles, and not just on the last one. This behaviour is in stark contrast with the Homotopy algorithm for the Lasso. On a separate note, a common belief is that, in the vicinity of a saddle point, the next activated coordinate should correspond to the direction of most negative curvature (i.e. the eigenvector corresponding to the most negative eigenvalue). However, this statement cannot be accurate, as it is inconsistent with our algorithm (though it holds true for the first coordinate). In fact, it can be shown that selecting this particular active coordinate yields an algorithm which aligns with the orthogonal matching pursuit (OMP) algorithm [11, 12] and which does not necessarily lead to the minimum \(\ell_{1}\)-norm interpolator. **Natural extensions of our setting.** More general initialisations than \(u_{t=0}=\sqrt{2}\alpha\mathbf{1}\) can easily be dealt with. For instance, initialisations of the form \(u_{t=0}=\alpha\mathbf{u_{0}}\in\mathbb{R}^{d}\) lead to the exact same result, as it is shown in Woodworth et al. [2020] (discussion after Theorem 1) that the associated mirror potential still converges to the \(\ell_{1}\)-norm. Initialisations of the form \([u_{t=0}]_{i}=\alpha^{k_{i}}\), where \(k_{i}>0\), lead to the associated potential converging towards a weighted \(\ell_{1}\)-norm, and one should modify Algorithm 1 by weighting \(\nabla L(\beta)\) accordingly. Similarly, deeper linear architectures of the form \(\beta_{w}=w_{+}^{D}-w_{-}^{D}\) as in Woodworth et al. [2020] do not change our result, as the associated mirror potential converges towards the \(\ell_{1}\)-norm too. Only the square loss is considered in this paper; however, we believe that all our results should hold for any loss of the type \(L(\beta)=\sum_{i=1}^{n}\ell(y_{i},\langle x_{i},\beta\rangle)\) where, for all \(y\in\mathbb{R}\), \(\ell(y,\cdot)\) is strictly convex with a unique minimiser at \(y\). In fact, the only property which cannot directly be adapted from our results is the uniform boundedness of the iterates (see the discussion before Proposition 5 in Appendix C). ## 4 Sketch of proof and Arc-length parametrisation In this section, we consider a new time reparametrisation which circumvents the appearance of discontinuous jumps and leads to the proof of Theorem 2. The main difficulty stems from the non-continuity of the limit process \(\tilde{\beta}^{\circ}\). Therefore we cannot expect uniform convergence of \(\tilde{\beta}^{\alpha}\) towards \(\tilde{\beta}^{\circ}\) as \(\alpha\to 0\). In addition, \(\tilde{\beta}^{\circ}\) does not provide any insight into the path followed between the jumps. **Arc-length parametrisation.** The high-level idea is to "slow down" time when the jumps occur. To do so, we follow the approach of Efendiev and Mielke [2006], Mielke et al.
[2009], and we consider an arc-length parametrisation of the path, i.e., we consider \(\tau^{\alpha}\) equal to: \[\tau^{\alpha}(t)=t+\int_{0}^{t}\lVert\dot{\tilde{\beta}}^{\alpha}_{s}\rVert\mathrm{d}s.\] In Proposition 6 in the appendix, we show that the full path length \(\int_{0}^{+\infty}\lVert\dot{\tilde{\beta}}^{\alpha}_{s}\rVert\mathrm{d}s\) is finite and bounded independently of \(\alpha\). Therefore \(\tau^{\alpha}\) is a bijection of \(\mathbb{R}_{\geq 0}\). We can then define the following quantities: \[\hat{t}^{\alpha}_{\tau}=(\tau^{\alpha})^{-1}(\tau)\quad\text{and}\quad\hat{\beta}^{\alpha}_{\tau}=\tilde{\beta}^{\alpha}_{\hat{t}^{\alpha}_{\tau}}.\] By construction, a simple chain rule leads to \(\dot{\hat{t}}^{\alpha}(\tau)+\lVert\dot{\hat{\beta}}^{\alpha}_{\tau}\rVert=1\), which means that the speed of \((\hat{\beta}^{\alpha}_{\tau})_{\tau}\) is always upper bounded by \(1\), independently of \(\alpha\). This behaviour is in stark contrast with the process \((\tilde{\beta}^{\alpha}_{t})_{t}\), whose speed explodes at the jumps. It presents a major advantage, as we can now use the Arzelà-Ascoli theorem to extract a convergent subsequence. A simple change of variables shows that the new process satisfies the following equations: \[-\int_{0}^{\tau}\dot{\hat{t}}_{s}^{\alpha}\nabla L(\hat{\beta}_{s}^{\alpha})\mathrm{d}s=\nabla\tilde{\phi}_{\alpha}(\hat{\beta}_{\tau}^{\alpha})\quad\text{and}\quad\dot{\hat{t}}_{\tau}^{\alpha}+\|\dot{\hat{\beta}}_{\tau}^{\alpha}\|=1 \tag{16}\] started from \(\hat{\beta}_{\tau=0}^{\alpha}=\mathbf{0}\) and \(\hat{t}_{0}=0\). The next proposition states the convergence of the rescaled process, up to a subsequence. The proof is deferred to Appendix D.2. **Proposition 3**.: _Let \(T\geq 0\). For every \(\alpha>0\), let \((\hat{t}^{\alpha},\hat{\beta}^{\alpha})\) be the solution of Eq. (16). Then, there exists a subsequence \((\hat{t}^{\alpha_{k}},\hat{\beta}^{\alpha_{k}})_{k\in\mathbb{N}}\) and \((\hat{t},\hat{\beta})\) such that as \(\alpha_{k}\to 0\):_ \[(\hat{t}^{\alpha_{k}},\hat{\beta}^{\alpha_{k}}) \rightarrow(\hat{t},\hat{\beta}) \text{in }(C^{0}([0,T],\mathbb{R}\times\mathbb{R}^{d}),\|\cdot\|_{\infty}) \tag{17}\] \[(\dot{\hat{t}}^{\alpha_{k}},\dot{\hat{\beta}}^{\alpha_{k}}) \rightarrow(\dot{\hat{t}},\dot{\hat{\beta}}) \text{in }L_{1}[0,T] \tag{18}\] _Limiting dynamics. The limits \((\hat{t},\hat{\beta})\) satisfy:_ \[-\int_{0}^{\tau}\dot{\hat{t}}_{s}\nabla L(\hat{\beta}_{s})\mathrm{d}s\in\partial\|\hat{\beta}_{\tau}\|_{1}\quad\text{and}\quad\dot{\hat{t}}_{\tau}+\|\dot{\hat{\beta}}_{\tau}\|\leq 1 \tag{19}\] _Heteroclinic orbit. In addition, when \(\hat{\beta}_{\tau}\) is such that \(|\hat{\beta}_{\tau}|\odot\nabla L(\hat{\beta}_{\tau})\neq 0\), we have_ \[\dot{\hat{\beta}}_{\tau}=-\frac{|\hat{\beta}_{\tau}|\odot\nabla L(\hat{\beta}_{\tau})}{\|\,|\hat{\beta}_{\tau}|\odot\nabla L(\hat{\beta}_{\tau})\,\|}\quad\text{and}\quad\dot{\hat{t}}_{\tau}=0. \tag{20}\] _Furthermore, the loss strictly decreases along the heteroclinic orbits and the path length \(\int_{0}^{T}\|\dot{\hat{\beta}}_{\tau}\|\mathrm{d}\tau\) is upper bounded independently of \(T\)._ The proof can be found in Appendix D and relies on the Arzelà-Ascoli theorem. Borrowing terminology from Efendiev and Mielke (2006), we can distinguish two regimes: when \(\dot{\hat{\beta}}_{\tau}=0\), the system is _stuck_ at the saddle point; when \(\dot{\hat{t}}_{\tau}=0\) and \(\|\dot{\hat{\beta}}_{\tau}\|=1\), the system switches to a _viscous slip_ which follows the normalised flow Eq. (20).
We use the term _heteroclinic orbit_ as in the dynamical systems literature, since in the neuron space \((u,v)\) it corresponds to a path which links two distinct critical points of the loss \(F\). Since \(\dot{\hat{t}}_{\tau}=0\), this regime happens instantly in the original time scale \(t\) (_i.e._ a jump occurs). From Proposition 3, following the same reasoning as in Section 3.1, we can show that the rescaled process converges uniformly to a continuous saddle-to-saddle process where the saddles are linked by normalised flows. **Theorem 3**.: _Let \(T>0\). For all subsequences defined in Proposition 3, there exist times \(0=\tau_{0}^{\prime}<\tau_{1}<\tau_{1}^{\prime}<\dots<\tau_{p}<\tau_{p}^{\prime}<\tau_{p+1}=+\infty\) such that the iterates \((\hat{\beta}_{\tau}^{\alpha_{k}})_{\tau}\) converge uniformly on \([0,T]\) to the following limit trajectory:_ \[\textbf{("Saddle")} \hat{\beta}_{\tau}=\beta_{k} \text{for }\tau\in[\tau_{k}^{\prime},\tau_{k+1}]\text{ where }0\leq k\leq p\] \[\textbf{(Orbit)} \dot{\hat{\beta}}_{\tau}=-\frac{|\hat{\beta}_{\tau}|\odot\nabla L(\hat{\beta}_{\tau})}{\|\,|\hat{\beta}_{\tau}|\odot\nabla L(\hat{\beta}_{\tau})\,\|} \text{for }\tau\in[\tau_{k+1},\tau_{k+1}^{\prime}]\text{ where }0\leq k\leq p-1\] _where the saddles \((\beta_{0}=0,\beta_{1},\dots,\beta_{p}=\beta_{\ell_{1}}^{\star})\) are constructed in Algorithm 1. Also, the loss \((L(\hat{\beta}_{\tau}))_{\tau}\) is constant on the saddles and strictly decreasing on the orbits. Finally, independently of the chosen subsequence, for \(k\in[p]\) we have \(\hat{t}_{\tau_{k}}=\hat{t}_{\tau_{k}^{\prime}}=t_{k}\), where the times \((t_{k})_{k\in[p]}\) are defined through Algorithm 1._ Proof.: Some parts of the proof are slightly technical. To simplify the understanding, we make use of auxiliary lemmas which are stated in Appendix E. The overall spirit follows the sketch of proof given after Theorem 2 and relies on showing that Eq. (19) can only be satisfied if the iterates visit the saddles from Algorithm 1. We let \(\hat{s}_{\tau}:=-\int_{0}^{\tau}\dot{\hat{t}}_{s}\nabla L(\hat{\beta}_{s})\mathrm{d}s\), which is continuous and satisfies \(\hat{s}_{\tau}\in\partial\|\hat{\beta}_{\tau}\|_{1}\) from Eq. (19). Let \(S=\{\beta\in\mathbb{R}^{d},|\beta|\odot\nabla L(\beta)=\mathbf{0}\}\) denote the set of critical points and let \((\beta_{k},t_{k},s_{k})\) be the successive values of \((\beta,t,s)\) which appear in the loops of Algorithm 1. **We do a proof by induction:** we start by assuming that the iterates are stuck at the saddle \(\beta_{k-1}\) at time \(\tau\geq\tau_{k-1}^{\prime}\), where \(\hat{t}_{\tau_{k-1}^{\prime}}=t_{k-1}\) and \(\hat{s}_{\tau_{k-1}^{\prime}}=s_{k-1}\) (induction hypothesis); we then show that they can only move at a time \(\tau_{k}\) and follow the normalised flow Eq. (20). We finally show that they must end up "stuck" at the new critical point \(\beta_{k}\), validating the induction hypothesis. _Proof of the jump time \(\tau_{k}\) such that \(\hat{t}_{\tau_{k}}=t_{k}\):_ we set ourselves at time \(\tau\geq\tau_{k-1}^{\prime}\), stuck at the saddle \(\beta_{k-1}\). Let \(\tau_{k}\coloneqq\sup\{\tau,\hat{t}_{\tau}\leq t_{k}\}\); we have that \(\tau_{k}<\infty\) from Lemma 3 in the appendix. Note that by continuity of \(\hat{t}_{\tau}\) it holds that \(\hat{t}_{\tau_{k}}=t_{k}\). Now notice that \(\hat{s}_{\tau}=\hat{s}_{\tau_{k-1}^{\prime}}-(\hat{t}_{\tau}-\hat{t}_{\tau_{k-1}^{\prime}})\nabla L(\beta_{k-1})=s_{k-1}-(\hat{t}_{\tau}-t_{k-1})\nabla L(\beta_{k-1})\).
We argue that for any \(\varepsilon>0\), we cannot have \(\hat{\beta}_{\tau}=\beta_{k-1}\) on \((\tau_{k},\tau_{k}+\varepsilon)\). Indeed, by the definition of \(\tau_{k}\) and from the algorithmic construction of the time \(t_{k}\), this would lead to \(|\hat{s}_{\tau}(i)|>1\) for some coordinate \(i\in[d]\), which contradicts Eq. (19). Therefore the iterates must move at the time \(\tau_{k}\). _Heterocline leaving \(\beta_{k-1}\) for \(\tau\in[\tau_{k},\tau_{k}^{\prime}]\):_ contrary to before, our time rescaling enables us to capture what happens during the "jump". We have shown that for any \(\varepsilon\), there exists \(\tau_{\varepsilon}\in(\tau_{k},\tau_{k}+\varepsilon)\) such that \(\hat{\beta}_{\tau_{\varepsilon}}\neq\beta_{k-1}\). From Lemma 4, since the saddles are distinct along the flow, we must have \(\hat{\beta}_{\tau_{\varepsilon}}\notin S\) for \(\varepsilon\) small enough. The iterates therefore follow a heterocline flow leaving \(\beta_{k-1}\) with a speed of \(1\), given by Eq. (20). We now define \(\tau_{k}^{\prime}\coloneqq\inf\{\tau>\tau_{k},\exists\varepsilon_{0}>0,\forall\varepsilon\in[0,\varepsilon_{0}],\ \hat{\beta}_{\tau+\varepsilon}\in S\}\), which corresponds to the time at which the iterates reach a new critical point and stay there for at least a small time \(\varepsilon_{0}\). We have just shown that \(\tau_{k}^{\prime}>\tau_{k}\). Now, from Proposition 3, the path length of \(\hat{\beta}\) is finite, and from Lemma 4 the flow visits a finite number of distinct saddles at a speed of \(1\). These two arguments put together, we get that \(\tau_{k}^{\prime}<+\infty\) and also \(\hat{\beta}_{\tau_{k}^{\prime}+\varepsilon}=\hat{\beta}_{\tau_{k}^{\prime}}\), \(\forall\varepsilon\in[0,\varepsilon_{0}]\). On another note, since \(\dot{\hat{t}}_{\tau}=0\) for \(\tau\in[\tau_{k},\tau_{k}^{\prime}]\), we have \(\hat{t}_{\tau_{k}^{\prime}}=\hat{t}_{\tau_{k}}(=t_{k})\) as well as \(\hat{s}_{\tau_{k}}=\hat{s}_{\tau_{k}^{\prime}}(=s_{k})\). _Proof of the landing point \(\beta_{k}\):_ we now want to find which saddle \(\hat{\beta}_{\tau_{k}^{\prime}}\in S\) the iterates have moved to. To that end, we consider the following sets, which also appear in Algorithm 1: \[I_{\pm,k}\coloneqq\{i\in\{1,\dots,d\},\ \text{s.t.}\ \hat{s}_{\tau_{k}^{\prime}}(i)=\pm 1\}\quad\text{ and }\quad I_{k}=I_{+,k}\cup I_{-,k}. \tag{21}\] The set \(I_{k}\) corresponds to the coordinates of \(\hat{\beta}_{\tau_{k}^{\prime}}\) which "are allowed" (but not obliged) to be activated (_i.e._ non-zero). For \(\tau\in[\tau_{k}^{\prime},\tau_{k}^{\prime}+\varepsilon_{0}]\) we have that \(\hat{s}_{\tau}=\hat{s}_{\tau_{k}^{\prime}}-(\hat{t}_{\tau}-t_{k})\nabla L(\hat{\beta}_{\tau_{k}^{\prime}})\). By continuity of \(\hat{s}\) and the fact that \(\hat{s}_{\tau}\in\partial\|\hat{\beta}_{\tau_{k}^{\prime}}\|_{1}\), this equality translates into: (i) if \(i\notin I_{k}\), then \(\hat{\beta}_{\tau_{k}^{\prime}}(i)=0\); (ii) if \(i\in I_{+,k}\), then \([\nabla L(\hat{\beta}_{\tau_{k}^{\prime}})]_{i}\geq 0\) and \(\hat{\beta}_{\tau_{k}^{\prime}}(i)\geq 0\); (iii) if \(i\in I_{-,k}\), then \([\nabla L(\hat{\beta}_{\tau_{k}^{\prime}})]_{i}\leq 0\) and \(\hat{\beta}_{\tau_{k}^{\prime}}(i)\leq 0\); and finally (iv) for \(i\in I_{k}\), if \(\hat{\beta}_{\tau_{k}^{\prime}}(i)\neq 0\), then \([\nabla L(\hat{\beta}_{\tau_{k}^{\prime}})]_{i}=0\).
One can then notice that these conditions exactly correspond to the optimality conditions of the following constrained minimisation problem: \[\operatorname*{arg\,min}_{\begin{subarray}{c}\beta_{i}\geq 0,\,i\in I_{+,k},\\ \beta_{i}\leq 0,\,i\in I_{-,k},\\ \beta_{i}=0,\,i\notin I_{k}\end{subarray}}\ L(\beta). \tag{22}\] We showed in Proposition 2 that the solution to this problem is unique and equal to \(\beta_{k}\) from Algorithm 1. Therefore \(\hat{\beta}_{\tau}=\beta_{k}\) for \(\tau\in[\tau_{k}^{\prime},\tau_{k}^{\prime}+\varepsilon_{0}]\). It finally remains to show that \(\hat{\beta}_{\tau}=\beta_{k}\) while \(\tau\leq\tau_{k+1}\), where \(\tau_{k+1}\coloneqq\sup\{\tau,\hat{t}_{\tau}\leq t_{k+1}\}\). For this, let \(\tau\in[\tau_{k}^{\prime},\tau_{k+1}]\) and notice that for \(i\notin I_{k}\) we necessarily have \(\hat{\beta}_{\tau}(i)=\beta_{k}(i)=0\), as otherwise we would break the continuity of \(\hat{s}_{\tau}\). Similarly, for \(i\in I_{+,k}\) we necessarily have \(\hat{\beta}_{\tau}(i)\geq 0\), and for \(i\in I_{-,k}\), \(\hat{\beta}_{\tau}(i)\leq 0\), for the same continuity reasons. Now assume that \(\hat{\beta}_{\tau}(I_{k})\neq\beta_{k}(I_{k})\). Then, from Lemma 4 and the continuity of the flow, there exists \(\tau^{\prime}\in(\tau_{k}^{\prime},\tau)\) such that \(\hat{\beta}_{\tau^{\prime}}\notin S\), and there must exist a heterocline flow Eq. (20) starting from \(\beta_{k}\) which passes through \(\hat{\beta}_{\tau^{\prime}}\). This is absurd, since along this flow the loss strictly decreases, which contradicts the definition of \(\beta_{k}\) as the minimiser of problem Eq. (22). Theorem 3 enables us to prove Theorem 2 without difficulty. Indeed, we can show that any extracted limit \(\hat{\beta}\) maps back to the unique discontinuous process \(\tilde{\beta}^{\circ}\). We refer to Appendix D.3 for the full proof of Theorem 2. **Graph convergence.** A nice and pictorial consequence of Theorem 3 is that the graph of the iterates \((\tilde{\beta}_{t}^{\alpha})_{t}\) converges towards that of \((\hat{\beta}_{\tau})_{\tau}\). **Corollary 1**.: _For all \(T>t_{p}\), the graph of the iterates \((\tilde{\beta}_{t}^{\alpha})_{t\leq T}\) converges to that of \((\hat{\beta}_{\tau})_{\tau}\):_ \[\mathrm{dist}(\{\tilde{\beta}_{t}^{\alpha}\}_{t\leq T},\{\hat{\beta}_{\tau}\}_{\tau\geq 0})\underset{\alpha\to 0}{\longrightarrow}0,\] _where \(\mathrm{dist}(\cdot,\cdot)\) corresponds to the Hausdorff distance between two sets._ Unlike Theorem 2, the convergence result presented in Corollary 1 makes it possible to track the path followed between the saddles. ## Conclusion Our study analyses the behaviour of gradient flow with vanishing initialisation over diagonal linear networks: we prove that the flow jumps from one saddle point of the loss to another. Our analysis thoroughly characterises each visited saddle point as well as the jump times through an algorithm which is reminiscent of the Homotopy method used in the Lasso framework. There are several avenues for further exploration. The most compelling one is the extension of these techniques to broader contexts for which the implicit bias of gradient flow has not yet been fully understood. ## Acknowledgments S.P. would like to thank Quentin Rebjock, Loucas Pillaud-Vivien and Etienne Boursier for the helpful and enlightening discussions. The authors also thank Jerome Bolte for the discussions concerning subdifferential equations, Aris Daniilidis for the reference of Kurdyka [1998], as well as Aditya Varre for proofreading the paper.
2308.11650
Standard Model Muon Magnetic Dipole Moment
The most recent high-precision determination of the hadronic leading order contribution to the muon magnetic dipole moment within the Standard Model of particle physics has revealed a five standard deviation discrepancy with the previous determination with the highest precision. A systematic effect of the luminous volume created during the measurements leading to the determinations seems to be the source of the discrepancy. Correcting for the luminous volume effect allows a consistent determination of the Standard Model muon magnetic dipole moment with precision that is comparable to the latest world average from muon spin precession experiments.
Noah Bray-Ali
2023-08-19T06:54:57Z
http://arxiv.org/abs/2308.11650v3
# Standard Model Muon Magnetic Dipole Moment ###### Abstract The most recent high-precision determination of the hadronic leading order contribution to the muon magnetic dipole moment within the Standard Model of particle physics has revealed a five standard deviation discrepancy with the previous determination with the highest precision. A systematic effect of the luminous volume created during the measurements leading to the determinations seems to be the source of the discrepancy. Correcting for the luminous volume effect allows a consistent determination of the Standard Model muon magnetic dipole moment with precision that is comparable to the latest world average from muon spin precession experiments. _Introduction_--Within the Standard Model of particle physics, the shift of the magnetic dipole moment of the muon \(a_{\mu}=g_{\mu}/2-1\)[1] from the relativistic quantum mechanical value \(g_{\mu}^{\rm RQM}/2=1\)[2] is dominated by the leading order quantum electrodynamic contribution \(a_{\mu}^{\rm QED,LO}=\alpha/(2\pi)\approx 1.161\times 10^{-3}\)[3; 4], where \(\alpha\) is the electromagnetic "fine-structure" coupling constant [5]. Yet the dominant source of the uncertainty in the current consensus value \(a_{\mu}({\rm SM,WP20})=(116~{}591~{}810\pm 43)\times 10^{-11}\)[6] is neither higher-order quantum electrodynamic contributions [7; 8] nor the contributions from the weak nuclear force [9; 10]. Instead, the leading order contribution from hadronic vacuum polarization \(a_{\mu}^{\rm HLO}\approx 700\times 10^{-10}\)[11; 12; 13; 14; 15; 16] is the main source of uncertainty, followed by uncertainties in the hadronic light-by-light contribution [17; 18; 19; 20; 21; 22; 23; 24; 25] and higher-order contributions from hadronic vacuum polarization [26]. In this Article, we resolve the long-standing tension between the determinations by the KLOE [27; 28; 29; 30] and BABAR [31; 32] experiments of the resonant two-pion part \(a_{\mu}^{\pi\pi}\approx 380\times 10^{-10}\)[33] of the leading order hadronic vacuum polarization contribution. The measurements of the charged pion \(\pi^{+}\pi^{-}(\gamma)\) pair production cross-section from electron-positron annihilation at the neutral \(\rho^{0}\) vector meson resonance that determine \(a_{\mu}^{\pi\pi}\)[34] are shifted below the Standard Model value by an amount that scales with the luminous volume created by the colliding electron and positron bunches within the detector (see Table 1). Correcting for the luminous volume effect resolves the long-standing KLOE-BABAR tension, which is the main source of uncertainty in the current consensus value for the muon magnetic dipole moment within the Standard Model of particle physics [6].
_Luminous volume effect_--The luminous volume \(V_{\rho}=8\pi\sigma_{x}\sigma_{y}\sigma_{z}\) created by the colliding electron and positron bunches at the interaction region within the detector combines with the time \(T_{\rho}=2\sigma_{z}/c\) it takes the bunches to cross and the energy \(E_{\rho}\) of the \(\rho^{0}\) vector meson in the lab frame to give the shift [35] \[\Delta a_{\mu}^{\pi\pi} = (19.6\pm 1.2)\times 10^{-11}\left(\frac{E_{\rho}}{1000~{}{\rm MeV}}\right)\left(\frac{\sigma_{x}}{100~{}\mu{\rm m}}\right)^{1/2} \tag{1}\] \[\times \left(\frac{\sigma_{y}}{10~{}\mu{\rm m}}\right)^{1/2}\left(\frac{\sigma_{z}}{1~{}{\rm cm}}\right),\] where \(\sigma_{x}\) is the bunch width, \(\sigma_{y}\) is the bunch height, and \(\sigma_{z}\) is the bunch length (see Table 2 for \(E_{\rho}\) and Table 3 for \(\sigma_{x},\sigma_{y}\), and \(\sigma_{z}\)). _Resolving resonant \(\rho^{0}\) vector meson contribution_--The differences in beam parameters for KLOE and CMD-3 give them a relative shift [36] \[\Delta a_{\mu}^{\pi\pi}({\rm KLOE}) - \Delta a_{\mu}^{\pi\pi}({\rm CMD3}) \tag{2}\] \[= (167\pm 10)\times 10^{-11}.\] The relative shift roughly resolves the 5.1 \(\sigma\) discrepancy between KLOE and CMD-3 [33]: \[a_{\mu}^{\pi\pi}({\rm CMD3}) - a_{\mu}^{\pi\pi}({\rm KLOE}) \tag{3}\] \[= ((3793\pm 30)-(3606\pm 21))\times 10^{-11}\] \[= (187\pm 37)\times 10^{-11}.\] Similarly, the differences in beam parameters at KLOE and BABAR give them a relative shift (see Table 1): \[\Delta a_{\mu}^{\pi\pi}({\rm KLOE}) - \Delta a_{\mu}^{\pi\pi}({\rm BABAR}) \tag{4}\] \[= ((167\pm 10)-(55\pm 3))\times 10^{-11}\] \[= (112\pm 7)\times 10^{-11}.\] This relative shift resolves the long-standing 2.8 \(\sigma\) tension between KLOE and BABAR [33]: \[a_{\mu}^{\pi\pi}({\rm BABAR})-a_{\mu}^{\pi\pi}({\rm KLOE})=(95\pm 34)\times 10^{-11}. \tag{5}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline Experiment & \(a_{\mu}^{\pi\pi}(10^{-11})\) & \(\Delta a_{\mu}^{\pi\pi}(10^{-11})\) & \(a_{\mu}^{\pi\pi}({\rm SM})(10^{-11})\) \\ \hline CMD-3 & 3793 (30) & 0 & 3793 (30)(0) \\ \hline BABAR & 3701 (27) & 55 (3) & 3756 (27)(3) \\ \hline CMD-2 & 3665 (34) & 71 (4) & 3736 (34)(4) \\ \hline BES-III & 3618 (36) & 58 (4) & 3676 (36)(4) \\ \hline KLOE & 3606 (21) & 167 (10) & 3773 (21)(10) \\ \hline \end{tabular} \end{table} Table 1: Leading order hadronic contribution \(a_{\mu}^{\pi\pi}\) to the muon magnetic dipole moment determined by pion pair production \(\pi^{+}\pi^{-}(\gamma)\) from electron-positron annihilation with center-of-mass energy in the range \(0.6\leq\sqrt{s}\leq 0.88\) GeV at various experiments compiled by Ref. [33]. The luminous volume created by the colliding electron and positron bunches produces the shift \(\Delta a_{\mu}^{\pi\pi}\) given by Eq. (1). The Standard Model value \(a_{\mu}^{\pi\pi}({\rm SM})=a_{\mu}^{\pi\pi}+\Delta a_{\mu}^{\pi\pi}\) corrects for the luminous volume effect. _Resonant \(\rho^{0}\) vector meson contribution_--Combining the volume-corrected determinations of the leading order hadronic vacuum polarization contribution to muon spin precession from charged pion pair \(\pi^{+}\pi^{-}(\gamma)\) production at the neutral \(\rho^{0}\) vector meson resonance gives the Standard Model (SM) result [37] \[a_{\mu}^{\pi\pi}({\rm SM}) = 3756\ (13)_{\rho}(5)_{A}\times 10^{-11} \tag{6}\] \[= (3756\pm 14)\times 10^{-11},\] where the experimental uncertainty (\(\rho\)) dominates over the uncertainty (\(A\)) in the parameters entering the volume correction.
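As a numerical cross-check of the shifts listed in Table 1, the following Python sketch (our own; not code from the paper) evaluates Eq. (1) with the collider energies of Table 2 and the beam parameters of Table 3, averaging multiple parameter sets for the same collider with equal weights as described in the caption of Table 3 (CMD-3 serves as the reference with zero shift):

```python
import numpy as np

def shift(E_rho_MeV, beam_sets):
    """Central value of Eq. (1); beam_sets holds (sigma_x um, sigma_y um, sigma_z cm)."""
    vals = [19.6e-11 * (E_rho_MeV / 1000.0) * np.sqrt(sx / 100.0)
            * np.sqrt(sy / 10.0) * sz
            for sx, sy, sz in beam_sets]
    return np.mean(vals)          # equal-weight average over parameter sets

beams = {                         # (E_rho from Table 2, sigmas from Table 3)
    "KLOE (DAPHNE)":    (805,  [(2000, 20, 3.0), (260, 4.8, 2.0)]),
    "BABAR (PEP-II)":   (3131, [(157, 4.7, 1.05)]),
    "CMD-2 (VEPP-2M)":  (775,  [(400, 10, 3.0), (35, 35, 3.0)]),
    "BES-III (BEPC-2)": (1966, [(347, 4.5, 1.2)]),
}
for name, (E_rho, sets) in beams.items():
    print(f"{name:18s}  Delta a = {shift(E_rho, sets) / 1e-11:6.1f} x 10^-11")
# Prints roughly 167, 55, 71 and 58 (x 10^-11), matching Table 1.
```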
_Full \(\rho^{0}\) vector meson contribution_--The other leading order hadronic vacuum polarization contributions determined from pion production channels with no net charge in the final state combine with this resonant contribution to give the Standard Model leading order hadronic vacuum polarization contribution [38]: \[a_{\mu}^{\rm HLO}(\rho^{0}) = 3756\ (13)_{\rho}(5)_{A}\times 10^{-11} \tag{7}\] \[+ 3225\ (29)_{\rm off\rho}\times 10^{-11}\] \[= 6981\ (29)_{\rm off\rho}(13)_{\rho}(5)_{A}\times 10^{-11}\] \[= (6981\pm 32)\times 10^{-11},\] where the dominant uncertainty (off \(\rho\)) is from experiments away from the \(\rho^{0}\) resonance. _Standard Model hadronic leading order contribution_--The neutral determination \(a_{\mu}^{\rm HLO}(\rho^{0})\) from Eq. (7) combines with the charged result \(a_{\mu}^{\rm HLO}(\rho^{\pm})=(7030\pm 44)\times 10^{-11}\)[39; 40; 41] centered on the \(\rho^{\pm}\) resonance and with the lattice quantum chromodynamic estimate \(a_{\mu}^{\rm HLO}({\rm QCD})=(7075\pm 55)\times 10^{-11}\)[42] to give the Standard Model value of the hadronic leading order contribution to muon spin precession [43]: \[a_{\mu}^{\rm HLO}({\rm SM}) = 7033\ (16)_{\rm off\rho}(12)_{\rho^{\pm}}(10)_{\rm QCD}(7)_{\rho^{0}}(3)_{A}\times 10^{-11} \tag{8}\] \[= (7033\pm 23)\times 10^{-11},\] where the uncertainty from the charged measurement (\(\rho^{\pm}\)) slightly exceeds that of the lattice (QCD). _Tension between Standard Model and Experiment_--Using the hadronic leading order contribution determined from the full range of experimental and theoretical inputs, including lattice quantum chromodynamic simulations and measurements at both the neutral \(\rho^{0}\) and charged \(\rho^{\pm}\) vector meson resonances, one can then update the tension between the muon spin precession experiments (Exp) and the Standard Model (SM) value for the muon magnetic dipole moment [44]: \[a_{\mu}({\rm Exp}) - a_{\mu}({\rm SM}) \tag{9}\] \[= 146\ (22)_{\mu}(23)_{\rm HLO}(18)_{\rm HLbL}\times 10^{-11}\] \[= 146\ (22)_{\mu}(29)_{\rm SM}\times 10^{-11}\] \[= (146\pm 36)\times 10^{-11},\] where the recently reduced uncertainty from the muon measurement (\(\mu\)) is comparable to the Standard Model uncertainty (SM). The tension has 4.1\(\sigma\) significance, which falls just below the conventional threshold for the discovery of a new phenomenon. _Resolving tension between Standard Model and experiment_--A similar volume effect shifts the muon spin precession frequency to create the tension with the Standard Model muon magnetic dipole moment [45]: \[\Delta a_{\mu} = (165\pm 10)\times 10^{-11}\left(\frac{N_{\mu}}{1.93\times 10^{3}}\right)^{1/2} \tag{10}\] \[\times \left(\frac{V_{\mu}}{3.12\times 10^{4}\ {\rm cm}^{3}}\right)^{1/2}\left(\frac{T_{\mu}}{64.4\ \mu{\rm s}}\right)^{1/2}\] \[= (165\pm 10)\times 10^{-11},\] where \(N_{\mu}\) is the average number of muons in the beam during the measurement, \(V_{\mu}\) is the volume of the muon beam within the storage ring, and \(T_{\mu}\) is the life-time of the muon in the lab frame [46]. In the last line we average the shift for the Brookhaven and Fermilab experiments' beam parameters, weighted by the precision of their measurements [47]. Comparing the tension from Eq. (9) with the shift shown in Eq. (10), we find that the shift resolves the 4.1 \(\sigma\) tension between the muon spin precession measurements and the Standard Model muon magnetic dipole moment. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Experiment & Collider & \(E\) (MeV) & \(E_{\rho}\) (MeV) \\ \hline KLOE & DA\(\Phi\)NE & 510 & 805 \\ \hline BES-III & BEPC-2 & 1885 & 1966 \\ \hline CMD-2 & VEPP-2M & 388 & 775 \\ \hline BABAR & PEP-II & 5282 & 3131 \\ \hline \end{tabular} \end{table} Table 2: Experiments at electron-positron colliders to detect \(\pi^{+}\pi^{-}(\gamma)\) production from the neutral \(\rho^{0}\) vector meson resonance at \(m_{\rho}=775\) MeV [52]. The beam has energy \(E\) and the \(\rho^{0}\) meson has energy \(E_{\rho}=\gamma\sqrt{p_{\rho}^{2}+m_{\rho}^{2}}-\gamma\beta p_{\rho}\) in the lab frame [53]. Here, \(p_{\rho}=E-m_{\rho}^{2}/(4E)\) is the momentum of the \(\rho^{0}\) meson in the center-of-mass frame. The time-dilation factor is \(\gamma\approx 1.00\) and the speed is \(\beta c\approx 0.00\ c\) for the center-of-mass frame of the electron-positron collision with respect to the lab frame for all colliders except BABAR. For BABAR the effective beam energy \(E=\sqrt{E_{+}E_{-}}=5282\) MeV is shown in the table, where \(E_{+}=3100\) MeV was the positron beam energy and \(E_{-}=9000\) MeV was the electron beam energy, giving a time-dilation factor \(\gamma=1.15\) and speed \(\beta c=0.49\ c\) [51]. The source for all beam parameters was Ref. [49], except VEPP-2M, which came from Ref. [50]. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Collider & \(\sigma_{x}\) (\(\mu\)m) & \(\sigma_{y}\) (\(\mu\)m) & \(\sigma_{z}\) (cm) \\ \hline DA\(\Phi\)NE [48] & 2000 & 20 & 3.0 \\ \hline DA\(\Phi\)NE [49] & 260 & 4.8 & 2.0 \\ \hline BEPC-2 [49] & 347 & 4.5 & 1.2 \\ \hline VEPP-2M [50] & 400 & 10 & 3.0 \\ \hline VEPP-2M [50] & 35 & 35 & 3.0 \\ \hline PEP-II [49] & 157 & 4.7 & 1.05 \\ \hline \end{tabular} \end{table} Table 3: Beam parameters of high-energy electron-positron colliders. Bunch width \(\sigma_{x}\), height \(\sigma_{y}\), and length \(\sigma_{z}\) refer to the size of the bunch at the interaction point within the detectors. For colliders with more than one set of beam parameters, the shift \(\Delta a_{\mu}^{\pi\pi}\) was computed separately for each set of beam parameters using Eq. (1) and the equal-weight average of the shifts was then taken. _Conclusion_--The Standard Model value for the muon magnetic dipole moment was determined using the full range of experimental and theoretical input for the first time. The result has precision comparable to the recently improved experimental world average from muon spin precession experiments. The tension between the Standard Model and experiment has 4.1 \(\sigma\) significance, which falls just below the threshold for the discovery of a new phenomenon. _Acknowledgments_--The author acknowledges D. Mazura for critical contributions and constant support, without which this work would not have been possible.
2305.18688
Chern-Simons field theory on the general affine group, $3d$-gravity and the extension of Cartan connections
The purpose of this article is to study the correspondence between $3d$-gravity and the Chern-Simons field theory from the perspective of geometric mechanics, specifically in the case where the structure group is the general affine group. To accomplish this, the paper discusses a variational problem of the Chern-Simons type on a principal fiber bundle with this group as its structure group. The connection to the usual Chern-Simons theory is established by utilizing a generalization, in the context of Cartan connections, of the notion of extension and reduction of connections.
Santiago Capriotti
2023-05-30T02:12:17Z
http://arxiv.org/abs/2305.18688v2
# Chern-Simons field theory on the general affine group, 3d-gravity and the extension of Cartan connections ###### Abstract. The purpose of this article is to study the correspondence between 3d-gravity and the Chern-Simons field theory from the perspective of geometric mechanics, specifically in the case where the structure group is the general affine group. To accomplish this, the paper discusses a variational problem of the Chern-Simons type on a principal fiber bundle with this group as its structure group. The connection to the usual Chern-Simons theory is established by utilizing a generalization, in the context of Cartan connections, of the notion of extension and reduction of connections. ## 1. Integrable field theories and gravity Chern-Simons field theory is a well-known type of gauge field theory [10, 11, 12] whose quantization yields a topological field theory [13, 14]. In its most general setting [12], the fields in Chern-Simons gauge theory are connections on any K-principal bundle with fixed base space. In this vein, the Chern-Simons action is considered as a function on the (infinite dimensional) manifold of the K-connections on a fixed base space \(M\), and it is evident that the construction of this manifold requires a precise knowledge of every K-principal bundle on \(M\). Although this operation can be performed successfully (getting more complicated as the dimension of \(M\) increases), this scheme is out of the range of the geometrical formulation for field theory [1, 1, 1, 2, 3, 4], because in this approach the fields should be sections of a definite bundle (preferably of finite dimension). A successful formulation of Chern-Simons field theory fitting in this geometrical scheme is described in [12], using a variational problem posed by local Lagrangians; a multisymplectic formulation for Chern-Simons field theory can be found in [1], where the structure of the constraints arising from the singular nature of the Lagrangian is studied. The formulation we will use in this article is inspired by [13] and uses Cartan connections as fields. On the other hand, it is an interesting result [1, 14] that when the gauge group is the Poincaré group \(ISO\left(2,1\right)\), Chern-Simons field theory in dimension \(3\) can be related to Palatini gravity on a spacetime of the same dimension. This correspondence is achieved by the splitting \[\mathfrak{iso}\left(2,1\right)=\mathfrak{so}\left(2,1\right)\oplus\mathbb{R}^{3},\] which decomposes the field \(A\) into two parts, one living in \(\mathbb{R}^{3}\) and the other in \(\mathfrak{so}\left(2,1\right)\); the idea is to recognize these two fields as a vierbein and an \(\mathfrak{so}\left(2,1\right)\)-connection respectively, which can be seen as the basic fields for the Palatini description of general relativity. In [13], this scheme is generalized to a pair of algebras \(\mathfrak{g}\subset\mathfrak{h}\) defining a Cartan connection on a \(G\)-principal bundle \(\pi:P\to M\). A remarkable fact is that, in some of these descriptions of Chern-Simons theory and its connection with gravity, there is usually a certain lack of definiteness with respect to the nature of the principal bundle to which the connections of the theory belong. While the corresponding base space and structure group are properly set, any precision about the geometrical characteristics of such a bundle is avoided. This does not keep people from working with Chern-Simons gauge theory: it is in this context that frameworks like the one discussed by Freed are fruitful.
Moreover, the informed reader could recall that when dealing with gravity a metric is available, and it can be used to select suitable subbundles of the frame bundle. Nevertheless, this answer should be considered as partial, and in fact it might put us in a paradoxical situation: the metric is part of the dynamical fields of gravity, so it would be necessary to solve the equations of motion of gravity before constructing the bundle where the fields live. Therefore, a tension seems to appear between the formalism describing field theory from a geometric viewpoint and the characteristics that a Chern-Simons gauge theory must have in order to represent gravity. In the end, the apparent paradox is solved by invoking the gauge symmetry of the Lagrangian, but it could be interesting to explore how to deal with this situation from a geometrical point of view.

These considerations set the aims of the article. Basically, we are looking for a formulation of the correspondence between Chern-Simons field theory and gravity (for spacetime dimension \(m=3\)) where the structure groups of the involved principal bundles are respectively the general affine group \(A\left(3\right)\) and the general linear group \(GL\left(3\right)\). In order to achieve this objective, we will generalize the scheme posed by Wise. The main tool used for the generalization is the (as far as I know, novel) formulation of a concept equivalent to the _extension of a connection_ for Cartan connections (see Section 4 below). The idea for the generalization is that any Cartan connection can be seen as a principal connection on a suitable bundle (i.e. Proposition 3); thus the extension of a Cartan connection is the Cartan connection induced by the extension of this associated principal connection. As we will see, a drawback of this construction is that, in general, the connections obtained will be of a more general nature than Cartan connections (they are generalized Cartan connections, as defined in [1]).

Let us describe in some detail how this scheme will be implemented. In the formulation of Wise, the basic geometrical datum is a \(K\)-principal bundle \(R_{\zeta}\), and its fields are described by a Cartan connection taking values in a Lie algebra \(\mathfrak{g}\) such that \(\mathfrak{k}\subset\mathfrak{g}\). Let us recall that the Lorentz group \(K=SO\left(m-1,1\right)\) becomes a subgroup of different Lie groups \(G\) in the Wise scheme, depending on the sign of the cosmological constant \(\Lambda\); namely, the total group \(G\) is in each case \[G=\begin{cases}SO\left(m,1\right)&\Lambda>0\\ K\ltimes\mathbb{R}^{m}&\Lambda=0\\ SO\left(m-1,2\right)&\Lambda<0.\end{cases}\] \(K\) enters as a subgroup in the case \(\Lambda<0\) through the immersion \[\iota_{-}\left(A\right)=\begin{bmatrix}A&0\\ 0&1\end{bmatrix}\] for every \(A\in K\), and through \[\iota_{+}\left(A\right)=\begin{bmatrix}1&0\\ 0&A\end{bmatrix}\] in the case \(\Lambda>0\). In the present article we will focus on the case \(\Lambda=0\); the other cases will be considered elsewhere. In order to see how to proceed with the generalization of this scheme to a principal bundle with structure group \(GL\left(m\right)\), let us consider the diagram of Lie groups formed by these inclusions, whose arrows are induced by the canonical immersions. We want to promote it to a commutative diagram involving principal bundles with structure groups borrowed from the nodes of this diagram.
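The two immersions can be checked directly in coordinates. The following sketch (conventions ours: we realize the groups as matrix groups preserving block-diagonal extensions of \(\eta\), and assume numpy/scipy are available) verifies that \(\iota_{-}\) lands in \(SO\left(m-1,2\right)\) and \(\iota_{+}\) in \(SO\left(m,1\right)\):

```python
import numpy as np
from scipy.linalg import expm

m = 3
eta = np.diag([-1.0] + [1.0] * (m - 1))   # K = SO(m-1,1) preserves eta
rng = np.random.default_rng(1)

S = rng.standard_normal((m, m)); S = S - S.T
A = expm(eta @ S)                          # a sample element of K
assert np.allclose(A @ eta @ A.T, eta)

def diag_block(B1, B2):
    n1, n2 = B1.shape[0], B2.shape[0]
    B = np.zeros((n1 + n2, n1 + n2))
    B[:n1, :n1], B[n1:, n1:] = B1, B2
    return B

# iota_-(A) = diag(A, 1) preserves diag(eta, -1), the SO(m-1,2) form
eta_minus = diag_block(eta, -np.eye(1))
i_minus = diag_block(A, np.eye(1))
assert np.allclose(i_minus @ eta_minus @ i_minus.T, eta_minus)

# iota_+(A) = diag(1, A) preserves diag(+1, eta), the SO(m,1) form
eta_plus = diag_block(np.eye(1), eta)
i_plus = diag_block(np.eye(1), A)
assert np.allclose(i_plus @ eta_plus @ i_plus.T, eta_plus)
print("iota_- lands in SO(m-1,2) and iota_+ lands in SO(m,1)")
```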
Now, what kind of \(K\)-principal bundle \(R_{\zeta}\) on a manifold \(M\) carries a Cartan connection with values in the Lie algebra \(\mathfrak{g}=\mathfrak{k}\oplus\mathbb{R}^{m}\)? A possible answer can be found using geometrical considerations. In fact, as indicated in [20, 1], whenever the map \[\operatorname{Ad}:K\to\operatorname{GL}\left(\left(\mathfrak{k}\oplus\mathbb{R}^{m}\right)/\mathfrak{k}\right)=\operatorname{GL}\left(m\right)\] is injective, \(R_{\zeta}\) becomes a \(K\)-structure, namely, a \(K\)-subbundle of the frame bundle \(LM\); in this case, the previous diagram has an analogue at the level of principal bundles. Here \(AM\) indicates the affine frame bundle of the spacetime \(M\), the maps \(\beta:AM\to LM\) and \(\gamma:LM\to AM\) are the canonical maps between these bundles (see Appendix C), and the symbol \(P\left[G\right]\), where \(\pi:P\to M\) is an \(H\)-principal bundle and \(G\supset H\) is a Lie group containing \(H\), indicates the extension of \(P\) obtained by enlarging its structure group to \(G\) (this construction is detailed in Section 3 below). Therefore, we will restrict ourselves to Cartan connections describing Palatini gravity on a \(K\)-structure \(R_{\zeta}\); in this setting we devise a method to extend them to Cartan connections on \(LM\), in order to reproduce the correspondence between Chern-Simons field theory and Palatini gravity in a case where the structure group is the general linear group \(GL\left(m\right)\).

Let us briefly describe the structure of the article. The geometrical tools used throughout the article are presented in Section 2. The description of Cartan connections as sections of a bundle is carried out in Section 3; this description is necessary due to the type of geometrical formulation adopted for the variational problems. The operations of extension and reduction for generalized Cartan connections are developed in Section 4. Although the author's knowledge of Cartan connections is far from exhaustive, it seems that these operations, even though they are a direct consequence of the identification between Cartan connections and principal connections on the extended bundle, have not been described previously in the literature. If this is indeed the case, this section represents an original contribution of the present article. The variational problems for Chern-Simons theory and gravity are described in Section 5. The main contribution of this section is a global formulation of Chern-Simons field theory in terms of jet bundles (Section 5.1); the price to be paid for this formulation is the appearance of an additional constraint (see Remark 4). The main result of the article is discussed in Section 6: a variational problem of Chern-Simons type is found, with structure group given by the general affine group (Section 6.1), such that the extremals of any Chern-Simons theory (described in the terms of Section 5.1) are in bijective correspondence with its extremals through the extension and reduction operations of Section 4.

_Notations._ We adopt the notational conventions of [13] when dealing with bundles and their associated jet spaces. It means that, given a bundle \(\pi:P\to M\), there exists a family of bundles and maps fitting in the associated jet diagram. Sections of \(\pi:P\to M\) will be indicated by the symbol \(\Gamma\pi\). The set of vectors tangent to \(P\) in the kernel of \(T\pi\) will be represented by the symbol \(V\pi\subset TP\).
In this regard, the set of vector fields which are vertical with respect to a bundle map \(\pi:P\to M\) will be indicated by \(\mathfrak{X}^{V\pi}(P)\). The space of differential \(p\)-forms, sections of \(\Lambda^{p}(T^{*}Q)\to Q\), will be denoted by \(\Omega^{p}(Q)\). We also write \(\Lambda^{\bullet}(Q)=\bigoplus_{j=1}^{\dim Q}\Lambda^{j}(T^{*}Q)\). If \(f:P\to Q\) is a smooth map and \(\alpha_{x}\) is a \(p\)-covector on \(Q\), we will sometimes use the notation \(\alpha_{f(x)}\circ T_{x}f\) to denote its pullback \(f^{*}\alpha_{x}\). If \(P_{1}\to Q\) and \(P_{2}\to Q\) are fiber bundles over the same base \(Q\), we will write \(P_{1}\times_{Q}P_{2}\) for their fibered product, or simply \(P_{1}\times P_{2}\) if there is no risk of confusion. Unless explicitly stated, the canonical projections onto its factors will be indicated by \[\operatorname{pr}_{i}:P_{1}\times P_{2}\to P_{i},\qquad i=1,2.\] Given a manifold \(N\) and a Lie group \(G\) acting on \(N\), the symbol \(\left[n\right]_{G}\) for \(n\in N\) will indicate the \(G\)-orbit in \(N\) containing \(n\); the canonical projection onto the quotient will be denoted by \[p_{G}^{N}:N\to N/G.\] Also, if \(\mathfrak{g}\) is the Lie algebra of the group \(G\), the symbol \(\xi_{N}\) will represent the infinitesimal generator of the \(G\)-action associated to \(\xi\in\mathfrak{g}\). Finally, the Einstein summation convention will be used everywhere.

We will be working with several different types of connections, so it will be necessary to have a notational convention for them. Given an \(H\)-principal bundle \(p:Q\to M\), the first jet bundle \(J^{1}p\) carries an \(\mathfrak{h}\)-valued \(1\)-form called the _canonical connection_; this form will be indicated by the symbol \[\theta_{J^{1}p}\in\Omega^{1}\left(J^{1}p,\mathfrak{h}\right).\] Because it will become a connection form, its associated curvature form will be denoted by \[\Theta_{J^{1}p}\in\Omega^{2}\left(J^{1}p,\mathfrak{h}\right).\] On the other hand, two different types of connections will be considered on a principal bundle \(p:Q\to M\). First, it could carry a connection form, which will be indicated by \(\omega_{Q}\); its associated curvature form will be \(\Omega_{Q}\). Moreover, if the structure group \(H\) is a subgroup of a bigger Lie group \(G\supset H\), then we can have on \(Q\) a \(\mathfrak{g}\)-valued \(1\)-form, which will be denoted by \(A_{Q}\), and its curvature form will be \(F_{Q}\).

## 2. Some geometrical tools

The following section collects geometrical facts regarding principal bundles and Cartan connections on them.

### Geometry of the jet space of a principal bundle

Throughout the article, we will make extensive use of the geometrical tools related to the jet space associated to a principal bundle, as well as its connection bundle, as they are discussed in [1, 1]. So, in order to proceed, let \(p:Q\to M\) be a principal bundle with structure group \(H\); then we can lift the right action of \(H\) on \(Q\) to an \(H\)-action on \(J^{1}p\), and so define the bundle \[\overline{p}:C\left(Q\right):=J^{1}p/H\to M.\] It can be proved [1] that \(J^{1}p\) is recovered from this bundle as a pullback, namely, that \[J^{1}p=p^{*}Q=Q\times_{M}C\left(Q\right). \tag{2.1}\]
We know that \(J^{1}p\) comes equipped with the _contact structure_, which can be described by means of a \(Vp\)-valued \(1\)-form \[\theta_{J^{1}p}\big{|}_{j_{x}^{1}s}:=T_{j_{x}^{1}s}p_{10}-T_{x}s\circ T_{j_{x}^{1}s}p_{1}; \tag{2.2}\] moreover, because \(p:Q\to M\) is a principal bundle, we have the bundle isomorphism on \(Q\) \[Vp\simeq Q\times\mathfrak{h}.\] It means that we can consider \(\theta_{J^{1}p}\) as an \(\mathfrak{h}\)-valued \(1\)-form; in fact, with respect to the \(H\)-principal bundle structure \[p_{H}^{J^{1}p}:J^{1}p\to C\left(Q\right),\] the \(1\)-form \(\theta_{J^{1}p}\) becomes a connection form, dubbed the _canonical connection form_. It has the following property.

**Proposition 1**.: _Let \(\Gamma_{Q}:Q\to J^{1}p\) be a connection on \(Q\). Then its connection form \(\omega_{Q}\in\Omega^{1}\left(Q,\mathfrak{h}\right)\) can be constructed from the canonical connection form through pullback along \(\Gamma_{Q}\),_ \[\omega_{Q}=\Gamma_{Q}^{*}\theta_{J^{1}p}.\]

For any manifold \(M\) of dimension \(m\), the bundle \(\tau:LM\to M\) is the _frame bundle of \(M\)_, defined through \[LM:=\bigcup_{x\in M}\left\{u:\mathbb{R}^{m}\to T_{x}M\text{ linear and with inverse}\right\}. \tag{2.3}\] It has a canonical free \(GL\left(m,\mathbb{R}\right)\)-action, given by the formula \[u\cdot A:=u\circ A,\qquad u\in LM,A\in GL\left(m,\mathbb{R}\right).\] Let \(\eta\in M_{m}\left(\mathbb{R}\right)\) be a nondegenerate symmetric \(m\times m\) matrix with real entries; for definiteness, we will fix \[\eta:=\begin{bmatrix}-1&0&\cdots&0\\ 0&1&&0\\ \vdots&&\ddots&\vdots\\ 0&\cdots&0&1\end{bmatrix},\] although the constructions we will consider in the present article should work with any signature. Then we have a Lie group \(K\subset GL\left(m,\mathbb{R}\right)\) defined by \[K:=\left\{A\in M_{m}\left(\mathbb{R}\right):A\eta A^{T}=\eta\right\}. \tag{2.4}\] Then we have an action of \(K\) on \(LM\); it yields a bundle \[\tau_{\Sigma}:\Sigma:=LM/K\to M.\]

**Lemma 1**.: _The bundle \(\Sigma\) is the bundle of metrics of \(\eta\)-signature on \(M\)._

Let us indicate with \(\mathfrak{k}\) the Lie algebra of \(K\); then we have that \[\mathfrak{k}:=\left\{a\in\mathfrak{gl}\left(m,\mathbb{R}\right):a\eta+\eta a^{T}=0\right\}.\] Accordingly, we can define the \(K\)-invariant subspace \[\mathfrak{p}:=\left\{a\in\mathfrak{gl}\left(m,\mathbb{R}\right):a\eta-\eta a^{T}=0\right\}, \tag{2.5}\] usually called the space of _transvections_ (see [20]); it follows that \[\mathfrak{gl}\left(m,\mathbb{R}\right)=\mathfrak{k}\oplus\mathfrak{p}.\]
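This decomposition can be verified numerically; in the following sketch (ours, for illustration only) the projectors use that \(\eta^{-1}=\eta\) for the signature matrix fixed above, and the \(\operatorname{Ad}\left(K\right)\)-invariance of \(\mathfrak{p}\) is also checked on a sample:

```python
import numpy as np
from scipy.linalg import expm

m = 3
eta = np.diag([-1.0] + [1.0] * (m - 1))
rng = np.random.default_rng(2)
a = rng.standard_normal((m, m))        # a generic element of gl(m, R)

a_k = 0.5 * (a - eta @ a.T @ eta)      # projection onto k
a_p = 0.5 * (a + eta @ a.T @ eta)      # projection onto p (the transvections)

assert np.allclose(a_k + a_p, a)                  # the sum recovers a
assert np.allclose(a_k @ eta + eta @ a_k.T, 0)    # a_k satisfies the k-condition
assert np.allclose(a_p @ eta - eta @ a_p.T, 0)    # a_p satisfies the p-condition

# Ad(K)-invariance of p: conjugating by g in K keeps the p-condition
S = rng.standard_normal((m, m)); S = S - S.T
g = expm(eta @ S)
b = g @ a_p @ np.linalg.inv(g)
assert np.allclose(b @ eta - eta @ b.T, 0)
print("gl(m,R) = k (+) p, with p an Ad(K)-invariant complement")
```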
We will also need the following result concerning the naturality of the canonical connections.

**Lemma 2**.: _Let \(p:Q\to M\) be an \(H\)-principal bundle and \(i_{\zeta}:R_{\zeta}\hookrightarrow Q\) an \(L\)-principal subbundle, with projection_ \[p_{\zeta}:R_{\zeta}\to M.\] _We have the following relation between the canonical connections on \(J^{1}p\) and \(J^{1}p_{\zeta}\), namely_ \[\left(j^{1}i_{\zeta}\right)^{*}\theta_{J^{1}p}=\theta_{J^{1}p_{\zeta}},\] _where \(\theta_{J^{1}p}\in\Omega^{1}\left(J^{1}p,\mathfrak{h}\right)\) and \(\theta_{J^{1}p_{\zeta}}\in\Omega^{1}\left(J^{1}p_{\zeta},\mathfrak{l}\right)\) are the corresponding canonical connection forms, and \(\mathfrak{l}\subset\mathfrak{h}\) is the Lie algebra of \(L\)._

Proof.: The lemma follows from the formula \[\theta_{J^{1}p}\big{|}_{j^{1}_{x}s}\circ T_{j^{1}_{x}\sigma}\left(j^{1}i_{\zeta}\right)=T_{\sigma\left(x\right)}i_{\zeta}\circ\theta_{J^{1}p_{\zeta}}\big{|}_{j^{1}_{x}\sigma},\] valid for any \(j^{1}_{x}s=j^{1}i_{\zeta}\left(j^{1}_{x}\sigma\right)\) with \(j^{1}_{x}\sigma\in J^{1}p_{\zeta}\).

Assuming that certain topological conditions on the manifold \(M\) hold (the existence of a metric of \((p,q)\)-signature is equivalent to the splitting of the tangent bundle \(TM\) into a direct sum of vector subbundles of rank \(p\) and \(q\) respectively), a family \[\{O_{\zeta}:\zeta:M\to\Sigma\text{ a metric}\}\] of subbundles of \(LM\) can be constructed; namely, let us define \[O_{\zeta}:=\{u\in LM:\zeta(u\left(w_{1}\right),u\left(w_{2}\right))=\eta\left(w_{1},w_{2}\right)\text{ for all }w_{1},w_{2}\in\mathbb{R}^{m}\}\,.\] Here \(\eta:\mathbb{R}^{m}\times\mathbb{R}^{m}\to\mathbb{R}\) indicates the bilinear form associated to the matrix \(\eta\).

### Introduction to Cartan geometry

Let us recall the definition of Cartan connections on a principal bundle [10, 11, 12]. Let \((G,H)\) be a pair of Lie groups such that \(H\subset G\) is a closed subgroup and \(G/H\) is connected. Recall that for every principal bundle \(\pi:P\to M\) with structure group \(H\) we have a map \[\kappa_{H}:VP\to\mathfrak{h}\] such that \[\kappa_{H}\left(\xi_{P}\left(u\right)\right)=\xi\] for every \(\xi\in\mathfrak{h}\) and every \(u\in P\).

**Definition 1** (Cartan geometry).: A _Cartan geometry_ modelled on the pair \((G,H)\) is an \(H\)-principal bundle \(\pi:P\to M\) together with a \(\mathfrak{g}\)-valued \(1\)-form \(A\) on \(P\) such that

1. \(A_{u}:T_{u}P\to\mathfrak{g}\) is a linear isomorphism for every \(u\in P\),
2. \(R_{h}^{*}A=Ad_{h^{-1}}\circ A\) for every \(h\in H\), and
3. \(A|_{VP}=\kappa_{H}\).

The form \(A\) is called the _(principal) Cartan connection_ for the given Cartan geometry. A Cartan geometry on a manifold provides us with an associated bundle description of its tangent bundle, as the following result indicates.

**Proposition 2**.: _Let \((\pi:P\to M,A)\) be a Cartan geometry modelled on the pair \((G,H)\). Then \(TM\) is isomorphic as a vector bundle to the associated bundle_ \[P\times_{H}\mathfrak{g}/\mathfrak{h},\] _where \(H\) acts on \(\mathfrak{g}/\mathfrak{h}\) through the quotient representation induced by the \(Ad\left(H\right)\)-action on \(\mathfrak{g}\)._

Proof.: Let \(u\in P\); because \(A_{u}:T_{u}P\to\mathfrak{g}\) satisfies \(A|_{VP}=\kappa_{H}\), it induces a morphism \[\phi_{u}:T_{x}M\to\mathfrak{g}/\mathfrak{h}:v_{x}\mapsto\pi_{\mathfrak{g}/\mathfrak{h}}\left(A_{u}\left(\widehat{v_{x}}\right)\right),\] where \(x=\pi\left(u\right)\), \(\widehat{v_{x}}\in T_{u}P\) is any element projecting onto \(v_{x}\) via \(T_{u}\pi\), and \[\pi_{\mathfrak{g}/\mathfrak{h}}:\mathfrak{g}\to\mathfrak{g}/\mathfrak{h}\] is the canonical projection onto the quotient.
It becomes an isomorphism because \(A_{u}\) is also an isomorphism and, since \(A\) is \(H\)-equivariant, we have that \[\phi_{u\cdot h}=Ad_{h^{-1}}\circ\phi_{u}\] for every \(u\in P\) and \(h\in H\). Therefore, we have the isomorphism of bundles \(\overline{q}:P\times_{H}\mathfrak{g}/\mathfrak{h}\to TM\) given by \[\overline{q}\left(\left[u,\left[\xi\right]_{\mathfrak{h}}\right]_{H}\right):=\left(\phi_{u}\right)^{-1}\left(\left[\xi\right]_{\mathfrak{h}}\right),\] which can be proved to be well-defined.

We should stress that a Cartan connection is not a principal connection on \(P\), because it takes values in the larger Lie algebra \(\mathfrak{g}\); nevertheless, there exists a relationship between these concepts [10]. Consider \(G\) as an \(H\)-space through left multiplication, and construct the associated bundle \[P\left[G\right]:=P\times_{H}G;\] it is a \(G\)-principal bundle, and we have a canonical inclusion \(\gamma:P\hookrightarrow P\left[G\right]\) given by \[\gamma\left(u\right):=\left[u,e\right]_{H}.\] It is known that every morphism of \(G\)-principal bundles on a fixed base \(M\) is an isomorphism [11]; this fact gives rise to the following property of the extension procedure described above.

**Lemma 3**.: _Let \(P\) be an \(H\)-principal bundle on \(M\) and suppose further that it is a subbundle of a \(G\)-principal bundle \(Q\) on \(M\). Then \(Q\simeq P\left[G\right]\)._

Proof.: Let \(i:P\hookrightarrow Q\) be the immersion of \(P\) into \(Q\); then we have the morphism \[\phi:P\left[G\right]\to Q:\left[u,g\right]_{H}\mapsto i\left(u\right)\cdot g.\] It is a morphism of \(G\)-principal bundles on \(M\); therefore, \(Q\) and \(P\left[G\right]\) are isomorphic, as desired.

The extension of a principal bundle can be used to relate Cartan connections with principal connections [1, 1, 2].

**Proposition 3**.: _The Cartan connection \(A:TP\to\mathfrak{g}\) induces on \(P\left[G\right]\) a unique principal connection form \(\widetilde{A}:TP\left[G\right]\to\mathfrak{g}\) such that_ \[\gamma^{*}\widetilde{A}=A.\] _Conversely, suppose that \(\dim P=\dim G\) and let \(\widetilde{A}\) be a principal connection on \(P\left[G\right]\) such that_ \[\ker\widetilde{A}\cap T\gamma\left(TP\right)=\left\{0\right\}. \tag{2.6}\] _Then \(A:=\gamma^{*}\widetilde{A}\) is a Cartan connection on \(P\)._

Proof.: Because the quotient map \(p_{H}^{P\times G}:P\times G\to P\left[G\right]\) is surjective, any element \(W\in T_{\left[u,g\right]_{H}}P\left[G\right]\) can be represented as \[W=T_{\left(u,g\right)}p_{H}^{P\times G}\left(X_{u},T_{e}L_{g}\zeta\right)\] for some \(X_{u}\in T_{u}P\) and \(\zeta\in\mathfrak{g}\). Then we define \[\widetilde{A}\Big{|}_{\left[u,g\right]_{H}}\left(W\right):=\zeta+\operatorname{Ad}_{g^{-1}}\left.A\right|_{u}\left(X_{u}\right).\] It can be proved that this is well-defined and defines a principal connection on \(P\left[G\right]\). Now, let us suppose that we have a principal connection \(\widetilde{A}\) on \(P\left[G\right]\) and define \[A:=\gamma^{*}\widetilde{A}:TP\to\mathfrak{g}.\] Let us verify that it is a Cartan connection on \(P\). Because of condition (2.6), the map \[A_{u}:T_{u}P\to\mathfrak{g}\] is a monomorphism; because \(\dim T_{u}P=\dim P=\dim G=\dim\mathfrak{g}\), we have that this map is an isomorphism.
Additionally, for any \(h\in H\), using the equivariance \(\gamma\circ R_{h}=R_{h}\circ\gamma\) of the inclusion, \[R_{h}^{*}A=R_{h}^{*}\gamma^{*}\widetilde{A}=\left(\gamma\circ R_{h}\right)^{*}\widetilde{A}=\left(R_{h}\circ\gamma\right)^{*}\widetilde{A}=\gamma^{*}R_{h}^{*}\widetilde{A}=\gamma^{*}\left(\operatorname{Ad}_{h^{-1}}\circ\widetilde{A}\right)=\operatorname{Ad}_{h^{-1}}\circ A.\] Finally, let us take \(\xi\in\mathfrak{h}\); because of the identity \[T_{u}\gamma\left(\xi_{P}\left(u\right)\right)=\xi_{P\left[G\right]}\left(\gamma\left(u\right)\right)\] we will have that \[A_{u}\left(\xi_{P}\left(u\right)\right)=\widetilde{A}_{\gamma\left(u\right)}\left(T_{u}\gamma\left(\xi_{P}\left(u\right)\right)\right)=\widetilde{A}_{\gamma\left(u\right)}\left(\xi_{P\left[G\right]}\left(\gamma\left(u\right)\right)\right)=\xi\] for any \(u\in P\).

## 3. Cartan connections and jet bundles

The basic idea for the geometrical interpretation of the correspondence between Chern-Simons field theory and gravity is due to Wise [23, 24], and uses a Chern-Simons Lagrangian evaluated on forms that are not principal connections but Cartan connections. As we have pointed out before, this approach is not convenient when one tries to understand Chern-Simons field theory from the viewpoint of geometric mechanics (i.e., the setting described in [11, 12, 13, 14, 15]). Therefore, we will devote the present section to translating the formalism of Cartan connections into the realm of jet bundles, in order to have at our disposal a language suitable for the description of Chern-Simons field theory from this viewpoint.

### Canonical (generalized) Cartan connection on a jet bundle

Thus, we have that a Cartan connection on an \(H\)-principal bundle \(P\) can be seen as a principal connection \(\widetilde{A}\) on the extended bundle \(P\left[G\right]\), provided that

* (a) \(P\) has the same dimension as \(G\), and
* (b) the horizontal spaces of this connection are complementary to the tangent spaces of \(P\) (viewed as subspaces of the tangent spaces of \(P\left[G\right]\)).

We can reformulate the second condition in terms of jet bundles; in fact, let us define the set \[U_{\gamma}:=\left\{j_{x}^{1}s\in J^{1}\pi_{\left[G\right]}:T_{s\left(x\right)}R_{g^{-1}}\left(T_{x}s\left(T_{x}M\right)\right)\cap T_{u}\gamma\left(T_{u}P\right)=\left\{0\right\}\text{, where }s\left(x\right)=\left[u,g\right]\right\}.\] Using decomposition (2.1), we can see that the sections having their images in this set correspond exactly to the connections on \(P\left[G\right]\) that verify condition (b) above. It gives us the following characterization of condition (2.6) in terms of the jet bundle of the extended principal bundle \(P\left[G\right]\).

**Proposition 4**.: _A connection \(\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{\left[G\right]}\) satisfies Equation (2.6) if and only if \(\widetilde{\Gamma}\left(\left[u,g\right]\right)\in U_{\gamma}\) for all \(\left[u,g\right]\in P\left[G\right]\)._

Now, we have the following result.

**Lemma 4**.: _The set \(U_{\gamma}\) is open in \(J^{1}\pi_{[G]}\)._

Proof.: Given a pair of subspaces \(V_{1},V_{2}\subset V\) of the (finite-dimensional) vector space \(V\), fix bases \(B_{1}\) and \(B_{2}\) for each of them and a basis \(B\) for \(V\). Then the condition \[V_{1}\cap V_{2}=\{0\}\] is equivalent to \(F_{k}\neq 0\), where \(F_{k}\) is the sum of the squares of the \(k\times k\) minors of the matrix formed by the components of the vectors in \(B_{1}\cup B_{2}\) with respect to the basis \(B\), and \(k=\dim V_{1}+\dim V_{2}\).
Consider now \[V=T_{s(x)}P\left[G\right],\qquad V_{1}=T_{s(x)}R_{g^{-1}}\left(T_{x}s\left(T_{x}M\right)\right),\qquad V_{2}=T_{u}\gamma\left(T_{u}P\right);\] in terms of local coordinates on \(J^{1}\pi_{[G]}\), \(F_{k}\) gives rise to a polynomial in the jet variables, and the result follows.

Therefore, instead of considering the variational problem for a Cartan connection over the entire bundle \(J^{1}\pi_{[G]}\), we can restrict ourselves to the open set \(U_{\gamma}\), thereby abandoning condition (b) above. With this consideration in mind, let us generalize the notion of Cartan connection; to this end we will use the notion of _generalized Cartan connection_, as defined in [1].

**Definition 2** (Generalized Cartan connection).: A _generalized (principal) \(\mathfrak{g}/\mathfrak{h}\)-Cartan connection_ on an \(H\)-principal bundle \(\pi:P\to M\) is a \(\mathfrak{g}\)-valued \(1\)-form \(A\) on \(P\) such that

1. \(R_{h}^{*}A=Ad_{h^{-1}}\circ A\) for every \(h\in H\), and
2. \(A|_{V\pi}=\kappa_{H}\).

Let us now consider the bundle \(\pi\circ\mathrm{pr}_{1}:P\times G\to M\); together with the action \[(u,g)\cdot(h,g^{\prime}):=(u\cdot h,gg^{\prime})\] it is an \(H\times G\)-principal bundle. Thus we have the following result.

**Lemma 5**.: _If \(\theta_{J^{1}(\pi\circ\mathrm{pr}_{1})}\in\Omega^{1}\left(J^{1}\left(\pi\circ\mathrm{pr}_{1}\right),\mathfrak{h}\times\mathfrak{g}\right)\) denotes the canonical connection on \(J^{1}\left(\pi\circ\mathrm{pr}_{1}\right)\) and \(\theta_{J^{1}\pi_{[G]}}\in\Omega^{1}\left(J^{1}\pi_{[G]},\mathfrak{g}\right)\) is the canonical connection on \(J^{1}\pi_{[G]}\), then_ \[\left(j^{1}p_{H}^{P\times G}\right)^{*}\theta_{J^{1}\pi_{[G]}}=\mathrm{pr}_{2}^{\mathfrak{h}\times\mathfrak{g}}\circ\theta_{J^{1}(\pi\circ\mathrm{pr}_{1})},\] _where_ \[\mathrm{pr}_{2}^{\mathfrak{h}\times\mathfrak{g}}:\mathfrak{h}\times\mathfrak{g}\to\mathfrak{g}\] _stands for the projection onto the second factor._

Proof.: Recall that the canonical connection on the jet bundle of a principal bundle \(\pi:P\to M\) is nothing but the \(V\pi\)-valued contact form, which can be seen as \(\mathfrak{g}\)-valued through the identification \[V\pi\simeq P\times\mathfrak{g}.\] It means that the canonical connections have the property \[\left(j^{1}p_{H}^{P\times G}\right)^{*}\theta_{J^{1}\pi_{[G]}}=Tp_{H}^{P\times G}\circ\theta_{J^{1}\left(\pi\circ\mathrm{pr}_{1}\right)},\] and the lemma follows from the identification between the vertical bundles and the Lie algebras.

### Generalized Cartan connections as equivariant sections of a jet bundle

Recall that a principal connection on \(P\) can be represented by an equivariant map \(\Gamma:P\to J^{1}\pi\). The next result gives an analogous representation for generalized Cartan connections.

**Proposition 5**.: _Any generalized Cartan connection \(A:TP\to\mathfrak{g}\) gives rise to an \(H\)-equivariant bundle map_ \[\Gamma_{A}:P\to J^{1}\pi_{[G]}\] _covering the immersion \(\gamma:P\hookrightarrow P\left[G\right]\)._
_Conversely, for any \(H\)-equivariant bundle map \(\Gamma:P\to J^{1}\pi_{[G]}\) covering the immersion \(\gamma:P\hookrightarrow P\left[G\right]\), there exists a generalized Cartan connection \(A_{\Gamma}:TP\to\mathfrak{g}\) such that_ \[A_{\Gamma}=\Gamma^{*}\theta_{J^{1}\pi_{[G]}}.\]

Proof.: The map \(\Gamma_{A}:P\to J^{1}\pi_{[G]}\) is given by \[\Gamma_{A}\left(u\right):v\in T_{x}M\mapsto T_{\left(u,e\right)}p_{H}^{P\times G}\left(\widehat{v}_{u},-A_{u}\left(\widehat{v}_{u}\right)\right),\] where \(\widehat{v}_{u}\in T_{u}P\) is any lifting of \(v\in T_{x}M\) to \(T_{u}P\), and \[p_{H}^{P\times G}:P\times G\to P\left[G\right]\] stands for the canonical projection onto the quotient; it is well-defined because of the identity \[T_{\left(u,g\right)}p_{H}^{P\times G}\left(-\zeta_{P}\left(u\right),T_{e}R_{g}\left(\zeta\right)\right)=0\] for every \(\left(u,g\right)\in P\times G\) and \(\zeta\in\mathfrak{h}\). Let us now prove that \[\Gamma_{A}^{*}\theta_{J^{1}\pi_{[G]}}=A,\] where \(\theta_{J^{1}\pi_{[G]}}\in\Omega^{1}\left(J^{1}\pi_{[G]},\mathfrak{g}\right)\) is the canonical connection form on \(J^{1}\pi_{[G]}\). In order to accomplish this, let us define the section \(\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{[G]}\) of \(\left(\pi_{[G]}\right)_{10}\) such that \[\widetilde{\Gamma}\left(\left[u,g\right]\right):v\in T_{x}M\mapsto T_{\gamma\left(u\right)}R_{g}\left(\Gamma_{A}\left(u\right)\left(v\right)\right)\in T_{\left[u,g\right]}P\left[G\right].\] Moreover, by fixing an auxiliary connection \(\omega_{0}\in\Omega^{1}\left(P,\mathfrak{h}\right)\) for \(P\), we can define a map \(\widehat{\Gamma}_{A}^{\omega_{0}}:P\times G\to J^{1}\left(\pi\circ\mathrm{pr}_{1}\right)\) such that \[\widehat{\Gamma}_{A}^{\omega_{0}}\left(u,g\right)\left(v\right):=\left(v_{u}^{H},-T_{e}R_{g^{-1}}\left(A_{u}\left(v_{u}^{H}\right)\right)\right)\] for every \(\left(u,g\right)\in P\times G\) and \(v\in T_{x}M\); here \(v_{u}^{H}\) indicates the horizontal lifting of \(v\) to \(T_{u}P\) by means of the connection \(\omega_{0}\). Namely, the Cartan connection \(\Gamma_{A}\) can be retrieved through the formula \[\Gamma_{A}=\widetilde{\Gamma}\circ\gamma=j^{1}p_{H}^{P\times G}\circ\widehat{\Gamma}_{A}^{\omega_{0}}\circ\mathrm{inc};\] therefore, because \[\left(\widehat{\Gamma}_{A}^{\omega_{0}}\circ\mathrm{inc}\right)^{*}\theta_{J^{1}\left(\pi\circ\mathrm{pr}_{1}\right)}=\omega_{0}+A,\] Lemma 5 implies that \[\Gamma_{A}^{*}\theta_{J^{1}\pi_{[G]}}=A,\] as required.

We can rephrase Proposition 3 using this correspondence.
**Corollary 1**.: _For any map \(\Gamma:P\to J^{1}\pi_{[G]}\) covering \(\gamma\) we have a section \(\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{[G]}\) of \(\left(\pi_{[G]}\right)_{10}\), which is defined through_ \[\widetilde{\Gamma}\left(\left[u,g\right]\right):v\in T_{x}M\mapsto T_{\gamma\left(u\right)}R_{g}\left(\Gamma\left(u\right)\left(v\right)\right)\in T_{\left[u,g\right]}P\left[G\right].\] _Conversely, any connection \(\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{[G]}\) gives rise to a map \(\Gamma:P\to J^{1}\pi_{[G]}\) covering \(\gamma\) by restriction to \(\gamma\left(P\right)\subset P\left[G\right]\)._

Recall also the usual extension of principal connections in this context.

**Proposition 6**.: _Let \(H\subset G\) be a pair of Lie groups. Consider a \(G\)-principal bundle \(\tau:Q\to M\) and let \(\pi:P\to M\) be an \(H\)-principal subbundle; let \(i:P\hookrightarrow Q\) be the canonical immersion. For every connection_ \[\Gamma:P\to J^{1}\pi\] _the extension \(\widehat{\Gamma}:Q\to J^{1}\tau\) of \(\Gamma\) is the map_ \[\widehat{\Gamma}\left(\widehat{u}\right):=j^{1}i\left(j^{1}R_{g}\left(\Gamma\left(u\right)\right)\right)\] _whenever \(\widehat{u}=i\left(u\right)\cdot g\)._

### The bundle of (generalized) Cartan connections

We know [1, 1] that connections on an \(H\)-principal bundle \(\pi:P\to M\) can be seen either as equivariant sections of the map \(\pi_{10}:J^{1}\pi\to P\) or as sections of the quotient bundle \[\overline{\pi}:J^{1}\pi/H\to M.\] Both descriptions are related by the following result.

**Proposition 7**.: _Let \(\pi:P\to M\) be an \(H\)-principal bundle and suppose that we have a bundle \(p:Q\to P\) together with an \(H\)-action on \(Q\) such that \(p\) is an \(H\)-equivariant map; consider the induced bundle \(\overline{p}:Q/H\to M\). Then any section \(\sigma:M\to Q/H\) of \(\overline{p}\) can be lifted to a section \(\widehat{\sigma}:P\to Q\) of \(p\) and, vice versa, any section of \(p\) induces a section of \(\overline{p}\) by passing to the quotient._

Proof.: Because the \(H\)-action on \(P\) is free, given \(\left[q\right]\in Q/H\) and \(u\in\pi^{-1}\left(\overline{p}\left(\left[q\right]\right)\right)\), there exists a unique element \(\widetilde{q}\in\left[q\right]\) such that \[p\left(\widetilde{q}\right)=u.\] Thus, for every \(x\in M\), define \[\widehat{\sigma}\left(u\right):=\widetilde{q}\] if and only if \(\sigma\left(x\right)=\left[q\right]\) and \(\widetilde{q}\in\left[q\right]\) is such that \[p\left(\widetilde{q}\right)=u.\qed\]

In the case of Cartan connections, the description in terms of equivariant maps is provided by the following corollary of Proposition 5.

**Corollary 2**.: _A generalized Cartan connection can be represented as an \(H\)-equivariant section of the pullback bundle \(\gamma^{*}\left(J^{1}\pi_{[G]}\right)\to P\)._

Thus, for \(\mathfrak{g}/\mathfrak{h}\)-Cartan connections, we have a commutative diagram of bundles in which \(\gamma^{*}\left(J^{1}\pi_{[G]}\right)/H\) carries the diagonal \(H\)-action. By proceeding in analogy with the case of principal connections, we obtain the following definition.
**Definition 3** (Bundle of (generalized) \(\mathfrak{g}/\mathfrak{h}\)-Cartan connections).: The _bundle of (generalized) \(\mathfrak{g}/\mathfrak{h}\)-Cartan connections_ is the bundle \[\gamma^{*}\left(J^{1}\pi_{[G]}\right)/H\to M.\]

_Remark 1_.: The Cartan connection form is recovered from a section \[\sigma:M\to\gamma^{*}\left(J^{1}\pi_{[G]}\right)/H\] through the following procedure: we construct the unique \(H\)-equivariant section \(\widehat{\sigma}:P\to\gamma^{*}\left(J^{1}\pi_{[G]}\right)\), which can be seen as a map \(\Gamma_{\sigma}:P\to J^{1}\pi_{[G]}\). The Cartan connection form is then the pullback form \[A_{\sigma}:=\Gamma_{\sigma}^{*}\theta_{J^{1}\pi_{[G]}}\in\Omega^{1}\left(P,\mathfrak{g}\right).\] Conversely, given a Cartan connection form \(A\), we use Proposition 5 in order to construct an \(H\)-equivariant map \(\Gamma_{A}:P\to J^{1}\pi_{[G]}\), and so a section \[\sigma_{A}:M\to\gamma^{*}\left(J^{1}\pi_{[G]}\right)/H,\] as required.

## 4. Extensions and reductions of generalized Cartan connections

We want to describe the connection between gravity and field theory using generalized connections on principal fiber bundles with the general linear group as structure group. To accomplish this task, it will be essential to have operations of extension and reduction of connections similar to those available for principal connections, but working on generalized Cartan connections. Therefore, in the present section we will use the correspondence between generalized Cartan connections and principal connections on an extended bundle, as described by Proposition 3, in order to generalize these constructions to the realm of (generalized) Cartan connections.

### How to extend a generalized Cartan connection

Let us consider the following problem: Given the diagram of Lie groups determined by the inclusions \(H\subset G_{1}\subset G\) and \(H\subset G_{2}\subset G\) (4.1), such that \(G_{1}/H\) and \(G/G_{2}\) have the same dimension as \(M\), and a generalized Cartan connection \(A:TP\to\mathfrak{g}_{1}\) on an \(H\)-principal bundle \(\pi:P\to M\), construct a generalized Cartan connection \(A_{\left[G_{2}\right]}:TP\left[G_{2}\right]\to\mathfrak{g}\) in a canonical way. In order to properly address this problem, let us establish the following auxiliary result.

**Lemma 6**.: _Let \(K\subset G_{1}\subset G\) be a chain of Lie groups, and consider a \(K\)-principal bundle \(\pi:P\to M\). Then_ \[P\left[G\right]\simeq\left(P\left[G_{1}\right]\right)\left[G\right].\]

Proof.: The identification is given by the map \[[u,g]\in P\left[G\right]\mapsto\Phi\left([u,g]\right):=\left[[u,e]\,,g\right].\] In fact, for every \(\left[[u,h]\,,g^{\prime}\right]\in\left(P\left[G_{1}\right]\right)\left[G\right]\), we have that \[\left[[u,h]\,,g^{\prime}\right]=\left[[u,e]\,,hg^{\prime}\right]=\Phi\left([u,hg^{\prime}]\right),\] showing that \(\Phi\) is surjective. On the other hand, if \([u_{1},g_{1}],[u_{2},g_{2}]\in P\left[G\right]\) are such that \[\Phi\left([u_{1},g_{1}]\right)=\Phi\left([u_{2},g_{2}]\right),\] we will have that \[\left[[u_{1},e]\,,g_{1}\right]=\left[[u_{2},e]\,,g_{2}\right],\] meaning that \[\left[u_{1},h^{-1}\right]=\left[u_{2},e\right],\qquad hg_{1}=g_{2}\] for some \(h\in G_{1}\). Therefore \[u_{1}k^{-1}=u_{2},\qquad kh^{-1}=e,\qquad hg_{1}=g_{2}\] for some \(k\in K\), and so \[\left[[u_{2},e]\,,g_{2}\right]=\left[\left[u_{1}k^{-1},e\right],kg_{1}\right]=\left[[u_{1},e]\,,g_{1}\right].\] This shows that \(\Phi\) is also injective.
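Lemma 6 can be tested in a finite toy model. In the sketch below (entirely ours: we take the chain \(\mathbb{Z}_{2}\subset\mathbb{Z}_{4}\subset\mathbb{Z}_{8}\), written additively, and \(P\) the trivial torsor over a point) both \(P\left[G\right]\) and \(\left(P\left[G_{1}\right]\right)\left[G\right]\) are enumerated and matched through the invariant underlying the map \(\Phi\):

```python
from itertools import product

K, G1, G = [0, 4], [0, 2, 4, 6], list(range(8))   # Z2 < Z4 < Z8, additively
# P is the trivial K-torsor over a point, i.e. P = K itself

def extend(P_elems, H, G_elems):
    # orbits of (u, g) ~ (u + k, g - k) for k in H: the set underlying P[G]
    return {frozenset(((u + k) % 8, (g - k) % 8) for k in H)
            for u, g in product(P_elems, G_elems)}

PG, PG1 = extend(K, K, G), extend(K, K, G1)

def act(cls, h):
    # the right G1-action [u, g] . h = [u, g + h] on P[G1]
    return frozenset((u, (g + h) % 8) for u, g in cls)

# (P[G1])[G]: quotient of P[G1] x G by the (free) G1-action
PG1G = {frozenset((act(cls, h), (g - h) % 8) for h in G1)
        for cls, g in product(PG1, G)}

# u + g (mod 8) is a complete invariant on both sides; Phi([u,g]) = [[u,e],g]
inv_single = sorted({(u + g) % 8 for cls in PG for u, g in cls})
inv_double = sorted({(u + g1 + g) % 8
                     for dc in PG1G for inner, g in dc for u, g1 in inner})
assert len(PG) == len(PG1G) == 8
assert inv_single == inv_double == list(range(8))
print("Lemma 6 holds in this toy model: P[G] ~ (P[G1])[G], 8 points each")
```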
This lemma allows us to lift Diagram (4.1) to the level of principal bundles (4.2).

_Remark 2_.: It is interesting to note that, in view of Lemma 3, the solution to this problem described below applies to any diagram of principal bundles where \(P,P_{1},P_{2},Q\) are \(H,G_{1},G_{2},G\)-principal bundles respectively, and the arrows indicate principal bundle immersions.

The idea to extend the \(\mathfrak{g}_{1}/\mathfrak{h}\)-Cartan connection on \(P\) is to use Proposition 3; with its help, we can find a principal connection on the bundle \(P\left[G_{1}\right]\), and then lift it through the map \(\gamma_{G_{1}}:P\left[G_{1}\right]\hookrightarrow P\left[G\right]\). Afterwards, we can induce a \(\mathfrak{g}/\mathfrak{g}_{2}\)-Cartan connection on \(P\left[G_{2}\right]\) using Proposition 3 together with the map \(\gamma_{G_{2}}:P\left[G_{2}\right]\hookrightarrow P\left[G\right]\).

**Proposition 8**.: _Given any generalized \(\mathfrak{g}_{1}/\mathfrak{h}\)-Cartan connection \(A:TP\to\mathfrak{g}_{1}\), there exists a unique generalized \(\mathfrak{g}/\mathfrak{g}_{2}\)-Cartan connection \(B:TP\left[G_{2}\right]\to\mathfrak{g}\) with the following property: If \(\widetilde{B}:TP\left[G\right]\to\mathfrak{g}\) is the principal connection on \(P\left[G\right]\) associated to \(B\) and \(\widetilde{A}:TP\left[G_{1}\right]\to\mathfrak{g}_{1}\) is the corresponding principal connection for \(A\), we have that_ \[\gamma_{G_{1}}^{*}\widetilde{B}=\widetilde{A}.\]

In terms of bundle maps, this correspondence proceeds as follows: Given a Cartan connection \[\Gamma:P\to J^{1}\pi_{\left[G_{1}\right]},\] we can define a principal connection \(\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{\left[G\right]}\) through \[\widetilde{\Gamma}\left(\left[u,g\right]\right):v\in T_{x}M\mapsto T_{\gamma_{G_{1}}\left(\left[u,e\right]\right)}R_{g}\left(T_{\left[u,e\right]}\gamma_{G_{1}}\left(\Gamma\left(u\right)\left(v\right)\right)\right).\] Then the induced \(\mathfrak{g}/\mathfrak{g}_{2}\)-Cartan connection is the restriction of this map, \[\Gamma^{\sharp}\left(\left[u,g\right]\right):v\in T_{x}M\mapsto T_{\gamma_{G_{1}}\left(\left[u,e\right]\right)}R_{g}\left(T_{\left[u,e\right]}\gamma_{G_{1}}\left(\Gamma\left(u\right)\left(v\right)\right)\right) \tag{4.3}\] for every \(\left[u,g\right]\in P\left[G_{2}\right]\equiv\gamma_{G_{2}}\left(P\left[G_{2}\right]\right)\).

### Reducible Cartan connections

What about the converse of this result? Namely, given a \(\mathfrak{g}/\mathfrak{g}_{2}\)-Cartan connection, will it restrict to a \(\mathfrak{g}_{1}/\mathfrak{h}\)-Cartan connection on \(P\)? The problem with this question is that, because of the way in which the extension is defined, \(\widetilde{B}\) should be \(\mathfrak{g}_{1}\)-valued, at least when it is restricted to \(P\left[G_{1}\right]\). This poses some restrictions on the desired converse result.

**Proposition 9**.: _Let \(\Gamma^{\sharp}:P\left[G_{2}\right]\to J^{1}\pi_{\left[G\right]}\) be a generalized \(\mathfrak{g}/\mathfrak{g}_{2}\)-Cartan connection on \(P\left[G_{2}\right]\)._
_If the induced principal connection_ \[\widetilde{\Gamma}:P\left[G\right]\to J^{1}\pi_{\left[G\right]}\] _is such that_ \[\widetilde{\Gamma}\left(\gamma_{G_{1}}\left(\left[u,g_{1}\right]\right)\right)\in j^{1}\gamma_{G_{1}}\left(J^{1}\pi_{\left[G_{1}\right]}\right)\] _for all \(\left[u,g_{1}\right]\in P\left[G_{1}\right]\), then it reduces to a generalized \(\mathfrak{g}_{1}/\mathfrak{h}\)-Cartan connection on \(P\)_ \[\Gamma:P\to J^{1}\pi_{\left[G_{1}\right]}\] _through the formula_ \[\widetilde{\Gamma}\left(\gamma_{G_{1}}\left(\left[u,e\right]\right)\right)=j^{1}\gamma_{G_{1}}\left(\Gamma\left(u\right)\right).\]

A commutative diagram (4.4) may illustrate these matters more clearly.

### Examples of extensions for (generalized) Cartan connections

Here we will consider some examples of the constructions devised above; the example concerning \(K\)-structures will be relevant when we consider the relationship between Wise's formulation of the correspondence between Chern-Simons theory and gravity and the formulation of the correspondence in terms of a bundle with the group \(GL\left(m\right)\) as structure group (see Section 6.3 below).

#### 4.3.1. Extension of principal connections

Let \(G\) be a Lie group and \(H\subset G\) a closed Lie subgroup. As a first example, let us consider the pair of diagrams (4.5), where \(\pi:P\to M\) is an \(H\)-principal bundle and \(p:Q\to M\) is a \(G\)-principal bundle; the left diagram is the diagram of principal bundles, and the right diagram corresponds to the underlying Lie groups. In this case, Proposition 8 reduces to the usual result on extensions of principal connections (see Proposition 6.1 in [13]); on the other hand, the hypothesis in Proposition 9 is equivalent to the reducibility of the principal connection on \(Q\).

#### 4.3.2. Induced principal connection for a Cartan connection

Let \(H,G,P,Q\) be as in the previous example. We can also put Proposition 3 in this context; to this end, let us consider the diagrams comprising these Lie groups and their principal bundle counterparts. In this setting, the additional hypothesis in Proposition 9 is automatically fulfilled. So, once we realize that \(Q\simeq P\left[G\right]\) via Lemma 3, Proposition 8 and Proposition 9 are nothing but Proposition 3.

#### 4.3.3. K-structures on spacetime

The previous scheme can also be applied to the case in which the original relationship between Palatini gravity (formulated on an \(SO\left(2,1\right)\)-subbundle of \(LM\)) and Chern-Simons field theory (on an \(SO\left(2,1\right)\ltimes\mathbb{R}^{3}\)-principal bundle) is promoted to a relationship between gravity formulated on the bundle \(LM\) and Chern-Simons theory on a bundle with structure group \(GL\left(3\right)\ltimes\mathbb{R}^{3}\). Diagram (4.1) then becomes (4.6), which in terms of principal bundles turns into the corresponding diagram of bundles. Here \(O_{\zeta}\subset LM\) and \(O_{\zeta}^{\mathrm{aff}}\subset AM\) are the \(SO(p,q)\)- and \(SO(p,q)\ltimes\mathbb{R}^{m}\)-structures respectively, defined through \[O_{\zeta}:=\bigcup_{x\in M}\left\{u:\mathbb{R}^{m}\to T_{x}M:\zeta(u\left(v\right),u\left(w\right))=\eta\left(v,w\right)\text{ for all }v,w\in\mathbb{R}^{m}\right\}\] and \[O_{\zeta}^{\mathrm{aff}}:=\bigcup_{x\in M}\left\{u:\mathbb{R}^{m}\to A_{x}M:\zeta(\beta\left(u\left(v\right)\right),\beta\left(u\left(w\right)\right))=\eta\left(v,w\right)\text{ for all }v,w\in\mathbb{R}^{m}\right\}\] for some metric \(\zeta:M\to\Sigma:=LM/K\) of \((p,q)\)-signature.
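At a single point \(x\), the defining condition of \(O_{\zeta}\) is just the matrix equation \(u^{T}gu=\eta\), and the fibre is a \(K\)-torsor. The following pointwise sketch (conventions ours: the columns of \(u\) are the frame vectors, and numpy/scipy are assumed available) illustrates this:

```python
import numpy as np
from scipy.linalg import expm

m = 3
eta = np.diag([-1.0, 1.0, 1.0])
rng = np.random.default_rng(3)

# a sample metric of this signature at a point: g = L @ eta @ L.T
L = rng.standard_normal((m, m)) + 3 * np.eye(m)
g = L @ eta @ L.T

# u lies in the fibre of O_zeta iff zeta(u(v), u(w)) = eta(v, w), i.e. u.T @ g @ u = eta
u = np.linalg.inv(L.T)                 # one adapted frame
assert np.allclose(u.T @ g @ u, eta)

# the fibre is preserved by the right K-action u -> u @ A ...
S = rng.standard_normal((m, m)); S = S - S.T
A = expm(eta @ S)
u2 = u @ A
assert np.allclose(u2.T @ g @ u2, eta)
# ... and any two adapted frames differ by an element of K
B = np.linalg.inv(u) @ u2
assert np.allclose(B @ eta @ B.T, eta)
print("the adapted frames at x form a K-torsor inside L_xM")
```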
Let us use the local descriptions of the frame bundle and its affine counterpart (see Appendix C) in order to show how the proposed extension works in this case. Let us consider \((x^{\mu},f_{i}^{\nu})\) the natural coordinates of an element in \(O_{\zeta}\); it means that \[g_{\mu\nu}f_{i}^{\mu}f_{j}^{\nu}=\eta_{ij},\] where \(\zeta=g_{\mu\nu}dx^{\mu}\otimes dx^{\nu}\). The map \[\phi:LM\times_{GL\left(m\right)}A\left(m\right)\to AM\] constructed in Proposition 14, Appendix C.2, reads in these coordinates \[\phi\left(\left[(x^{\mu},f_{i}^{\nu})\,,\left(a_{i}^{j},0\right)\right]\right)=\left(x^{\mu},a_{i}^{j}f_{j}^{\nu}\right). \tag{4.7}\] Now, from the local data \[\left(A_{O_{\zeta}}\right)_{U}=\left(\Gamma_{\gamma\beta}^{\alpha}dx^{\beta}\otimes E_{\alpha}^{\gamma},\sigma_{\beta}^{\alpha}dx^{\beta}\otimes e_{\alpha}\right) \tag{4.8}\] for the Cartan connection \(A_{O_{\zeta}}\in\Omega^{1}\left(O_{\zeta},\mathfrak{k}\ltimes\mathbb{R}^{m}\right)\), using Proposition 5 we obtain a map \[\Gamma_{A_{O_{\zeta}}}:O_{\zeta}\to J^{1}\left(\left.\left(\tau\circ\beta\right)\right|_{O_{\zeta}^{\mathrm{aff}}}\right)\] that is locally given by \[\Gamma_{A_{O_{\zeta}}}\left(x^{\mu},f_{i}^{\nu}\right)=dx^{\mu}\otimes\left(\frac{\partial}{\partial x^{\mu}}-f_{i}^{\gamma}\Gamma_{\gamma\mu}^{\alpha}\frac{\partial}{\partial e_{i}^{\alpha}}-\sigma_{\mu}^{\alpha}\frac{\partial}{\partial v^{\alpha}}\right).\]

_Remark 3_.: As an aside comment that will become important later, it can be proved that relation (5.9) below implies that \[F_{\mu}:=\frac{\partial}{\partial x^{\mu}}-f_{i}^{\gamma}\Gamma_{\gamma\mu}^{\alpha}\frac{\partial}{\partial e_{i}^{\alpha}}\] is a vector field tangent to \(O_{\zeta}\), and so \[\Gamma_{A_{O_{\zeta}}}\left(x^{\mu},f_{i}^{\nu}\right)\in J^{1}\left(\left.\left(\tau\circ\beta\right)\right|_{O_{\zeta}^{\mathrm{aff}}}\right).\] Namely, as expected, the metricity condition implies that \(\Gamma_{A_{O_{\zeta}}}\) takes values in the jet bundle of the affine subbundle determined by the metric \(\zeta\).

Now, using Equation (4.3), we obtain the induced Cartan connection given by the map \[\Gamma_{A_{O_{\zeta}}}^{\sharp}\left(\left[\left(x^{\mu},f_{i}^{\nu}\right),\left(a_{i}^{j},0\right)\right]\right)=dx^{\mu}\otimes\left(\frac{\partial}{\partial x^{\mu}}-a_{i}^{j}f_{j}^{\gamma}\Gamma_{\gamma\mu}^{\alpha}\frac{\partial}{\partial e_{i}^{\alpha}}-\sigma_{\mu}^{\alpha}\frac{\partial}{\partial v^{\alpha}}\right),\] and using the identification (4.7), it becomes \[\Gamma_{A_{O_{\zeta}}}^{\sharp}\left(x^{\mu},e_{i}^{\nu}\right)=dx^{\mu}\otimes\left(\frac{\partial}{\partial x^{\mu}}-e_{i}^{\gamma}\Gamma_{\gamma\mu}^{\alpha}\frac{\partial}{\partial e_{i}^{\alpha}}-\sigma_{\mu}^{\alpha}\frac{\partial}{\partial v^{\alpha}}\right).\] This is nothing but the Cartan connection associated to the local data (4.8), when considered as providing an \(\mathfrak{a}\left(m\right)/\mathfrak{gl}\left(m\right)\)-Cartan connection on the frame bundle \(LM\).

## 5. Variational problems for Chern-Simons field theory and gravity

We will use the present section to introduce the variational problem of Chern-Simons type used by Wise in [20, 21], and the variational problem for gravity with basis, both in a form suitable for the purposes of this article. Concretely, we will try to find a formulation for these variational problems fitting in the scheme devised by Gotay in the pioneering works [17, 18].
It means that we need to find for each of these descriptions a triple \[\left(\pi\colon P\to M,\lambda,\mathcal{I}\right),\] where \(\pi:P\to M\) is a bundle on the base space \(M\), \(\lambda\in\Omega^{n}\left(P\right)\) is an \(n\)-form (where \(n=\dim M\)), and \(\mathcal{I}\subset\Omega^{\bullet}\left(P\right)\) is a differential ideal in the exterior algebra of \(P\) (a so-called _exterior differential system_, see [17, 18, 19]). As Gotay explained in the articles referenced above, with these data it is possible to formulate a variational problem in the following way: to find the stationary sections \(\sigma:U\subset M\to P\) of the action \[S\left[\sigma\right]:=\int_{U}\sigma^{*}\lambda\] subject to the constraints \(\sigma^{*}\alpha=0\) for all \(\alpha\in\mathcal{I}\). In the present article we will use the term _Gotay variational problem_ to refer to this kind of variational problem; in this section we will describe Chern-Simons field theory and gravity from this viewpoint.

### Wise variational problem

Our first objective is to find a variational problem of this type for Chern-Simons field theory, as described in the articles by Wise [21, 22]. To do this, we will use as a starting point the description of this field theory given by Tejero Prieto [19], which uses a formulation in terms of local variational problems. First, we will study how the canonical forms of the jet space of a principal fiber bundle can be used to globalize this collection of local variational problems. Then, we will introduce a constraint relating the degrees of freedom of the underlying principal bundle to the degrees of freedom of the Cartan connection. It should be clarified that this constraint is absent in Wise's description because there the relevant degrees of freedom are associated only with the connection, whereas in our approach, by using the frame bundle as the underlying principal bundle, we have degrees of freedom that can be used to represent the coframe (see Remark 4 below).

#### 5.1.1. Local formulation for the Wise variational problem

Let us suppose that we have a Cartan geometry modelled on the pair \((G,H)\). The Cartan connection \(A\) associated to this geometry can be described locally by a collection of pairs \((U,A_{U})\), where \(U\subset M\) are open sets and \(A_{U}\in\Omega^{1}\left(U,\mathfrak{g}\right)\) are \(\mathfrak{g}\)-valued \(1\)-forms such that the map \[\pi_{\mathfrak{g}/\mathfrak{h}}\circ A_{U}|_{x}:T_{x}U\to\mathfrak{g}/\mathfrak{h}\] is a linear isomorphism. These forms are related to the Cartan connection \(A\) through a section \(s_{U}:U\to P\), via the map \[A_{U}=s_{U}^{*}A.\] Now, according to Freed [19], on every trivializing neighborhood \(U\subset M\) it is possible to define an action of Chern-Simons type through \[S_{U}\left[s_{U},A\right]:=\int_{U}s_{U}^{*}Tq\left(A,F\right).\] Modulo some topological assumptions regarding the structure group of the principal bundle, it can be proved that this action is independent of the section \(s_{U}\) involved in its definition. Moreover, whenever \(\dim M=3\) and the Lie group \(H\) is simply connected, any \(H\)-principal bundle on \(M\) is necessarily trivial, so that this prescription gives rise to a well-defined variational problem on the whole principal bundle. On the other hand, Tejero Prieto [19] is able to give a global sense to this collection of local actions by requiring that, whenever the domains intersect, the associated Euler-Lagrange equations coincide for all of the local actions.
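To give a feel for the local actions \(S_{U}\), the following sketch (a drastic simplification, entirely ours: \(q\) is taken to be the trace form on \(\mathfrak{gl}\left(n,\mathbb{R}\right)\) and the coefficients \(A_{\mu}\) are constant, so \(dA=0\) and the transgression \(Tq\left(A,F\right)\) reduces to its cubic term) checks numerically that the resulting Chern-Simons density is insensitive to constant gauge transformations:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((3, n, n))   # constant gl(n)-valued coefficients A_mu on R^3

def sign(p):
    # sign of a permutation, via inversion counting
    s, p = 1, list(p)
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def cs_density(A):
    # with constant A, Tq(A, F) reduces to (2/3) eps^{mnr} tr(A_m A_n A_r)
    return (2.0 / 3.0) * sum(sign(p) * np.trace(A[p[0]] @ A[p[1]] @ A[p[2]])
                             for p in permutations(range(3)))

g = rng.standard_normal((n, n)) + 3 * np.eye(n)     # a constant gauge transformation
A_gauge = np.array([g @ A[mu] @ np.linalg.inv(g) for mu in range(3)])
assert np.isclose(cs_density(A), cs_density(A_gauge))
print("the Chern-Simons density is invariant under constant gauge transformations")
```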
In view of this, we will interpret the Wise variational problem as a collection of local actions \[S_{U}\left[A\right]:=\int_{U}Tq\left(A_{U},F_{U}\right)\] given by the local description \((U,A_{U})\) of a Cartan connection, because the associated Euler-Lagrange equations coincide on the intersections of the corresponding domains. Using Proposition 3, we consider the equivalent variational problem on the bundle \(\pi_{\gamma}:U_{\gamma}\to M\); therefore, we can see the Wise variational problem as a problem whose fields are principal connections \(\widetilde{A}\) on \(P\left[G\right]\), and whose action is given by the formula \[S_{U}\left[\widetilde{A}\right]:=\int_{U}Tq\left(\widetilde{A}_{U},\widetilde{F}_{U}\right),\] where \(\widetilde{A}_{U}=\widetilde{s}_{U}^{*}\widetilde{A}\) is the local description of the principal connection \(\widetilde{A}\), with \[\widetilde{s}_{U}\left(x\right)=\gamma\left(s_{U}\left(x\right)\right)=\left[s_{U}\left(x\right),e\right].\]

#### 5.1.2. Wise variational problem in jet bundle formulation

It is our next aim to formulate the Wise variational problem in terms of a Lagrangian form and a bundle; we are looking for a formulation in which the integrand \(Tq\left(A_{U},F_{U}\right)\) in the action comes from the Lagrangian form through pullback along a section of a suitable bundle, and this section is uniquely determined by the Cartan connection. Recalling the discussion carried out in Section 3.3, we will choose the bundle \[\pi\circ\mathrm{pr}_{1}^{\gamma}:\gamma^{*}\left(J^{1}\pi_{\left[G\right]}\right)\to M\] as the bundle whose sections can be put in one-to-one correspondence with the \(\mathfrak{g}/\mathfrak{h}\)-Cartan connections on \(P\). Our next task is to find the Lagrangian form. To this end, consider a generalized Cartan connection \(\Gamma:P\to J^{1}\pi_{\left[G\right]}\), with local description \(\left\{A_{U}\right\}\). Then Remark 1 tells us that \[A_{U}=\left(\Gamma\circ s_{U}\right)^{*}\theta_{J^{1}\pi_{\left[G\right]}},\] where \(\theta_{J^{1}\pi_{\left[G\right]}}\in\Omega^{1}\left(J^{1}\pi_{\left[G\right]},\mathfrak{g}\right)\) is the canonical connection form and \(s_{U}:U\to P\) is a local section of the principal bundle \(\pi:P\to M\). Let us define \[\theta_{J^{1}\pi_{\left[G\right]}}^{*}:=\left(\mathrm{pr}_{2}^{\gamma}\right)^{*}\theta_{J^{1}\pi_{\left[G\right]}} \tag{5.1}\] and \[\Theta_{J^{1}\pi_{\left[G\right]}}^{*}:=\left(\mathrm{pr}_{2}^{\gamma}\right)^{*}\Theta_{J^{1}\pi_{\left[G\right]}},\] where \(\mathrm{pr}_{2}^{\gamma}:\gamma^{*}\left(J^{1}\pi_{\left[G\right]}\right)\to J^{1}\pi_{\left[G\right]}\) is the horizontal projection in the pullback diagram of Corollary 2. Therefore, the Lagrangian form is given by \[\lambda_{CS}:=Tq\left(\theta_{J^{1}\pi_{\left[G\right]}}^{*},\Theta_{J^{1}\pi_{\left[G\right]}}^{*}\right)\in\Omega^{2k-1}\left(\gamma^{*}\left(J^{1}\pi_{\left[G\right]}\right)\right);\] we have that \[L_{U}:=Tq\left(A_{U},F_{U}\right)=\left(s_{U},\Gamma\circ s_{U}\right)^{*}\lambda_{CS}.\] Thus, we obtain the following corollary.
**Corollary 3**.: _The extremals of the local variational problem determined by the Lagrangian \(L_{U}\) are in one-to-one correspondence with the local extremals of the variational problem given by the triple_ \[\left(\pi\circ\mathrm{pr}_{1}^{\gamma}:\gamma^{*}\left(J^{1}\pi_{\left[G\right]}\right)\to M,\lambda_{CS},0\right).\]

Proof.: The correspondence is given by \[\left(A_{U},s_{U}\right)\mapsto\left(s_{U},\Gamma\circ s_{U}\right)\in\left(\pi\circ\mathrm{pr}_{1}^{\gamma}\right)^{-1}\left(U\right)\subset\gamma^{*}\left(J^{1}\pi_{\left[G\right]}\right).\qed\]

#### 5.1.3. Wise variational problem for first order geometries

Recall [10, 11] that a pair \(\left(P,A\right)\), where \(\pi:P\to M\) is an \(H\)-principal bundle and \(A\) is a \(\mathfrak{g}/\mathfrak{h}\)-Cartan connection, is called a _first order geometry_ if and only if the representation \[\mathrm{Ad}:H\to\mathrm{GL}\left(\mathfrak{g}/\mathfrak{h}\right)\] is faithful. Then we have the following result.

**Proposition 10**.: \(P\) _admits a first order geometry if and only if it is an \(H\)-structure._

Proof.: Recall that a Cartan connection \(A\) on \(P\) induces a family of isomorphisms \[\phi_{u}:T_{x}M\to\mathfrak{g}/\mathfrak{h},\qquad u\in\pi^{-1}\left(x\right)\] such that \(\phi_{uh}=\operatorname{Ad}_{h^{-1}}\circ\phi_{u}\). Thus, the isomorphism is given by \[u\in P\mapsto\left(\phi_{u}^{-1}\left(w_{1}\right),\cdots,\phi_{u}^{-1}\left(w_{m}\right)\right),\] where \(\{w_{1},\cdots,w_{m}\}\) is a basis for \(\mathfrak{g}/\mathfrak{h}\).

In the present section we will assume that \(P\) admits a first order geometry; according to the previous proposition, this means that \(P\) can be considered a subbundle of the bundle of frames \(LM\). Therefore, for every element \(v_{x}\in T_{x}M\) and \(u\in\pi^{-1}\left(x\right)\), we have two ways to represent it, namely \[v_{x}=\phi_{u}^{-1}\left(w_{u}\right)=u\left(c_{u}\right)\] for some elements \(w_{u}\in\mathfrak{g}/\mathfrak{h}\) and \(c_{u}\in\mathbb{R}^{m}\); this gives rise to a linear isomorphism \[\kappa_{u}:\mathbb{R}^{m}\to\mathfrak{g}/\mathfrak{h}:c\mapsto\phi_{u}\circ u\left(c\right). \tag{5.2}\]

**Example 1** (Affine connections).: Let us calculate this isomorphism for \(\mathfrak{a}\left(m\right)/\mathfrak{gl}\left(m\right)\)-Cartan connections on \(LM\), the so-called _generalized affine connections_ [10]. Assuming that the local version of a Cartan connection on \(LM\) is \[\widetilde{\omega}_{U}=\left(\Gamma_{\gamma\beta}^{\alpha}\,dx^{\beta}\otimes E_{\alpha}^{\gamma},\sigma_{\beta}^{\alpha}\,dx^{\beta}\otimes e_{\alpha}\right)\] for some local functions \(\Gamma_{\beta\gamma}^{\alpha}\) and \(\sigma_{\beta}^{\alpha}\), from Equation (C.5) we obtain \[\phi_{\left(x^{\alpha},e_{i}^{\beta}\right)}=e_{\alpha}^{i}\sigma_{\beta}^{\alpha}\,dx^{\beta}\otimes e_{i}.\] Now, recall that in this case \(\mathfrak{a}\left(m\right)/\mathfrak{gl}\left(m\right)=\mathbb{R}^{m}\) in a canonical fashion, so that \(\kappa_{\left(x^{\alpha},e_{i}^{\beta}\right)}\) is a linear endomorphism of \(\mathbb{R}^{m}\).
Therefore, the matrix of this morphism in terms of the canonical basis \(\{e_{i}\}\) becomes \[\left[\kappa_{\left(x^{\alpha},e_{i}^{\beta}\right)}\right]_{j}^{i}=e_{\alpha}^{i}e_{j}^{\beta}\sigma_{\beta}^{\alpha}.\] According to the classical definition [10], a Cartan connection on \(LM\) is an _affine connection_ if and only if \(\kappa_{\left(x^{\alpha},e_{i}^{\beta}\right)}\) is the identity on \(\mathbb{R}^{m}\), or equivalently \[e_{\alpha}^{i}e_{j}^{\beta}\sigma_{\beta}^{\alpha}=\delta_{j}^{i}.\]

We can see this constraint in terms of the jet bundle description of the Cartan connection (see Section 3.1). In order to proceed, let us define the map \[j:J^{1}\tau\hookrightarrow J^{1}\left(\tau\circ\beta\right):j_{x}^{1}s\mapsto j_{x}^{1}\left(\gamma\circ s\right)-\left.\varphi\right|_{s\left(x\right)}, \tag{5.3}\] where \(\varphi\) indicates the canonical \(\mathbb{R}^{m}\)-valued \(1\)-form on \(LM\), which can be seen as an element of \[T_{x}^{*}M\otimes\mathbb{R}^{m}\subset T_{x}^{*}M\otimes\mathfrak{a}\left(m\right)\simeq T_{x}^{*}M\otimes V_{\gamma\left(s\left(x\right)\right)}\left(AM\right)\] and thus acts on the affine space \(J_{\gamma\left(s\left(x\right)\right)}^{1}\left(\tau\circ\beta\right)\). In local terms, we have that \[\varphi\big{|}_{\left(x^{\alpha},e_{i}^{\beta}\right)}=e_{\beta}^{i}\,dx^{\beta}\otimes e_{i},\] and if an element \(j_{x}^{1}s\in J^{1}\tau\) has coordinates \(\left(x^{\alpha},e_{i}^{\beta},e_{i\gamma}^{\beta}\right)\), then it represents the map \[j_{x}^{1}s:\frac{\partial}{\partial x^{\alpha}}\mapsto\frac{\partial}{\partial x^{\alpha}}+e_{i\alpha}^{\beta}\frac{\partial}{\partial e_{i}^{\beta}};\] additionally, the map \(\gamma:LM\to AM\) simply reads \[\gamma\left(x^{\alpha},e_{i}^{\beta}\right)=\left(x^{\alpha},e_{i}^{\beta},0\right).\] Then, from Definition (5.3) we can conclude that \[j\left(x^{\alpha},e_{i}^{\beta},e_{i\gamma}^{\alpha}\right)=\left(x^{\alpha},e_{i}^{\beta},0,e_{i\gamma}^{\alpha},-e_{i}^{\alpha}e_{\beta}^{i}\right), \tag{5.4}\] as required.

Our next task is to adapt Equation (5.2) to the description discussed in Section 3.2, where Cartan connections were considered as sections of a bundle. The main reason to do this is that, just as we saw in the previous example, the pullback bundle \(\gamma^{*}\left(J^{1}\pi_{[G]}\right)\) contains degrees of freedom associated to the principal bundle \(P\) as well as to the \(\mathfrak{g}/\mathfrak{h}\)-part of the Cartan connection.
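In matrix form, Example 1 says that \(\kappa=E^{-1}\sigma E\), where \(E\) is the frame matrix \(\left(e_{i}^{\alpha}\right)\) and \(\sigma=\left(\sigma_{\beta}^{\alpha}\right)\); the affine-connection constraint \(\kappa=\operatorname{id}\) then forces \(\sigma_{\beta}^{\alpha}=\delta_{\beta}^{\alpha}\), i.e. the \(\mathbb{R}^{m}\)-part of the local data is the solder form. A short numerical sketch of this (variable names ours):

```python
import numpy as np

m = 3
rng = np.random.default_rng(5)
E = rng.standard_normal((m, m)) + 3 * np.eye(m)   # E[alpha, i] = e_i^alpha, a frame
E_inv = np.linalg.inv(E)                          # E_inv[i, alpha] = e^i_alpha
sigma = rng.standard_normal((m, m))               # sigma[alpha, beta] = sigma^alpha_beta

# [kappa]^i_j = e^i_alpha sigma^alpha_beta e_j^beta = (E^{-1} sigma E)[i, j]
kappa = E_inv @ sigma @ E

assert not np.allclose(kappa, np.eye(m))               # a generic sigma is not affine
assert np.allclose(E_inv @ np.eye(m) @ E, np.eye(m))   # sigma = id gives kappa = id
print("kappa = id exactly when sigma is the identity matrix")
```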
In fact, given an element \(\overline{u}=\left(u,j_{x}^{1}s\right)\in\gamma^{*}\left(J^{1}\pi_{[G]}\right)\), we can construct the map
\[\phi_{\overline{u}}:T_{x}M\to\mathfrak{g}/\mathfrak{h}\]
in the following way: Given a tangent vector \(v_{x}\in T_{x}M\), fix a lift \(\widehat{v}_{u}\in T_{u}P\), and use it to construct the tangent vector
\[W_{s(x)}:=T_{u}\gamma\left(\widehat{v}_{u}\right)-T_{x}s\left(v_{x}\right)\in T_{s(x)}\left(P\left[G\right]\right).\]
Then we have that
\[T_{s(x)}\pi_{[G]}\left(T_{u}\gamma\left(\widehat{v}_{u}\right)\right)=v_{x}=T_{s(x)}\pi_{[G]}\left(T_{x}s\left(v_{x}\right)\right),\]
and so \(W_{s(x)}\in V_{s(x)}\left(P\left[G\right]\right)\); it means in particular that there exists an element \(\widetilde{W}(\widehat{v}_{u})\in\mathfrak{g}\) such that
\[W_{s(x)}=\left(\widetilde{W}\left(\widehat{v}_{u}\right)\right)_{P\left[G\right]}\left(s\left(x\right)\right).\]
Now, changing the lift \(\widehat{v}_{u}\) in this definition will produce a shift in \(\widetilde{W}\left(\widehat{v}_{u}\right)\) by an element living in \(\mathfrak{h}\); therefore, we can define the map \(\phi_{\overline{u}}:T_{x}M\to\mathfrak{g}/\mathfrak{h}\) by projecting into the quotient by \(\mathfrak{h}\), namely
\[\phi_{\overline{u}}\left(v_{x}\right):=\left[\widetilde{W}\left(\widehat{v}_{u}\right)\right]_{\mathfrak{h}},\]
where \(\left[\cdot\right]_{\mathfrak{h}}\) indicates the equivalence class in \(\mathfrak{g}/\mathfrak{h}\). Using Example 1 and Equation (5.2), we can define a map
\[\kappa_{\overline{u}}:=\phi_{\overline{u}}\circ u:\mathbb{R}^{m}\to\mathfrak{g}/\mathfrak{h}\]
for every \(\overline{u}=\left(u,j_{x}^{1}s\right)\in\gamma^{*}\left(J^{1}\pi_{[G]}\right)\). It will allow us to identify the degrees of freedom mentioned above by means of the constraint
\[\kappa_{\overline{u}}=\kappa_{0} \tag{5.5}\]
for some fixed isomorphism \(\kappa_{0}:\mathbb{R}^{m}\to\mathfrak{g}/\mathfrak{h}\), under the assumption that \(P\) admits a first order geometry. So we are ready to introduce the following definition.

**Definition 4** (Wise variational problem for first order geometries).: Let \(\pi:P\to M\) be an \(H\)-principal bundle admitting a first order geometry associated to the pair \(\left(G,H\right)\); fix an isomorphism \(\kappa_{0}:\mathbb{R}^{m}\to\mathfrak{g}/\mathfrak{h}\). The _Wise variational problem_ is the triple
\[\left(\pi\circ\mathrm{pr}_{1}^{\gamma}:\gamma^{*}\left(J^{1}\pi_{[G]}\right)\to M,\lambda_{CS},\mathcal{K}_{\kappa_{0}}\right),\]
where \(\mathcal{K}_{\kappa_{0}}\) is the EDS induced by the constraint (5.5).

_Remark 4_ (On the constraint \(\mathcal{K}_{\kappa_{0}}\)).: The introduction of the constraint \(\mathcal{K}_{\kappa_{0}}\) implies a departure from the scheme devised by Wise in the previously cited works, where the degrees of freedom associated to the \(\mathfrak{g}/\mathfrak{h}\)-part of the Cartan connection are the ones used to describe the vielbein when the relationship with gravity is established. In our present description, we are choosing to use the degrees of freedom of the underlying principal bundle in order to describe it, and the role of the constraint \(\mathcal{K}_{\kappa_{0}}\) is to enforce the identification of the \(\mathfrak{g}/\mathfrak{h}\)-part of the Cartan connection with the group coordinates in the principal bundle \(P\). For example, when \(P=\mathrm{LM}\), \(H=\mathrm{GL}\left(m\right)\), \(G=A\left(m\right)\), we can deal with the constraint \(\mathcal{K}_{\kappa_{0}}\) in a straightforward manner.
In fact, because of Equation (5.3), we have that a section \(\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) is integral for this EDS (with \(\kappa_{0}=\mathrm{id}\), given the identification \(\mathfrak{g}/\mathfrak{h}=\mathbb{R}^{m}\) we have at our disposal in this case) if and only if there exists a section \(s:U\to J^{1}\tau\) such that
\[\sigma\left(x\right)=\left(\tau_{10}\left(s\left(x\right)\right),j\left(s\left(x\right)\right)\right).\]
Thus we can fulfill this constraint by using this form for the sections we consider in the variational problem.

### Variational problem for gravity with basis

It is usual [26] to describe gravity with a pair \(\left(e_{i}^{\mu},\omega_{j}^{i}\right)\), where \(e_{i}^{\mu}\) are the components of a local basis of the tangent bundle \(TM\) with respect to some coordinates \(x^{\mu}\), and \(\omega_{j}^{i}\) are a set of local \(1\)-forms, the so called _spin connection forms_. This pair can be represented in terms of the bundle of frames \(\mathrm{LM}\) on the space-time: Fixing a common open domain \(U\subset M\), \(e_{i}^{\mu}\) gives rise to a local section \(\sigma\) of \(\mathrm{LM}\) through the formula
\[x\in U\mapsto\sigma\left(x\right):=\left(e_{1}^{\mu}\left(x\right)\frac{\partial}{\partial x^{\mu}},\cdots,e_{m}^{\mu}\left(x\right)\frac{\partial}{\partial x^{\mu}}\right).\]
The forms \(\omega_{j}^{i}\) together with the section \(\sigma\) can be used in order to define the Christoffel symbols \(\Gamma_{\nu\sigma}^{\mu}\) of the underlying connection, through the so called _vielbein postulate_
\[\omega_{j}^{i}=e_{\mu}^{i}\left(e_{j}^{\nu}\Gamma_{\nu\sigma}^{\mu}+\frac{\partial e_{j}^{\mu}}{\partial x^{\sigma}}\right)\mathrm{d}x^{\sigma}. \tag{5.6}\]
The connection form on \(\mathrm{LM}\) associated to \(\left\{\omega_{j}^{i}\right\}\) can be retrieved by the formula [26, 27]
\[\left.\left(\omega_{\mathrm{LM}}\right)_{j}^{i}\right|_{u}=e_{\mu}^{i}e_{j}^{\nu}\Gamma_{\nu\sigma}^{\mu}\,\mathrm{d}x^{\sigma}+e_{\mu}^{i}\,\mathrm{d}e_{j}^{\mu},\]
where \(u\in\mathrm{LM}\) is represented by the set of coordinates \(\left(x^{\mu},e_{i}^{\mu}\right)\). This gives rise to a locally defined action
\[S_{\mathrm{PG}}:=\int_{U}\epsilon_{ijkl}\eta^{kp}e_{\mu}^{i}e_{\nu}^{j}\,\mathrm{d}x^{\mu}\wedge\mathrm{d}x^{\nu}\wedge\Omega_{p}^{l}, \tag{5.7}\]
where \(\Omega_{j}^{i}\) is the local curvature \(2\)-form associated to \(\omega_{j}^{i}\).
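It may be useful to rewrite Equation (5.6) in components; the following is a standard rewriting, which we record here for the reader's convenience. Writing \(\omega_{j}^{i}=\omega_{j\sigma}^{i}\,\mathrm{d}x^{\sigma}\) and contracting (5.6) with the inverse vielbein, it says precisely that the frame is covariantly constant once both kinds of indices are corrected,
\[\frac{\partial e_{j}^{\mu}}{\partial x^{\sigma}}+\Gamma_{\nu\sigma}^{\mu}e_{j}^{\nu}-\omega_{j\sigma}^{i}e_{i}^{\mu}=0,\]
which is the form in which the vielbein postulate is most often quoted in the physics literature.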
A constraint that should be adopted on the set of forms \(\omega_{j}^{i}\) is that the underlying connection form is \(\mathfrak{o}\left(3,1\right)\)-valued, namely
\[\eta^{ij}\omega_{j}^{k}+\eta^{kj}\omega_{j}^{i}=0. \tag{5.8}\]
Before continuing, we have to deal with the constraint given by Eq. (5.8); according to formula (5.6), it is equivalent to
\[\frac{\partial g^{\mu\nu}}{\partial x^{\sigma}}+g^{\mu\rho}\Gamma^{\nu}_{\rho\sigma}+g^{\nu\rho}\Gamma^{\mu}_{\rho\sigma}=0, \tag{5.9}\]
namely, the connection associated to the symbols \(\Gamma^{\mu}_{\nu\sigma}\) is compatible with the metric associated to the vielbein,
\[g^{\mu\nu}=\eta^{ij}e^{\mu}_{i}e^{\nu}_{j}.\]
At jet bundle level it is equivalent to the set of equations
\[\left(\eta^{kj}e^{i}_{\mu}+\eta^{ij}e^{k}_{\mu}\right)\left(-e^{\mu}_{j\sigma}+\frac{\partial e^{\mu}_{j}}{\partial x^{\sigma}}\right)=0. \tag{5.10}\]
Thus, a solution for gravity with basis is a section
\[\sigma:U\subset M\to J^{1}\tau\]
which is both an extremal for the functional (5.7) and also verifies the condition (5.10).

On the other hand, there is a formulation for field theory in which the action is calculated using a Lagrangian density, that is, a bundle map
\[\mathcal{L}:J^{1}q\to\wedge^{4}\left(T^{*}M\right),\]
where \(q:E\to M\) is a bundle on \(M\). Namely, for every section \(\sigma:M\to E\) of \(q\), we have a map
\[\mathcal{L}\circ j^{1}\sigma:M\to\wedge^{4}\left(T^{*}M\right),\]
which is a \(4\)-form on \(M\), and so one can integrate,
\[S_{PG}\left[\sigma\right]:=\int_{M}\mathcal{L}\circ j^{1}\sigma.\]
We want to find this kind of formulation for gravity with basis; in order to carry out this task, it is necessary to identify the bundle \(E\), and then to write down a Lagrangian density on this bundle. Because the \(e^{\alpha}_{i}\) are part of the degrees of freedom for this flavor of gravity, the bundle \(E\) should include them; namely, we are searching for a bundle of the form
\[E=LM\times_{M}\star,\]
where \(\star\) stands for the degrees of freedom associated to \(\omega^{i}_{j}\). As we said before, these data correspond to the specification of a linear connection on \(LM\); from a geometrical viewpoint, the relevant bundle in this regard is the connection bundle \(\overline{\tau}:C\left(LM\right)\to M\), whose construction is recalled in Appendix A. Therefore, a section of the connection bundle is equivalent to an equivariant section of the projection \(\tau_{10}\).

Now, recall that an element of \(J^{1}\tau\) is a linear map \(\ell:T_{x}M\to T_{u}\left(LM\right)\) such that
\[T_{u}\tau\circ\ell=id_{T_{x}M}.\]
Using the data \(\omega^{i}_{j}\), this section can be constructed according to the formula
\[\Gamma\left(u\right):=\mathrm{d}x^{\mu}\otimes\left(\frac{\partial}{\partial x^{\mu}}\right)_{u}^{H}=\mathrm{d}x^{\mu}\otimes\left(\frac{\partial}{\partial x^{\mu}}-e_{j}^{\rho}\Gamma_{\rho\mu}^{\sigma}\frac{\partial}{\partial e_{j}^{\sigma}}\right),\]
where \(u=\left(X_{1},\cdots,X_{m}\right)\in LM\) and the symbols \(\Gamma^{\mu}_{\nu\sigma}\) are calculated through Equation (5.6); here \(\left(\cdot\right)_{u}^{H}\) indicates the horizontal lift of a vector field to \(T_{u}LM\).
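As a quick sanity check (added by us, and immediate from the definitions), the right-hand side of the previous formula does define an element of \(J^{1}\tau\): the terms proportional to \(\partial/\partial e_{j}^{\sigma}\) are vertical, so applying \(T_{u}\tau\) kills them and
\[T_{u}\tau\circ\Gamma\left(u\right)=\mathrm{d}x^{\mu}\otimes\frac{\partial}{\partial x^{\mu}}=id_{T_{x}M},\]
as the definition of a jet of a section of \(\tau\) requires.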
Thus the bundle describing this flavor of gravity will become
\[E=LM\times_{M}C\left(LM\right).\]
Moreover, it can be proved [1, 10] that this bundle is isomorphic, as an affine bundle on \(LM\), to the first jet bundle \(J^{1}\tau\); in terms of the induced coordinates \(\left(x^{\mu},e_{i}^{\nu},e_{k\rho}^{\sigma}\right)\), this correspondence is given by the formula
\[\Gamma_{\nu\sigma}^{\mu}=-e_{\nu}^{k}e_{k\sigma}^{\mu}. \tag{5.11}\]
Then, the relevant bundle in the description of gravity with basis will be the first jet bundle of the frame bundle.

Having identified the basic bundle, we need to write down a Lagrangian density and to encode the constraint imposed by Eq. (5.8). In order to proceed, let us define the _canonical connection form_
\[\left.\theta_{J^{1}\tau}\right|_{j_{x}^{1}s}:=e_{\mu}^{i}\left(\mathrm{d}e_{j}^{\mu}-e_{j\rho}^{\mu}\,\mathrm{d}x^{\rho}\right)\otimes E_{i}^{j},\]
where \(j_{x}^{1}s=\left(x^{\mu},e_{j}^{\nu},e_{k\rho}^{\sigma}\right)\) are the induced coordinates on \(J^{1}\tau\); it can be proved that this formula defines a \(\mathfrak{gl}\left(m\right)\)-valued \(1\)-form on \(J^{1}\tau\), and that it becomes a connection form on the \(GL\left(m\right)\)-principal bundle
\[p_{GL\left(m\right)}^{J^{1}\tau}:J^{1}\tau\to C\left(LM\right).\]
Let
\[\rho\left(x\right):=\left(x^{\mu},e_{i}^{\mu}\left(x\right),e_{k\sigma}^{\nu}\left(x\right)\right)\]
be a local description for a section \(\rho:U\to J^{1}\tau\); then from the vielbein postulate (5.6) and using Eq. (5.11), we have that
\[\rho^{*}\theta_{J^{1}\tau}=\left(\omega_{LM}\right)_{j}^{i}\otimes E_{i}^{j}.\]
Let \(\Theta_{J^{1}\tau}\in\Omega^{2}\left(J^{1}\tau,\mathfrak{gl}\left(m\right)\right)\) be the curvature form associated to \(\theta_{J^{1}\tau}\); naturality of the pullback implies that
\[\rho^{*}\Theta_{J^{1}\tau}=\left(\Omega_{LM}\right)_{j}^{i}\otimes E_{i}^{j}\]
at the level of curvature forms. The formula
\[\left.\varphi\right|_{j_{x}^{1}s}:=e_{\mu}^{i}\,\mathrm{d}x^{\mu}\otimes e_{i}\]
also defines a global \(\mathbb{R}^{m}\)-valued \(1\)-form on \(J^{1}\tau\); from this form and the bilinear map \(\eta:\mathbb{R}^{m}\times\mathbb{R}^{m}\to\mathbb{R}\), we can define a \(\mathfrak{gl}\left(m\right)\)-valued \(\left(m-2\right)\)-form as follows: First, define the \(\wedge^{m-2}\mathbb{R}^{m}\)-valued \(\left(m-2\right)\)-form given by the formula
\[\varphi^{m-2}:=\overbrace{\varphi\wedge\cdots\wedge\varphi}^{m-2}.\]
Then, use the star map \(\star:\wedge^{m-2}\mathbb{R}^{m}\to\wedge^{2}\mathbb{R}^{m}\) determined by the bilinear form \(\eta\) to define a \(\wedge^{2}\mathbb{R}^{m}\)-valued \((m-2)\)-form \(\star\left(\varphi^{m-2}\right)\); finally, use the isomorphism \(\eta:\mathbb{R}^{m}\to\left(\mathbb{R}^{m}\right)^{*}\) to define a map
\[\eta^{\sharp}:\wedge^{2}\mathbb{R}^{m}\to\left(\mathbb{R}^{m}\right)^{*}\otimes\mathbb{R}^{m}=\mathfrak{gl}\left(m\right)\]
giving rise to a \(\mathfrak{gl}\left(m\right)\)-valued \((m-2)\)-form, the so called _Sparling form_
\[\varphi_{m-2}^{\sharp}:=\eta^{\sharp}\left(\star\left(\varphi^{m-2}\right)\right).\]
Additionally, \(\eta\) induces a bilinear pairing between \(\mathfrak{gl}\left(m\right)\)-valued forms on \(J^{1}\tau\).
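For the computations below it is convenient to recall (a standard fact about connection forms, recorded here for reference) that the curvature of the canonical connection is given by the structure equation
\[\Theta_{J^{1}\tau}=\mathrm{d}\theta_{J^{1}\tau}+\frac{1}{2}\left[\theta_{J^{1}\tau}\overset{\wedge}{,}\theta_{J^{1}\tau}\right];\]
since \(\rho^{*}\theta_{J^{1}\tau}\) gives back the local connection forms \(\omega_{LM}\), applying \(\rho^{*}\) to both sides of the structure equation reproduces the local curvature forms \(\Omega_{LM}\) quoted above.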
Using the naturality properties of the pullback, we can conclude that (in the case \(m=4\))
\[\rho^{*}\left(\eta\left(\varphi_{2}^{\sharp}\overset{\wedge}{,}\Theta_{J^{1}\tau}\right)\right)=\epsilon_{ijkl}\eta^{kp}e_{\mu}^{i}e_{\nu}^{j}\,\mathrm{d}x^{\mu}\wedge\mathrm{d}x^{\nu}\wedge\Omega_{p}^{l};\]
also, for the case \(m=3\) it would result
\[\rho^{*}\left(\eta\left(\varphi_{1}^{\sharp}\overset{\wedge}{,}\Theta_{J^{1}\tau}\right)\right)=\epsilon_{ijk}\eta^{kp}e_{\mu}^{i}\,\mathrm{d}x^{\mu}\wedge\Omega_{p}^{j}.\]
These considerations allow us to define the _Lagrangian form for gravity with basis_, which becomes
\[\lambda_{PG}:=\eta\left(\varphi_{m-2}^{\sharp}\overset{\wedge}{,}\Theta_{J^{1}\tau}\right). \tag{5.12}\]
Strictly speaking, \(\lambda_{PG}\) is not a \(\tau_{1}\)-horizontal form on \(J^{1}\tau\), and so it is not associated to a Lagrangian density; if we wanted to deal with the underlying variational problem as a usual variational problem, we would have to lift it to the jet bundle \(J^{1}\tau_{1}\) (or perhaps to the subbundle \(J^{2}\tau\)). Instead, we will treat it as a Griffiths variational problem, in order to avoid the introduction of additional variables.

Regarding the constraint (5.8), let
\[\mathfrak{gl}\left(m\right)=\mathfrak{p}\oplus\mathfrak{k}\]
be the decomposition of the general linear Lie algebra in terms of the \(\pm 1\)-eigenspaces of the involutive operator
\[A\mapsto\eta A^{T}\eta;\]
then this constraint is recovered by the equation
\[\rho^{*}\left(\pi_{\mathfrak{p}}\circ\theta_{J^{1}\tau}\right)=0, \tag{5.13}\]
where \(\pi_{\mathfrak{p}}:\mathfrak{gl}\left(m\right)\to\mathfrak{p}\) is the associated projection onto the factor \(\mathfrak{p}\).

**Definition 5** (Variational problem for gravity with basis).: The _variational problem for gravity with basis_ is the triple
\[\left(\tau_{1}:J^{1}\tau\to M,\lambda_{PG},\mathcal{I}_{PG}\right),\]
where \(\mathcal{I}_{PG}\) is the exterior differential system on \(J^{1}\tau\) generated by the forms (5.13).

## 6. Extension of generalized Cartan connections and Chern-Simons field theory

In this section we will formulate a Chern-Simons field theory on a principal bundle with structure group \(A\left(m\right)\), and we will relate it to the usual Chern-Simons field theory on a \(K\)-structure using the operation of extension for generalized Cartan connections.

### Chern-Simons variational problem with Lie group \(A\left(3,\mathbb{R}\right)\)

In the present section we will work with the affine frame bundle on a manifold of dimension \(m=3\). Using Corollary 3 and the geometrical constructions performed in Appendix C, we will define a Griffiths variational problem on \(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) in order to represent Chern-Simons gauge theory. To proceed, let us define the bilinear form \(\left\langle\cdot,\cdot\right\rangle:\mathfrak{gl}\left(m\right)\times\mathfrak{gl}\left(m\right)\rightarrow\mathbb{R}\) given by
\[\left\langle a,b\right\rangle:=a_{i}^{k}b_{k}^{i}.\]

**Lemma 7**.: _The bilinear form \(\left\langle\cdot,\cdot\right\rangle\) is non-degenerate and \(\operatorname{GL}\left(3\right)\)-invariant._

Recall that \(K\subset\operatorname{GL}\left(m,\mathbb{R}\right)\) is the Lorentz group defined by the matrix \(\eta\) (see Equation (2.4)); as always, \(\mathfrak{k}\) will indicate its Lie algebra.
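Explicitly, and in line with the involution used in the decomposition of \(\mathfrak{gl}\left(m\right)\) above (we record this here for convenience, since it is used implicitly in what follows),
\[\mathfrak{k}=\left\{a\in\mathfrak{gl}\left(3,\mathbb{R}\right):\eta^{ij}a_{j}^{k}+\eta^{kj}a_{j}^{i}=0\right\},\]
which is a three-dimensional Lie algebra, isomorphic to \(\mathfrak{so}\left(2,1\right)\).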
Then we have the isomorphism [20]
\[\mathfrak{k}\simeq\mathbb{R}^{3}\]
given by
\[\xi=\left(\xi^{i}\right)\mapsto a_{i}^{j}:=\eta^{jk}\epsilon_{ikl}\xi^{l}.\]
It allows us to use the prescription
\[\left\langle\left(a,\xi\right),\left(b,\zeta\right)\right\rangle:=\left\langle a,\zeta\right\rangle+\left\langle b,\xi\right\rangle\]
for the extension of the bilinear form defined above to \(\mathfrak{gl}\left(3\right)\oplus\mathbb{R}^{3}\). Thus we have a quadratic form
\[\mathfrak{q}:\mathfrak{a}\left(3\right)\rightarrow\mathbb{R}:\left(a,\xi\right)\mapsto\left\langle\left(a,\xi\right),\left(a,\xi\right)\right\rangle,\]
and using Equation (B.2), we obtain the following definition.

**Definition 6** (Chern-Simons Lagrangian \(3\)-form).: The _Lagrangian form for Chern-Simons variational problem_ is the \(3\)-form \(\mathcal{L}_{CS}\in\Omega^{3}\left(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\right)\) defined through
\[\mathcal{L}_{CS}:=\left\langle\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\overset{\wedge}{,}\Theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\right\rangle-\frac{1}{6}\left\langle\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\overset{\wedge}{,}\left[\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\overset{\wedge}{,}\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\right]\right\rangle,\]
where \(\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\in\Omega^{1}\left(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\right)\) is the pullback to \(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) of the canonical connection on the principal bundle \(p_{A\left(3\right)}^{J^{1}\left(\tau\circ\beta\right)}:J^{1}\left(\tau\circ\beta\right)\to C\left(\operatorname{AM}\right)\) (see Equation (5.1) above).

Because \(\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\) is \(\mathfrak{gl}\left(3\right)\oplus\mathbb{R}^{3}\)-valued, we can write
\[\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}=o_{J^{1}\left(\tau\circ\beta\right)}^{*}+e.\]
Using Equation (C.6) we see that, in local terms, this form becomes
\[\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}=e_{\beta}^{i}\left(\mathrm{d}e_{j}^{\beta}-e_{j\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\right)\otimes E_{i}^{j}-e_{\beta}^{i}v_{\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\otimes e_{i},\]
so that
\[o_{J^{1}\left(\tau\circ\beta\right)}^{*}=e_{\beta}^{i}\left(\mathrm{d}e_{j}^{\beta}-e_{j\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\right)\otimes E_{i}^{j},\qquad e=-e_{\beta}^{i}v_{\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\otimes e_{i}.\]
Also,
\[\Theta_{J^{1}\left(\tau\circ\beta\right)}^{*}=O_{J^{1}\left(\tau\circ\beta\right)}^{*}+E.\]
Now, the splitting \(\mathfrak{a}\left(3\right)=\mathfrak{gl}\left(3\right)\oplus\mathbb{R}^{3}\) has the following properties
\[\left[\mathfrak{gl}\left(3\right),\mathfrak{gl}\left(3\right)\right]\subset\mathfrak{gl}\left(3\right),\qquad\left[\mathfrak{gl}\left(3\right),\mathbb{R}^{3}\right]\subset\mathbb{R}^{3},\qquad\left[\mathbb{R}^{3},\mathbb{R}^{3}\right]=0\]
and
\[\mathfrak{gl}\left(3\right)\perp\mathfrak{gl}\left(3\right),\qquad\mathbb{R}^{3}\perp\mathbb{R}^{3},\]
so that we can obtain a result that is the analogue in this context of Proposition 1 in [20].
**Proposition 11**.: _The Chern-Simons Lagrangian can be written as_
\[\mathcal{L}_{CS}=\left\langle e\overset{\wedge}{,}O_{J^{1}\left(\tau\circ\beta\right)}^{*}\right\rangle.\]

In local coordinates, this Lagrangian becomes
\[\mathcal{L}_{CS}=\epsilon_{jkl}\eta^{kp}v_{\beta}^{\nu}e_{\nu}^{j}\,\mathrm{d}x^{\beta}\wedge\left(O_{J^{1}\left(\tau\circ\beta\right)}^{*}\right)_{p}^{l}.\]
It is interesting to note that, except for the factors \(v_{\beta}^{\alpha}\), this Lagrangian is equivalent to the Lagrangian for \(2+1\)-gravity with basis; we need to take care of them, and this will be done through the imposition of constraints. So, it is necessary to prescribe the set of constraints that sections \(\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) should obey in order to be evaluated in the action associated to \(\mathcal{L}_{CS}\).

The first set of constraints we need to consider are those imposed by Equation (5.8) above; in order to achieve this, let us consider the decomposition
\[\mathfrak{a}\left(3\right)=\left(\mathfrak{p}\oplus 0\right)\oplus\left(\mathfrak{k}\oplus\mathbb{R}^{3}\right);\]
accordingly, let
\[\pi_{\mathfrak{p}}:\mathfrak{a}\left(3\right)\to\mathfrak{p}\oplus 0,\qquad\pi_{\mathfrak{k}}:\mathfrak{a}\left(3\right)\to\mathfrak{k}\oplus\mathbb{R}^{3}\]
be the corresponding projectors. Then we will have that
\[\pi_{\mathfrak{p}}\circ\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}=\eta^{ik}\left(o_{J^{1}\left(\tau\circ\beta\right)}^{*}\right)_{k}^{j}+\eta^{jk}\left(o_{J^{1}\left(\tau\circ\beta\right)}^{*}\right)_{k}^{i},\]
so that this set of forms is suitable for the incarnation of Equation (5.8) in this setting.

As we mentioned above, another constraint to be taken into account has to do with the coordinates \(v_{\beta}^{\alpha}\); the most natural thing is to use the map \(j:J^{1}\tau\to J^{1}\left(\tau\circ\beta\right)\) discussed in Example 1, where the classical notion of affine connection was introduced. In particular, it was proved there (see Equation (5.4)) that the image set of the map \(j\) is described by the equations
\[v^{\alpha}=0,\quad v_{\beta}^{\alpha}=-e_{\beta}^{i}e_{i}^{\alpha}.\]
Then the constraint in this case deals with the extra degrees of freedom \(v_{\beta}^{\alpha}\) through this map; concretely, we are imposing the form
\[x\mapsto\left(\tau_{10}\left(s\left(x\right)\right),j\left(s\left(x\right)\right)\right)\in\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\]
for the allowed sections in the variational problem with Lagrangian \(\mathcal{L}_{CS}\), where \(s:U\subset M\to J^{1}\tau\) is a section of \(\tau_{1}:J^{1}\tau\to M\).

**Definition 7** (Constraints for Chern-Simons variational problem on \(A\left(3,\mathbb{R}\right)\)).: We will say that a section \(\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) is _admissible for the Chern-Simons variational problem_ if and only if
\[\sigma\left(x\right)=\left(\tau_{10}\left(s\left(x\right)\right),j\left(s\left(x\right)\right)\right)\]
for some section \(s:U\subset M\to J^{1}\tau\) and also
\[\sigma^{*}\left(\pi_{\mathfrak{p}}\circ\theta_{J^{1}\left(\tau\circ\beta\right)}^{*}\right)=0.\]
Admissible sections have the following property.
**Proposition 12**.: _Let \(\theta_{J^{1}\left(\tau\circ\beta\right)}\in\Omega^{1}\left(J^{1}\left(\tau\circ\beta\right),\mathfrak{a}\left(m\right)\right)\) be the canonical connection on \(J^{1}\left(\tau\circ\beta\right)\) and \(\theta_{J^{1}\tau}\in\Omega^{1}\left(J^{1}\tau,\mathfrak{gl}\left(m\right)\right)\) be the canonical connection form on \(J^{1}\tau\). Then_
\[j^{*}\theta_{J^{1}\left(\tau\circ\beta\right)}=\theta_{J^{1}\tau}+\tau_{10}^{*}\varphi.\]

Proof.: We will give an argument in local terms. Using Equation (C.6) and the local expression (5.4), it follows that
\[j^{*}\theta_{J^{1}\left(\tau\circ\beta\right)}=e_{\beta}^{i}\left(\mathrm{d}e_{j}^{\beta}-e_{j\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\right)\otimes E_{i}^{j}+e_{\beta}^{i}\delta_{\alpha}^{\beta}\,\mathrm{d}x^{\alpha}\otimes e_{i}=\theta_{J^{1}\tau}+\tau_{10}^{*}\varphi,\]
as required.

With these elements at hand, it is immediate to formulate the variational problem we will use to represent Chern-Simons gauge theory in this context.

**Definition 8** (Chern-Simons variational problem on \(A\left(3,\mathbb{R}\right)\)).: It is the variational problem prescribed by the action
\[\sigma\mapsto\int_{U}\sigma^{*}\left(\mathcal{L}_{CS}\right)\]
for \(\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) an admissible section.

From Propositions 11 and 12 we obtain the correspondence between Chern-Simons field theory and gravity with basis in this setting.

**Theorem 1**.: _The extremals of the Chern-Simons variational problem on \(A\left(3,\mathbb{R}\right)\) are in one-to-one correspondence with the extremals of the variational problem described in Definition 5._

Proof.: The correspondence is determined by the bijective map
\[s\mapsto\sigma:=\left(\tau_{10}\circ s,j\circ s\right)\]
between a section \(s\) of \(\tau_{1}:J^{1}\tau\to M\) and an admissible section \(\sigma\) for \(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\).

### Gauge properties of the Chern-Simons Lagrangian \(3\)-form

In order to prove that this variational problem is able to reproduce the usual Chern-Simons field theory on a subbundle, it will be necessary to prove that \(\mathcal{L}_{CS}\), as established by Definition 6, has the correct transformation properties with respect to a gauge transformation (see Appendix A). The following lemma, which is a reformulation of a previous result of Freed [10] in this new setting, is devoted to this end.

**Lemma 8**.: _Let \(\left(Q,\pi,N\right)\) be an \(H\)-principal bundle; indicate with \(\theta\in\Omega^{1}\left(J^{1}\pi,\mathfrak{h}\right)\) the canonical connection on the bundle_
\[p_{H}^{J^{1}\pi}:J^{1}\pi\to C\left(Q\right).\]
_Also, let \(s:U\subset N\to J^{1}\pi\) be a local section, and \(g:U\to H\) a map._
_Define the new section_
\[\overline{s}:U\to J^{1}\pi:x\mapsto s\left(x\right)\cdot g\left(x\right).\]
_Then_
\[\left.\left(\overline{s}^{*}\theta\right)\right|_{x}=\mathrm{Ad}_{g\left(x\right)^{-1}}\circ\left.\left(s^{*}\theta\right)\right|_{x}+\left.\left(g^{*}\lambda\right)\right|_{x},\]
_where \(\lambda\in\Omega^{1}\left(H,\mathfrak{h}\right)\) is the (left) Maurer-Cartan \(1\)-form on \(H\)._

In the previous setting let us take \(Q=\mathrm{AM}\). By using the identification
\[\mathrm{LM}\simeq\gamma\left(\mathrm{LM}\right)\subset\mathrm{AM}\]
we can consider
\[\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\subset J^{1}\left(\tau\circ\beta\right);\]
therefore, any section \(s:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) can be seen as a section of the bundle \(\left(\tau\circ\beta\right)_{1}:J^{1}\left(\tau\circ\beta\right)\to M\) taking values in this subbundle. With this in mind, we have the following consequence of the previous lemma.

**Corollary 4**.: _For \(\mathcal{L}_{CS}\) given by Definition 6 and a section \(\overline{s}\) constructed as in the previous lemma, the following relation holds_
\[\overline{s}^{*}\mathcal{L}_{CS}=s^{*}\mathcal{L}_{CS}+\mathrm{d}\left\langle\mathrm{Ad}_{g^{-1}}\circ s^{*}\left(\theta^{*}\right)\overset{\wedge}{,}g^{*}\lambda\right\rangle-\frac{1}{6}\left\langle g^{*}\lambda\overset{\wedge}{,}\left[g^{*}\lambda\overset{\wedge}{,}g^{*}\lambda\right]\right\rangle.\]

_Remark 5_.: The 3-form
\[\left\langle g^{*}\lambda\overset{\wedge}{,}\left[g^{*}\lambda\overset{\wedge}{,}g^{*}\lambda\right]\right\rangle\]
is closed; therefore, the last term does not contribute to the action when performing variations, and so
\[\delta\int_{M}\overline{s}^{*}\mathcal{L}_{CS}=\delta\int_{M}s^{*}\mathcal{L}_{CS}.\]

### Chern-Simons variational problem on \(A\left(3\right)\) as extension of the Wise variational problem

We will prove in this section that sections of \(J^{1}\tau\) that are extensions of extremals for the Wise variational problem on the jet bundle \(J^{1}\pi_{\zeta}\) associated to any \(K\)-structure \(O_{\zeta}\) are extremals for the Chern-Simons variational problem on \(A\left(3\right)\), and vice versa. In order to achieve this result, we will need to use the relationship that connects extensions of Cartan connections with the original connections, as described in Diagram (4.4). In this case, with the help of the naturality of the canonical connection established in Lemma 2, the following proposition can be proven (the notation used is the one employed in Section 4).

**Proposition 13**.: _Let_
\[\Gamma:P\to J^{1}\pi_{\left[G_{1}\right]},\qquad\Gamma^{\sharp}:P\left[G_{2}\right]\to J^{1}\pi_{\left[G\right]}\]
_be a pair of (generalized) Cartan connections such that_
\[j^{1}\gamma_{G_{1}}\circ\Gamma=\Gamma^{\sharp}\circ\gamma_{H}^{2}.\]
_Then_
\[\left(\Gamma^{\sharp}\circ\gamma_{H}^{2}\right)^{*}\theta_{J^{1}\pi_{\left[G\right]}}=\Gamma^{*}\theta_{J^{1}\pi_{\left[G_{1}\right]}}.\]

We will use this result to prove that the Chern-Simons variational problem on \(A\left(3\right)\) can be seen as an extension of the Wise variational problem, namely, that we can establish a one-to-one correspondence between the extremals of these variational problems through the operations of extension and reduction of generalized Cartan connections, as defined in Section 4.
To this end, let us apply Proposition 13 to the diagram of bundles and connections described in Diagram (4.4). Now, recall from Section 4.3.3 that the choice of a metric \(\zeta:M\to\Sigma\) allows us to select a \(K\)-structure \(O_{\zeta}\subset\mathrm{LM}\) and a subbundle \(O_{\zeta}^{\mathrm{aff}}\subset\mathrm{AM}\); let us indicate with
\[\tau_{\zeta}^{\mathrm{aff}}:O_{\zeta}^{\mathrm{aff}}\to M\]
the restriction of the canonical projection \(\tau^{\mathrm{aff}}:\mathrm{AM}\to M\) to this subbundle. It follows that if
\[\sigma:U\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\]
is a section of the jet space for the affine frame bundle, and
\[\sigma_{\zeta}:U\to\gamma_{\zeta}^{*}\left(J^{1}\left(\left.\left(\tau\circ\beta\right)\right|_{O_{\zeta}^{\mathrm{aff}}}\right)\right)\]
is a section of the jet space for the restriction of the affine frame bundle to bases orthonormal with respect to the metric \(\zeta\), then they will be related by a relation of reduction or extension if and only if
\[\tau_{10}\circ\mathrm{pr}_{1}\circ\sigma=\left(\tau_{\zeta}^{\mathrm{aff}}\right)_{10}\circ\mathrm{pr}_{1}\circ\sigma_{\zeta}\]
and
\[j^{1}\gamma_{\zeta}^{\mathrm{aff}}\circ\mathrm{pr}_{2}\circ\sigma=\mathrm{pr}_{2}\circ\sigma_{\zeta}\circ\gamma_{O_{\zeta}},\]
where \(\mathrm{pr}_{i},i=1,2\), are the projections onto the factors in the Cartesian product. Then by Proposition 13 we will have that
\[\sigma^{*}\mathcal{L}_{CS}=\sigma_{\zeta}^{*}\lambda_{CS}\]
for sections related by the operations of reduction or extension of generalized Cartan connections; it means that the correspondences
\[\sigma\mapsto\sigma_{\zeta},\qquad\sigma_{\zeta}\mapsto\sigma\]
establish a one-to-one correspondence between the extremals of the Chern-Simons variational problem on \(A\left(3\right)\) and those of the Wise variational problem, as required. Thus, we can prove the following result.

**Theorem 2**.: _The operations of reduction and extension of generalized Cartan connections establish a one-to-one correspondence between the extremals of the Wise variational problem for any first order geometry associated to the pair \(\left(\mathrm{SO}\left(2,1\right)\times\mathbb{R}^{3},\mathrm{SO}\left(2,1\right)\right)\) (Definition 4) and the Chern-Simons variational problem on \(A\left(3\right)\) (as it is described by Definition 8)._

Proof.: Let us suppose that we have a section
\[\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\subset\mathrm{LM}\times J^{1}\left(\tau\circ\beta\right);\]
then, the induced section
\[s_{\sigma}:=\mathrm{pr}_{1}\circ\sigma\]
has its image in a subbundle \(O_{\zeta}\subset\mathrm{LM}\) for some metric \(\zeta\), which is given by the formula
\[\zeta:=\eta^{ij}X_{i}\otimes X_{j},\]
where
\[s_{\sigma}\left(x\right)=\left(X_{1}\left(x\right),\cdots,X_{m}\left(x\right)\right),\qquad x\in U.\]
Recalling Remark 3 and the constraints (5.13), we will have that its associated map \(\Gamma_{A_{O_{\zeta}}}\) has its image in \(J^{1}\left(\left.\left(\tau\circ\beta\right)\right|_{O_{\zeta}^{\mathrm{aff}}}\right)\), and so it can be reduced to a connection \(\sigma_{\zeta}\) for the subbundle \(O_{\zeta}\).
Now, the complicated part of the proof is to show that each extremal section of the variational problem in the restricted bundle is an extremal section for the general variational problem, because the variations of the first problem do not encompass all possible variations of the second problem. So, let us suppose that we have an extremal section \(\sigma_{\zeta}:U\to\gamma_{\zeta}^{*}\left(J^{1}\left(\left.\left(\tau\circ\beta\right)\right|_{O_{\zeta}^{\mathrm{aff}}}\right)\right)\) for the restricted variational problem, and let \(\sigma:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) be the section of \(\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) induced by \(\sigma_{\zeta}\). Consider a variation \(\sigma_{t}:U\subset M\to\gamma^{*}\left(J^{1}\left(\tau\circ\beta\right)\right)\) for \(\sigma\); then for every \(t\) there exists a map \(g_{t}:U\to A\left(3\right)\) such that the section
\[\widetilde{\sigma}_{t}:=\sigma_{t}\cdot g_{t}\]
verifies the condition \(\operatorname{Im}\left(\mathrm{pr}_{1}\circ\widetilde{\sigma}_{t}\right)\subset O_{\zeta}\), and so it is induced by a variation \(\overline{\sigma}_{t}\) for \(\sigma_{\zeta}\). Then from Corollary 4 we have that
\[\overline{\sigma}_{t}^{*}\mathrm{d}\lambda_{CS}=\widetilde{\sigma}_{t}^{*}\mathrm{d}\mathcal{L}_{CS}=\sigma_{t}^{*}\mathrm{d}\mathcal{L}_{CS},\]
and therefore \(\sigma\) is an extremal whenever \(\sigma_{\zeta}\) is.

## Appendix A Geometry of principal bundles

The following appendix contains the usual construction of a principal fiber bundle and its jet space from a local point of view. Although this construction is well known, the article utilizes some of its consequences; therefore, we have decided to include it here to establish the notation and provide the reader with a quick reference to these results.

### Connections and principal bundles

It is usual to represent a principal connection by a family of locally defined \(\mathfrak{g}\)-valued \(1\)-forms \(A_{U}\in\Omega^{1}\left(U,\mathfrak{g}\right)\), with \(U\) belonging to a covering \(\mathcal{C}:=\left\{U\right\}\) of \(M\); these forms should obey a _gauge transformation condition_: For every pair \(U,V\in\mathcal{C}\) there must exist a map
\[t_{UV}:U\cap V\to G\]
such that
\[A_{V}=\mathrm{Ad}_{t_{UV}^{-1}}\left(A_{U}\right)+t_{UV}^{-1}\,\mathrm{d}t_{UV}=\mathrm{Ad}_{t_{UV}^{-1}}\left(A_{U}\right)+t_{UV}^{*}\lambda, \tag{A.1}\]
where \(\lambda\in\Omega^{1}\left(G,\mathfrak{g}\right)\) is the (left) Maurer-Cartan form on \(G\). Additionally, these maps must be compatible in the sense that, for every \(U,V,W\in\mathcal{C}\) such that \(U\cap V\cap W\neq\emptyset\),
\[t_{UW}\left(x\right)=t_{UV}\left(x\right)\cdot t_{VW}\left(x\right)\]
for every \(x\in U\cap V\cap W\). The existence of these maps is equivalent to having a \(G\)-principal bundle on \(M\) [13, p. 51], for which the family \(\mathcal{C}\) becomes a covering of trivializing open sets; although it may seem obvious, we want to stress the fact that a principal bundle is singled out when dealing with a connection described in this way.

Before going on, let us briefly discuss a notion of equivalence involving the principal bundle structure. As we know, a gauge transformation does not change the connection; therefore, a collection of forms
\[A_{U}^{\prime}:=\mathrm{Ad}_{g_{U}^{-1}}\left(A_{U}\right)+g_{U}^{*}\lambda,\]
where \(g_{U}:U\to G\) is a family of smooth functions, should describe the same connection.
Accordingly, the structure functions \(t_{UV}\) must change obeying the rule
\[t_{UV}^{\prime}:U\cap V\to G:x\mapsto g_{U}\left(x\right)^{-1}t_{UV}\left(x\right)g_{V}\left(x\right), \tag{A.2}\]
with the underlying principal bundle remaining invariant; therefore, sets of transition functions \(\left\{t_{UV}\right\},\left\{t_{UV}^{\prime}\right\}\) related by Equation (A.2) should be considered as equivalent.

Let us look more closely at the principal bundle so constructed. Consider the triples
\[\left(U,x,a\right)\in\mathcal{C}\times M\times G\]
such that \(x\in U\); we say that \((U,x,a)\) is equivalent to \((V,y,b)\) if and only if \(U\cap V\neq\emptyset\), \(x=y\) and
\[b=t_{VU}\left(x\right)a.\]
We will indicate with \([U,x,a]\) the equivalence class containing \((U,x,a)\); the space obtained by quotient by this equivalence relation becomes a \(G\)-principal bundle \(\pi:P\to M\), where
\[\pi\left(\left[U,x,a\right]\right)=x\]
and the \(G\)-action is simply given by
\[\left[U,x,a\right]\cdot h=\left[U,x,ah\right].\]
The covering \(\mathcal{C}\) consists of trivializing open sets; in fact, on \(\pi^{-1}\left(U\right)\) we have the trivializing map
\[t_{U}:\pi^{-1}\left(U\right)\to U\times G:\left[U,x,a\right]\mapsto\left(x,a\right).\]
This description of the bundle \(P\) allows us to construct a family of local sections for \(P\), namely
\[s_{U}:U\to\pi^{-1}\left(U\right):x\mapsto\left[U,x,e\right],\]
where we have used the symbol \(e\) for the unit in \(G\); it follows that
\[s_{V}\left(x\right)=\left[V,x,e\right]=\left[U,x,t_{UV}\left(x\right)\right]=\left[U,x,e\right]\cdot t_{UV}\left(x\right)=s_{U}\left(x\right)\cdot t_{UV}\left(x\right) \tag{A.3}\]
for every \(x\in U\cap V\). It means that the local sections \(s_{U}\) are related by the gauge transformations associated to the transition functions \(t_{UV}\).

Let \(U\in\mathcal{C}\) be an open set in the covering; then the pair \((A_{U},s_{U})\) allows us to construct the connection form \(\omega\) on \(P\) through the formula [20]
\[\left.\omega\right|_{u}:=\operatorname{Ad}_{\left(g_{U}\left(u\right)^{-1}\right)}\left(\pi^{*}\left.A_{U}\right|_{x}\right)+g_{U}^{*}\lambda; \tag{A.4}\]
here \(x=\pi\left(u\right)\), \(g_{U}:\pi^{-1}\left(U\right)\to G\) is defined by
\[u=s_{U}\left(x\right)\cdot g_{U}\left(u\right)\]
and \(\lambda\) is the (left) Maurer-Cartan form on \(G\). The transformation properties of these local data imply that
\[\operatorname{Ad}_{\left(g_{U}\left(u\right)^{-1}\right)}\left(\pi^{*}\left.A_{U}\right|_{x}\right)+g_{U}^{*}\lambda=\operatorname{Ad}_{\left(g_{V}\left(u\right)^{-1}\right)}\left(\pi^{*}\left.A_{V}\right|_{x}\right)+g_{V}^{*}\lambda,\qquad x=\pi\left(u\right),\]
for any pair \(U,V\in\mathcal{C}\) and \(u\in\pi^{-1}\left(U\cap V\right)\); therefore, this definition is independent of the open set used to calculate it through Eq. (A.4).

### Gauge transformations

Recall that a _gauge transformation of a principal bundle_ \(P\) is a bundle map \(\phi:P\to P\) over the identity that commutes with the \(G\)-action, namely
\[\phi\left(u\cdot g\right)=\phi\left(u\right)\cdot g.\]
Using the trivialization maps \(t_{U}:\pi^{-1}\left(U\right)\to U\times G\), a gauge transformation is locally described by the maps
\[\phi_{U}\left(x,a\right):=\left(t_{U}\circ\phi\circ t_{U}^{-1}\right)\left(x,a\right)=\left(x,h_{U}\left(x\right)a\right),\]
where the functions \(h_{U}:U\to G\) must have the following transformation property
\[h_{V}\left(x\right)=t_{VU}\left(x\right)h_{U}\left(x\right)t_{UV}\left(x\right)\]
for every \(x\in U\cap V\).
In fact, let us define
\[\phi\left(\left[U,x,a\right]\right):=\left[U,x,h_{U}\left(x\right)a\right]\]
for any \([U,x,a]\in P\); if \(x\in U\cap V\) and \(b=t_{VU}\left(x\right)a\), we will have that
\[\left[V,x,h_{V}\left(x\right)b\right]=\left[V,x,t_{VU}\left(x\right)h_{U}\left(x\right)a\right]=\left[U,x,h_{U}\left(x\right)a\right],\]
showing that this definition is independent of the open set containing \(x\).

### Jet bundle and the bundle of principal connections

Let us suppose that every \(U\in\mathcal{C}\) is a coordinate domain; therefore, we can write
\[A_{U}=\zeta_{i}^{U}\,\mathrm{d}x^{i},\qquad\zeta_{i}^{U}:U\to\mathfrak{g},\]
and so, locally, we have that a connection can be seen as a section of the bundle
\[\operatorname{pr}_{1}:U\times G\times\overbrace{\mathfrak{g}\otimes\cdots\otimes\mathfrak{g}}^{\text{m times}}\to U.\]
As with \(P\), we can take advantage of the transition functions \(t_{UV}\) in order to glue together these local fibrations. Accordingly, let us consider the \((m+3)\)-uples
\[(U,x,a,\xi_{1},\cdots,\xi_{m})\in\mathcal{C}\times M\times G\times\overbrace{\mathfrak{g}\otimes\cdots\otimes\mathfrak{g}}^{\text{m times}}\]
such that \(x\in U\), and define the equivalence relation given by
\[(U,x,a,\xi_{i})\sim(V,y,b,\zeta_{i})\]
if and only if \(U\cap V\neq\emptyset\), \(x=y\), \(b=t_{VU}\left(x\right)a\) and
\[\zeta_{i}\,\mathrm{d}x^{i}=\xi_{i}\,\mathrm{d}x^{i}+\mathrm{Ad}_{a^{-1}}\left(t_{VU}^{*}\lambda\right).\]
The quotient space \(J^{1}\pi\) is a manifold, the so called _jet space of the bundle \(\pi:P\to M\)_; we have canonical projections
\[\pi_{10}:J^{1}\pi\to P:[U,x,a,\xi_{i}]\mapsto[U,x,a]\]
and
\[\pi_{1}:J^{1}\pi\to M:[U,x,a,\xi_{i}]\mapsto x,\]
giving it bundle structure on both spaces \(P\) and \(M\). As before, we have a local trivialization for \(J^{1}\pi\) through the formula
\[T_{U}:\pi_{1}^{-1}\left(U\right)\to U\times G\times\overbrace{\mathfrak{g}\otimes\cdots\otimes\mathfrak{g}}^{\text{m times}}:[U,x,a,\xi_{i}]\mapsto\left(x,a,\xi_{i}\right).\]
Every element \([U,x,g,\xi_{i}]\in J^{1}\pi\) is equivalent to a linear map
\[\frac{\partial}{\partial x^{i}}\in T_{x}M\longmapsto T_{[U,x,e]}R_{g}\circ T_{x}s_{U}\left(\frac{\partial}{\partial x^{i}}\right)+\left(\xi_{i}\right)_{P}\left([U,x,g]\right)\in T_{[U,x,g]}P,\]
where \(\zeta_{P}\) is the infinitesimal generator of the \(G\)-action on \(P\) corresponding to the element \(\zeta\in\mathfrak{g}\). Using the fact that
\[t_{U}\circ R_{h}=(id\times R_{h})\circ t_{U}\]
for every \(h\in G\), we can see that
\[Tt_{U}\circ\zeta_{P}=\zeta_{G}^{R}\]
for all \(\zeta\in\mathfrak{g}\); here \(\zeta_{G}^{R}\) indicates the infinitesimal generator on \(G\) associated to the right action. Therefore, the map \(T_{U}\) is induced by \(t_{U}\). It can be seen that \(J^{1}\pi\) is a \(G\)-space; the action is given by the formula
\[[U,x,a,\xi_{i}]\cdot h=[U,x,ah,\mathrm{Ad}_{h^{-1}}\xi_{i}]\]
for all \(h\in G\).
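As a quick check (ours, and immediate from the definitions), this action is compatible with the equivalence relation defining \(J^{1}\pi\): if \((U,x,a,\xi_{i})\sim(V,x,b,\zeta_{i})\), then \(bh=t_{VU}\left(x\right)\left(ah\right)\) and
\[\mathrm{Ad}_{h^{-1}}\zeta_{i}\,\mathrm{d}x^{i}=\mathrm{Ad}_{h^{-1}}\xi_{i}\,\mathrm{d}x^{i}+\mathrm{Ad}_{h^{-1}}\circ\mathrm{Ad}_{a^{-1}}\left(t_{VU}^{*}\lambda\right)=\mathrm{Ad}_{h^{-1}}\xi_{i}\,\mathrm{d}x^{i}+\mathrm{Ad}_{\left(ah\right)^{-1}}\left(t_{VU}^{*}\lambda\right),\]
so that \((U,x,ah,\mathrm{Ad}_{h^{-1}}\xi_{i})\sim(V,x,bh,\mathrm{Ad}_{h^{-1}}\zeta_{i})\) as well, and the action descends to the quotient.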
This action allows us to construct a quotient bundle
\[C\left(P\right):=J^{1}\pi/G,\]
and the canonical projection gives rise to a new bundle
\[p_{G}^{J^{1}\pi}:J^{1}\pi\to C\left(P\right).\]
The map \(\overline{\pi}:C\left(P\right)\to M\) such that
\[\pi_{1}=\overline{\pi}\circ p_{G}^{J^{1}\pi}\]
gives \(C\left(P\right)\) the structure of a bundle on \(M\); its elements are the equivalence classes
\[\left[U,x,a,\xi_{i}\right]_{G}:=\left\{\left[U,x,ah,\mathrm{Ad}_{h^{-1}}\xi_{i}\right]:h\in G\right\}.\]
For every \(U\in\mathcal{C}\), we have a trivialization map
\[\phi_{U}:\overline{\pi}^{-1}\left(U\right)\to U\times\mathfrak{g}^{\otimes m}:\left[U,x,a,\xi_{i}\right]_{G}\mapsto\left(x,\mathrm{Ad}_{a}\xi_{i}\right);\]
for every pair \(U,V\in\mathcal{C}\) such that \(U\cap V\neq\emptyset\), we have that
\[\phi_{V}\circ\phi_{U}^{-1}\left(x,\xi_{i}\right) =\phi_{V}\left(\left[U,x,e,\xi_{i}\right]_{G}\right)\]
\[=\phi_{V}\left(\left[V,x,t_{VU}\left(x\right),\xi_{i}+\left(t_{VU}^{*}\lambda\right)\left(\frac{\partial}{\partial x^{i}}\right)\right]_{G}\right)\]
\[=\left(x,\mathrm{Ad}_{t_{VU}\left(x\right)}\left[\xi_{i}+\left(t_{VU}^{*}\lambda\right)\left(\frac{\partial}{\partial x^{i}}\right)\right]\right)\]
\[=\left(x,\mathrm{Ad}_{t_{UV}\left(x\right)^{-1}}\xi_{i}-\left(t_{UV}^{*}\lambda\right)\left(\frac{\partial}{\partial x^{i}}\right)\right),\]
where it was used that \(t_{VU}\left(x\right)=t_{UV}\left(x\right)^{-1}\). The fact that this expression is equivalent to the transformation law (A.1) allows us to consider \(C\left(P\right)\) as the bundle of principal connections for \(P\); namely, we have an identification
\[\left[U,x,a,\xi_{i}\right]_{G}\longleftrightarrow A_{U}:=-\mathrm{Ad}_{a}\xi_{i}\,\mathrm{d}x^{i}. \tag{A.5}\]
With this correspondence in mind, the existence of a \(\mathfrak{g}\)-valued \(1\)-form \(A_{U}\) for every \(U\in\mathcal{C}\) such that the compatibility conditions (A.1) are fulfilled is equivalent to the existence of a section
\[\sigma:M\to C\left(P\right).\]
In fact, we have the formula
\[\sigma\left(x\right)=\left[U,x,e,-\left.A_{U}\right|_{x}\left(\frac{\partial}{\partial x^{i}}\right)\right]_{G}\]
for every \(U\in\mathcal{C}\) and \(x\in U\).

### The canonical connection form on \(J^{1}\pi\)

A fundamental geometric structure on \(J^{1}\pi\) is the _canonical connection form_ \(A\), which is a \(\mathfrak{g}\)-valued \(1\)-form on \(J^{1}\pi\) inducing a connection on the \(G\)-principal bundle \(p_{G}^{J^{1}\pi}:J^{1}\pi\to C\left(P\right)\); for every \(U\in\mathcal{C}\), it is given by the formula
\[A_{\left[U,x,a,\xi_{i}\right]}:=\left.\lambda\right|_{a}-\xi_{i}\,\mathrm{d}x^{i}.\]
In fact, because \(b=t_{VU}\left(x\right)a\), it results that
\[\left.\lambda\right|_{b}=\left.\lambda\right|_{a}+\mathrm{Ad}_{a^{-1}}\left(t_{VU}^{*}\lambda\right);\]
thus, if \([U,x,a,\xi_{i}]=[V,x,b,\zeta_{i}]\) for \(x\in U\cap V\), we will have
\[A|_{[V,x,b,\zeta_{i}]} =\lambda|_{b}-\zeta_{i}\,\mathrm{d}x^{i}\]
\[=\lambda|_{a}+\operatorname{Ad}_{a^{-1}}\left(t_{VU}^{*}\lambda\right)-\left(\xi_{i}\,\mathrm{d}x^{i}+\operatorname{Ad}_{a^{-1}}\left(t_{VU}^{*}\lambda\right)\right)\]
\[=\lambda|_{a}-\xi_{i}\,\mathrm{d}x^{i}\]
\[=A|_{[U,x,a,\xi_{i}]}\,,\]
and the form \(A\) is well-defined. There is an important property that the canonical form has.
In order to formulate it, let us consider another way to specify a connection, namely, through an equivariant bundle map
\[\Gamma:P\to J^{1}\pi.\]
In fact, using the above description of these bundles, and given a local description \(\{A_{U}:U\in\mathcal{C}\}\) for a connection, we can construct the map
\[\Gamma:[U,x,a]\mapsto\left[U,x,a,-\operatorname{Ad}_{a^{-1}}\left(A_{U}|_{x}\left(\frac{\partial}{\partial x^{i}}\right)\right)\right],\]
where the identification (A.5) was used. Having the gauge transformation property (A.1) in mind, we can prove that this map is well-defined. Now, given that
\[s_{U}\left(x\right)=[U,x,e]\]
and from \(u=s_{U}\left(x\right)\cdot g_{U}\left(u\right)\) for every \(u=[U,x,a]\in\pi^{-1}\left(U\right)\), we obtain the formula
\[u=[U,x,g_{U}\left(u\right)]\,.\]
Then
\[\Gamma^{*}\left(A|_{\Gamma\left(u\right)}\right)=\Gamma^{*}\left(\lambda-\xi_{i}\,\mathrm{d}x^{i}\right)=g_{U}^{*}\lambda+\operatorname{Ad}_{g_{U}\left(u\right)^{-1}}\left.A_{U}\right|_{x}=\omega|_{u}\,,\]
namely, the connection form can be obtained through pullback along the map \(\Gamma\) of the canonical form.

## Appendix B Local and global Chern-Simons Lagrangians

Let \(K\) be a Lie group and \(\pi_{\zeta}:R_{\zeta}\to M\) a \(K\)-principal bundle (the notation will be explained later); on its first order jet space
\[\left(\pi_{\zeta}\right)_{1}:J^{1}\pi_{\zeta}\to M\]
we will define a (global) variational problem, which we will prove to represent Chern-Simons gauge theory. In order to accomplish this task, it will be necessary to lift it to \(J^{1}\left(\pi_{\zeta}\right)_{1}\), and compare it with the variational problem defined by local data, which lives on \(J^{1}\overline{\pi}_{\zeta}\).

Let us now suppose that we have an invariant polynomial \(q:\mathfrak{k}\to\mathbb{R}\) of degree \(n\). According to the Chern-Simons theory [13, 14], the \(2n\)-form
\[\alpha:=q\left(F\right)\in\Omega^{2n}\left(J^{1}\pi_{\zeta}\right)\]
is closed. For example, when the Lie algebra comes with an invariant bilinear form, we can consider the quadratic polynomial
\[q\left(F\right):=\left\langle F\overset{\wedge}{,}F\right\rangle.\]
Additionally, it can be proved that the bundle
\[pr_{1}:J^{1}\pi_{\zeta}\times_{C\left(R_{\zeta}\right)}J^{1}\pi_{\zeta}\to J^{1}\pi_{\zeta}\]
is a trivial \(K\)-principal bundle; it means that \(\alpha\) is not only closed, but also exact. Therefore, there exists a \((2n-1)\)-form
\[\beta:=\operatorname{T}q\left(A,F\right)\in\Omega^{2n-1}\left(J^{1}\pi_{\zeta}\right) \tag{B.1}\]
such that
\[\mathrm{d}\beta=\operatorname{pr}_{1}^{*}\alpha;\]
the polynomial \(\operatorname{T}q\) can be found by a transgression formula [10]. When \(q\) is the quadratic form determined by an invariant bilinear form, the transgression is given by
\[\operatorname{T}q\left(A,F\right)=\left\langle A\overset{\wedge}{,}F\right\rangle-\frac{1}{6}\left\langle A\overset{\wedge}{,}\left[A\overset{\wedge}{,}A\right]\right\rangle. \tag{B.2}\]
How can these forms be related to the local forms usually used to describe Chern-Simons field theory? In this case, we have another \(K\)-principal bundle structure, namely the quotient map
\[p_{K}^{J^{1}\pi_{\zeta}}:J^{1}\pi_{\zeta}\to C\left(R_{\zeta}\right),\]
and \(q\left(F\right)\) defines a Chern class for it.
Accordingly, there exists a \(2n\)-form \(\gamma\) on \(C\left(R_{\zeta}\right)\) such that
\[\left(p_{K}^{J^{1}\pi_{\zeta}}\right)^{*}\gamma=q\left(F\right).\]
Moreover, using the canonical \(2\)-form \(F_{2}\) on the bundle of connections \(C\left(R_{\zeta}\right)\), we can prove that
\[\gamma=q\left(F_{2}\right).\]
But now, this bundle is not trivial in general; in short, from the decomposition
\[J^{1}\pi_{\zeta}=R_{\zeta}\times_{M}C\left(R_{\zeta}\right),\]
we obtain that it is trivial if and only if the bundle
\[\pi_{\zeta}:R_{\zeta}\to M\]
is. In consequence, it is not expected that the \(2n\)-form \(\gamma\), although closed, should also be exact; this is the reason why, although we have a global variational problem on \(J^{1}\pi_{\zeta}\), it cannot be reproduced on \(C\left(R_{\zeta}\right)\), even keeping in mind that the transformation properties of \(\mathcal{L}_{CS}\) are telling us that the degrees of freedom associated to the \(R_{\zeta}\)-factor can be ignored. Nevertheless, because \(R_{\zeta}\) admits trivializing open sets, the previous considerations can be used to associate to every such set \(U\subset M\) a Lagrangian \(\mathcal{L}_{U}:=\operatorname{T}q\left(A_{2}^{U},F_{2}\right)\), where
\[A_{2}^{U}\in\Omega^{1}\left(\overline{\pi}_{\zeta}^{-1}\left(U\right),\mathfrak{k}\right)\]
is a \(1\)-form such that
\[\left(p_{K}^{J^{1}\pi_{\zeta}}\right)^{*}A_{2}^{U}=A\big{|}_{\left(\left(\pi_{\zeta}\right)_{1}\right)^{-1}\left(U\right)}.\]

_Remark 6_.: A study of conditions ensuring the existence of global solutions for the local variational problem for Chern-Simons gauge theory can be found in [11]. In this regard, it is interesting to note that here we have replaced a variational problem determined by local data, whose sections could be globally defined, with a variational problem described by global data, but whose sections are forced to have a local nature (part of these sections are sections of a principal bundle).

## Appendix C Geometry of the affine frame bundle

From now on we will devote ourselves to particularizing this definition to a very specific principal bundle, the so called _affine frame bundle_ (see Definition 9 below), and to relating the variational problem so obtained with Palatini gravity.
### The affine general linear group and the affine frame bundle

We have the splitting short exact sequence
\[1\to\mathbb{R}^{m}\overset{\alpha}{\longrightarrow}A\left(m,\mathbb{R}\right)\overset{\beta}{\longrightarrow}\operatorname{GL}\left(m,\mathbb{R}\right)\to 1,\]
where \(A\left(m,\mathbb{R}\right)\subset\operatorname{GL}\left(m+1,\mathbb{R}\right)\) is the subgroup of matrices of the form
\[B:=\begin{bmatrix}a&\xi\\ 0&1\end{bmatrix},\]
where \(a\in\operatorname{GL}\left(m,\mathbb{R}\right)\) and \(\xi\in\mathbb{R}^{m}\); the maps in the sequence read
\[\alpha\left(\xi\right):=\begin{bmatrix}1&\xi\\ 0&1\end{bmatrix},\qquad\beta\left(\begin{bmatrix}a&\xi\\ 0&1\end{bmatrix}\right):=\begin{bmatrix}a&0\\ 0&1\end{bmatrix},\]
and the splitting is given by
\[\gamma\left(a\right):=\begin{bmatrix}a&0\\ 0&1\end{bmatrix}.\]
Because this short sequence splits, we can consider
\[A\left(m,\mathbb{R}\right)=\operatorname{GL}\left(m,\mathbb{R}\right)\oplus\mathbb{R}^{m},\]
with the identification given by
\[\operatorname{GL}\left(m,\mathbb{R}\right)\oplus\mathbb{R}^{m}\to A\left(m,\mathbb{R}\right):\left(a,\xi\right)\mapsto\alpha\left(\xi\right)\cdot\gamma\left(a\right).\]
Now, let \(A^{m}\) be the set \(\mathbb{R}^{m}\) considered as an affine space; we can set an isomorphism between \(A\left(m,\mathbb{R}\right)\) and the set of affine maps
\[f:A^{m}\to A^{m};\]
in fact, given \(B=\left(a,\xi\right)\), the associated affine map reads
\[f_{B}\left(z\right):=az+\xi\]
for every \(z\in A^{m}\). Using the expression of an element \(\left(a,\xi\right)\) as a matrix, we have that
\[\left(a,\xi\right)^{-1}=\left(a^{-1},-a^{-1}\xi\right)\]
and
\[\operatorname{Ad}_{\left(a,\xi\right)}\left(b,\zeta\right)=\left(\operatorname{Ad}_{a}b,a\zeta-\left(\operatorname{Ad}_{a}b\right)\xi\right) \tag{C.1}\]
for any \(\left(a,\xi\right)\in A\left(m,\mathbb{R}\right),\left(b,\zeta\right)\in\mathfrak{a}\left(m,\mathbb{R}\right)\).

In the same vein, let \(A_{x}\left(M\right)\) be the set \(T_{x}M\) considered as an affine space, for every \(x\in M\). As it is well known [11], the set of affine maps
\[u:A^{m}\to A_{x}\left(M\right)\]
for every \(x\in M\) has the structure of an \(A\left(m,\mathbb{R}\right)\)-principal bundle; the action of an element \(B\in A\left(m,\mathbb{R}\right)\) is simply given by
\[u\cdot B:=u\circ B.\]

**Definition 9** (Bundle of affine frames).: The _bundle of affine frames on \(M\)_ will be the set
\[\operatorname{AM}:=\bigcup_{x\in M}\left\{u:A^{m}\to A_{x}\left(M\right)\text{ affine}\right\}.\]

As it follows from the general theory of principal bundles, there exists a pair of principal bundle morphisms associated to the homomorphisms \(\beta:A\left(m,\mathbb{R}\right)\to\operatorname{GL}\left(m,\mathbb{R}\right)\) and \(\gamma:\operatorname{GL}\left(m,\mathbb{R}\right)\to A\left(m,\mathbb{R}\right)\). For any \(G\)-principal bundle \(\pi:P\to M\), the affine bundle \(\left(C\left(P\right),\overline{\pi},M\right)\) constructed in Appendix A is called the _bundle of connections of the bundle \(P\)_, and we can establish a canonical one-to-one correspondence between its sections and principal connections on \(P\).
The correspondence is given as follows: Any element \(j_{x}^{1}s\in J^{1}\pi\) is a linear map
\[j_{x}^{1}s:T_{x}M\to T_{s\left(x\right)}P\]
such that
\[T_{s\left(x\right)}\pi\circ j_{x}^{1}s=id_{T_{x}M},\]
and so a \(G\)-orbit \(\left[j_{x}^{1}s\right]_{G}\) can be interpreted as a linear map
\[\left[j_{x}^{1}s\right]_{G}:T_{x}M\to\left(TP/G\right)_{x}.\]
Given \(u\in P\), there exists a unique \(m\)-dimensional subspace \(H_{u}\subset T_{u}P\) such that
\[\left[j_{x}^{1}s\right]_{G}\left(T_{x}M\right)=p_{G}^{TP}\left(H_{u}\right);\]
the assignment \(u\mapsto H_{u}\) is the connection associated to \(\left[j_{x}^{1}s\right]_{G}\).

It is a theorem that any connection \(\Gamma:M\to C\left(\mathrm{LM}\right)\) gives rise to a unique connection \(\widetilde{\Gamma}:M\to C\left(\mathrm{AM}\right)\) such that, if
\[\omega_{\mathrm{LM}}\in\Omega^{1}\left(\mathrm{LM},\mathfrak{gl}\left(m,\mathbb{R}\right)\right)\qquad\text{and}\qquad\omega_{\mathrm{AM}}\in\Omega^{1}\left(\mathrm{AM},\mathfrak{a}\left(m,\mathbb{R}\right)\right)\]
are the corresponding connection forms, then
\[\gamma^{*}\omega_{\mathrm{AM}}=\omega_{\mathrm{LM}}+\varphi, \tag{C.2}\]
where \(\varphi\in\Omega^{1}\left(\mathrm{LM},\mathbb{R}^{m}\right)\) is the canonical solder \(1\)-form on \(\mathrm{LM}\).

### The affine frame bundle as extension of the frame bundle

It remains to interpret the affine frame bundle as the extension of the frame bundle using the group immersion \(\operatorname{GL}\left(m,\mathbb{R}\right)\subset A\left(m,\mathbb{R}\right)\).

**Proposition 14**.: _Let \(H=\operatorname{GL}\left(m,\mathbb{R}\right)\), \(G=A\left(m,\mathbb{R}\right)\) and \(P=\mathrm{LM}\). Then_
\[P\times_{H}G\simeq\mathrm{AM}.\]

Proof.: Let us define the map
\[\phi\left(\left[u,\left(h,v\right)\right]_{H}\right):=a\in\left.\mathrm{AM}\right|_{x}\]
if and only if \(x=\tau\left(u\right)\) and
\[a:\mathbb{R}^{m}\to T_{x}M:w\mapsto\left(u\circ h\right)\left(w\right)+u\left(v\right).\]
Then, for any \(\left(h^{\prime},v^{\prime}\right)\in A\left(m,\mathbb{R}\right)\), we have that
\[\phi\left(\left[u,\left(h,v\right)\right]_{H}\cdot\left(h^{\prime},v^{\prime}\right)\right) =\phi\left(\left[u,\left(h,v\right)\left(h^{\prime},v^{\prime}\right)\right]_{H}\right)\]
\[=\phi\left(\left[u,\left(hh^{\prime},hv^{\prime}+v\right)\right]_{H}\right)\]
\[=\left[w\mapsto\left(u\circ h\circ h^{\prime}\right)\left(w\right)+u\left(hv^{\prime}+v\right)\right]\]
\[=\left[w\mapsto u\left(h\left(h^{\prime}w+v^{\prime}\right)\right)+u\left(v\right)\right]\]
\[=\phi\left(\left[u,\left(h,v\right)\right]_{H}\right)\circ\left(h^{\prime},v^{\prime}\right),\]
proving that \(\phi\) is a bundle map. For every \(a\in\left.\mathrm{AM}\right|_{x}\), we have that
\[\phi\left(\left[u,\left(e,v\right)\right]_{H}\right)=a\]
if and only if \(u\left(v\right)=a\left(0\right)\) and
\[u\left(w\right)=a\left(w\right)-a\left(0\right)\]
for all \(w\in\mathbb{R}^{m}\); therefore, \(\phi\) is an epimorphism of bundles.
Additionally, if \(\left[u_{1},\left(h_{1},v_{1}\right)\right]_{H},\left[u_{2},\left(h_{2},v_{2}\right)\right]_{H}\) are such that \[\phi\left(\left[u_{1},\left(h_{1},v_{1}\right)\right]_{H}\right)=\phi\left(\left[u_{2},\left(h_{2},v_{2}\right)\right]_{H}\right),\] then \[\left(u_{1}\circ h_{1}\right)\left(w\right)+u_{1}\left(v_{1}\right)=\left(u_{2}\circ h_{2}\right)\left(w\right)+u_{2}\left(v_{2}\right)\] for all \(w\in\mathbb{R}^{m}\). With \(w=0\) it gives us that (C.3) \[u_{1}\left(v_{1}\right)=u_{2}\left(v_{2}\right)\] and so \[u_{2}\circ h_{2}=u_{1}\circ h_{1}\qquad\Longrightarrow\qquad u_{2}=u_{1}\circ h_{1}\circ h_{2}^{-1}.\] Thus, Equation (C.3) tells us that \[v_{2}=\left(h_{2}\circ h_{1}^{-1}\right)\left(v_{1}\right)\] and it means that \[\left[u_{2},\left(h_{2},v_{2}\right)\right]_{H}=\left[u_{2}\circ\left(h_{2}\circ h_{1}^{-1}\right),\left(h_{1},v_{1}\right)\right]_{H}=\left[u_{1},\left(h_{1},v_{1}\right)\right]_{H}.\] Namely, \(\phi\) is a monomorphism of bundles. ### Local expressions We will use the constructions developed in Section A in order to find coordinates for \(\mathrm{LM}\), \(\mathrm{AM}\) and their jet bundles. The first thing to note is that the frame bundle \(\mathrm{LM}\) can be trivialized on every coordinate chart \(\left(U,\phi\right)\) for \(M\); namely, for \(u\in\tau^{-1}\left(U\right)\) there exists a collection \(\left(e_{i}^{\alpha}\left(u\right)\right)\) of real numbers such that \[u\left(c^{1},\cdots,c^{m}\right)=c^{i}e_{i}^{\alpha}\left(u\right)\frac{\partial}{\partial x^{\alpha}},\qquad\left(c^{1},\cdots,c^{m}\right)\in\mathbb{R}^{m},\] where \(\phi=\left(x^{\alpha}\right)\) are the coordinate functions on \(\phi\left(U\right)\subset\mathbb{R}^{m}\). It induces the coordinate chart on \(\tau^{-1}\left(U\right)\) given by \[\Phi_{U}\left(u\right):=\left(x^{\alpha}\left(\tau\left(u\right)\right),e_{i}^{\alpha}\left(u\right)\right).\] In the same vein, given \(\overline{u}\in\left(\tau\circ\beta\right)^{-1}\left(U\right)\subset\mathrm{AM}\), we can find numbers \(\left(e_{i}^{\alpha}\left(\overline{u}\right),v^{\alpha}\left(\overline{u}\right)\right)\) such that (C.4) \[\overline{u}\left(c^{1},\cdots,c^{m}\right)=\left[c^{i}e_{i}^{\alpha}\left(\overline{u}\right)+v^{\alpha}\left(\overline{u}\right)\right]\frac{\partial}{\partial x^{\alpha}},\qquad\left(c^{1},\cdots,c^{m}\right)\in\mathbb{R}^{m};\] it defines a coordinate chart on \(\left(\tau\circ\beta\right)^{-1}\left(U\right)\) through the formula \[\overline{\Phi}_{U}\left(\overline{u}\right):=\left(x^{\alpha}\left(\tau\left(\beta\left(\overline{u}\right)\right)\right),e_{i}^{\alpha}\left(\overline{u}\right),v^{\alpha}\left(\overline{u}\right)\right).\] The map \(\gamma\) fits nicely with these coordinates; in fact, we have that \[\overline{\Phi}_{U}\circ\gamma\circ\Phi_{U}^{-1}\left(x^{\alpha},e_{i}^{\beta}\right)=\left(x^{\alpha},e_{i}^{\beta},0\right).\] Let us consider now the action of an element \(\left(a,w\right)\in A\left(m,\mathbb{R}\right)=\operatorname{GL}\left(m,\mathbb{R}\right)\oplus\mathbb{R}^{m}\) on \(\mathrm{AM}\); because of the equation (C.4), we have that \[\begin{split}\left[\overline{u}\cdot\left(a,w\right)\right]\left(c^{1},\cdots,c^{m}\right)&=\overline{u}\left(a_{j}^{1}c^{j}+w^{1},\cdots,a_{j}^{m}c^{j}+w^{m}\right)\\ &=\left[\left(a_{j}^{i}c^{j}+w^{i}\right)e_{i}^{\alpha}\left(\overline{u}\right)+v^{\alpha}\left(\overline{u}\right)\right]\frac{\partial}{\partial x^{\alpha}}\\ &=\left[c^{j}a_{j}^{i}e_{i}^{\alpha}\left(\overline{u}\right)+\left(v^{\alpha}\left(\overline{u}\right)+w^{i}e_{i}^{\alpha}\left(\overline{u}\right)\right)\right]\frac{\partial}{\partial x^{\alpha}},\end{split}\] namely, \[\left(x^{\alpha},e_{i}^{\beta},v^{\rho}\right)\cdot\left(a,w\right)=\left(x^{\alpha},a_{j}^{i}e_{i}^{\beta},v^{\rho}+e_{i}^{\rho}w^{i}\right).\] We are ready to deal with connections on \(\operatorname{AM}\) in a local fashion; according to the discussion carried out in Section A.1, the local version of a connection on \(\operatorname{AM}\) is an \(\mathfrak{a}\left(m,\mathbb{R}\right)\)-valued \(1\)-form, namely \[\widetilde{\omega}_{\mathfrak{U}}=\left(\Gamma_{\gamma\beta}^{\alpha}\,dx^{\beta}\otimes E_{\alpha}^{\gamma},\sigma_{\beta}^{\alpha}\,dx^{\beta}\otimes e_{\alpha}\right).\] Accordingly, it can be globalized through formula (A.4); first, recall that in this case we have \[g_{\mathfrak{U}}\left(x^{i},e_{i}^{\alpha},v^{\beta}\right)=\left(e_{i}^{\alpha},v^{\beta}\right),\] and so \[g_{\mathfrak{U}}^{*}\lambda=\left(e_{i}^{\alpha},v^{\beta}\right)^{-1}\left(de_{i}^{\alpha},dv^{\beta}\right)=\left(e_{\alpha}^{i}de_{j}^{\alpha},e_{\beta}^{i}\,dv^{\beta}\right).\] Additionally, the adjoint action formula (C.1) tells us that \[\operatorname{Ad}_{\left(g_{\mathfrak{U}}\left(x^{i},e_{i}^{\alpha},v^{\beta}\right)\right)^{-1}}\widetilde{\omega}_{\mathfrak{U}}=\left(e_{i}^{\gamma}e_{\alpha}^{j}\Gamma_{\gamma\beta}^{\alpha}dx^{\beta}\otimes E_{j}^{i},e_{\alpha}^{i}\left(\Gamma_{\gamma\beta}^{\alpha}v^{\gamma}+\sigma_{\beta}^{\alpha}\right)dx^{\beta}\otimes e_{i}\right),\] so that (C.5) \[\widetilde{\omega}|_{\overline{\Phi}_{\mathfrak{U}}\left(x^{i},e_{j}^{\beta},v^{\gamma}\right)}=\left(e_{\alpha}^{j}\left(de_{i}^{\alpha}+e_{i}^{\gamma}\Gamma_{\gamma\beta}^{\alpha}dx^{\beta}\right)\otimes E_{j}^{i},e_{\alpha}^{i}\left[dv^{\alpha}+\left(\Gamma_{\gamma\beta}^{\alpha}v^{\gamma}+\sigma_{\beta}^{\alpha}\right)dx^{\beta}\right]\otimes e_{i}\right).\] Coordinates \(\Phi_{\mathfrak{U}}\) and \(\overline{\Phi}_{\mathfrak{U}}\) induce coordinates on \(J^{1}\tau\) and \(J^{1}\left(\tau\circ\beta\right)\) respectively, which will be indicated as \[\left(x^{\alpha},e_{i}^{\beta},e_{i\gamma}^{\beta}\right)\qquad\text{and}\qquad\left(x^{\alpha},e_{i}^{\beta},v^{\alpha},e_{i\gamma}^{\beta},v_{\beta}^{\alpha}\right).\] The \(A\left(m,\mathbb{R}\right)\)-action lifts to \(J^{1}\left(\tau\circ\beta\right)\) as follows: \[\left(x^{\alpha},e_{i}^{\beta},v^{\rho},e_{i\gamma}^{\beta},v_{\beta}^{\alpha}\right)\cdot\left(a,w\right)=\left(x^{\alpha},a_{j}^{i}e_{i}^{\beta},v^{\rho}+e_{i}^{\rho}w^{i},a_{j}^{i}e_{i\gamma}^{\beta},v_{\beta}^{\alpha}+e_{i\beta}^{\alpha}w^{i}\right).\] Using the form of this action, we can consider the projection from \(J^{1}\left(\tau\circ\beta\right)\) to \(C\left(\operatorname{AM}\right)\); we have that \[P_{A\left(m,\mathbb{R}\right)}^{J^{1}\left(\tau\circ\beta\right)}\left(x^{\alpha},e_{i}^{\beta},v^{\rho},e_{i\gamma}^{\beta},v_{\beta}^{\alpha}\right)=\left(x^{i},e_{\gamma}^{j}e_{j\beta}^{\alpha},v_{\beta}^{\alpha}-e_{\gamma}^{i}e_{i\beta}^{\alpha}v^{\gamma}\right).\] Let us now consider the canonical connection \(\theta_{J^{1}\left(\tau\circ\beta\right)}\) on \(J^{1}\left(\tau\circ\beta\right)\); it results that the contact structure is \[T\left(\tau\circ\beta\right)_{10}-T_{x}s\circ T\left(\tau\circ\beta\right)_{1}=\left(de_{i}^{\beta}-e_{i\alpha}^{\beta}dx^{\alpha}\right)\otimes\frac{\partial}{\partial e_{i}^{\beta}}+\left(dv^{\beta}-v_{\alpha}^{\beta}dx^{\alpha}\right)\otimes\frac{\partial}{\partial v^{\beta}},\] and 
because the infinitesimal generators for the \(A\left(m,\mathbb{R}\right)\)-action on \(\operatorname{AM}\) are \[\left(E_{i}^{j}\right)_{\operatorname{AM}}\left(x^{\alpha},e_{i}^{\beta},v^{ \gamma}\right)=e_{i}^{\alpha}\frac{\partial}{\partial e_{j}^{\alpha}},\qquad \left(e_{i}\right)_{\operatorname{AM}}\left(x^{\alpha},e_{i}^{\beta},v^{\gamma }\right)=e_{i}^{\alpha}\frac{\partial}{\partial v^{\alpha}},\] we obtain that (C.6) \[\theta_{J^{1}\left(\tau\circ\beta\right)}|_{\left(x^{\alpha},e_{i}^{\beta},v^{ \rho},e_{i\gamma}^{\beta},v_{\beta}^{\alpha}\right)}=e_{\beta}^{j}\left(de_{i}^ {\beta}-e_{i\alpha}^{\beta}dx^{\alpha}\right)\otimes E_{j}^{i}+e_{\beta}^{i} \left(dv^{\beta}-v_{\alpha}^{\beta}dx^{\alpha}\right)\otimes e_{i}.\]
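As a consistency check of the coordinate expression obtained above for the \(A\left(m,\mathbb{R}\right)\)-action, write \(e=\left(e_{i}^{\alpha}\right)\) as a matrix; then \[\left(\left(e,v\right)\cdot\left(a,w\right)\right)\cdot\left(a^{\prime},w^{\prime}\right)=\left(eaa^{\prime},\,v+ew+eaw^{\prime}\right)=\left(e,v\right)\cdot\left(aa^{\prime},\,aw^{\prime}+w\right),\] in agreement with the product law \(\left(a,w\right)\left(a^{\prime},w^{\prime}\right)=\left(aa^{\prime},aw^{\prime}+w\right)\) of \(A\left(m,\mathbb{R}\right)\), so the formula indeed defines a right action.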
2304.06947
TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with Adaptive Partial Training
In cross-device Federated Learning (FL) environments, scaling synchronous FL methods is challenging as stragglers hinder the training process. Moreover, the availability of each client to join the training is highly variable over time due to system heterogeneities and intermittent connectivity. Recent asynchronous FL methods (e.g., FedBuff) have been proposed to overcome these issues by allowing slower users to continue their work on local training based on stale models and to contribute to aggregation when ready. However, we show empirically that this method can lead to a substantial drop in training accuracy as well as a slower convergence rate. The primary reason is that fast-speed devices contribute to many more rounds of aggregation while others join more intermittently or not at all, and with stale model updates. To overcome this barrier, we propose TimelyFL, a heterogeneity-aware asynchronous FL framework with adaptive partial training. During the training, TimelyFL adjusts the local training workload based on the real-time resource capabilities of each client, aiming to allow more available clients to join in the global update without staleness. We demonstrate the performance benefits of TimelyFL by conducting extensive experiments on various datasets (e.g., CIFAR-10, Google Speech, and Reddit) and models (e.g., ResNet20, VGG11, and ALBERT). In comparison with the state-of-the-art (i.e., FedBuff), our evaluations reveal that TimelyFL improves participation rate by 21.13%, harvests 1.28x - 2.89x more efficiency on convergence rate, and provides a 6.25% increment on test accuracy.
Tuo Zhang, Lei Gao, Sunwoo Lee, Mi Zhang, Salman Avestimehr
2023-04-14T06:26:08Z
http://arxiv.org/abs/2304.06947v1
# TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with Adaptive Partial Training ###### Abstract In cross-device Federated Learning (FL) environments, scaling synchronous FL methods is challenging as stragglers hinder the training process. Moreover, the availability of each client to join the training is highly variable over time due to system heterogeneities and intermittent connectivity. Recent asynchronous FL methods (e.g., FedBuff [22]) have been proposed to overcome these issues by allowing slower users to continue their work on local training based on stale models and to contribute to aggregation when ready. However, we show empirically that this method can lead to a substantial drop in training accuracy as well as a slower convergence rate. The primary reason is that fast-speed devices contribute to many more rounds of aggregation while others join more intermittently or not at all, and with stale model updates. To overcome this barrier, we propose TimelyFL, a heterogeneity-aware asynchronous FL framework with adaptive partial training. During the training, TimelyFL adjusts the local training workload based on the real-time resource capabilities of each client, aiming to allow more available clients to join in the global update without staleness. We demonstrate the performance benefits of TimelyFL by conducting extensive experiments on various datasets (e.g., CIFAR-10, Google Speech, and Reddit) and models (e.g., ResNet20, VGG11, and ALBERT). In comparison with the state-of-the-art (i.e., FedBuff), our evaluations reveal that TimelyFL improves participation rate by 21.13%, harvests 1.28\(\times\) - 2.89\(\times\) more efficiency on convergence rate, and provides a 6.25% increment on test accuracy. ## 1 Introduction Federated learning (FL) has emerged as a promising distributed machine learning paradigm that preserves privacy [12, 28]. The gist of FL is to keep the clients' private data on the devices and perform local model training for each client. A central server will collect these locally trained models to update a global model and then push it back for the next round of training. Most existing FL protocols are based on synchronous FL training (SyncFL), meaning that at each round all clients (or a selected cohort of clients) are updating their local models based on the latest update broadcast by the server at the beginning of that round. Due to unbalanced communication or hardware capabilities and non-identical training data distributions, however, the time consumption for a local update can vary substantially from device to device, and some clients may even be temporarily disconnected during the training process [27]. This leaves the server with two suboptimal choices: to wait for _all_ clients participating in each round to finish their local training and contribute to model aggregation (which will cause significant delays due to stragglers), or to only wait for a _subset_ of the faster clients (which will ignore all the work and contributions from slower clients). These critical challenges largely impede the scalability of SyncFL and make it difficult to deploy in large-scale cross-device scenarios. To address those challenges, recent works have proposed asynchronous federated learning (AsyncFL) [3, 9, 22, 30], which allows slower clients to continue local training and contribute to future aggregation rounds. 
AsyncFL _decouples_ client local training from global model aggregation/updates, as only certain clients would simultaneously get an update from the cloud server, which decreases the impact of stragglers. The most recent AsyncFL work - FedBuff [9, 22] - proposes that the server should perform a gradient aggregation to create a global model once the number of received local updates reaches a requisite threshold, which is a tunable parameter referred to as the _aggregation goal_. The slower clients can still upload their updates later as long as they finish local training, but their updates may not be included based on staleness information. As highlighted in Figure 1(c), we empirically demonstrate that while FedBuff achieves much faster convergence to a certain intermediate accuracy, it can, unfortunately, lead to a substantial drop in final accuracy compared to SyncFL. The intuitive explanation is that, as FedBuff only accepts a _fixed_ number of local updates to contribute to the global model in every communication round, it decreases the parallel computing efficiency by blocking other completed local updates from the global aggregation, which turns them into stale updates as they are postponed to the next round of global update. Moreover, the server aggregator favors the fast-speed devices, which contribute more rounds of training, whereas low-speed devices do not enjoy the same frequency of contribution. Even when the slow devices participate in global training, they occasionally send stale updates that potentially harm the convergence of the global loss. As shown in Figures 1(a) and 1(b), compared to SyncFL, FedBuff only includes a fixed number of local updates per round, and achieves a low participation rate (i.e., the number of aggregations participated in divided by the total number of aggregation rounds) on average with a biased distribution, indicating that the inclusiveness of the group has been diminished, which is the root cause of the test accuracy gap. To close the gap between SyncFL and AsyncFL, we propose TimelyFL, a heterogeneity-aware asynchronous federated learning framework based on adaptive partial training. One key distinction of TimelyFL from previous AsyncFL works is that TimelyFL does not impose a fixed limit on the number of local updates for the global aggregation per round. Instead, to accommodate a _flexible_ number of clients joining in the global update, we set a wall-clock time limit for every aggregation interval. The aggregation interval equals the \(k\)th fastest local update time among all clients, where \(k\) is a tunable parameter. As long as the device can deliver its model update to the server within this interval, it will be part of the global aggregation. To include more available devices in the global aggregation without staleness, we introduce _partial model training_ for clients with low capacity. Instead of fully training a model, only a part of the model, composed of a subset of consecutive output-side layers, will be assigned to them for backward-pass training. With partial model training, both local computation time and communication time decrease for slower clients. As shown in Figure 2, TimelyFL unifies the local training time by adaptively adjusting the workload (i.e., the local epoch number and partial training ratio) for each client, making it feasible for clients to finish the local training and upload the updates to the server within the calculated aggregation interval in every communication round. 
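The partial model training just introduced can be illustrated with a minimal sketch. The PyTorch snippet below is our own toy example (the paper does not prescribe this API, and the small layer stack merely stands in for models such as ResNet20); it freezes everything except roughly the output-side fraction \(\alpha\) of layers, so the frozen layers still run the forward pass but produce no gradients and need not be uploaded:

```python
import torch
import torch.nn as nn

def apply_partial_training(model: nn.Sequential, alpha: float) -> nn.Sequential:
    """Freeze all but (roughly) the last alpha fraction of layers.

    Frozen layers still execute the forward pass, but they receive no
    gradient updates, so only the output-side suffix is trained and uploaded.
    """
    layers = list(model)
    n_train = max(1, round(alpha * len(layers)))
    for layer in layers[:len(layers) - n_train]:
        for p in layer.parameters():
            p.requires_grad = False
    return model

# Toy model standing in for, e.g., ResNet20; alpha would come from the scheduler.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
apply_partial_training(model, alpha=0.4)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)  # updates only the unfrozen suffix
```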
As such, TimelyFL tackles the system heterogeneity issue and eliminates the staleness of local update reports for slower devices. We evaluate the performance of TimelyFL across various application tasks, including image classification, speech recognition, and natural language processing on the CIFAR-10 [13], Google Speech Command [29], and Reddit Comment [1] benchmark datasets, respectively, with two commonly used aggregation functions, FedAvg [20] and FedOpt [23]. Our results show that 66.4% of devices increase their participation rate and the average participation rate increases by 21.1% in TimelyFL compared to FedBuff. Under the same scale of the FL system, TimelyFL outperforms FedBuff [22] on both time-to-accuracy and final test accuracy, as shown in Figure 1(c). Figure 1: Empirical performance of SyncFL, FedBuff, and TimelyFL on the CIFAR-10 classification task with FedOpt as the server aggregator (for experiment details and other evaluations see Section 4). TimelyFL includes more devices in the global update during training (shown in (a) and (b)). As more devices participate in a timely manner, TimelyFL gains both a convergence-rate and an accuracy boost compared to FedBuff (shown in (c)). ## 2 Related Work **Asynchronous Federated Learning.** Due to intermittent connectivity and availability among clients, asynchronous FL is a promising solution to tackle device heterogeneity in FL [34]. Most asynchronous FL works concentrate on solving the straggler problem, such as [19], [30], and [31]. PAPAYA [9] and FedBuff [22] have been proposed to mitigate stragglers and enable secure aggregation jointly. Specifically, the individual updates are not incorporated by the server as soon as they arrive. Instead, the server keeps receiving local updates in a secure buffer, where the buffer size is a tunable parameter, and updates the global model whenever the buffer is full. The slow devices can also send the local update to the server after the global aggregation. Their update will be considered for the next available global update. However, practically speaking, fast devices participate in global updates many more times than slow devices, and some slow devices cannot join in the global aggregation even once due to the staleness control. All of the above approaches assume that the client should process the local training with the full model size. As the slower users participate in the global aggregation, they can only contribute stale updates. Some previous works have pointed out that the effects of stale updates on distributed machine learning can directly harm the overall convergence behavior of the final model, in line with the asynchronous distributed optimization theory suggested by [4, 5, 36]. Moreover, the participation rate is mainly unbalanced due to the high-speed devices contributing more rounds to global updates compared to the slow-speed devices. In contrast to previous approaches, we focus on enabling all clients to join in the global aggregation effectively based on their local resources to improve the inclusiveness of the final global model training. **Partial Model Training.** Partial model training can be viewed as an efficient approach to reduce both the communication and computation workload on the client side of the FL system [2]. FedPrune [21] proposes a method that prunes the global model for each client based on their device capabilities, where slow clients are served smaller models and faster clients train on larger models. 
FedPT [25] leverages partially trainable neural networks on clients to reduce communication costs and enable faster training with a smaller memory footprint and with few implications on model performance. Other works such as [17, 24, 32] also show that partial model training can save both communication cost and memory usage in cross-device FL. All of the above works keep the partial ratio for the sub-model of a certain client constant during the entire FL training process, which neglects the fact that the availability of each device is not stable over time. In this work, we adaptively adjust the partial ratio for the local model training based on real-time device availability, which aims to improve both efficiency and utility for each client. ## 3 Our Method ### Standard Asynchronous Federated Learning Figure 3 (left) illustrates the standard asynchronous FL framework. Instead of waiting for all clients to finish the local model training, the server stores the individual updates from clients in a buffer and then adjusts the global model once the buffer size reaches the requisite number of the aggregation goal. Other non-participating devices will postpone their contribution to global updates to a later communication round once they finish the training. Given that the standard AsyncFL framework suffers from the inclusiveness constraints described in the introduction, we propose an efficient AsyncFL framework as shown in Figure 3 (right) to address this issue. Figure 2: FedBuff (top): The server updates the global model as it receives the requisite number of local updates, and slower devices can still send their updates to the server at a later time. Fast devices participate more times in the global update, while slow devices contribute less or not at all. TimelyFL (bottom): The server updates the individual workload every round based on the real-time availability of each client to include more devices in the global update in a timely manner, which largely increases the participation rate for slow devices. ### TimelyFL Design #### 3.2.1 Preliminaries To increase the participation rate for the slow clients, we aim to design a cross-device asynchronous federated learning framework where each device can finish the local update within a limited time interval by adaptively adjusting its workload per round. Here, the workload is defined as the product of the local training epoch number \(E\) and the partial model training ratio \(\alpha\). To formalize this, our time utility function, which considers the local training optimization at the client side, is designed as follows: \[\operatorname*{arg\,max}_{E,\alpha}\ \ (\tilde{t}_{cmp,c}\times E\times\alpha+ \tilde{t}_{com,c}\times\alpha)\leq T_{k} \tag{1}\] where \(\tilde{t}_{cmp,c}\) is the estimated local computation time, and \(\tilde{t}_{com,c}\) is the estimated local communication time of the client \(c\) for one epoch of full model training in a certain communication round, both calculated by the server. Note that both \(\tilde{t}_{cmp,c}\) and \(\tilde{t}_{com,c}\) are not constant throughout the training due to the nature of mobile devices. By adjusting \(E\) and \(\alpha\), each device is supposed to participate in the global aggregation every communication round in a timely manner and maximally utilize its resource capability within the aggregation interval \(T_{k}\). Therefore, the overall distributed optimization involves more iterations on diverse datasets, leading to faster convergence. 
#### 3.2.2 Adaptive Model Training Due to resource limitations, some weak devices may not finish the full model training effectively within the time interval \(T_{k}\), making them become stale clients in the system and impeding them from contributing to the global model. To address this issue, we introduce partial model training for slow devices. Figure 3 (right) illustrates our approach when partial model training is applied to a device-heterogeneous FL system. Instead of the full training model, weak devices will be assigned to train partial models composed of a subset of consecutive output-side layers. During the training, only the subset of assigned layers will perform both the forward pass and backward propagation, while the other layers will only process the forward pass for the input data and remain frozen during weight updates. After local training finishes, the partially-trained clients only return the update for the assigned layers to the server for the global aggregation, as the frozen layers are unchanged during training. We target two bottlenecks in cross-device FL with partial model training: communication and computation efficiency for the stale clients. In edge computing, the backward propagation consumes much more time than the forward pass. Partial model training essentially reduces the training time, as it does not need to calculate gradients for the frozen parameters. The reduced time is roughly proportional to the reduced model size, as we show empirically in Appendix A.2.1. Moreover, we only send the trainable part of the updates to the server, substantially improving communication efficiency, especially when stragglers with limited network connections exist. By implementing partial model training, we aim to let low-capacity devices report their local updates to the server in time without staleness, thereby improving their participation rate during FL training. #### 3.2.3 TimelyFL Algorithm Based on adaptive model training, we propose TimelyFL. TimelyFL tries to unify each client's round time to the limited aggregation interval \(T_{k}\) by adaptively adjusting the workload with respect to its real-time availability per communication round. The workload is defined as the product of the partial training ratio \(\alpha\) and the local epoch number \(E\). Figure 3: **Left**: The standard AsyncFL framework. The server will initiate the global update once it collects the requisite number of local updates. The other clients will be postponed to a later communication round with stale update information. **Right**: The proposed TimelyFL. The server will include all the local updates received within the aggregation interval in the global update. Clients with a weaker capacity are assigned to train a subset of the model to meet the aggregation interval time. The TimelyFL framework is composed of three main parts: the TimelyFL server, local time update, and workload scheduling. 
```
Input: \(k\): the aggregation participation target, \(n\): the number of training concurrency
for \(r\in\{0,\cdots,R-1\}\) communication rounds do
    Global server do: sample \(n\) clients uniformly at random to define \(\mathcal{S}\),
        and send \(W_{s}^{r}\) to the clients in \(\mathcal{S}\)
    Clients \(c\in\mathcal{S}\) in parallel do:
        \(\tilde{t}_{total},\tilde{t}_{cmp},\tilde{t}_{com}=\text{LocalTimeUpdate}(M)\)
    Global server do:
        \(T_{k}^{r}\leftarrow\) the \(k\)th smallest number in \(\langle\tilde{t}_{total}\rangle\)
        \(\langle E^{r}\rangle,\langle\alpha^{r}\rangle,\langle t_{rpt}^{r}\rangle=\text{WorkloadScheduling}(T_{k}^{r},\langle\tilde{t}_{cmp}\rangle,\langle\tilde{t}_{com}\rangle)\)
    Clients \(c\in\mathcal{S}\) in parallel do:
        \(W_{c}^{r}\leftarrow\) adaptive model training
    Global server do:
        \(W_{s}^{r+1}\leftarrow\) aggregate \(\langle W_{c}^{r}\rangle\)
end for
Output: \(W_{s}^{R}\)
```
**Algorithm 1** TimelyFL. The TimelyFL server is in charge of adjusting the aggregation interval \(T_{k}\), local training epoch \(E\), and partial training ratio \(\alpha\) for each device during the FL training, as summarized in Algorithm 1. The aggregation interval \(T_{k}\) in each round equals the \(k\)th smallest value among \(\langle\tilde{t}_{total}\rangle\), the estimated unit total times of all clients. At each communication round, the TimelyFL server randomly samples \(n\) clients to construct the collection \(\mathcal{S}\) and distributes the global model to the clients inside \(\mathcal{S}\), which means \(n\) clients start the local training in this round, matching the definition of training concurrency in FedBuff. Each selected client performs one data batch of full-model training to estimate its time consumption and reports it to the server. Then, the aggregation interval time \(T_{k}\) and the training hyperparameters for client \(c\) (i.e., local training epoch number \(E\) and partial training ratio \(\alpha\)) are adjusted based on all selected clients' status during the FL training process. The server also returns a local computation budget time \(t_{rpt,c}\), the wall-clock time by which the client must report its training status.
```
Input: \(M\): the file size of the received global model, \(Bw\): the real-time network bandwidth
\(t_{cmp},\beta\leftarrow\) one data batch training
\(\tilde{t}_{com}=M/Bw\)
\(\tilde{t}_{cmp}=t_{cmp}/\beta\)
\(\tilde{t}_{total}=\tilde{t}_{cmp}+\tilde{t}_{com}\)
Output: \(\tilde{t}_{total},\tilde{t}_{com},\tilde{t}_{cmp}\)
```
**Algorithm 2** Local Time Update. **Local Time Update.** To efficiently accommodate client capabilities, each participant needs to report its time consumption to the server, as summarized in Algorithm 2. Specifically, each client collects the real computation time \(t_{cmp}\) from one data batch of full-model training. The unit computation time \(\tilde{t}_{cmp}\) is estimated from \(t_{cmp}\) and the progress \(\beta\), where \(\beta\) is defined as the ratio of the trained batch number to the total data batch number. The local communication time equals the model's file size \(M\) over the device's real-time network bandwidth \(Bw\), the same setting as in the previous FL system work [14]. 
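Both helper routines invoked by Algorithm 1, LocalTimeUpdate (Algorithm 2 above) and WorkloadScheduling (Algorithm 3, presented next), reduce to a few lines of arithmetic; WorkloadScheduling is the closed-form, per-client solution of the time-utility constraint in Equation (1). A direct Python transcription could look as follows (the function and argument names are ours):

```python
import math

def local_time_update(M, Bw, t_cmp_profiled, beta):
    """Algorithm 2: estimate unit (one full-model epoch) times on the client.

    M: file size of the received global model; Bw: real-time bandwidth;
    t_cmp_profiled: time spent on the profiled batches; beta: fraction of
    the local data batches covered by that profiling run.
    """
    t_com = M / Bw
    t_cmp = t_cmp_profiled / beta
    return t_cmp + t_com, t_com, t_cmp

def workload_scheduling(T_k, t_cmp, t_com):
    """Algorithm 3, per client: epochs E, partial ratio alpha, report time."""
    E = max(math.floor((T_k - t_com) / t_cmp), 1)  # extra epochs for fast clients
    alpha = min(T_k / (t_com + t_cmp), 1.0)        # shrink the model for slow ones
    t_rpt = T_k - t_com * alpha
    return E, alpha, t_rpt

# A fast client receives several epochs; a slow one trains a partial model.
print(workload_scheduling(T_k=10.0, t_cmp=2.0, t_com=1.0))   # (4, 1.0, 9.0)
print(workload_scheduling(T_k=10.0, t_cmp=12.0, t_com=3.0))  # (1, 0.666..., 8.0)
```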
```
Input: \(T_{k}\): aggregation interval time, \(\langle\tilde{t}_{cmp}\rangle\): unit computation time, \(\langle\tilde{t}_{com}\rangle\): unit communication time
for each client \(c\in\mathcal{S}\) in parallel do
    \(E_{c}=\max(\lfloor(T_{k}-\tilde{t}_{com,c})/\tilde{t}_{cmp,c}\rfloor,1)\)
    \(\alpha_{c}=\min(T_{k}/(\tilde{t}_{com,c}+\tilde{t}_{cmp,c}),1)\)
    \(t_{rpt,c}=T_{k}-\tilde{t}_{com,c}\times\alpha_{c}\)
end for
Output: \(\langle E\rangle,\langle\alpha\rangle,\langle t_{rpt}\rangle\)
```
**Algorithm 3** Workload Scheduling. **Workload Scheduling.** TimelyFL adjusts the local epoch number \(E\) and partial training ratio \(\alpha\) for each client in every communication round based on the estimated \(\tilde{t}_{com,c}\), \(\tilde{t}_{cmp,c}\) and the aggregation interval \(T_{k}\), following the relationship shown in Equation (1). If a client's unit total time is smaller than \(T_{k}\), the server tries to maximize its local training utility and minimize its idle time by assigning more than one local epoch of training for the next round. Otherwise, the server assigns a smaller workload by decreasing the model training ratio \(\alpha\), which guarantees the client can finish at least one local epoch of training within the report time \(t_{rpt,c}\) and catch the global aggregation in time. We summarize the scheduler as Algorithm 3. ## 4 Experiment ### Experimental Settings **Datasets, Models, and Tasks.** To demonstrate TimelyFL's effectiveness across tasks, we evaluate TimelyFL on three benchmark datasets from various categories of FL applications:

1. **Image Classification.** The CIFAR-10 dataset [13] consists of 60,000 colour images in 10 classes. There are 50,000 training images and 10,000 test images. To follow the realistic non-iid data in FL scenarios, we partition the dataset into 128 clusters using a Dirichlet distribution with \(\alpha\) equal to 0.1. We evaluate the dataset with the ResNet-20 [7] model.
2. **Speech Recognition.** The Google Command speech dataset [29] covers 105,829 audio recordings collected from 2,618 clients. The training set includes recordings from 2,112 speakers, the validation set includes 256 speakers, and the test set includes 250 speakers. The dataset is composed of 35 common words from the everyday vocabulary, such as "Yes", "No", "Up", and "Down". We evaluate the dataset with the VGG11 [26] model for a 35-class keyword spotting task. We also evaluate the dataset with a lightweight model based on one related work [33]; the detailed data-preprocessing methods are presented in Appendix A.1.2.
3. **Natural Language Processing.** Reddit [1] consists of comments from 1,660,820 users in the Reddit forum. In this dataset, we filter out the users with fewer than 20 words in total and restrict to the 30k most frequently used words, the same settings as in previous work [14]. Then, we train the lightweight ALBERT [16] model for the next-word-prediction task. 
The performance is evaluated by the perplexity loss (ppl), which lower is better. **Experiment Setup.** We use the FedML platform [35, 6], an open-source framework for FL, to execute our framework. On the CPU/GPU training side, to approach the real-world heterogeneous client system performance in emulation, we acquire the local computation times of deep learning models across hundreds of device types from the AI benchmark [10] and communication times from Network Measurements on mobiles [8]. These data will be assigned to the simulated devices we create in the experiment, the same as the settings in previous FL works [14, 15, 18]. The distribution of heterogeneous system utility across simulated clients will be shown in the Appendix A.1.2. **Evaluation Metrics and Baselines.** We compare TimelyFL with FedBuff [22, 9] as the AsyncFL baseline. To demonstrate applicability of TimelyFL, we present the evaluation results using two aggregation function, FedAvg [20] and FedOpt [23]. We evaluated the performance of TimelyFL and its baseline using the following three metrics: _test accuracy/loss_, _time-to-accuracy_, and _participation rate_. The participation rate is defined as the total number of rounds that the device contributes to the global update divided by the total communication round number. The rate is distributed in the interval between 0 and 1, which implies how often a client participates in the global model update. **Hyperparameter Settings.** We searched for the client learning rate in a range from \(10^{-6}\) to \(10^{0}\), server learning rate in a range from \(10^{-4}\) to \(10^{0}\), input batch size in a range from \(8\) to \(256\), and total training round in a range from \(1000\) to \(10000\). The aggregation goal and aggregation participation target is searched from 30% to 50% of training concurrency per round for FedBuff and TimelyFL, respectively. We list the detailed hyperparameter selection for each experiment setup in the Appendix A.1.3. ### End-to-End Performance We begin by comparing the end-to-end performance of TimelyFL on benchmark datasets, conducting on the CPU/GPU-based training. The training concurrency is set to 128 for CIFAR-10 related experiments, 20 for Google \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Agg. 
function & Accuracy/Loss & TimelyFL & FedBuff & SyncFL \\ \hline \hline \multirow{4}{*}{CIFAR-10} & \multirow{4}{*}{FedAvg} & 60\% & 5.50 \(\pm 2.5\%\) hr & 7.86 \(\pm 2.1\%\) hr (1.43x) & 76.81 \(\pm 2.4\%\) hr (13.96x) \\ & & 70\% & 12.81 \(\pm 1.8\%\) hr & \(>\) 200 & 150.98 \(\pm 1.7\%\) hr (11.78x) \\ \cline{2-6} & \multirow{2}{*}{FedOpt} & 60\% & 3.58 \(\pm 2.5\%\) hr & 5.68 \(\pm 2.6\%\) hr (1.59x) & 34.87 \(\pm 2.3\%\) hr (9.74x) \\ & & 70\% & 6.46 \(\pm 1.8\%\) hr & 18.73 \(\pm 2.3\%\) hr (2.89x) & 58.84 \(\pm 0.8\%\) hr (9.11x) \\ \hline \hline \multirow{4}{*}{Google Speech} & \multirow{4}{*}{FedAvg} & 70\% & 22.90 \(\pm 2.1\%\) hr & 42.71 \(\pm 2.3\%\) hr (1.87x) & 103.07 \(\pm 2.1\%\) hr (4.50x) \\ & & 80\% & 40.54 \(\pm 1.2\%\) hr & 70.60 \(\pm 2.0\%\) hr (1.74x) & 187.93 \(\pm 1.2\%\) hr (4.64x) \\ \cline{2-6} & \multirow{2}{*}{FedOpt} & 70\% & 18.08 \(\pm 1.1\%\) hr & 30.60 \(\pm 1.7\%\) hr (1.69x) & 66.13 \(\pm 1.2\%\) hr (3.66x) \\ & & 80\% & 31.39 \(\pm 0.9\%\) hr & 53.36 \(\pm 0.9\%\) hr (1.70x) & 107.38 \(\pm 0.7\%\) hr (3.42x) \\ \hline \hline \multirow{4}{*}{Reddit} & \multirow{4}{*}{FedAvg} & 7.0 (ppl) & 9.56 \(\pm 3.1\%\) hr & 15.82 \(\pm 2.9\%\) hr (1.65x) & 23.36 \(\pm 1.5\%\) hr (2.44x) \\ & & 6.8 (ppl) & 17.99 \(\pm 0.7\%\) hr & \(>\) 200 & 67.32 \(\pm 0.5\%\) hr (3.74x) \\ \cline{2-6} & \multirow{2}{*}{FedOpt} & 7.0 (ppl) & 10.99 \(\pm 2.7\%\) hr & 14.09 \(\pm 2.8\%\) hr (1.28x) & 27.25 \(\pm 2.1\%\) hr (2.48x) \\ & & 6.8 (ppl) & 12.86 \(\pm 0.6\%\) hr & \(>\) 200 & 57.65 \(\pm 0.4\%\) hr (4.48x) \\ \hline \hline \end{tabular} \end{table} Table 1: Wall clock training time to reach target validation accuracy on benchmark datasets (lower is better). “\(>200\) hr” indicates the target accuracy was not reached. speech related experiments, and 100 for Reddit related experiments. The communication round is set to be 2000, 1000, and 500 for CIFAR-10, Google speech, and Reddit, respectively. For both FedBuff and TimelyFL, we set the aggregation goal and aggregation participation target equal to 50% of training concurrency for a fair comparison. We run each experiment five times with different random seeds and report its mean and standard deviation for the time consumption in the Table 1. **Speedup of TimelyFL.** Given the same heterogeneous data, TimelyFL achieves the shortest training time to reach all target accuracy/loss. Table 1 shows the training time needed to converge to the target accuracy/loss for each strategy considered. Compared to TimelyFL, synchronous FL requires 2.44 - 13.96x more times, and FedBuff needs 1.28 - 2.89x in terms of wall clock time. Besides of the time-to-accuracy speedup, TimelyFL also harvests test accuracy increment compared to FedBuff within the same communication rounds. As the learning curve in the Figure 4, TimelyFL achieves 3.27% and 4.01% higher final accuracy on CIFAR-10 and Google Command, and 0.43 lower ppl on Reddit in comparison to FedBuff with FedOpt. Under FedAvg, TimelyFL achieves 4.93% and 6.25% higher final accuracy on CIFAR-10 and Google Command, respectively, and 0.20 lower ppl on Reddit compared to FedBuff. ### Understanding the Advantages of TimelyFL TimelyFL improves inclusiveness1. In Table 1, we view the SyncFL as the standard baseline that does not include any asynchronous technique and FedBuff as the baseline that only introduces a fixed buffer size to accelerate the training. 
Instead of a fixed buffer size, TimelyFL adopts a flexible buffer size controlled by the aggregation interval time, which allows more available clients to participate in the global update per round. As illustrated in Figure 5, 66.4% of devices are able to achieve an increased participation rate, and the average participation rate per client increases by 21% in TimelyFL compared to FedBuff under the CIFAR-10 experiment setting we implemented in the last section. The average participation rate increment is the main reason for the time-to-accuracy speed-up. As each client joins the global model update more frequently, the learning efficiency increases during FL training. In addition, with more devices contributing to the global model more frequently, TimelyFL improves inclusiveness during model training compared to FedBuff. Footnote 1: In this paper, the inclusiveness increment represents the participation rate increment in the FL training. The contribution of inclusiveness to model performance is especially significant when training on a non-iid dataset, where each client brings a unique local update to the global model. To demonstrate our point, we test both TimelyFL and FedBuff with FedAvg as the aggregator on the CIFAR-10 dataset using a non-iid partition. Figure 4: Time-to-accuracy performance for SyncFL, FedBuff and TimelyFL. Figure 5: Participation rate evaluation. As shown in Figure 6, as the parameter for the Dirichlet distribution goes up, the convergence-time gap between TimelyFL and FedBuff increases as well, which demonstrates our advantage for non-iid data training compared to FedBuff. **TimelyFL is heterogeneity-aware.** Under cross-device federated learning, most participating entities have limited computing capability and intermittent connectivity. As such, it cannot be guaranteed that devices will complete their training workload in every communication round as initially assigned. To effectively withstand this disturbance, the training hyperparameters, such as the partial training ratio and local epoch number, should be adaptively scheduled based on the real-time capability of each device. To demonstrate our strategy, we test training under the TimelyFL framework both with and without adaptive workload scheduling on the CIFAR-10 dataset, with the training concurrency equal to 64. Figure 7 shows the learning curves for both scenarios. With adaptive workload scheduling, TimelyFL saves 4.09\(\times\) in convergence time to 50% accuracy and gains a 10.89% test accuracy increment, which illustrates that real-time workload scheduling essentially improves both learning efficiency and accuracy. **TimelyFL is effective on the lightweight model.** To investigate the effectiveness of the lightweight model within the TimelyFL framework, we implement one lightweight model on the Google Speech Commands dataset for the keyword spotting task. Following one previous work [33], we choose the model that consists of two convolution layers followed by one Gated Recurrent Units (GRU) layer. An average pooling layer is connected to the GRU output, which is then fed through two dense layers to generate the predictions. The parameter size of this model is 79,044. We adopt the same baseline selections as in Section 4. The hyperparameters for the experiments are listed in Section A.1.3. The experiment results are summarized in Table 2. 
TimelyFL achieves a higher convergence speed compared with the other two strategies before reaching the test accuracy, which confirms the simulation results elaborated in Section 4.2 and demonstrates the effectiveness of the TimelyFL on the lightweight model architecture. ## 5 Conclusion In this work, we propose TimelyFL, a heterogeneity-aware asynchronous FL scheme with adaptive partial training. To include more available devices joining in global aggregation in a timely manner, TimelyFL introduces partial model training to the slow-speed devices. Moreover, TimelyFL is resilient to system heterogeneity by adjusting the local training workload based on the real-time resource capabilities of each client during FL training. Our experimental results demonstrate that TimelyFL could outperform major AsyncFL proposals in terms of both time-to-accuracy and test accuracy. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Agg. function & Accuracy & TimelyFL & FedBuff & SyncFL \\ \hline \hline \multirow{4}{*}{Google Speech} & FedAvg & 70\% & 2.23 \(\pm 2.1\%\) hr & 3.55 \(\pm 1.9\%\) hr (1.59\(\times\)) & 18.37 \(\pm 0.6\%\) hr (8.24\(\times\)) \\ & 80\% & 4.16 \(\pm 1.3\%\) hr & 6.13 \(\pm 1.4\%\) hr (1.47\(\times\)) & 32.46 \(\pm 0.4\%\) hr (7.80\(\times\)) \\ \cline{2-6} & FedOpt & 70\% & 0.48 \(\pm 1.7\%\) hr & 1.66 \(\pm 1.0\%\) hr (3.46\(\times\)) & 4.61 \(\pm 2.1\%\) hr (9.60\(\times\)) \\ \cline{2-6} & FedOpt & 80\% & 1.13 \(\pm 1.2\%\) hr & 3.25 \(\pm 0.8\%\) hr (2.88\(\times\)) & 7.47 \(\pm 1.1\%\) hr (6.61\(\times\)) \\ \hline \hline \end{tabular} \end{table} Table 2: Wall clock training time to reach target validation accuracy on benchmark datasets (lower is better). Figure 6: Time-to-accuracy performance under different non-iid distribution. Figure 7: TimelyFL performance under adaptive and non-adaptive workload schedule.
2306.01689
Unique Brain Network Identification Number for Parkinson's Individuals Using Structural MRI
We propose a novel algorithm called Unique Brain Network Identification Number, UBNIN for encoding the brain networks of individual subjects. To realize this objective, we employed structural MRI on 180 Parkinsons disease PD patients and 70 healthy controls HC from the National Institute of Mental Health and Neurosciences, India. We parcellated each subjects brain volume and constructed an individual adjacency matrix using the correlation between the gray matter volumes of every pair of regions. The unique code is derived from values representing connections for every node i, weighted by a factor of 2^1-i. The numerical representation UBNIN was observed to be distinct for each individual brain network, which may also be applied to other neuroimaging modalities. This model may be implemented as a neural signature of a persons unique brain connectivity, thereby making it useful for brainprinting applications. Additionally, we segregated the above datasets into five age cohorts to study the variation in network topology over age. Sparsity was adopted as the threshold estimate to binarize each age-based correlation matrix. For each age cohort, a decreasing trend was observed in the mean clustering coefficient with increasing sparsity. Significantly different clustering coefficients were noted in PD between age cohort B and C, C and E, and in HC between E and B, E and C, E and D, and C and D. Our findings suggest network connectivity patterns change with age, indicating network disruption may be due to the underlying neuropathology. Varying clustering coefficients for different cohorts indicate that information transfer between neighboring nodes changes with age. This provides evidence of age related brain shrinkage and network degeneration. We also discuss limitations and provide an open-access link to software codes and a help file for the entire study.
Tanmayee Samantaray, Utsav Gupta, Jitender Saini, Cota Navin Gupta
2023-06-02T17:03:39Z
http://arxiv.org/abs/2306.01689v2
# Unique Brain Network Identification Number for Parkinson's Individuals Using Structural MRI ###### Abstract We propose a novel algorithm called Unique Brain Network Identification Number (UBNIN) for encoding the brain networks of individual subjects. To realize this objective, we employed T1-weighted structural MRI of 180 Parkinson's disease (PD) patients from the National Institute of Mental Health and Neurosciences, India. We parcellated each subject's brain volume and constructed an individual adjacency matrix using the correlation between the grey matter (GM) volumes of every pair of regions. The unique code is derived from values representing the connections of every node (\(i\)), weighted by a factor of \(2^{1-i}\). The numerical representation (UBNIN) was observed to be distinct for each individual brain network, and the method may also be applied to other neuroimaging modalities. This model may be implemented as a neural signature of a person's unique brain connectivity, thereby making it useful for brainprinting applications. Additionally, we segregated the above dataset into five age cohorts: A: 22-32 years, B: 33-42 years, C: 43-52 years, D: 53-62 years and E: 63-72 years, to study the variation in network topology over age. Sparsity was adopted as the threshold estimate to binarize each age-based correlation matrix. Connectivity metrics were obtained using Brain Connectivity toolbox-based MATLAB functions. For each age cohort, a decreasing trend was observed in the mean clustering coefficient with increasing sparsity. A significantly different clustering coefficient was noted between age cohorts B and C (sparsity: 0.63, 0.66) and between C and E (sparsity: 0.66, 0.69). Our findings suggest network connectivity patterns change with age, indicating network disruption due to the underlying neuropathology. Varying clustering coefficients for different cohorts indicate that information transfer between neighboring nodes changes with age. This provides evidence of age-related brain shrinkage and network degeneration. Unique brain network identification number, Age, Brain connectivity, Clustering coefficient, Parkinson's disease. ## I Introduction Parkinson's disease (PD) is one of the most common neurological disorders and worsens over age. With a prevalence of 6.1 million people across the globe [1], it is commonly found in older people aged 50-60 years [2]. Males are more likely to be affected by PD than females [3]. The associated pathological hallmark is the loss of dopaminergic neurons of the substantia nigra [4]. It leads to various classical motor and non-motor manifestations such as tremor, bradykinesia, rigidity, and behavioral and cognitive dysfunctions. However, the progression of neurodegeneration in PD begins and spreads throughout the nervous system long before these symptoms are expressed, termed the prodromal phase [5]. Our understanding of the brain regions underlying the clinical manifestation of PD is constantly growing as a result of recent developments in neuroimaging [6]. In addition to being non-invasive, structural magnetic resonance imaging (sMRI) is a robust and safe method to produce high-resolution 3-dimensional scans of the brain. Parkinson's disease has been associated with grey matter (GM) atrophy, which is detected as morphological changes by voxel-based morphometry [7, 8] on sMRI. Hence, grey matter tissue contains important information to be deployed for further analysis and understanding of the disease. The effect of one neuron on another is determined by structural connectivity, which subsequently affects the potential of functional networks [9]. 
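The encoding announced in the abstract can be made concrete with a small sketch. The reading below, in which each node's binarized connection row is interpreted as an integer and weighted by \(2^{1-i}\), is our own illustrative assumption; the paper's exact construction may differ:

```python
from fractions import Fraction
import numpy as np

def ubnin(adj):
    """Illustrative UBNIN-style code: weight node i's connections by 2**(1-i).

    adj: binarized (0/1) adjacency matrix of shape (n, n). Exact fractions
    keep the encoding lossless, which is what would make the resulting
    number distinct for each distinct network.
    """
    n = adj.shape[0]
    code = Fraction(0)
    for i in range(1, n + 1):
        row_bits = "".join(str(int(b)) for b in adj[i - 1])  # row as binary word
        code += Fraction(int(row_bits, 2)) * Fraction(2) ** (1 - i)
    return code

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
print(ubnin(adj))  # a single rational number encoding the whole network
```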
Accordingly, in diseased conditions, aberrations in network topology influence the functionality of specific brain regions [10]. Structural MRI has the potential to detect alterations in the PD brain network triggered by GM volume [11, 12]. Brain networks provide possible biomarkers not only for a disease cohort, but also for individual subjects [13]. Although group-level structural network analysis has been widely explored in PD, there is minimal research on individual-level networks [14, 15]. Intriguingly, individual structural connectivity analysis using sMRI may play a crucial role in analyzing brain disease [13] and brainprinting [15]. However, due to inter-individual variability, there is a demand for encoding each individual's network into a unique representation. This encoding scheme may also enable efficient storage and transfer of individual networks. Weininger et al. [16] proposed a chemical notation specifically designed for chemical compounds, treating a compound's structure as a network with atoms as nodes and bonds as edges. Later, Lukovits I.I. [17] obtained a compact form of a chemical compound, where the presence of a bond between carbon atoms was represented as 1 in adjacency
2301.12134
Underwater Robotics Semantic Parser Assistant
Semantic parsing is a means of taking natural language and putting it in a form that a computer can understand. There has been a multitude of approaches that take natural language utterances and form them into lambda calculus expressions -- mathematical functions to describe logic. Here, we experiment with a sequence to sequence model to take natural language utterances, convert those to lambda calculus expressions, when can then be parsed, and place them in an XML format that can be used by a finite state machine. Experimental results show that we can have a high accuracy model such that we can bridge the gap between technical and nontechnical individuals in the robotics field.
Parth Parekh, Cedric McGuire, Jake Imyak
2023-01-28T09:04:43Z
http://arxiv.org/abs/2301.12134v1
# Underwater Robotics Semantic Parser Assistant ###### Abstract Semantic parsing is a means of taking natural language and putting it in a form that a computer can understand. There have been a multitude of approaches that take natural language utterances and form them into lambda calculus expressions - mathematical functions to describe logic. Here, we experiment with a sequence to sequence model to take natural language utterances, convert those to lambda calculus expressions, which can then be parsed, and place them in an XML format that can be used by a finite state machine. Experimental results show that we can have a high accuracy model such that we can bridge the gap between technical and nontechnical individuals in the robotics field. ## 1 Credits Jake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN encoder/decoder model. This took 48 hours. Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser. This took 44 hours. Parth Parekh assembled the Python structure for the behavior tree as well as created the actions on the robot. This took 40 hours. All group members were responsible for the research, weekly meetings, presentation preparation, and the paper. In the paper, each group member was responsible for explaining their respective responsibilities, with a collaborative effort on the abstract, credits, introduction, discussion, and references. A huge thanks to our Professor Dr. Huan Sun for being such a great guide through the world of Natural Language Processing. ## 2 Introduction Robotics is a hard field to master. It is one of the few fields which is truly interdisciplinary. This leads to engineers with many different backgrounds working on one product. There are domains within this product that engineers from one subfield may not be able to work with. This leads to some engineers not being able to interact with the product properly without supervision. As already mentioned, we aim to create an interface for those engineers on the Underwater Robotics Team (UWRT). Some members of UWRT specialize in fields other than software engineering. They are not able to create logic for the robot on their own. This leads to certain members of the team being required to be around when pool testing the robot. This project aims to reduce or remove that component of creating logic for the robot. This project can also be applied to other robots very easily, as all of the main concepts are generalized and only require the robots to implement the actions that are used to train the project. ## 3 Robotics Background ### Usage of Natural Language in Robotics Robots are difficult to produce logic for. One big problem that most robotics teams face is enabling non-technical members to produce logical forms for the robot to understand. Those who do not code are not able to manually create logic quickly. ### Finite State Machines One logical form that is common in the robotics space is a Finite State Machine (FSM). FSMs are popular because they allow a representation to be completely general while encoding the logic directly into the logical form. This allows things such as control flow, fallback states, and sequences to be directly encoded into the logical form itself. As illustrated in Figure 1, we can easily encode logic into this representation. Since they are easily generalized, FSMs can be used across any robot which implements the commands that are contained within them. 
### Underwater Robotics Team Robot Since 2016, The Underwater Robotics Team (UWRT) at The Ohio State University has iterated on the foundations of a single Autonomous Underwater Vehicle (AUV) each year to compete at the RoboSub competition. Breaking from tradition, the team decided to take the 2019-2021 school years to design and build a new vehicle to compete in the 2021 competition. Featuring an entirely new hull design, refactored software, and an improved electrical system, UWRT has created its brand-new vehicle, Tempest. (Parekh, 2021) #### 3.3.1 Vehicle Tempest is a 6 Degree of Freedom (DOF) AUV with vectored thrusters for linear axis motion and direct drive heave thrusters. This allows the robot to achieve any orientation in all 6 Degrees of freedom [X, Y, Z, Roll, Pitch, Yaw]. #### 3.3.2 Vehicle Experience With this vehicle, the team has focused on creating a fully fleshed out experience. This includes commanding and controlling the vehicle. One big focus of the team was to make sure that any member, technical or non-technical was able to manage and operate the robot successfully. #### 3.3.3 Task Code System A step to fulfill this focus was to change the vehicle's task code system to use the FSM representation. This is done through the library BehaviorTree.CPP (Fanconti, 2020). This generic FSM representation allows for Tempest to use generified logical forms that can be applied to ANY robotic plant as long as that plant implements those commands. This library also creates and maintains a Graphical User Interface (GUI) which allows for visual tracking and creation of FSM trees. Any tree created by the GUI is stored within an XML file to preserve the tree structure. The structure of the output of the XML syntax is explained within the parser section. ## 4 Data A dataset was to be created in order to use natural language utterances to lambda calculus expressions that a parser would be able to recognize to convert to a finite state machine. For reference, the following datasets were considered: the Geoquery set(Zettlemoyer, 2012) and General Purpose Service Robotics commands set (Walker, 2019). The Geoquery dataset provided a foundation for a grammar to follow for the lambda calculus expression such that consistency would hold for our parser. Moreover, the gpsr dataset provided an ample amount of examples and different general purpose robotics commands that could be extended within the dataset we curated. The dataset followed the following form: natural language utterance followed by a tab then a lambda calculus expression. The lambda calculus expression is of the form ( seq ( action0 ( S0 ( parameter ) ) )... ( actionN ( SN ( parameter ) ) ). The power of the following expression is that it can be extended to N number of actions in a given sequence, meaning that a user can hypothetically type in a very complex string of action and an expression will be constructed for said sequence. Moreover, the format of our dataset allows for it to be extended for any type of robotics Figure 1: A FSM represented in Behaviortree.CPP (Fanconti, 2020) (Fanconti, 2020) Figure 2: A render of Tempest command that a user may have. They just need to include examples in the train set with said action and the model will consider it. The formal grammar is: \(<seq>:\) ( seq ( action ) [ (action) ] ) \(<action>:\) actionName [ (parameter ] ) \(<parameter>:\)paramName A ( $n ( n ) ) The dataset we created had 1000 entries in the training dataset and 250 entries in the test dataset. 
The size of the vocabulary is \(|V|=171\) for the input text and \(|V|=46\) for the output text, which is similar in vocabulary size to the GeoQuery dataset. The expressions currently increase in complexity in terms of the number of actions within the sequence. A way to extend the complexity of the expressions would be to make the \(<seq>\) tag a nonterminal, so that nested sequences could be chained together. The actions within our dataset currently are as follows: move (params: x, y, z, roll, pitch, yaw), flatten (params: num), say (params: words), clean (params: obj), bring (params: val), find (params: val), goal, and gate. The most complex sequence is a string of seven subsequent actions.

## 5 Model

### Seq2Seq Model

We decided to use the model presented in "Language to Logical Form with Neural Attention" (Dong and Lapata, 2016). There was an implementation on GitHub utilizing Google's TensorFlow library to handle all implementation details of the model. The part of the paper that we used was the sequence-to-sequence model with an attention mechanism. The model interprets both the input and output of the network as sequences of information. This process is represented in Figure 3: input is passed to the encoder, then through the decoder, and, using the attention mechanism, we get an output that is a lambda calculus expression. Both of these sequences can be represented as L-layer recurrent neural networks with long short-term memory (LSTM) that take the tokens from the sentences and the expressions we have. The model creates 200 units (a number that can be changed to increase or decrease the size of the network) of both LSTM cells and GRU cells. The GRU cells are used to help compensate for the vanishing gradient problem. These LSTM and GRU cells are used in the input sequence to encode \(x_{1},...,x_{q}\) into vectors. These vectors then form the hidden state at the beginning of the sequence in the decoder. In the decoder, the topmost LSTM cell predicts the t-th output token by taking the softmax of the parameter matrix and the vector from the LSTM cell multiplied by a one-hot vector used to compute the probability of the output from the probability distribution. The softmax used here is sampled softmax, which only takes into account a subset of our vocabulary V rather than the whole vocabulary, to alleviate the difficulty of computing the softmax over a large vocabulary.

### Attention Mechanism

The model also implements an attention mechanism to improve the predicted values. The motivation behind the attention mechanism is to use the input sequence in the decoding process, since it is relevant information for the prediction of the output token. To achieve this, a context vector is created as a weighted sum of the hidden vectors in the encoder. This context vector is then used as context to find the probability of generating a given output.

### Training

To train the model, the objective is to maximize the likelihood of predicting the correct logical form given some natural language expression. Hence, the goal is to maximize the sum of the log probabilities of predicting logical form \(a\) given natural language utterance \(q\), summed over all training pairs. The model used the RMSProp algorithm, which is an extension of the Adagrad optimizer that utilizes learning rate adaptation. Dropout is also used for regularization, which helps with smaller datasets to prevent overfitting. We trained for 90 epochs.
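As a companion to the description above, here is a minimal PyTorch-style sketch of a single-layer encoder-decoder with dot-product attention. It is a simplification for illustration, not the TensorFlow implementation we used; the 200-unit size follows the text, while everything else (single layer, dot-product scoring) is an assumption.

```python
import torch
import torch.nn as nn

class Seq2SeqAttention(nn.Module):
    def __init__(self, in_vocab, out_vocab, hidden=200):
        super().__init__()
        self.embed_in = nn.Embedding(in_vocab, hidden)
        self.embed_out = nn.Embedding(out_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, out_vocab)

    def forward(self, src, tgt):
        # encode the utterance; the final encoder state seeds the decoder
        enc_states, (h, c) = self.encoder(self.embed_in(src))
        dec_states, _ = self.decoder(self.embed_out(tgt), (h, c))
        # attention: score every decoder state against every encoder state,
        # then build context vectors as weighted sums of encoder states
        scores = torch.bmm(dec_states, enc_states.transpose(1, 2))
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_states)
        # predict each output token from the decoder state and its context
        return self.out(torch.cat([dec_states, context], dim=-1))
```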
### Inference

To perform inference, the argmax is found of the probability of a candidate output given the natural language utterance. Since it is not possible to find the probability of all possible outputs, the probability is put in a form such that a beam search can be employed to generate each individual token of the lambda calculus expression to get the appropriate output.

Figure 3: Process of how input natural language is encoded and decoded via recurrent neural networks and an attention mechanism to find the utterance's respective logical form (Dong and Lapata, 2016)

## 6 Results

With the default parameters set, the sequence-to-sequence model achieved 86.7% accuracy for exact matches on the test dataset. This is consistent with the model's performance on the Geoquery dataset, where it achieves 83.9% accuracy. The test dataset contained 250 entries of utterances similar to the train dataset, of various complexities, ranging anywhere from one to six actions being performed. There are other methods of evaluation we would like to look into in the future, such as computing an F1 score, rather than relying solely on exact logical form matching. This accuracy on exact logical forms is very important when using the parser: it allows the FSM representation to be built easily and quickly. We were able to build the XML representation and run basic commands on the robot, with the model maintaining the order we said them in.

## 7 Logical Form Parser

The logical form output of our model is sent to a custom parser. The goal of this parser is to translate the output form into BehaviorTree XML files, which the robot is able to read in as a finite state machine.

### Tokenizer

The Tokenizer comprises the initial framework of the parser. It accepts the raw logical form as a String object and outputs a set of tokens in a Python List. These tokens are obtained by looking for separator characters (in our case, a space) present in the logical form and splitting on them into an array-like structure. The Tokenizer permits custom action, parameter, and variable names in the logical form input, thus allowing ease of scalability in implementing new robot actions. By its nature, our model is not able to generate syntactically incorrect logical forms, thus our implementation does not check for invalid tokens and assumes all input is correct. The Tokenizer is stored in a static \(Singleton\) class such that it can be accessed anywhere in the program once initialized. It keeps track of the current token (using getToken()) and has an implementation to move forward to the next token (skipToken()). This functionality is important for the object-oriented approach of the parser, discussed in the next section.

### Parsing Lambda Calculus Expressions

The output tokens from the Tokenizer must be interpreted into a proper Python form before they are staged to be turned into XML-formatted robot-ready trees. This is the function of the middle step of the parser, in which a tree of Python objects is built. The parser utilizes an object-oriented approach. As such, we include three objects: \(Sequence\), \(Action\), and \(Parameter\), each corresponding to an individual member of our custom grammar. The objects orient themselves into a short 3-deep tree, consisting of a \(Sequence\) root, \(Action\) children, and \(Parameter\) grandchildren.
Each object has its own parse() method that will advance the tokenizer, validate the input structure, and assemble itself into a Python structure to be staged into an XML file. The validations are enforced through our grammar definitions in Section 4.

#### 7.2.1 Sequence Object

The \(Sequence\) object is the first object initialized by the parser and is the root of our action tree. Each \(Sequence\) is composed of a list of 0 or more child actions to be executed in the order they appear. The parseSequence() method will parse each individual action using parseAction(), all the while assembling a list of child actions for this \(Sequence\) object. As of now, \(Sequence\) objects are unable to be their own children (i.e., nesting \(Sequence\)s is not permitted). However, if required, the \(Sequence\) object's parseSequence() method can be modified to recognize a nested action sequence and recursively parse it.

#### 7.2.2 Action Object

\(Action\) objects define the title of the action being performed. Similar to \(Sequence\), \(Action\) objects have an internally stored list, however with \(Parameter\) objects as children. There may be any number of parameters, including none. When the parseAction() method is called, the program validates the tokens and calls parseParameter() on each \(Parameter\) child identified by the action.

#### 7.2.3 Parameter Object

The \(Parameter\) object is a simple object that stores a parameter's name and value. The parser does not check what the name of the parameter is, nor does it place any restrictions on what the value can be. parseParameter() searches through the tokens for these two items and stores them as attributes of the \(Parameter\) object. This implementation of \(Parameter\) is scalable with robot parameters and allows any new configuration of parameter to pass through without any changes in the parser as a whole. If a new parameter is needed for the robot, it only has to be trained into the Seq2Seq model on the frontend and into the robot itself on the backend; the \(Parameter\) object should take care of it all the same.

### BehaviorTree Output

In the end, the parser outputs an XML file which can be read into BehaviorTree.CPP (Fanconti, 2020). An example of this file structure is shown in Figure 4. This file structure is useful because it encodes the sequence of actions within it. The leaves of the sequence are always in order. The tree can also encode subtrees into the sequence, which we have not implemented yet.
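To summarize the parser's moving parts, the following is a condensed Python sketch of the Tokenizer and the three parse() methods. The class and method names follow the paper, but the bodies are our reconstruction from the grammar in Section 4, not the team's exact code:

```python
class Tokenizer:
    """Splits a raw logical form on whitespace and walks through the tokens."""
    def __init__(self, logical_form):
        self.tokens = logical_form.split()
        self.pos = 0
    def getToken(self):
        return self.tokens[self.pos]
    def skipToken(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

class Parameter:
    def parse(self, tz):
        tz.skipToken()                    # '('
        self.name = tz.skipToken()        # e.g. '$0'
        tz.skipToken()                    # '('
        self.value = tz.skipToken()       # e.g. '1'
        tz.skipToken(); tz.skipToken()    # ')' ')'
        return self

class Action:
    def parse(self, tz):
        tz.skipToken()                    # '('
        self.name = tz.skipToken()        # e.g. 'move'
        self.params = []
        while tz.getToken() != ')':
            self.params.append(Parameter().parse(tz))
        tz.skipToken()                    # ')'
        return self

class Sequence:
    def parse(self, tz):
        tz.skipToken(); tz.skipToken()    # '(' 'seq'
        self.actions = []
        while tz.getToken() != ')':
            self.actions.append(Action().parse(tz))
        tz.skipToken()                    # ')'
        return self

tree = Sequence().parse(Tokenizer("( seq ( move ( $0 ( 1 ) ) ) )"))
print(tree.actions[0].name)               # -> move
```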
Right now, the only means of running our program is through the command line, which is not suitable for individuals who are nontechnical. Moreover, adding a speech-to-text component to this project would elevate it, since an individual would be able to directly tell the robot what commands to perform, just as they would tell another person.

### Source Code

You can view the source code here: [https://github.com/jrimyak/parse_seq2seq](https://github.com/jrimyak/parse_seq2seq)
2303.12995
Skew-rack cocycle invariants of closed 3-manifolds
We establish a new approach to obtain 3-manifold invariants via Dehn surgery. For this, we introduce skew-racks with good involution and Property FR, and define cocycle invariants as 3-manifold invariants. We also define some link invariants in the 3-sphere which are invariant up to link-homotopic.
Takefumi Nosaka
2023-03-23T02:17:38Z
http://arxiv.org/abs/2303.12995v2
# Skew-rack cocycle invariants of closed 3-manifolds

###### Abstract

We establish a new approach to obtaining 3-manifold invariants via Dehn surgery. For this, we introduce skew-racks with good involution and Property FR, and define cocycle invariants as 3-manifold invariants.

**Keywords** 3-manifolds, surgery, knots, birack

**Subject Codes** 57M27, 20J06, 16T25, 19J25

## 1 Introduction

Every closed 3-manifold \(M\) with orientation can be obtained from a framed link in the 3-sphere \(S^{3}\) by a Dehn surgery. Since there is a one-to-one correspondence between closed 3-manifolds and framed links in \(S^{3}\) modulo either the Kirby moves [Kirby] or Fenn-Rourke moves [FR], any framed link invariant which is invariant with respect to the moves is a 3-manifold invariant. For example, in quantum topology, frameworks from Chern-Simons theory have produced many 3-manifold invariants, including the concepts of modular categories, etc.; see, e.g., [RT, Tur] and references therein. In contrast, from the classical viewpoint of algebraic topology, the fundamental groups \(\pi_{1}(M)\) of 3-manifolds carry useful information and are strong invariants; furthermore, as in the Dijkgraaf-Witten model [DW], starting from a finite group \(G\), we can define a certain weight of the set \(\operatorname{Hom}(\pi_{1}(M),G)\) in terms of the group cohomology of \(G\). However, apart from the quantum invariants and fundamental groups, there are few procedures that yield 3-manifold invariants via Dehn surgery.

In this paper, we establish a new approach to yield 3-manifold invariants in a classical situation. For this, we focus on a class of skew-racks (see Section 3), which is an algebraic system and a modification of quandles and biracks. As in quandle theory (see, e.g., [FRS, CEGN, CJKLS, Nbook]), starting from skew-racks, we can also define a set of colorings of framed links and weights of the set, where the weights are evaluated by birack 2-cocycles and are called _a cocycle invariant_ as a framed link invariant (see Section 3 for details). The point of this paper is to find skew-racks such that the cocycle invariant is stable under the Fenn-Rourke moves. Following this point, we define Property FR of skew-racks (Definition 4.1), and show (Theorem 4.2 and Proposition 6.2) that, in some situations, the associated cocycle invariant gives rise to a 3-manifold invariant. In Section 4, we establish several examples of skew-racks with Property FR; for example, from a group \(G\) and an involutive automorphism \(\kappa:G\to G\), we can define a skew-rack with Property FR (Examples 2.2 and 5.4). Using these examples of skew-racks, we compute the set of colorings and compute some cocycle invariants. For example, we determine the invariants of some Brieskorn 3-manifolds as integral homology 3-spheres (Example 5.6). Following the computations, we discuss a comparison with the Dijkgraaf-Witten invariant and pose some problems (Problem 6.8). Finally, we attempt to make an application of the skew-racks above; precisely, Section 7 suggests elementary approaches to finding 3-manifolds which are not the results of surgery on any knot in \(S^{3}\); unfortunately, however, in this paper we find no examples to which these approaches apply.

### Conventional notation.

Every 3-manifold is understood to be connected, smooth, oriented, and closed.

### Acknowledgment

The author is grateful to Nozomu Sekino, Kimihiko Motegi, and Motoo Tange for giving him valuable comments on Dehn surgery.
## 2 Symmetric skew-racks and birack cocycle invariants

We introduce skew-racks as a special class of biracks (see [FR, CEGN] for the definition of biracks). We define _a skew-rack_ to be a triple of a set \(X\), a binary operation \(\lhd:X\times X\to X\), and a bijection \(\kappa:X\to X\) satisfying the following:

1. For any \(a,b\in X\), the equality \(\kappa(a\lhd b)=\kappa(a)\lhd\kappa(b)\) holds.
2. For any \(b\in X\), the map \(X\to X\) that sends \(x\) to \(x\lhd b\) is a bijection.
3. For any \(a,b,c\in X\), the distributive law \((a\lhd b)\lhd c=(a\lhd\kappa(c))\lhd(b\lhd c)\) holds.

As a special case, if \(\kappa=\operatorname{id}_{X}\), the definition of skew-racks coincides with that of racks. We often denote the inverse of the bijection \(\bullet\lhd b\) by \(\bullet\lhd^{-1}b\). Furthermore, as a slight generalization of symmetric quandles in [Kam, KO], we define _a symmetric skew-rack_ to be a pair of a skew-rack \((X,\lhd,\kappa)\) and an involution \(\rho:X\to X\) satisfying the following:

1. For any \(a,b\in X\), the equalities \((a\lhd b)\lhd\rho(b)=a\) and \(\rho(a)\lhd\kappa(b)=\rho(a\lhd b)\) hold.
2. The involutivity \(\rho\circ\rho=\kappa\circ\kappa=\operatorname{id}_{X}\) and the commutativity \(\rho\circ\kappa=\kappa\circ\rho\) hold.

Such a \(\rho\) is called _a good involution_, as in [Kam]. If \(\kappa=\operatorname{id}_{X}\) and the equality \(a\lhd a=a\) holds for any \(a\in X\), the definition of symmetric skew-racks is exactly the original definition of symmetric quandles [Kam]. Let us give examples of symmetric skew-racks.

**Example 2.1**.: Let \(X\) be a group \(G\), and \(\kappa:G\to G\) an involutive automorphism. Define \(x\lhd y\) by \(\kappa(y^{-1})xy\), and \(\rho(x)\) by \(x^{-1}\). Then, these maps define a symmetric skew-rack structure on \(X\).

**Example 2.2**.: Let \(K\) be a group, and \(f:K\to K\) an involutive automorphism. Consider the direct products \(X=K\times K\) and \(\kappa=f\times f\). Define \((x,a)\lhd(y,b)\) by \((f(x)y^{-1}by,f(a))\), and \(\rho(x,a)=(f(x),f(a)^{-1})\). Then, these \(X,\lhd,\kappa,\rho\) define a symmetric skew-rack structure on \(K\times K\), and \(\operatorname{Tw}(x,a)=(a^{-1}x,a)\). As seen in Sections 5-7, this skew-rack plays a key role in this paper.

Finally, we see that, if \(X\) is of finite order, the axiom (SS3) is obtained from the other axioms.

**Proposition 2.3**.: _Let \((X,\lhd,\kappa)\) be a skew-rack satisfying \(\kappa^{2}=\operatorname{id}_{X}\) as in (SS2). Define the map \(\operatorname{Tw}:X\to X\) by setting \(\operatorname{Tw}(x)=\kappa(x)\lhd^{-1}\kappa(x)\). Then, the map is bijective, where the inverse is the map \(X\to X\) that sends \(x\) to \(\kappa(x)\lhd x\)._

Proof.: Letting \(y\) be \(\operatorname{Tw}(\kappa(x)\lhd x)\), we may show \(y=x\). Notice that \(x\lhd\kappa(x)=y\lhd(x\lhd\kappa(x))\), which is equal to

\[\big{(}(y\lhd^{-1}x)\lhd x\big{)}\lhd(x\lhd\kappa(x))=\big{(}(y\lhd^{-1}x)\lhd x\big{)}\lhd\kappa(x)=y\lhd\kappa(x).\]

Thus, by (SS2), we have \(y=x\). Similarly, we can easily check \(\kappa(\operatorname{Tw}(x))\lhd\operatorname{Tw}(x)=x\), leading to the proof.

## 3 Preliminaries; colorings and birack cocycle invariants

We will define \(X\)-colorings, although the definition may be seen as a slight modification of the classical \(X\)-colorings of quandles or biracks; see [3, 2]. Let \(D\) be a framed link diagram, and let \((X,\lhd,\kappa,\rho)\) be a symmetric skew-rack. Choose orientations \(o\) on each component of \(D\), and denote by \(D^{o}\) the diagram with the orientations.
In this paper, a _semi-arc of \(D\)_ means a path from a crossing to the next crossing along the diagram. Then, _an \(X\)-coloring_ is a map \(\mathcal{C}:\{\text{semi-arcs of }D\}\to X\) such that, for every crossing \(\tau\) of \(D\), the semi-arcs around \(\tau\) satisfy \(\mathcal{C}(\gamma_{\tau})=\kappa(\mathcal{C}(\beta_{\tau}))\) and \(\mathcal{C}(\delta_{\tau})=\mathcal{C}(\alpha_{\tau})\lhd\mathcal{C}(\beta_{\tau})\), where \(\alpha_{\tau},\beta_{\tau},\gamma_{\tau},\delta_{\tau}\) are the semi-arcs shown in Figure 1. We denote by \(\operatorname{Col}_{X}(D^{o})\) the set of \(X\)-colorings of \(D^{o}\). Then, as a basic fact in quandle theory (see [3, 4]), if two diagrams \(D^{o}\) and \((D^{\prime})^{o^{\prime}}\) are related by a Reidemeister move of type II, type III, or a doubled type I, then there exists a canonical bijection \(\mathcal{B}_{D^{o},(D^{\prime})^{o^{\prime}}}:\operatorname{Col}_{X}(D^{o})\to\operatorname{Col}_{X}((D^{\prime})^{o^{\prime}})\). Moreover, thanks to the above axioms (SS1) and (SS2), if \(D^{o^{\prime}}\) is the same diagram \(D\) with the opposite orientation, the correspondence \(a\mapsto\rho(a)\) on the color of each semi-arc on the opposite component defines a bijection \(\mathcal{B}_{D^{o},D^{o^{\prime}}}:\operatorname{Col}_{X}(D^{o})\to\operatorname{Col}_{X}(D^{o^{\prime}})\). In particular, the set \(\operatorname{Col}_{X}(D^{o})\) up to bijections does not depend on the choice of orientations of \(D\). Accordingly, we sometimes use the expression \(\operatorname{Col}_{X}(D)\) instead of \(\operatorname{Col}_{X}(D^{o})\). Finally, we should emphasize that the map \(\operatorname{Tw}^{\pm 1}\) in Proposition 2.3 corresponds to the addition of a \((\mp 1)\)-framing on an arc, as in the Reidemeister move of type I. Next, we observe cocycle invariants of a symmetric skew-rack \(X\).
Following [CJKLS, CEGN], fix a commutative ring \(A\), and call a map \(\phi:X^{2}\to A\) _a birack 2-cocycle_ if the equalities

\[\phi(a,b)+\phi(a\lhd b,c)=\phi(a,\kappa(c))+\phi(a\lhd\kappa(c),b\lhd c) \tag{1}\]

hold for any \(a,b,c\in X\). Then, we define _the weight (of \(\tau\))_, \(\Phi(\tau)\), with respect to a crossing \(\tau\) on \(D\) to be \(\varepsilon_{\tau}\phi(\mathcal{C}(\alpha_{\tau}),\mathcal{C}(\beta_{\tau}))\in A\), where \(\varepsilon_{\tau}\) is the sign of \(\tau\) as in Figure 1; we further define \(\Phi_{D}(\mathcal{C})\in A\) to be the sum \(\sum_{\tau}\Phi(\tau)\), where \(\tau\) runs over every crossing on \(D\). Then, as is known (see, e.g., [CEGS, CEGN]), if two diagrams \(D\) and \(D^{\prime}\) are related by a Reidemeister move of type II, type III or a doubled type I move, then \(\Phi_{D^{\prime}}\circ\mathcal{B}_{D^{o},(D^{\prime})^{o^{\prime}}}=\Phi_{D}\) holds as a map \(\operatorname{Col}_{X}(D^{o})\to A\). In other words, the map \(\Phi_{D}:\operatorname{Col}_{X}(D^{o})\to A\) up to bijections is an invariant of framed links with orientation. As in [CJKLS, CEGN], we call the map \(\Phi\) _the (birack) cocycle invariant_. Next, as an analogy of symmetric cocycle invariants in [Kam, KO], we discuss symmetric birack cocycles. We say a birack \(2\)-cocycle \(\phi:X^{2}\to A\) is _symmetric_ if

\[\phi(a,b)=-\phi(a\lhd b,\rho(b))=-\phi(\rho(a),\kappa(b))\in A,\]

for any \(a,b\in X\).
Then, by a similar discussion to [KO, Theorem 6.3], we can easily check that the weight \(\Phi(\tau)\) does not depend on the choice of orientations of \(D\); neither does the map \(\Phi_{D}:\operatorname{Col}_{X}(D^{o})\to A\) up to bijections. In conclusion, the cocycle invariant \(\Phi_{D}:\operatorname{Col}_{X}(D^{o})\to A\) up to bijections is an invariant of framed links.

Finally, we briefly review surgery on links and Fenn-Rourke moves [FR]. Let us regard a framed link diagram as representing the surgery on the framed link in the \(3\)-sphere. It is folklore that every closed \(3\)-manifold \(M\) can be expressed as the result of surgery on a framed link in \(S^{3}\). Furthermore, it is shown (see [FR]) that two framed links in \(S^{3}\) have orientation-preserving homeomorphic results of surgery if and only if the framed links are related by a sequence of Fenn-Rourke moves and isotopies, where the Fenn-Rourke move is an operation between framed links shown in Figure 2. Throughout this paper, for a framed link diagram \(D\) of a link \(L\), we denote by \(M_{D}\) the result of surgery of \(S^{3}\) along \(L\).

## 4 Topological invariants from skew-racks with Property FR

It is reasonable to look for appropriate skew-racks which yield birack cocycle invariants that are stable under the Fenn-Rourke moves. The purpose of this section is to define skew-racks with Property FR, and colorings of closed \(3\)-manifolds. For \(\varepsilon\in\{\pm 1\}\) and \(a_{1},\dots,a_{n}\in X\), let us consider the bijection \(A_{a_{1},\dots,a_{n}}:X\to X\) that sends \(x\) to \((\cdots((x\lhd a_{1})\lhd a_{2})\lhd\cdots)\lhd a_{n}\), and define the following subsets:

\[\operatorname{Ann}^{+1}(A_{a_{1},\dots,a_{n}}):=\{x\in X\mid\kappa^{n+1}(x)=A_{a_{1},\dots,a_{n}}(x)\lhd\kappa^{n+1}(x)\},\]

\[\operatorname{Ann}^{-1}(A_{a_{1},\dots,a_{n}}):=\{x\in X\mid\kappa^{n+1}(x)\lhd\kappa\big{(}A_{a_{1},\dots,a_{n}}(x)\big{)}=A_{a_{1},\dots,a_{n}}(x)\}. \tag{2}\]

In the case \(n=0\), we define \(\operatorname{Ann}(X)\) to be the subset \(\{x\in X\mid x\lhd\kappa(x)=\kappa(x)\}\).

**Definition 4.1**.: A symmetric skew-rack \((X,\lhd,\kappa,\rho)\) is said to have _Property FR_ if it satisfies the following:

* The subset \(\operatorname{Ann}(X)\) is not empty, and is bijective to \(\operatorname{Ann}^{\varepsilon}(A_{a_{1},\ldots,a_{n}})\) for any \(a_{1},\ldots,a_{n}\in X\) and \(\varepsilon\in\{\pm 1\}\).
* For any \(a_{1},\ldots,a_{n}\in X\) and \(x\in\operatorname{Ann}^{+1}(A_{a_{1},\ldots,a_{n}}),y\in\operatorname{Ann}^{-1}(A_{a_{1},\ldots,a_{n}})\), the equalities \[\kappa^{n+i}(a_{i})=A_{a_{1},\ldots,a_{n}}\big{(}\kappa^{i+1}(a_{i})\lhd x\big{)},\] (3) \[\kappa^{n+i}(a_{i})\lhd\kappa^{n+1}(y)=A_{a_{1}\lhd\kappa(y),a_{2}\lhd\kappa^{2}(y),\ldots,a_{n}\lhd\kappa^{n}(y)}(\kappa^{i+1}(a_{i}))\] (4) hold, where \(i\leq n\) is arbitrary.

Let us analyze the set of colorings of skew-racks with Property FR.

**Theorem 4.2**.: _Let \((X,\lhd,\kappa,\rho)\) be a symmetric skew-rack with Property FR.
Suppose that two framed link diagrams \(D\) and \(D^{\prime}\) are related by a Fenn-Rourke move as in Figure 2, and take orientations on \(D\) and \(D^{\prime}\)._

_Then, there is a bijection \(\mathcal{B}:\operatorname{Col}_{X}(D)\to\operatorname{Col}_{X}(D^{\prime})\times\operatorname{Ann}(X)\)._

_In particular, if \(X\) is of finite order, then the rational number \(|\operatorname{Col}_{X}(D)|/|\operatorname{Ann}(X)|^{\#D}\in\mathbb{Q}\) gives rise to a topological invariant of closed 3-manifolds._

Proof.: Take arcs \(\gamma,\alpha_{i}\)'s and \(\beta_{i}\)'s as in Figure 2. We may assume that the framing of the arc \(\gamma\) is \(+1\), since the same proof runs similarly in the negative case. Furthermore, by the properties of good involutions, the coloring conditions are independent of the choices of orientations of \(D\). Thus, we fix orientations of \(D\) and \(D^{\prime}\) as in Figure 2. Given an \(X\)-coloring \(\mathcal{C}\in\operatorname{Col}_{X}(D^{o})\), we will construct another \(X\)-coloring of \((D^{\prime})^{o^{\prime}}\). Define \(a_{i}:=\mathcal{C}(\alpha_{i})\), \(b_{i}:=\mathcal{C}(\beta_{i})\), and \(x:=\mathcal{C}(\gamma)\). The coloring condition on the arc \(\gamma\) is \(\kappa^{n+1}(x)=A_{a_{1},\ldots,a_{n}}(x)\lhd\kappa^{n+1}(x)\); hence, \(x\in\operatorname{Ann}^{+1}(A_{a_{1},\ldots,a_{n}})\). Therefore, thanks to (3), the map which sends \(\alpha_{i}^{\prime}\) to \(\mathcal{C}(\alpha_{i})=a_{i}\) defines an \(X\)-coloring \(\mathcal{C}^{\prime}\in\operatorname{Col}_{X}((D^{\prime})^{o^{\prime}})\). Conversely, given an \(X\)-coloring \(\mathcal{C}^{\prime}\) of \((D^{\prime})^{o^{\prime}}\) and \(x\in\operatorname{Ann}^{+1}(A_{a_{1},\ldots,a_{n}})\neq\emptyset\), we can define an \(X\)-coloring \(\mathcal{C}\) of \(D\) that sends \(\alpha_{i}\) to \(\mathcal{C}^{\prime}(\alpha_{i}^{\prime})\) and \(\gamma\) to \(x\). Thus, the correspondence \(\mathcal{C}\mapsto\mathcal{C}^{\prime}\) gives the required bijection \(\mathcal{B}\).

Figure 2: Fenn-Rourke moves, and labeled semi-arcs.

Before going to the next section, we discuss the triviality of the invariants up to link homotopy. For this, consider the permutation group, \(\operatorname{Bij}(X)\), of a skew-rack \(X\), and define the subgroup generated by the following set:

\[\{(\kappa(\bullet)\lhd a)\mid a\in X\}\cup\{(\bullet\lhd^{\epsilon_{1}}a_{1})\lhd^{\epsilon_{2}}a_{2}\mid a_{i}\in X,\epsilon_{i}\in\{\pm 1\}\}. \tag{5}\]

The subgroup canonically has a right action on \(X\). We denote the subgroup by \(\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\). We say a skew-rack \((X,\lhd,\kappa)\) with Property FR is _\(f\)-link homotopic_ if \(x\lhd^{\varepsilon}\kappa(x)=x\lhd^{\varepsilon}(x\cdot g)\) holds for any \(x\in X\), \(g\in\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\), \(\varepsilon\in\{\pm 1\}\).

**Proposition 4.3**.: _Suppose that a symmetric skew-rack \((X,\lhd,\kappa,\rho)\) with Property FR is \(f\)-link homotopic. Then, if two framed link diagrams \(D\) and \(D^{\prime}\) are transformed by an operation in Figure 3, then there is a bijection \(\mathcal{B}_{f}:\operatorname{Col}_{X}(D^{o})\to\operatorname{Col}_{X}((D^{\prime})^{o^{\prime}})\)._

Proof.: For a coloring \(\mathcal{C}\in\operatorname{Col}_{X}(D^{o})\), take \(a\in X\) such that \(\mathcal{C}(\alpha)=\kappa(a)\). Since \(\alpha\) and \(\beta\) lie on the same link component, there is \(g\in\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\) such that \(\mathcal{C}(\beta)=a\cdot g\) from the definition (5).
Then, by the rule of colorings, we have

\[\mathcal{C}(\gamma)=\operatorname{Tw}^{-1}(\kappa(a\cdot g))=(a\cdot g)\lhd\kappa(a\cdot g),\ \ \mathcal{C}(\delta)=\operatorname{Tw}^{-1}(\kappa(a))\lhd(a\cdot g)=(a\lhd\kappa(a))\lhd(a\cdot g).\]

Since \(X\) is \(f\)-link homotopic, \(\mathcal{C}(\delta)=a\) and \(\mathcal{C}(\gamma)=(a\cdot g)\lhd^{-1}a\). Thus, we can define another coloring \(\mathcal{B}_{f}(\mathcal{C})\) by \(\mathcal{B}_{f}(\mathcal{C})(\alpha)=a\) and \(\mathcal{B}_{f}(\mathcal{C})(\beta)=a\cdot g\). Since \(\mathcal{C}(\gamma)=(a\cdot g)\lhd^{-1}a=\mathcal{B}_{f}(\mathcal{C})(\gamma)\) and \(\mathcal{C}(\delta)=a=\mathcal{B}_{f}(\mathcal{C})(\delta)\), the map \(\mathcal{B}_{f}:\operatorname{Col}_{X}(D^{o})\to\operatorname{Col}_{X}((D^{\prime})^{o^{\prime}})\) is bijective, as required.

Figure 3: The diagrams \(D,D^{\prime}\), where all the semi-arcs lie in a link component.

Since many 3-manifolds can be expressed as the results of surgery of \(S^{3}\) along framed knots, in order to get non-trivial colorings, we shall consider skew-racks which are not \(f\)-link homotopic.

## 5 Examples of skew-racks with Property FR from groups

We give examples of skew-racks with Property FR. Throughout this section, we fix a group \(G\), an automorphism \(\kappa:G\to G\) satisfying \(\kappa\circ\kappa=\operatorname{id}_{G}\), and a map \(\delta:G\to G\) satisfying \(\kappa\circ\delta=\delta\circ\kappa\). Consider the binary operation \(\lhd:G\times G\to G\) defined by \(x\lhd y=\kappa(x)\delta(y)\). Then, the twisting map \(\operatorname{Tw}\) in (SS3) is given by \(\operatorname{Tw}(g)=g\delta(g)^{-1}\).

**Lemma 5.1**.: _These operations \((\lhd,\kappa)\) define a skew-rack on \(X=G\) if and only if the following holds for any \(x,y\in G\):_

\[\delta(x)\delta(y)=\delta(y)\delta(x\delta(y))\in G. \tag{6}\]

_Let \(\rho:G\to G\) be a good involution. Furthermore, assume that the image \(\operatorname{Im}(\delta)\subset G\) is a subgroup of \(G\), and that the cardinality of the preimage \(\delta^{-1}(d)\) is constant for any \(d\in\operatorname{Im}(\delta)\). Then, the symmetric skew-rack on \(X=G\) has Property FR._

_In addition, if the subgroup \(\operatorname{Im}(\delta)\) is commutative, then the skew-rack is \(f\)-link homotopic._

Proof.: Since the former part is shown by direct computation, we will show only the remaining claims. We now analyze the set \(\operatorname{Ann}^{\varepsilon}(A_{a_{1},\ldots,a_{n}})\) in (2). First, suppose \(\varepsilon=+1\) for simplicity. Then, the condition \(\kappa^{n+1}(x)=A_{a_{1},\ldots,a_{n}}(x)\lhd\kappa^{n+1}(x)\) is equivalent to

\[\delta(\kappa^{n}(a_{1}))\delta(\kappa^{n-1}(a_{2}))\cdots\delta(\kappa(a_{n}))\delta(\kappa^{n+1}(x))=1. \tag{7}\]

Since \(\operatorname{Im}(\delta)\) is a subgroup of \(G\) by assumption, the set \(\operatorname{Ann}^{+1}(A_{a_{1},\ldots,a_{n}})\) is non-empty. Moreover, by the second assumption, the cardinality of \(\operatorname{Ann}^{+1}(A_{a_{1},\ldots,a_{n}})\) does not depend on the choice of \(a_{1},\ldots,a_{n}\); that is, \(X\) satisfies (FR1). Concerning (FR2), the equality (3) is shown by

\[A_{a_{1},\ldots,a_{n}}(\kappa^{i-1}(a_{i})\lhd x)=\kappa^{n+i}(a_{i})\delta(\kappa^{n+i}(x))\delta(\kappa^{n+i+1}(a_{1}))\delta(\kappa^{n+i}(a_{2}))\cdots\delta(\kappa^{i}(a_{n}))=\kappa^{n+i+1}(a_{i}).\]

On the other hand, it is left to the reader to check (4) in the case \(\varepsilon=-1\). Hence, \(X\) has Property FR, as required. Finally, we will show the last statement.
From the definition of the subgroup \(\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\), any \(g\in\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\) and \(a\in G\) uniquely admit \(b_{1},\ldots,b_{n}\in\operatorname{Im}(\delta)\) such that \(a\cdot g=a\delta(b_{1})\cdots\delta(b_{n})\in G\). Since \(\operatorname{Im}(\delta)\) is commutative, (6) means \(\delta(a)=\delta(a\delta(b))\). Thus,

\[(z\lhd^{\varepsilon}\kappa(a))\lhd^{-\varepsilon}(a\cdot g)=z\delta(a)^{\varepsilon}\delta(a\delta(b_{1})\cdots\delta(b_{n}))^{-\varepsilon}=z.\]

Therefore, the skew-rack is \(f\)-link homotopic by Proposition 4.3.

We remark that the equality (6) imposes some conditions. For example, if \(|G|>1\), the map \(\delta\) is not surjective. In fact, if \(\delta\) were surjective, then (6) with \(x=1\) would be equivalent to \(z^{-1}\delta(1)z=\delta(z)\) for any \(z\in G\), which means that \(\operatorname{Im}(\delta)\) is a conjugacy class, contradicting the surjectivity. However, we will give some examples satisfying the conditions in Lemma 5.1.

**Example 5.2**.: First, we observe the case where \(\delta\) is a group homomorphism. Then, we can easily check that the equality (6) is equivalent to \(\delta\circ\delta=0\) together with the image \(\operatorname{Im}(\delta)\) being abelian. If so, the cardinality of \(\delta^{-1}(k)\) is constant; thus, if \(X\) admits a good involution, then the symmetric skew-rack has Property FR, and is \(f\)-link homotopic by Lemma 5.1. To avoid \(f\)-link homotopic skew-racks, we should focus on maps \(\delta\) which are not homomorphisms.

**Example 5.3** (Twisted conjugacy classes).: Suppose a group automorphism \(f:G\to G\), and define \(\delta(x)=f(x^{-1})x\). Then, the equality (6) is equivalent to \(\operatorname{Im}(\delta\circ\delta)=\{1_{G}\}\). In general, it can be easily checked that, for any \(g\in\operatorname{Im}(\delta)\), the preimage \(\delta^{-1}(g)\) is bijective to the fixed-point subgroup \(\{h\in G\mid f(h)=h\}\); see, e.g., [BNN]. Thus, to apply Lemma 5.1, the remaining point is to analyze the situation where the image \(\operatorname{Im}(\delta)\) is a subgroup. The image \(\operatorname{Im}(\delta)\) is sometimes called _the twisted conjugacy class_ or _Reidemeister conjugacy class_. The papers [BNN, GN] investigate some conditions ensuring that \(\operatorname{Im}(\delta)\) is a subgroup and \(\operatorname{Im}(\delta\circ\delta)=\{1_{G}\}\). However, many examples in those papers satisfy that \(\operatorname{Im}(\delta)\) is commutative. Thus, it seems hard to find examples of pairs \((G,f)\) such that the resulting skew-racks are not \(f\)-link homotopic.

**Example 5.4**.: Take a group \(K\) with a normal subgroup \(N\unlhd K\), and an involutive automorphism \(f:K\to K\) satisfying \(f(N)\subset N\). Let \(G\) be \(K\times N\) and \(\kappa\) be \(f\times f\). Define \(\delta(x,y)\) to be \((x^{-1}yx,1)\), where \(x\in K\) and \(y\in N\). Next, we will check the conditions in Lemma 5.1. The check of (6) is obvious. Since \(N=\{b^{-1}ab\mid a\in N,b\in K\}\), the image of \(\delta\) is \(N\times 1\) as a subgroup of \(G\). Moreover, for any \((k,1)\in N\times 1\), the preimage \(\delta^{-1}(k,1)\) is equal to \(\{(y^{-1}ky,y)\in G\mid y\in K\}\), which is bijective to \(K\). In conclusion, the symmetric skew-rack on \(G\) has Property FR by Lemma 5.1. For example, if \(N=K\), the skew-rack on \(G\) is exactly that in Example 2.2.

Finally, we will compute some colorings using the above skew-racks with Property FR.
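Such coloring sets are finite and can be enumerated by machine. As a rough illustration (not the program used for Table 1 below), the following Python sketch counts the colorings of the Hopf link with framings \((n,m)\) from the coloring condition worked out in Example 5.5, for the skew-rack of Example 2.2 with \(f=\operatorname{id}\) and \(K=S_{3}\):

```python
from itertools import product, permutations

K = list(permutations(range(3)))                 # K = S_3 as permutation tuples
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):    return tuple(sorted(range(3), key=lambda i: p[i]))

def op(A, B):            # (x,a) <| (y,b) = (x y^{-1} b y, a), taking f = id
    (x, a), (y, b) = A, B
    return (mul(mul(mul(x, inv(y)), b), y), a)

def tw(A, n):            # Tw(x,a) = (a^{-1} x, a), iterated n times (n may be negative)
    x, a = A
    step = inv(a) if n >= 0 else a
    for _ in range(abs(n)):
        x = mul(step, x)
    return (x, a)

def hopf_colorings(n, m):
    X = list(product(K, K))
    return sum(1 for A, B in product(X, X)
               if op(A, B) == tw(A, n) and op(B, A) == tw(B, m))
```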
**Example 5.5**.: For natural numbers \(n,m\in\mathbb{N}\), we first compute colorings of the lens space \(L(nm-1,n)\). Here, let \(X=G\) be a skew-rack with good involution satisfying the conditions in Lemma 5.1. Let \(D\) be the Hopf link with framings \((n,m)\). Then, \(M_{D}\) is known to be \(L(nm-1,n)\). We fix two semi-arcs \(\alpha,\beta\), one in each link component of \(D\). Then, from the definition of colorings, a coloring \(\mathcal{C}\in\operatorname{Col}_{X}(D)\) satisfies

\[\kappa(\mathcal{C}(\alpha)\lhd\mathcal{C}(\beta))=\operatorname{Tw}^{n}(\mathcal{C}(\alpha)),\ \ \ \ \kappa(\mathcal{C}(\beta)\lhd\mathcal{C}(\alpha))=\operatorname{Tw}^{m}(\mathcal{C}(\beta)). \tag{8}\]

Conversely, any \(a,b\in X\) satisfying \(\kappa(a\lhd b)=\operatorname{Tw}^{n}(a)\) and \(\kappa(b\lhd a)=\operatorname{Tw}^{m}(b)\) yield a coloring of \(D\). Since \(\operatorname{Tw}^{n}(a)=a\delta(a)^{-n}\), notice that (8) is equivalent to the condition \(\mathcal{C}(\beta)=\delta(\mathcal{C}(\alpha))^{m}\) and \(\delta(\mathcal{C}(\alpha))^{nm-1}=1\). Hence, \(\operatorname{Col}_{X}(D)\) is bijective to

\[\{(a,b)\in G^{2}\mid\delta(a)^{nm-1}=1,\ \delta(b)=\delta(a)^{n}\}\ \overset{1:1}{\longleftrightarrow}\ \{a\in G\mid\delta(a)^{nm-1}=1\}\times\delta^{-1}(1). \tag{9}\]

In particular, the set \(\operatorname{Col}_{X}(D)\) depends only on \(nm\), and thus cannot classify the lens spaces of the form \(L(nm-1,n)\). In contrast, we later compute some cocycle invariants which can distinguish among some lens spaces (see Example 6.7).

**Example 5.6**.: Next, we will observe that the sets of colorings of integral homology 3-spheres seem to be strong invariants, where we consider the skew-rack on \(X=K\times K\) in Example 2.2. Let \(D_{n}^{\pm}\) be the \((2,n)\)-torus knot with framing \(\pm 1\). Then, the resulting 3-manifold \(M_{D_{n}^{\pm}}\) is the Brieskorn 3-manifold of the form \(\Sigma(2,n,2n\mp 1)\), an integral homology 3-sphere. Then, for a concrete group \(K\), it is not so hard to determine the set \(\operatorname{Col}_{X}(D_{n}^{\pm})\) with the help of a computer program. For example, we give a list of some computations of \(|\operatorname{Col}_{X}(D_{n}^{\pm})|\); see Table 1.

**Remark 5.7**.: In this example, we focus on non-abelian groups \(K\). In fact, if \(K\) is abelian, \((x,a)\lhd(y,b)=(x,a)\); hence, the coloring conditions are trivial. Thus, considering the linking matrix of \(D\), we can easily find a \(1\):\(1\)-correspondence \(\operatorname{Col}_{X}(D)\simeq\operatorname{Hom}(H_{1}(M;\mathbb{Z}),K)\times K^{\sharp D}\).

## 6 Cocycle invariants of 3-manifolds

As seen in [CEGS, CEGN, Nbook], there are some procedures to concretely find symmetric birack 2-cocycles. However, the conditions on 2-cocycles required for invariance with respect to the Fenn-Rourke moves seem strong. Nevertheless, in this section, we discuss 2-cocycle invariants that yield 3-manifold invariants. Throughout this section, we suppose a symmetric skew-rack \(X\) with Property FR, and a map \(\phi\) from \(X^{2}\) to a commutative ring \(A\). We first introduce Property FR of birack 2-cocycles as follows:

**Definition 6.1**.: Let us recall the bijection \(\mathcal{B}\) in Theorem 4.2, and denote by \(0_{A}\) the constant map to \(A\) whose image is zero. A symmetric birack 2-cocycle \(\phi:X^{2}\to A\) satisfies _Property FR_ if \(\Phi_{D}=(\Phi_{D^{\prime}}\times 0_{A})\circ\mathcal{B}\) holds for any diagrams \(D\) and \(D^{\prime}\) in Figure 2.
Here, \(\Phi_{D}\) is the cocycle invariant explained in Section 3. Furthermore, \(\phi\) is said to be _\(f\)-link homotopic_ if \(X\) is \(f\)-link homotopic and the following holds for any \(a\in X\) and \(g\in\operatorname{Inn}_{\kappa}^{\operatorname{even}}(X)\):

\[\phi((a\cdot g)\lhd\kappa(a),a\cdot g)+\phi(a,a\lhd\kappa(a))=\phi(\kappa(a),a\cdot g)+\phi((a\cdot g)\lhd\kappa(a),a). \tag{10}\]

We will see (Proposition 6.2) that symmetric birack 2-cocycles with Property FR yield topological invariants of closed 3-manifolds. Take two maps \(F:Y\to A\) and \(G:Z\to A\), where \(Y\) and \(Z\) are some sets. We call \(F\) _an FR-stabilization_ of \(G\) if there is a bijection \(B:Z\to Y\times\operatorname{Ann}(X)\) such that \(G\circ B^{-1}=F\times 0_{A}\). More generally, \(F\) and \(G\) are _FR-equivalent_ if \(F\) and \(G\) are related by a finite sequence of FR-(dis-)stabilizations. Then, the following proposition is almost obvious by definitions.

**Proposition 6.2**.: _Let \(\phi\) be a symmetric birack 2-cocycle with Property FR. Then, the correspondence \(D\mapsto\Phi_{D}\) up to FR-equivalence is an invariant of closed 3-manifolds._

_Moreover, if \(X\) and \(\phi\) are \(f\)-link homotopic, and if \(D\) and \(D^{\prime}\) are related by the operation in Figure 3, then \(\Phi_{D}=\Phi_{D^{\prime}}\circ\mathcal{B}_{f}\), where \(\mathcal{B}_{f}\) is the bijection \(\operatorname{Col}_{X}(D)\to\operatorname{Col}_{X}(D^{\prime})\) in the proof of Proposition 4.3._

To conclude, in order to obtain 3-manifold invariants, it is important to find symmetric birack 2-cocycles with concrete expressions. For this, let us discuss Lemmas 6.3 and 6.4 below. Let \(\tilde{X}\) be \(X\times A\). Define \(\tilde{\lhd}:\tilde{X}\times\tilde{X}\to\tilde{X}\) by

\[(x,a)\tilde{\lhd}(y,b)=(x\lhd y,a+\phi(x,y)),\hskip 28.452756pt(x,y\in X,a,b\in A),\]

and \(\tilde{\kappa}:\tilde{X}\to\tilde{X}\) by \(\tilde{\kappa}(x,a)=(\kappa(x),-a)\).

\begin{table} \begin{tabular}{l|c c c c c c} \(p\) & \(|\operatorname{Col}_{X}(D_{3}^{+})|\) & \(|\operatorname{Col}_{X}(D_{3}^{-})|\) & \(|\operatorname{Col}_{X}(D_{5}^{+})|\) & \(|\operatorname{Col}_{X}(D_{5}^{-})|\) & \(|\operatorname{Col}_{X}(D_{7}^{+})|\) & \(|\operatorname{Col}_{X}(D_{7}^{-})|\) \\ \hline 3 & \(|K|\) & \(|K|\) & \(|K|\) & \(|K|\) & \(|K|\) & \(|K|\) \\ \hline 5 & \(121|K|\) & \(|K|\) & \(121|K|\) & \(|K|\) & \(|K|\) & \(|K|\) \\ \hline 7 & \(|K|\) & \(337|K|\) & \(|K|\) & \(|K|\) & \(|K|\) & \(57|K|\) \\ \hline 11 & \(2641|K|\) & \(|K|\) & \(2641|K|\) & \(2641|K|\) & \(|K|\) & \(|K|\) \\ \hline 13 & \(|K|\) & \(6553|K|\) & \(|K|\) & \(|K|\) & \(|K|\) & \(|K|\) \\ \hline \end{tabular}
\end{table}
Table 1: The cardinality of \(\operatorname{Col}_{X}(D_{n}^{\pm 1})\) for some \(p,n\). Here \(|K|=|\operatorname{SL}_{2}(\mathbb{F}_{p})|=p(p^{2}-1)\).
For this, we may show \[\phi(a\lhd b,\rho(b))+\phi((a\lhd b)\lhd c,\rho(c))=\phi(a\lhd\kappa(c),\rho( \kappa(c)))+\phi((a\lhd b)\lhd c,\rho(b\lhd c)). \tag{11}\] Replace \((a\lhd b)\lhd c,\rho(b\lhd c)\) and \(,\rho(\kappa(c))\) by \(a,b,\) and \(c\) respectively. Then, we can easily check that the replacement of (11) coincides with (1), which completes the proof. Using these lemmas, we will give some examples from skew-racks in Example 5.4. Let \(N\unlhd K\) be groups, and \(f:K\to K\) be an involutive automorphism satisfying \(f(N)\subset N\). Furthermore, take a normalized group 2-cocycle \(\theta:K\times K\to A\), that is, \(\theta\) satisfies \[\theta(x,y)-\theta(x,yz)+\theta(xy,z)-\theta(y,z)=0,\ \ \ \ \theta(1_{K},x)=\theta(x,1_{K})=0\in A,\] for any \(x,y,z\in K\). Then, the product \(\tilde{K}=K\times A\) has a group structure with operation \(((x,a),(y,b))\mapsto(xy,a+b+\theta(x,y))\), as a central extension of \(K\). As is known in group cohomology, every central extension over \(K\) with fiber \(A\) can be expressed by the product for some \(\theta.\) Then, from Example 5.4, we can define the symmetric skew-racks on \(G=K\times N\) and \(\tilde{G}=\tilde{K}\times\tilde{N}\), which have Property FR. Moreover, by the definition of \(\lhd\) on \(\tilde{G}\), we notice \[\big{(}(x,a),(y,b)\big{)}\lhd\big{(}(z,c),(w,d)\big{)}=\big{(} \tilde{\kappa}(x,a)(z^{-1},-c-\theta(z,z^{-1}))(w,d)(z,c),\ (y,b)\big{)}\] \[=\big{(}(f(x)z^{-1}wz,f(a)+d+\theta(f(x),z^{-1})+\theta(f(x)z^{ -1},wz)+\theta(w,z)-\theta(z,z^{-1})),(y,b)\big{)}\in\tilde{G}.\] Inspired by Lemma 6.3, we obtain a procedure of producing birack 2-cocycles as follows: **Theorem 6.5**.: _Let \(\lambda:N\to A\) be a group 1-cocycle. Then, the map_ \[\phi_{\lambda,\theta}:G^{2}=(K\times N)\times(K\times N)\to A\] \[(x,y,z,w)\longmapsto\lambda(y)\big{(}\theta(f(x),z^{-1})+\theta(f(x)z^{-1}, wz)+\theta(z,w)-\theta(z,z^{-1})\big{)}\] _is a birack 2-cocycle of the skew-rack \(G=K\times N\) in Example 2.2. If_ \[\lambda(x)\theta(a,b)=\lambda(f(x))\theta(f(a),f(b)),\] _hold for any \(a,b\in K,x\in N,\) then the condition in Lemma 6.4 is true. In particular, the cocycle \(\overline{\phi_{\lambda,\theta}}\) mentioned in Lemma 6.4 is a symmetric birack 2-cocycle._ In general, it seems hard to find group 2-cocycles \(\theta\) such that the associated map \(\phi_{\lambda,\theta}\) has Property FR. However, when \(K\) is a cyclic group, we will give such examples of birack cocycles with Property FR. More precisely, by a direction computation, we can easily show that **Proposition 6.6**.: _Let \(p\in\mathbb{Z}\) be an odd prime. Let \(K=N=\mathbb{Z}/p\), and take \(\varepsilon\in\{\pm 1\}\) such that \(f(x)=\varepsilon x,\) Let us define group cocycles \(\lambda\) and \(\theta\) by setting_ \[\lambda(x)=x,\ \ \ \ \theta(x,y)=\frac{(x+\varepsilon y)^{p}-x^{p}-( \varepsilon y)^{p}}{p}=\sum_{j:1\leq j<p}j^{-1}x^{j}(\varepsilon y)^{p-j},\] _respectively, where \(x,y\in\mathbb{Z}/p\). Then, \(\overline{\phi_{\lambda,\theta}}(x,y,z,w)=2y\theta(x,w)\), and the symmetric birack 2-cocycle \(\overline{\phi_{\lambda,\theta}}\) has Property FR, and is \(f\)-link homotopic._ **Example 6.7**.: Let \(D\) be the Hopf link with framings \((n,m)\), as in Example 5.5. Recall that \(M_{D}\) is the lens space \(L(nm-1,m)\). By (9), if \(nm-1\) is divisible by \(p\) and \(K=\mathbb{Z}/p\), then \(\mathrm{Col}_{X}(D)\) is bijective to \((\mathbb{Z}/p)^{2}\). 
In addition, we can easily show that the cocycle invariant \(\Phi_{D}:(\mathbb{Z}/p)^{2}\to\mathbb{Z}/p\) is equal to the correspondence \((x,y)\mapsto-mx^{2}\), where we use the 2-cocycle \(\overline{\phi_{\lambda,\theta}}\) in Proposition 6.6. For instance, the invariant can distinguish between the lens spaces \(L(11,1)\) and \(L(11,3)\), which are not homeomorphic. More generally, consider the lens space \(L(p,q)\) and a framed diagram \(D_{p,q}\) such that \(M_{D_{p,q}}=L(p,q)\). Then, with the help of a computer program, for \(p,q<100\), it is not so hard to check that the cocycle invariant \(\Phi_{D_{p,q}}:(\mathbb{Z}/p)^{1+\#D_{p,q}}\to\mathbb{Z}/p\) is FR-equivalent to the map \(\mathbb{Z}/p\to\mathbb{Z}/p;x\mapsto-qx^{2}\).

From this example, it is natural to pose the problem below, together with a relation to the Dijkgraaf-Witten invariant [DW, §6]. For this, let us briefly review the invariant. Fix a closed 3-manifold \(M\) with fundamental homology 3-class \([M]\in H_{3}(M;\mathbb{Z})\cong\mathbb{Z}\). Let \(K\) be a group of finite order, and \(\psi:K^{3}\to A\) a group 3-cocycle. Denote by \(BK\) the classifying space of \(K\), i.e., the Eilenberg-MacLane space of type \((K,1)\), and by \(c_{M}:M\to B\pi_{1}(M)\) a classifying map. Then, any group homomorphism \(f:\pi_{1}(M)\to K\) induces a continuous map \(f_{*}:B\pi_{1}(M)\to BK\). Since the (co)homology of \(BK\) equals that of \(K\), we can define the pullback \((f_{*}\circ c_{M})^{*}(\psi)\) as a 3-cocycle of \(M\). Then, _the Dijkgraaf-Witten invariant_ is defined as the map

\[\mathrm{DW}_{\psi}(M):\mathrm{Hom}(\pi_{1}(M),K)\longrightarrow A;\ \ \ f\longmapsto\langle(f_{*}\circ c_{M})^{*}(\psi),[M]\rangle,\]

where \(\langle,\rangle\) is the Kronecker pairing.

**Problem 6.8**.: As in Example 2.2, let \(X\) be the symmetric skew-rack on \(K\times K\). Let \(\lambda:K\to A\) and \(\theta:K^{2}\to A\) be group cocycles, and let \(\psi\) be the cup product \(\lambda\smile\theta\) as a group 3-cocycle. Let \(D\) be a framed link diagram. Then, is there a bijection \(\mathcal{B}:\mathrm{Col}_{X}(D)\simeq\mathrm{Hom}(\pi_{1}(M),K)\times\mathrm{Ann}(X)^{\sharp D}\)? Furthermore, find a condition such that the birack 2-cocycle \(\overline{\phi_{\lambda,\theta}}\) in Theorem 6.5 has Property FR, together with an FR-equivalence between the cocycle invariant \(\Phi:\mathrm{Col}_{X}(D)\to A\) and the Dijkgraaf-Witten invariant \(\mathrm{DW}_{\psi}(M_{D})\).

If this problem is positively solved, we consequently obtain a diagrammatic computation of the Dijkgraaf-Witten invariant via the cocycle invariants and Dehn surgery.

## 7 Criteria for 3-manifolds which are not the surgery of any knot

As an application of the cocycle invariant, we will give two criteria to detect some 3-manifolds which are not the results of surgery on any knot in \(S^{3}\); see, e.g., [HKMP], [AFW, Section 7.1] and references therein for the details of such 3-manifolds and for other such criteria. For this, as in Example 5.4, we fix groups \(N\unlhd K\), and \(X=K\times N\) with \(f=\mathrm{id}_{K}\); recall that \(X\) has a skew-rack structure given by \((x,a)\lhd(y,b)=(xy^{-1}by,a)\), and has Property FR.

**Proposition 7.1**.: _Suppose \(|K|<\infty\) and that a framed link diagram \(D\) and a knot diagram of framing zero are related by a sequence of Fenn-Rourke moves and isotopy. Then, the invariant \(|\mathrm{Col}_{X}(D)|/|K|^{\sharp D}\in\mathbb{Q}\) in Theorem 4.2 is larger than or equal to \(|N|\)._

Proof.: We may suppose that \(D\) is a knot diagram of framing zero.
For the proof, it is sufficient to construct \(|K\times N|\) colorings of \(D\). As in Figure 4, take semi-arcs \(\alpha_{i}\) and \(\beta_{i}\) in \(D\), and denote by \(\varepsilon_{i}\in\{\pm 1\}\) the sign of the crossing between \(\alpha_{i}\) and \(\beta_{i}\). For \((g,h)\in K\times N\), we define \(\mathcal{C}_{g,h}(\alpha_{i})\) to be \((h^{\sum_{j=1}^{i-1}\varepsilon_{j}}g,h)\in X=K\times N\). Since every \(\beta_{i}\) lies on the same link component, \(\mathcal{C}_{g,h}(\beta_{i})=(h^{n_{i}}g,h)\) for some \(n_{i}\in\mathbb{Z}\). Hence, we can easily check that \(\mathcal{C}_{g,h}\) defines an \(X\)-coloring, as required.

As a special case, let \(K=N=\mathbb{Z}/2\). For \(k_{1},k_{2},k_{3}\in\mathbb{Z}/2\), let us define a map \(\phi_{k_{1},k_{2},k_{3}}:X\times X\to\mathbb{Z}/2\) by setting

\[\phi_{k_{1},k_{2},k_{3}}((x,a),(y,b))=k_{1}a+k_{2}b+k_{3}ab.\]

Then, by direct computation, it is not hard to show the following:

**Proposition 7.2**.: _The map is a symmetric birack 2-cocycle with Property FR, and is \(f\)-link homotopic. Furthermore, if a framed link diagram \(D\) is FR-equivalent to a knot diagram of framing zero, then the symmetric birack 2-cocycle invariant is trivial._

However, unfortunately, the author has not yet found new examples of framed link diagrams that are not FR-equivalent to any knot diagram of framing zero. We end this paper by giving problems as future work.

**Problem 7.3**.: As applications of the propositions above, find 3-manifolds that are not the result of surgery on any knot of framing zero. Establish criteria stronger than the propositions above, which are applicable to many framed link diagrams.

Figure 4: Semi-arcs \(\alpha_{i}\) and \(\beta_{i}\) in the knot diagram \(D\).
2303.03640
AHPA: Adaptive Horizontal Pod Autoscaling Systems on Alibaba Cloud Container Service for Kubernetes
The existing resource allocation policy for application instances in Kubernetes cannot dynamically adjust according to the requirement of business, which would cause an enormous waste of resources during fluctuations. Moreover, the emergence of new cloud services puts higher resource management requirements. This paper discusses horizontal POD resources management in Alibaba Cloud Container Services with a newly deployed AI algorithm framework named AHPA -- the adaptive horizontal pod auto-scaling system. Based on a robust decomposition forecasting algorithm and performance training model, AHPA offers an optimal pod number adjustment plan that could reduce POD resources and maintain business stability. Since being deployed in April 2021, this system has expanded to multiple customer scenarios, including logistics, social networks, AI audio and video, e-commerce, etc. Compared with the previous algorithms, AHPA solves the elastic lag problem, increasing CPU usage by 10% and reducing resource cost by more than 20%. In addition, AHPA can automatically perform flexible planning according to the predicted business volume without manual intervention, significantly saving operation and maintenance costs.
Zhiqiang Zhou, Chaoli Zhang, Lingna Ma, Jing Gu, Huajie Qian, Qingsong Wen, Liang Sun, Peng Li, Zhimin Tang
2023-03-07T04:04:23Z
http://arxiv.org/abs/2303.03640v1
# AHPA: Adaptive Horizontal Pod Autoscaling Systems on Alibaba Cloud Container Service for Kubernetes

###### Abstract

The existing resource allocation policy for application instances in Kubernetes cannot dynamically adjust to the requirements of the business, which causes an enormous waste of resources during fluctuations. Moreover, the emergence of new cloud services puts forward higher resource management requirements. This paper discusses horizontal pod resource management in Alibaba Cloud Container Services with a newly deployed AI algorithm framework named AHPA - the adaptive horizontal pod auto-scaling system. Based on a robust decomposition forecasting algorithm and a performance training model, AHPA offers an optimal pod number adjustment plan that reduces pod resources while maintaining business stability. Since being deployed in April 2021, this system has expanded to multiple customer scenarios, including logistics, social networks, AI audio and video, e-commerce, etc. Compared with the previous algorithms, AHPA solves the elastic lag problem, increasing CPU usage by 10% and reducing resource cost by more than 20%. In addition, AHPA can automatically perform flexible planning according to the predicted business volume without manual intervention, significantly saving operation and maintenance costs.

\({}^{1}\)DAMO Academy, Alibaba Group, Hangzhou, China \({}^{2}\)DAMO Academy, Alibaba Group, Bellevue, WA, USA \({}^{3}\)Alibaba Cloud, Alibaba Group, Hangzhou, China {zhouzhiqiang.zzq, chaoli.zcl, malingna.mln, zibai.gj, h.qian, qingsong.wen, liang.sun, yuanyi.lp, zhimin.tangzm}@alibaba-inc.com

## Introduction

The continuous development of cloud computing technology provides more possibilities for today's online computer services, and users also have higher expectations for cloud resilience. Furthermore, with the development of container services in cloud-native technologies [16, 1], more and more new types of applications based on container services have emerged. Unlike applications in the Virtual Machine [1] era, where minute-level manual operation was enough, newly emerging applications usually require second-level operations. Meanwhile, many fast-developing applications show noticeable cyclical fluctuations [15, 16, 17, 18], such as Internet broadcasting, e-learning, and online gaming. This kind of application repeatedly goes through peaks and valleys in business demand, so flexible resource utilization with low latency is required. Another main kind of new application is serverless computing [1, 1], which allows clients to build and run services without thinking about servers. It is challenging to elastically manage resources for such applications, given the difficulty of dealing with cold starts, low latency, and scaling efficiency. Thus, proactive autoscaling is popular [19, 18, 17]. Both the maturity of cloud technology and the emergence of new businesses have driven the pursuit of efficient resource utilization. More specifically, new application characteristics call for a unique design of auto-scaling, which allows adjustment of application instances to improve utilization under the cloud-native concept. Kubernetes (k8s) [10] is the primary open-source container orchestration system for application/software deployment and management, maintained by the Cloud Native Computing Foundation. Scalability is one of the core requirements of Kubernetes. Auto-scaling [1, 17, 16] is a necessary feature of the Kubernetes platform to secure scalability.
Auto-scaling saves time, prevents performance bottlenecks, and avoids resource waste with appropriate configurations. Currently, there are three conventional ways to manage the number of application instances in Kubernetes, whose process is shown in Fig. 1: a fixed number of instances, HPA [14], and CronHPA1. Unlike the fixed number of instances, which does not change when demand fluctuates, HPA and CronHPA can adjust the number of instances according to the change in demand. However, they have shortcomings in efficiency, simplicity, and accuracy, and none of these methods can absorb demand fluctuations elastically. In more detail, the policy with a fixed number of instances is the easiest to implement and is widely used. Meanwhile, it also has the significant disadvantage of wasting resources in the valleys of business demand. Compared with the static method, the HPA method adjusts the number of instances only after the demand varies. Hence, its response to fluctuations in demand is lagging, which may lead to ineffective processing and poor quality of service, or, even worse, the application being terminated. In addition, the CronHPA policy requires expert experience to manually set up the scaling schedule, which might be inaccurate, inflexible, and a massive cost in human resources. To address the shortcomings of the existing solutions mentioned above, we designed and deployed a new system named the Adaptive Horizontal Pod Auto-scaling System (AHPA) in the product of Alibaba Cloud Container Service for Kubernetes (ACK2) that supports much better predictive auto-scaling. AHPA solves the problem that existing methods cannot dynamically adjust pod resources by using a decomposition method to accurately predict the next phase of the business. At the same time, AHPA learns the mapping from business workload to the number of required pods and provides the final scaling action plan, unlike CronHPA, which requires manual intervention. The automated model makes AHPA easy to deploy and scale up, and saves significant operation and maintenance costs. Footnote 2: [https://www.alibabacloud.com/product/kubernetes](https://www.alibabacloud.com/product/kubernetes) The AHPA system has been deployed throughout Alibaba Cloud Service since April 2021. It has significantly improved elastic resource management compared to the previous algorithms used in Cloud Service. More specifically, in the scenario of intelligent voice, AHPA is currently called more than 5000 times per day. It saves about ten thousand pod-minutes of pod resource cost daily, directly contributing to more than 28% daily cost savings compared to the original approach. ## Application Description In this section, we introduce the architecture of the AHPA system, as illustrated in Fig. 2. It consists of two main parts: elastic metrics and elastic objects. In elastic metrics, various metrics are provided to satisfy the different demands of clients, including resource metrics (for instance, CPU and memory), custom metrics (for instance, query rate per second and response time), and external metrics (for instance, message queue length). A proactive prediction module is used to predict future demand, and a downgrade protection module is designed to guarantee stability; together, the intelligent elasticity module outputs the resource prediction. With the prediction results, the elastic objects module allocates the required number of pods via application deployment. 
The primary purpose of elastic resource management is to save costs for the client while satisfying the stability requirement of the service, and to reduce the human cost of operation and maintenance. Designed for the requirements mentioned above, our new algorithm AHPA has three typical features, so it can meet both the elasticity demand and the stability requirement: * Stability: AHPA is performed under the condition that the stability of client services is guaranteed. * Zero cost in operation and maintenance: no additional operation or maintenance is needed, including no extra controllers and more concise configurations than HPA. * Serverless application feasibility: AHPA provides resources at the pod level without considering the usage rate at the node level, which supports the long-term running of applications.

Figure 1: The procedure of resource management in Kubernetes with three conventional ways (Fixed, HPA, and CronHPA).

Figure 2: The architecture of the AHPA system.

## Use of AI Technology The use of AI technology is mainly focused on the following three modules in the adaptive horizontal pod auto-scaling framework, as illustrated in Fig. 3: future workload forecasting, performance model training, and scaling plan generation. In the following, we introduce each component in detail.

Figure 3: The framework of the scaling algorithm in AHPA.

### Future Workload Forecasting The first module, named future workload forecasting, heavily relies on AI technology and plays an essential role in the AHPA framework. Accurate prediction of future workload significantly improves the optimality of the horizontal pod plan and hence earns more profit from cost savings in the system. However, the forecasting process still faces several complex challenges in the Alibaba Cloud Service: * Missing values and noisy data: there are many possible causes for this issue. For example, when some nodes of the cloud service distributed system are damaged, or some accidents happen during the interaction between the user and the service system, valid data will be missing. Besides, if the collected data is too far from the current time, it may affect forecasting accuracy. Therefore, setting a threshold to judge data validity is a necessary procedure. There is also a need for an appropriate method to fill in the missing data and normalize the data scale in some situations. * Limited data scale: metrics data in Kubernetes is generally stored with Prometheus. As a compromise between cost and efficiency, the typical business data storage period is 7 days. The 7-day data volume is too small as a training set, and a machine/deep learning model trained on it usually has poor accuracy. How to effectively estimate the future business volume with a limited amount of historical data is worth discussing. * High complexity: in general, user demands change frequently, which significantly increases the complexity of the data. For instance, the data may have complicated characteristics such as multiple cycles. Therefore, sophisticated data places higher requirements on the algorithm's ability to make accurate predictions. We design a robust decomposition-based statistical method as the main forecasting scheme to address the above challenges and meet strict forecast latency requirements. Specifically, we adopt our three previously published robust time series decomposition algorithms [22, 23] as the preprocessing steps, as shown in Fig. 4.

Figure 4: The robust time series decomposition for the forecasting module in AHPA. 
Firstly, RobustPeriod [22], based on MODWT (maximal overlap discrete wavelet transform) to decouple multiple periodicities, is utilized to detect whether the input time series is periodic and what its period lengths are. According to the period detection result, there are two ways to deal with different data characteristics. For periodic data, RobustSTL [22, 23] is adopted to decompose the input time series into trend, seasonality (periodic component), and residual terms. For non-periodic data, RobustTrend [22] is adopted to decompose the input time series into trend and residual terms. Mathematically, the above robust forecasting module decomposes the time series data into the trend item \(\tau_{t}\), the period item \(s_{i,t}\) (if it is periodic), and the residual item \(r_{t}\), and the formula is as follows: \[y_{t}=\tau_{t}+\sum_{i=1}^{m}s_{i,t}+r_{t},\quad t=0,1,\ldots,N-1 \tag{1}\] The historical period item \(s_{i,t}\) is directly shifted to the right as the prediction of the future period item. The trend term \(\tau_{t}\) is predicted by a classical time series model such as exponential smoothing to obtain the prediction of the future trend component. Finally, the residual part uses a quantile regression forest to get an upper-bound prediction of the future residuals. The combination of the above three items leads to the final predicted value \(y_{t+1}\) at the next moment: \[y_{t+1}=\sum_{i=1}^{m}s_{i,t}+\text{ExponentialSmoothing}(\tau_{t})+\text{QuantileRegression}(r_{t}) \tag{2}\] In addition to the robust decomposition-based forecasting schemes introduced above, we also consider state-of-the-art transformer-based deep learning forecasting models [22] for scenarios with enough data. Specifically, we include our recently developed FEDformer [23] model, which is suitable for long-term time series prediction scenarios, and our recently developed Quadformer [22] model, which is mainly designed for data characteristics with complicated periodical patterns. In the near future, we will continue working on the whole scheme and add more forecasting methods that fit different situations.
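To make the decomposition-based forecast of Eqs. (1)-(2) concrete, the following is a minimal sketch of its three steps. Since RobustPeriod, RobustSTL, and RobustTrend are in-house algorithms, the sketch substitutes the robust STL from statsmodels for the decomposition and a gradient-boosted quantile regressor for the quantile regression forest; the period is assumed to be known, and the function name is ours.

```python
# Minimal sketch of Eqs. (1)-(2): decompose, then forecast each component.
# Substitutions (assumptions): statsmodels STL for RobustSTL, a gradient-boosted
# quantile regressor for the quantile regression forest; period assumed known.
import numpy as np
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from sklearn.ensemble import GradientBoostingRegressor

def forecast_next(y: np.ndarray, period: int, q: float = 0.9) -> float:
    # Eq. (1): y_t = trend + seasonal + residual (single period, m = 1)
    parts = STL(y, period=period, robust=True).fit()
    # seasonal term: shift the last observed cycle one step to the right
    seasonal_next = parts.seasonal[-period]
    # trend term: classical exponential smoothing extrapolation
    trend_next = SimpleExpSmoothing(parts.trend).fit().forecast(1)[0]
    # residual term: upper-quantile prediction over the time index
    t = np.arange(len(y)).reshape(-1, 1)
    qr = GradientBoostingRegressor(loss="quantile", alpha=q).fit(t, parts.resid)
    resid_next = qr.predict(np.array([[len(y)]]))[0]
    return seasonal_next + trend_next + resid_next  # Eq. (2)
```

In the full scheme, period detection would run first, and non-periodic series would fall back to a trend-plus-residual decomposition, as described above.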
### Performance Model Training After getting the forecast volume values, the second module, called the performance model, is used to simulate the relationship between the indicator metric and the number of pods. In this section, due to the reality of some specific businesses, we assume that pods are equivalent computing units. The model mainly adopts methods from queuing theory in operations research, including two kinds of queuing models. One is the linear relation using parallel M/M/1 queues, and the other is M/M/c queues with a public buffer pod. In the pod resource utilization queuing model, we regard the business QPS (Queries Per Second) as the arrival rate of the system and the number of pods as the number of service desks \(c\) (in the M/M/1 queues, each pod can be considered a private desk, and all desks have the same processing rate), and \(u\) is the average service rate of the service desks, which can be expressed in terms of pod CPU. Based on these metrics, we aim to find the average waiting time per customer, which corresponds to the RT (response time) per request. The average RT under the different queue models can be formalized as follows: \[\begin{split}\text{AvgRT}(M/M/1)&=\left(u-\frac{QPS}{N}\right)^{-1}+\text{other latency}\\ \text{AvgRT}(M/M/c)&=f(QPS,u,N)+\text{other latency}\end{split} \tag{3}\] where \(u\) denotes the queries per second that can be processed by one pod, and \(f\) is given by Erlang's C formula [10] and Little's Law [12]. In general, we should find the minimum number of pods that satisfies the requirement on the average RT. For the linear M/M/1 model, \(pod=C*QPS\); for the nonlinear M/M/c queue model, \(pod=g(QPS)\). When the number of adjustable pods is small, the M/M/c model performs better; when more pods are available, the M/M/1 queue model outperforms. Therefore, different models can be selected according to different requirements in actual deployments. In summary, this performance training module takes the predicted values of the previous forecasting module and historical indicator data as input and outputs the number of pods that need to be adjusted. ### Scaling Plan Generation Our system generates the final scaling decisions from the future workload forecasting results and the performance model. The scaling plan includes the number of pods and the times at which they should be added or removed. First, we forecast the number of pods required to satisfy clients' requirements on certain metrics (e.g., RT or CPU usage rate). However, there is an overhead time for pods to start, which means the system faces a time-delay problem when adding resources, and the deployment throughput is limited. Facing the constraints mentioned above, we adopt the improved forecasting shift algorithm [13] to handle this challenge. A simple example is shown in Fig. 5. Assume the pending time (including pod start-up and so on) is 1 minute; Fig. 5(a) shows that if the needed pods are scaled precisely in real time as required, the actual number of available pods will be delayed. Therefore, the pod scaling actions should be done in advance, as Fig. 5(b) shows. However, for the sake of stability, frequent actions are not a good choice. Generally, the operating frequency limit is set to at most one action every 3 or 5 minutes. Thus, how to combine the scheduling plans within the corresponding interval is also a question worth exploring.

Figure 5: An example of the scaling action plan.
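The sketch below ties the two steps together: sizing pods with the parallel M/M/1 formula from Eq. (3), then shifting the per-minute plan earlier by the pending time and merging actions within each operating interval, as in Fig. 5. The function names, the default pending time, and the merge-by-maximum policy are illustrative assumptions rather than the exact production logic.

```python
import math

def pods_mm1(qps: float, u: float, rt_target: float, other_latency: float = 0.0) -> int:
    """Smallest N with AvgRT = 1/(u - qps/N) + other_latency <= rt_target (Eq. 3),
    where u is the queries per second that one pod can process."""
    budget = rt_target - other_latency          # time budget left for queueing
    if budget <= 0 or u <= 1.0 / budget:
        raise ValueError("target RT is unreachable at this service rate")
    # 1/(u - qps/N) <= budget  <=>  N >= qps / (u - 1/budget)
    return max(1, math.ceil(qps / (u - 1.0 / budget)))

def scaling_plan(required: list[int], pending_min: int = 1, interval_min: int = 5) -> list[int]:
    """Shift the per-minute requirement earlier by the pod start-up (pending) time,
    then merge the actions inside each operating interval by taking the maximum,
    so pods are ready before demand arrives while actions stay infrequent."""
    shifted = required[pending_min:] + required[-1:] * pending_min
    return [max(shifted[i:i + interval_min]) for i in range(0, len(shifted), interval_min)]

# e.g. forecast QPS per minute -> pods per minute -> one action per 5 minutes
forecast_qps = [120, 180, 260, 300, 240, 200, 180, 150, 140, 130]
per_minute = [pods_mm1(q, u=10.0, rt_target=0.5) for q in forecast_qps]
print(scaling_plan(per_minute))
```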
## Experimental Evaluation Proper validation of the AHPA system is challenging, as these models are designed to adjust the horizontal pod schedule for different deployments, and it is not possible to apply different models to one application at the same time. We therefore use real historical data collected from several deployments in Alibaba Cloud Container Service for Kubernetes (ACK) version 1.20.11 with the Aliyun Linux/CentOS operating system. The time series data length is 20160, i.e., one data point per minute for 14 days. The numerical results compared with the classical HPA algorithm and the FixPod policy are shown in the following subsections. As illustrated in Fig. 6, the datasets **NP/WP/SP** denote three scenarios with time series data of no periodicity, weak periodicity, and strong periodicity, respectively. As for the **NoisyData** dataset, many outliers appear due to occasional network jitter caused by underlying network link reconstruction in the system. As mentioned above, missing values are common accidents due to damage in nodes or monitoring systems; we test the framework using the **MissingData** dataset, which drops all the data points of one randomly chosen day. Finally, we also consider another common challenging situation: the trend of the time series changes, mostly caused by a new application version release. Hence, we evaluate the performance of the AHPA framework on the **TrendChange** dataset.

Figure 6: Typical signal series sampled from six actual datasets.

Since all models in a cloud container system aim to minimize the cost of satisfying the quality of service (QoS), we consider the following three metrics for evaluating these models: cost, violation rate (VR), and max pod number. The violation rate (**VR**) is used to evaluate the QoS of each model, i.e., for a specific metric target such as keeping CPU usage below \(50\%\), VR is computed as the time during which CPU usage is greater than \(50\%\) divided by the whole time length. The **cost** is the total number of pods provided over time, computed as the integral of the number of pods over time (in minutes). According to the service-level agreement (SLA), the contract between a service provider and its customers that documents what services the provider will furnish and defines the service standards the provider is obligated to meet, the violation rate (**VR**) plays the most crucial role among all the criteria in the cloud service. ### Comparison Experiment In this subsection, we consider the performance of the AHPA framework under different kinds of complex data, such as time series data with strong periodicity, weak periodicity, or no periodicity. From the results shown in Table 1, we can observe that the FixPod policy provides a zero-violation strategy but with a massive waste of pod resources in all cases. However, comparing the HPA and AHPA frameworks, the results demonstrate that AHPA achieves a much better QoS schedule plan than HPA with comparable costs. The advantage of AHPA is further highlighted when applied to time series with a stronger periodical property.

\begin{table} \begin{tabular}{c||c c c|c c c|c c c} \hline DataSet & \multicolumn{3}{c|}{**NP**} & \multicolumn{3}{c|}{**WP**} & \multicolumn{3}{c}{**SP**} \\ \hline Metric & FixPod & HPA & AHPA & FixPod & HPA & AHPA & FixPod & HPA & AHPA \\ \hline Cost & 48830 & 19657 & 22063 & 604680 & 447096 & 495541 & 201340 & 77464 & 92930 \\ \hline VR & 0 & 0.0047 & 0.0045 & 0 & 0.3975 & 0.0695 & 0 & 0.2950 & 0.0266 \\ \hline Max Pod & 10 & 10 & 11 & 60 & 60 & 67 & 20 & 17 & 18 \\ \hline \end{tabular} \end{table} Table 1: Comparison of cost/VR/max pod under different data structures.

In Fig. 7, the green and orange lines represent the number of pods suggested by the AHPA and HPA methods, and the blue line stands for the optimal number of pods given the real workload and the target metric value (e.g., keeping CPU usage under 50%). We can clearly observe that the adjustment from HPA is always lagging, which can lead to a poor service experience at the peaks of the workload. Moreover, from the comparison between the green and orange lines, we see that the green line is much smoother than the orange line, which shows another advantage of the AHPA algorithm: the smoother the pod number is, the less cost is spent on operations such as expanding and shrinking pod resources.

Figure 7: Comparison of the number of pods.

### Robustness Experiment In this subsection, we evaluate the robustness of the AHPA framework under the complex situations that arise in real cloud services. Here we run the algorithm on actual data with noise, outliers, missing values, and abrupt trend changes. From the results in Table 2, we can observe that the violation rate of the HPA framework is around \(30\%\), which means that almost one-third of the time the QoS is worse than customers' expectations. This is almost \(1000\%\) higher than that of AHPA.

\begin{table} \begin{tabular}{c||c c|c c|c c} \hline DataSet & \multicolumn{2}{c|}{**NoisyData**} & \multicolumn{2}{c|}{**MissingData**} & \multicolumn{2}{c}{**TrendChange**} \\ \hline Metric & HPA & AHPA & HPA & AHPA & HPA & AHPA \\ \hline Cost & 77590 & 93011 & 75966 & 92519 & 77464 & 92930 \\ \hline VR & 0.2958 & 0.0276 & 0.2934 & 0.0307 & 0.2950 & 0.0266 \\ \hline Max Pod & 31 & 28 & 17 & 18 & 17 & 18 \\ \hline \end{tabular} \end{table} Table 2: Comparison of robustness tests. 
Besides, from the results of the AHPA framework, we conclude that AHPA performs robustly in terms of both violation rate and cost saving under noisy data, missing values, and trend-changing situations. ## Application Use and Payoff The AHPA system has been deployed across Alibaba Cloud Services since April 2021 and has been promoted to many different cloud service scenarios since March 2022. Many related services have used it to manage elastic pod resources. Up to now, AHPA has been implemented in multiple customer scenarios, covering logistics, social networking, AI audio and video, e-commerce, online education, sports live+, and retail. Most such applications need to handle the challenges of real-time high performance, low latency, and large, periodic business load fluctuations. In this section, we discuss the impact of the AHPA system, taking two applications as examples. For online education, ACK with the AHPA system provides services with the following features: minute-level deployment, industry-leading reliability (a commitment of 99.999% availability for individual instances), easy management, and high scalability (scaling according to real-time demands). For live streaming in e-commerce, our system provides services with fast deployment at low cost and ultra-low latency (2000+ nodes with a bandwidth of 150 Tbps, bringing streaming latency below 2 seconds). After customer business applications were deployed with the AHPA system, the elastic lag problem was considerably eased, with CPU usage increasing by 10% and resource costs reduced by more than 20%. At present, our AHPA algorithm has obvious advantages over other models. In addition, it can automatically perform flexible planning according to the changing trend of business volume without manual intervention, which significantly saves operation and maintenance costs. It is hard to quantify the saved human effort in operation and maintenance numerically, but the automated AHPA algorithm greatly liberates the workforce and makes the entire system more efficient and universal. Specifically, for example, in the scenario of intelligent voice, AHPA is currently called around six thousand times a day and performs about 1,000 dispatches. Compared with the original management method, AHPA increases CPU usage by about 9% and saves about ten thousand units (pods * minutes) of daily pod resource cost. With the help of AHPA, the AI voice business saves around 28% of the original cost. ## Application Development and Deployment Section _Use of AI Technology_ introduced the details of the AI technology used in the algorithm part of the AHPA system. This section shows how this algorithm is developed and deployed in the real cloud container system. We first review the main components of AHPA (Fig. 8) and then show the details of how it works in the whole Kubernetes system (Fig. 9). ### System Framework As shown in Fig. 8, the framework of AHPA contains three main parts: Data Collection, Prediction, and Scaling. The main technologies and details of the Prediction part are introduced in section _Use of AI Technology_. More specifically, Preprocessing corresponds to the Data Preprocessing module in Fig. 3, the scaling part contains the Workload Forecasting and Performance Model Training modules in Fig. 3, and the Revise Module consists of the Scale Plan Generation module in Fig. 3.

Figure 8: The framework of AHPA. 
Before Prediction, the Data Collection module collects data from various sources and transforms it into a unified form. Collected data consists of resource metrics (CPU, memory, etc.), custom metrics (QPS, RT, etc.), and other external metrics. The data sources include Prometheus (an open-source system monitoring and alerting toolkit, which stores metrics information as time series data), Metrics Server (a scalable, efficient source of container resource metrics for K8s), Log Service (a complete real-time data logging service developed by Alibaba Group), and other custom monitoring platforms. After that, the Adapter module transforms the different metrics into a unified form. The final module of AHPA is Scaling, which scales pods according to the resource estimation results. Two scaling mechanisms are provided: Auto and Observer. Auto scales the number of pods according to the estimation results; Observer is a dry-run mode provided to clients to monitor whether AHPA works as expected. ### System Deployment AHPA is deployed on Alibaba Cloud Container Service for Kubernetes (ACK) in the Go programming language. The deployment architecture of AHPA is shown in Fig. 9. There are two main components: AHPA Algorithm Deployment and AHPA Controller Deployment.

Figure 9: The deployed architecture of AHPA in Kubernetes.

A CustomResourceDefinition (CRD) is introduced to deploy the pod scaling. The CRD is flexible for different configurations. Its core parameters are as follows (a configuration sketch is given after the list): * scaleTargetRef: required, the deployment (or other specific object) to scale. * metrics: required, the scaling metrics, e.g., CPU, Memory, RT, QPS, GPU, etc. * averageUtilization: required, the threshold of the target metric. For instance, 40 means the utilization rate of the CPU cannot be larger than 40%. * scaleStrategy: optional, the scaling mechanism, Auto or Observer. Auto means pod scaling is deployed, and Observer means not deploying pod scaling and only observing whether AHPA works as expected. * maxReplicas: required, the maximum number of instances when scaling. * minReplicas: required, the minimum number of instances when scaling. * instanceBounds: optional, time bounds of scaling, including start time and end time. * cron: optional, settings for timed tasks.
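As an illustration, the parameters above can be assembled into a custom resource; the sketch below builds the body one would hand to the Kubernetes Python client. The apiVersion/kind strings and the metric layout are our assumptions, since the paper does not print a full manifest.

```python
# Hypothetical AHPA custom resource body (field names follow the list above;
# the group/version and kind strings are assumptions, not documented values).
ahpa_resource = {
    "apiVersion": "autoscaling.alibabacloud.com/v1beta1",  # assumed
    "kind": "AdvancedHorizontalPodAutoscaler",             # assumed
    "metadata": {"name": "demo-ahpa", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "demo-app"},
        "metrics": [{"name": "cpu", "averageUtilization": 40}],  # CPU must stay under 40%
        "scaleStrategy": "observer",  # dry-run first; switch to "auto" once trusted
        "maxReplicas": 100,
        "minReplicas": 2,
        "instanceBounds": [{"startTime": "00:00", "endTime": "23:59"}],
        "cron": [],  # optional timed tasks
    },
}
# With the official client this would be submitted via
# kubernetes.client.CustomObjectsApi().create_namespaced_custom_object(...).
```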
Unique designs are also applied in the deployment to ensure the **high availability** of the AHPA service. Although anomalies always happen in complex systems, our goal is to provide high availability. When failures happen in pods, the failed pod is killed, and a new pod is created. Moreover, as the number of applications or services increases, both the algorithm deployment part and the controller deployment part can scale horizontally to satisfy the requirement of high concurrency. To ensure that business tasks are not aware of AHPA updates, the Algorithm and Controller communicate through a service, and they can be updated independently. The update rolls through the pods, meaning the old pods are killed only when the newly created pods work as expected. Abundant **visualization** components are provided to help clients monitor the state of AHPA, including Kubernetes Event Center, Prometheus, Log Service, Dashboard, etc. Visualization examples of the Dashboard and Log Service are shown in Fig. 10 and Fig. 11, respectively.

Figure 10: An example of visualization in the dashboard of AHPA.

From the panel interface of the Dashboard (shown in Fig. 10), critical metrics, including workload, CPU, and memory, can be monitored in real time. In addition, Log Service (shown in Fig. 11) provides information on every scaling action, including the time of the scaling action, the mode (dry-run or not) of the scaling action, the number of actual pods, pod number limitations, etc. All of these help clients monitor whether AHPA works as expected and locate the cause of unexpected failures in a timely manner. ## Maintenance As time goes by, changes in the data flow characteristics require dynamic tuning of the hyper-parameters of our models. However, as a whole, AHPA does not require much post-launch operation and maintenance. In the workload forecasting module, due to the self-learning ability of deep learning models, after new data is added there is no need to make many modifications to the original model; it can be fine-tuned or re-trained in response to the new data. Furthermore, in the following two modules, the final pod number adjustment scheme can also be adjusted according to the actual situation and does not involve model modification. The core idea of the entire algorithm is to predict the business as accurately as possible and deal with fluctuations in advance to save resources. Therefore, after AHPA is launched, no significant modifications are necessary. In our practice, only some model parameters have needed fine-tuning since the launch in April 2021. ## Conclusions and Future Work In this paper, we present our improved framework AHPA (Adaptive Horizontal Pod Auto-scaling) for better resource management in Alibaba Cloud Container Service for Kubernetes, which saves resources while maintaining user experience. This platform has spread widely throughout Alibaba Cloud Services and has been deployed in various business scenarios not restricted to ACK container control. By accurately predicting the business volume at the next moment and learning the mapping relationship from historical data, AHPA addresses the challenge of saving resources while ensuring business stability. The core of the AHPA algorithm uses the robust decomposition-based time series forecasting module and queuing theory for performance modeling. Since its deployment in April 2021, AHPA has helped over 20 different business scenarios in Alibaba Cloud Services to solve the elastic lag problem and has significantly increased CPU efficiency by 10%. 
Besides, AHPA has demonstrated a significant advantage in that it can automatically produce the system's planning without any manual intervention. Further plans to extend AHPA to other parts of Alibaba Cloud have been scheduled. In subsequent work to adapt to different business scenarios, we will investigate how to abstract the mapping relationships using appropriate theories other than queuing theory, according to the detailed requirements. Furthermore, with the vigorous development of cloud services, more and more large-scale demands are constantly emerging. How to provide a framework with a higher degree of adaptability while ensuring the efficiency of each specific business scenario is also our goal.
2310.03778
Lightweight Boosting Models for User Response Prediction Using Adversarial Validation
The ACM RecSys Challenge 2023, organized by ShareChat, aims to predict the probability of the app being installed. This paper describes the lightweight solution to this challenge. We formulate the task as a user response prediction task. For rapid prototyping for the task, we propose a lightweight solution including the following steps: 1) using adversarial validation, we effectively eliminate uninformative features from a dataset; 2) to address noisy continuous features and categorical features with a large number of unique values, we employ feature engineering techniques; 3) we leverage Gradient Boosted Decision Trees (GBDT) for their exceptional performance and scalability. The experiments show that a single LightGBM model, without additional ensembling, performs quite well. Our team achieved ninth place in the challenge with the final leaderboard score of 6.059065. Code for our approach can be found here: https://github.com/choco9966/recsys-challenge-2023.
Hyeonwoo Kim, Wonsung Lee
2023-10-05T13:57:05Z
http://arxiv.org/abs/2310.03778v1
# Lightweight Boosting Models for User Response Prediction Using Adversarial Validation ###### Abstract. The ACM RecSys Challenge 2023, organized by ShareChat, aims to predict the probability of the app being installed. This paper describes the lightweight solution to this challenge. We formulate the task as a user response prediction task. For rapid prototyping for the task, we propose a lightweight solution including the following steps: 1) using adversarial validation, we effectively eliminate uninformative features from a dataset; 2) to address noisy continuous features and categorical features with a large number of unique values, we employ feature engineering techniques; 3) we leverage Gradient Boosted Decision Trees (GBDT) for their exceptional performance and scalability. The experiments show that a single LightGBM model, without additional ensembling, performs quite well. Our team achieved ninth place in the challenge with the final leaderboard score of 6.059065. Code for our approach can be found here: [https://github.com/choco9966/recsys-challenge-2023](https://github.com/choco9966/recsys-challenge-2023). ACM RecSys Challenge 2023, User Response Prediction, Adversarial Validation, Gradient Boosting Decision Trees, CTR Prediction 
[MISSING_PAGE_POST]

features effectively; 2) we perform feature engineering to deal with noisy continuous features and categorical features with high cardinality; 3) we utilize GBDT due to their superior performance and scalability. Our contributions are as follows. First, we provide a lightweight solution that combines adversarial validation and a set of feature engineering techniques. Second, we show that even a single LightGBM model without additional ensembling performs quite well in user response prediction, ranking ninth in the leaderboard. ## 2. Related Work ### User Response Prediction Early studies that tackled user response prediction (mainly CTR prediction) include the following significant works: (Grover et al., 2016; Krizhevsky et al., 2015; Krizhevsky et al., 2016). Google (Krizhevsky et al., 2016) presented a regularized LR model based on an FTRL-Proximal algorithm for massive-scale sponsored search advertising. They explored several memory-saving guidelines essential for building an industrial-scale CTR prediction system, such as bloom filters, fewer-bit encoding, and training data sub-sampling. Another work (Grover et al., 2016) published by Facebook employed a combination of decision trees and LR. The boosted decision tree performs non-linear supervised feature encoding and feeds the encoded features as input to LR. They also stressed the importance of training data sub-sampling, negative down-sampling, and model calibration in handling large-scale real-world data. Field-aware FM (Krizhevsky et al., 2016), which is a variant of FM (Krizhevsky et al., 2016), also showed an impressive performance, demonstrating the critical importance of capturing predictive feature interactions. Recent years have seen a growth of interest in the adoption of deep learning-based models for CTR prediction. Due to the superior ability of representation learning with multiple levels of abstraction, the use of deep neural networks (DNN) in industrial-scale CTR prediction is gradually becoming the industry standard (Grover et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016). The Deep & Cross Network (DCN) (Krizhevsky et al., 2016) adopts a memory-efficient cross network that explicitly learns predictive cross features without manual feature engineering. This simple yet effective architecture with significantly fewer parameters outperforms other baselines, including DNN, FM, and LR. 
Alibaba proposed a Deep Interest Network (DIN) (Krizhevsky et al., 2016), which utilizes a local activation unit, similar to an attention mechanism, to dynamically learn the representation of user interests based on their past behaviors related to a specific advertisement. Deep Multifaceted Transformers (DMT) (Grover et al., 2016), which models multiple types of behaviors (e.g., click, cart, and order) with multiple Transformers (Krizhevsky et al., 2016), has been successfully deployed to JD.com, one of the largest E-commerce sites in the world. DMT also shows that properly adopting multi-task learning that recognizes the user's different objectives can significantly improve real-world online performance metrics such as CTR, Conversion Rate (CVR), and Gross Merchandise Volume (GMV). ### Competitive Data Science Data science competitions have evolved from a niche community of passionate participants to a widely popular platform attracting millions of data scientists worldwide (Beng et al., 2016). In 1997, the KDD Cup, one of the earliest data science competitions, was held. In 2006, the Netflix Prize (Beng et al., 2016) was held, significantly impacting the development of personalized recommender systems and collaborative filtering. Since then, various data science competitions focusing on personalized recommendations and user response prediction have been held at academic conferences and competitive data science platforms. We observed a wide variety of methodologies being used for personalized predictions in competitions, from TF-IDF to advanced deep learning architectures. Surprisingly, several top-ranking solutions utilized tree-based models. For example, in 2017, 2018, 2019, 2020, and 2022 ACM RecSys challenges1, the winning solutions utilized GBDT such as LightGBM (He et al., 2019), CatBoost (Cheng et al., 2019), and XGBoost (Cheng et al., 2020) with substantial feature engineering. Also, the winning solution2 of the H&M Personalized Fashion Recommendations competition, recently held at Kaggle, took advantage of different kinds of decision tree models (LightGBM and CatBoost) as a ranker module. Although deep learning has achieved tremendous success in the computer vision and natural language processing domains, it is known that tree-based methods often outperform neural networks in tabular data prediction tasks, especially when dealing with skewed distributions, heavy-tailed feature distributions, and dataset irregularities (Cheng et al., 2019; Chen et al., 2020; Chen et al., 2020). Footnote 1: [https://recsys.acm.org/challenges](https://recsys.acm.org/challenges) Footnote 2: [https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/324070](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/324070) ## 3. Problem Formulation ### Dataset Description The data available for the challenge was provided by ShareChat3, India's largest homegrown social media company with over 400M MAUs across all its platforms. The dataset consists of 10M users who visited the ShareChat and Moj apps over three months. The organizers preprocessed the dataset to have ten impressions for each user. The objective of the challenge is to predict the probability of the app being installed. Each row of the dataset consists of user and ad features. The train data consists of subsampled history from the past 22 days, and the target variable is the probability of the app being installed on the 23rd day. 
Footnote 3: [https://sharechat.com/recsys2023](https://sharechat.com/recsys2023) ### Evaluation The objective of the challenge is to predict whether the user will install the app for a given ad impression. The more accurately the probability is estimated, the higher the expected revenue of the platform. The metric used in the challenge was Normalized Cross Entropy (NCE): \[NCE=\frac{-\frac{1}{N}\sum_{i=1}^{N}(\frac{1+y_{i}}{2}\log p_{i}+\frac{1-y_{i} }{2}\log\left(1-p_{i}\right))}{-(p\log p+(1-p)\log\left(1-p\right))}, \tag{1}\] where \(N\), \(p_{i}\), \(p\), and \(y_{i}\in\{-1,1\}\) are the size of the dataset, the estimated probability of the app being installed, the average empirical probability of installation, and the label, respectively. NCE is calculated by dividing the average log loss per impression by the average log loss per impression that would occur if the model predicted the background CTR (in our case, CVR) for every impression. In other words, NCE is the predictive log loss normalized by the entropy of the empirical probability of installation. The lower the NCE value, the better the model performance. In our experiments, we use log loss as a proxy evaluation metric for local validation because we cannot access the ground-truth empirical CVR, \(p\). ## 4. Data Preparation & Preprocessing ### Local Validation Strategy The dataset includes temporal information as the \(f_{1}\) variable, which indicates the date of each row. We aim to predict \(p_{i}\) as accurately as possible for the test data with \(f_{1}=67\). Therefore, to reflect recent temporal trends and to make the validation set's distribution mimic the test set's distribution, we select the data with \(f_{1}=66\) as a local validation set. The entire procedure of the pipeline is shown in Fig. 1. ### Adversarial Validation In our preliminary experiments, we observed a severe discrepancy between the local validation score and the leaderboard score. In particular, this phenomenon was amplified when certain variables were added as features. This overfitting issue seems to be due to a covariate shift, i.e., a discrepancy between the distributions of the train and test data. To identify and address the problem, we employ an adversarial validation approach, which can be used to detect and address the covariate shift. This approach involves training a binary classifier to determine whether a sample belongs to the training or test set. When the classifier's performance is close to random guessing (\(\mathsf{AUC}=0.5\)), it suggests that distinguishing between the training and test sets is challenging, indicating that their distributions are relatively consistent. On the other hand, if the classifier performs significantly better than random guessing, it suggests a discrepancy between the distributions of the training and test sets. We train an adversarial classifier for each feature to detect potential features that exhibit the covariate shift between train and test data. As shown in Fig. 2, some variables have very high \(\mathsf{AUC}\) scores, and these variables seem to show the covariate shift that may cause the gap between validation performance and test performance. Therefore, we can exclude these features from model training to reduce the inconsistency between the local validation and the leaderboard. ## 5. Feature Engineering Based on the analysis in 4.2, we filter out the variables with \(\mathsf{AUC}\geq 0.75\) (indicating a potential covariate shift). 
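A minimal sketch of this per-feature adversarial validation follows, assuming pandas DataFrames `train` and `test` that share the challenge's numerically encoded feature columns; the variable and function names are ours.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

def adversarial_auc(train: pd.DataFrame, test: pd.DataFrame, feature: str) -> float:
    # label each row by its origin: 0 = train, 1 = test
    X = pd.concat([train[[feature]], test[[feature]]], ignore_index=True)
    y = np.r_[np.zeros(len(train)), np.ones(len(test))]
    clf = LGBMClassifier(n_estimators=100)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

# features whose origin is easy to predict are likely shifted, so drop them
features = [c for c in train.columns if c.startswith("f_")]
shifted = [f for f in features if adversarial_auc(train, test, f) >= 0.75]
```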
Next, we encode categorical features into informative continuous values to deal with high cardinality and facilitate effective tree splits for GBDT models. In the following paragraphs, we introduce how to deal with noisy continuous features and how to encode categorical features. Please refer to our source code4 for other details of feature engineering.

Figure 1. Flow chart of our lightweight pipeline

Figure 2. AUC metrics of each feature in adversarial validation for detecting a covariate shift.

### Dealing with Noisy Continuous Features Fig. 3 shows that some continuous variables exhibit a block structure indicating a strong correlation with each other. Based on this observation, we found a pattern presumed to be artificial noise injection. Specifically, some variables represent an arithmetic sequence of unique values. More concretely, given the ordered _unique_ values \(\{v_{1},v_{2},\ldots,v_{M}\}\) for a particular feature, where \(v_{n-1}<v_{n}\) for \(n=2,\ldots,M\), the general formula for the \(n\)-th term is as follows: \(v_{n}=v_{1}+(n-1)\Delta\). For example, the variable groups \((f_{42},f_{52}\sim f_{57},f_{74}\sim f_{76})\) and \((f_{44}\sim f_{50},f_{71}\sim f_{73})\) have \(\Delta\simeq 0.0385\) and \(\Delta\simeq 0.5711\), respectively. We denoise these feature values by dividing by the corresponding \(\Delta\), converting the data type from float to integer. The transformed integer features are expected to be more descriptive and informative when crossing features.

Figure 3. Pairwise correlation plot for continuous features: \(f_{42}\sim f_{79}\).

### Encoding Categorical Features The dataset includes many categorical features with high cardinality. We use frequency encoding and CatBoost encoding to avoid the curse of dimensionality. Frequency encoding, a.k.a. count encoding, replaces a category value with its number of occurrences for a given categorical feature. This simple strategy can be effective when the frequency is somewhat related to the target variable. We compared the following three alternatives for frequency encoding: (1) the number of occurrences during the previous day, (2) the number of occurrences during the previous week, and (3) the number of occurrences up to one day ago. We selected the second one due to its highest performance. Next, we conduct CatBoost encoding, a target encoding variation that supports time-aware encoding and regularization to avoid over-fitting. Similar to a sliding window, the categories of the target day were transformed after fitting the encoder on the data up to the previous day. The operations were performed for the two target variables, click and install, respectively. ## 6. Models We employ GBDT as our prediction model. Specifically, we use LightGBM, which has been proven to show state-of-the-art performance in many previous competitions (Krizhevsky et al., 2017; Krizhevsky et al., 2018). We also tested other baseline models known to perform well in predicting tabular data, such as TabNet (Beng et al., 2016) and CatBoost. However, LightGBM was more robust and performed the best in our experiments. The tuned hyperparameters of LightGBM are as follows: _number of leaves = \(491\), max depth = \(-1\), boosting type_ = gbdt, and _number of iterations = \(10,000\)_ with _early stopping rounds = \(100\)_. Note that we did not apply an ensemble technique that blends the prediction results of several different models because we focus on a lightweight solution for rapid prototyping. 
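Before turning to the experiments, the sketch below strings the pieces of Sections 5 and 6 together on a pandas DataFrame `df` that is assumed to hold the challenge columns (a date column `f_1` and an `is_installed` label); the helper names and the illustrated column choices are ours, not the exact competition code.

```python
import pandas as pd
import lightgbm as lgb

def denoise(df: pd.DataFrame, cols: list[str], delta: float) -> pd.DataFrame:
    # Section 5.1: divide out the arithmetic step and cast float -> integer
    for c in cols:
        df[c] = (df[c] / delta).round().astype("Int64")
    return df

def freq_encode_prev_week(df: pd.DataFrame, col: str) -> pd.Series:
    # Section 5.2, option (2): occurrences of each category in the previous 7 days
    out = pd.Series(0.0, index=df.index)
    for day in sorted(df["f_1"].unique()):
        counts = df.loc[(df["f_1"] >= day - 7) & (df["f_1"] < day), col].value_counts()
        mask = df["f_1"] == day
        out[mask] = df.loc[mask, col].map(counts).fillna(0).to_numpy()
    return out

df = denoise(df, [f"f_{i}" for i in (42, *range(52, 58), *range(74, 77))], 0.0385)
df = denoise(df, [f"f_{i}" for i in (*range(44, 51), *range(71, 74))], 0.5711)
df["f_2_freq"] = freq_encode_prev_week(df, "f_2")   # f_2 chosen only for illustration

train, valid = df[df["f_1"] < 66], df[df["f_1"] == 66]  # Section 4.1 split
feats = [c for c in df.columns if c not in ("f_1", "is_clicked", "is_installed")]
model = lgb.LGBMClassifier(num_leaves=491, max_depth=-1, n_estimators=10_000)
model.fit(train[feats], train["is_installed"],
          eval_set=[(valid[feats], valid["is_installed"])],
          eval_metric="binary_logloss",
          callbacks=[lgb.early_stopping(100)])
```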
## 7. Experiments To evaluate the effectiveness of our method, we conduct detailed experiments and illustrate the results in Table 1. By incorporating validated features, we can observe an improvement in the performance of our model. It is worth noting that CatBoost reported a good performance on local validation but a poor performance on the leaderboard. Although both LightGBM and CatBoost belong to the GBDT family, in our study LightGBM showed a much more robust performance than CatBoost. Fig. 4 shows the top 20 important features according to the LightGBM feature importance measure, based on the number of times each feature is used in tree splitting. All experiments were conducted on virtual machines with 64 vCPUs, 120GB RAM, and an NVIDIA RTX 3090 GPU.

\begin{table} \begin{tabular}{l c c} \hline \hline Model & Local Validation Score & Leaderboard Score \\ \hline (1) Vanilla LightGBM & 0.375506 & 6.466970 \\ (2) = (1) + Frequency Encoding & 0.368485 & 6.264767 \\ (3) = (2) + Denoising Features & 0.361400 & 6.218796 \\ (4) = (3) + CatBoost Encoding + Additional Feature Engineering & 0.361309 & 6.071975 \\ (5) Final LightGBM = (4) + Hyperparameter Tuning & **0.360991** & **6.059065** \\ (6) CatBoost & 0.359850 & 6.106765 \\ (7) TabNet & 0.369832 & 6.221323 \\ \hline \hline \end{tabular} \end{table} Table 1. Performance of the prediction models. The best-performing model is denoted in bold.

Figure 4. LightGBM feature importance plot based on the number of tree splits.

## 8. Conclusion This paper presents our approach to the ACM RecSys Challenge 2023. Our solution comprises adversarial validation, feature engineering, and a prediction model. In the adversarial validation phase, we detect and exclude potential features exhibiting the covariate shift. Next, in the feature engineering phase, we focused on denoising noisy continuous features and transforming categorical features into informative continuous values by performing frequency or CatBoost encoding. We employ LightGBM as our primary prediction model with tuned hyperparameters, and the final model showed excellent performance on the leaderboard even without an additional ensemble method. The proposed lightweight solution is simple yet effective for the following reasons: 1) adversarial validation can filter out non-informative features in the early stage, and 2) it does not apply the ensemble method that often requires much effort to find the optimal configuration. We believe our solution can be used for rapid prototyping for CTR/CVR prediction tasks.
2305.09785
Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models
Learning vectors that capture the meaning of concepts remains a fundamental challenge. Somewhat surprisingly, perhaps, pre-trained language models have thus far only enabled modest improvements to the quality of such concept embeddings. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons. First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are furthermore confirmed in a clustering task and in the downstream task of ontology completion.
Na Li, Hanane Kteich, Zied Bouraoui, Steven Schockaert
2023-05-16T20:17:02Z
http://arxiv.org/abs/2305.09785v1
# Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models ###### Abstract. Learning vectors that capture the meaning of concepts remains a fundamental challenge. Somewhat surprisingly, perhaps, pre-trained language models have thus far only enabled modest improvements to the quality of such _concept embeddings_. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons. First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are furthermore confirmed in a clustering task and in the downstream task of ontology completion. word embedding, language models, contrastive learning, common-sense knowledge
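As a minimal illustration of the mention-averaging baseline described in the abstract, the sketch below averages the contextualised sub-token vectors of a concept over its mentions. The encoder choice, the exact-match token heuristic, and all names are assumptions for illustration, not the authors' pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased").eval()

def concept_embedding(concept: str, sentences: list[str]) -> torch.Tensor:
    """Average the contextualised vectors of `concept` over its mentions.
    Assumes each sentence contains the concept with identical tokenisation."""
    target = tok(concept, add_special_tokens=False)["input_ids"]
    mention_vecs = []
    for s in sentences:
        inputs = tok(s, return_tensors="pt")
        with torch.no_grad():
            hidden = enc(**inputs).last_hidden_state[0]        # (seq_len, dim)
        ids = inputs["input_ids"][0].tolist()
        for i in range(len(ids) - len(target) + 1):            # locate the mention
            if ids[i:i + len(target)] == target:
                mention_vecs.append(hidden[i:i + len(target)].mean(dim=0))
                break
    return torch.stack(mention_vecs).mean(dim=0)

vec = concept_embedding("strawberry", ["I picked a ripe strawberry this morning."])
```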
... that concept embeddings can similarly benefit from such strategies. Second, and more fundamentally, contextualised word vectors capture not only information about the meaning of words but also about their syntactic role and other characteristics of the sentences in which they appear (Sen et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). If we are interested in modelling the meaning of concepts, it thus seems beneficial to specialise the contextualised word vectors towards this aspect. Ideally, two contextualised word vectors should be similar if the corresponding sentences express similar properties, and dissimilar otherwise. This key idea is illustrated in the following example.

Example 1. _Consider the following sentences1:_

Footnote 1: All sentences were taken from GenericsKB (Bang et al., 2019).

(i) _Submarines_ can hide under the water.
(ii) Some _submarines_ run on diesel engines.
(iii) Some _sharks_ live at the bottom of deep underwater canyons.
(iv) _Trucks_ are used to transport people or things; they use fuel known as diesel.

_We would like the contextualised representation of_ submarines _in sentence (i) to be similar to the contextualised representation of_ sharks _in sentence (iii), as both sentences assert that the target concept has the property of being underwater. Similarly, we would like the representation of_ submarines _in sentence (ii) to be similar to the representation of_ trucks _in sentence (iv)._

If we are able to learn contextualised word vectors that focus on the semantic properties that are expressed in a given sentence, we should be able to learn high-quality concept embeddings by averaging these contextualised representations across different sentences. In this paper, we propose and analyse a number of strategies based on contrastive learning to address the two aforementioned issues.
Contrastive learning has already been successfully used for alleviating the anisotropy of BERT-based word and sentence embeddings (Han et al., 2017; Wang et al., 2018), based on the idea that embeddings of corrupted inputs should be similar to embeddings of the original word or sentence. Different from these approaches, our motivation for using contrastive learning is to move contextualised word vectors that capture similar semantic properties closer together, while vectors capturing different properties are pushed further apart. Crucially, to implement this idea, we need examples of sentences that express similar properties. We propose two strategies for identifying such sentences. Our first strategy is purely unsupervised. The main idea is to rely on the neighbourhood structure of standard contextualised word vectors. First note that when obtaining contextualised word vectors, we mask the target concept, following (Wang et al., 2018). This ensures that contextualised word vectors reflect the sentence context of the given concept, rather than any prior knowledge about the concept that is captured by the language model itself. Now suppose we have a contextualised representation of _submarine_, and we look for the most similar contextualised word vectors, across a given corpus. Since the target concept is masked, these vectors may correspond to different words. Suppose, for instance, that they correspond to the words _car, truck_ and _airplane_. Then we can intuitively assume that the given sentence expresses the property of being a vehicle. Based on this idea, we can identify sentences that are likely to express the same property. Our second strategy uses a form of distant supervision, using knowledge about the common-sense properties of concepts from ConceptNet (Wang et al., 2018). For example, ConceptNet contains the triple (_gun,HasProperty,dangerous_). Given this triple, if a sentence contains both the words _gun_ and _dangerous_, we assume it expresses that guns are dangerous. For each property encoded in ConceptNet, we can thus find sentences which express that the target concept has that property. This, in particular, allows us to find sentences that express the same property. We experimentally compare the concept embeddings that are obtained with the two aforementioned strategies. We are specifically interested in the extent to which different kinds of semantic properties can be predicted from these embeddings. We also evaluate our embeddings in a clustering task and an ontology completion task (Wang et al., 2018; Wang et al., 2019). For both strategies, we find that our concept embeddings consistently outperform existing models by a substantial margin. ## 2. Related Work The use of pre-trained language models for generating static word embeddings has already been extensively explored. A popular strategy is to aggregate the contextualised representation of a word w across a number of sentences mentioning this word (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Several variations of this strategy have been studied, which mostly differ in how the contextualised representation of w is computed. It is common to use the representation from the final layer of the transformer model or to average the representations from the final four layers, while Vulic et al. (Vulic et al., 2019) suggested averaging the first \(k\) layers, with the optimal \(k\) depending on the task. For words that consist of multiple tokens, the representations of these tokens are typically averaged. 
To aggregate the contextualised representations of a given word w across multiple sentences, the most common strategy is to simply average them. Ethayarajh (Ethayarajh, 2019) instead proposed to take the first principal component, which produces almost the same result, given that the contextualised vectors are all located in a very narrow cone. In this paper, we build on the approach from Li et al. (Li et al., 2019), which masks the target word w and uses the contextualised representation of the mask token; this approach is discussed in more detail in the next section. Beyond averaging-based strategies, some approaches have been inspired by Word2Vec (Wang et al., 2019) or GloVe (Yang et al., 2019), relying on BERT to obtain context embeddings (Han et al., 2017; Wang et al., 2019), or to generate synthetic co-occurrence counts (Han et al., 2017). Instead of relying on words in context, some approaches simply feed the word w to the language model. Bommasani et al. (Bommasani et al., 2018) found this to perform poorly with pre-trained models. However, better results were reported by Vulic et al. (Vulic et al., 2019), after fine-tuning the BERT encoder on synonymy and antonymy pairs. Gajbhiye et al. (Gajbhiye et al., 2019) jointly fine-tuned a BERT encoder for concepts and an encoder for properties, using hypernyms from Microsoft Concept Graph (Gajbhiye et al., 2019) and sentences from GenericsKB (Bang et al., 2019) as training data. MirrorBERT (Wang et al., 2019) is a BERT encoder for both words and sentences, which is trained in a fully self-supervised way. It uses dropout to generate different variants of the same input, and then fine-tunes BERT such that these variants are closer to each other than to encodings of other inputs. The resulting encoder can generate high-quality word vectors, again without needing sentences mentioning the word in context. MirrorWiC (Vulic et al., 2019) can be seen as an adaptation of the MirrorBERT strategy to words in context. In particular, given a sentence \(s\) mentioning some word \(w\), multiple encodings of \(w\) are obtained by (i) randomly masking different spans in \(s\) and (ii) using dropout. The model then encourages different encodings of the same sentence to be closer to each other than to encodings obtained from different sentences (even if the target word \(w\) is the same). The aforementioned approaches have been developed with different tasks in mind. While word similarity benchmarks remain a popular choice for evaluating word vectors, Li et al. (Li et al., 2019) and Gajbhiye et al. (Gajbhiye et al., 2020) were specifically interested in predicting the commonsense properties of concepts, while Liu et al. (Liu et al., 2020) focused on word sense disambiguation. Accordingly, some of these approaches have complementary strengths. For instance, the model from Li et al. (Li et al., 2019) outperformed the baselines on concept categorisation tasks, but under-performed in word similarity. In terms of downstream applications, since the introduction of BERT, word embeddings have primarily been used in settings where word meaning has to be modelled in the absence of any sentence context. For instance, word embeddings have been used to estimate class prototypes for few-shot learning, e.g. in image classification (Zhu et al., 2019; Li et al., 2019; Li et al., 2019) and for slot tagging in dialogue systems (Liu et al., 2020).
In (Li et al., 2019), word vector similarity was used to set an adaptive margin, as part of a margin-based model for few-shot image classification, to capture the idea that image classes with similar labels can be harder to differentiate. Word embeddings have also been used for modelling label dependencies in multi-label classification (Zhu et al., 2019). Furthermore, word vectors have been used for ontology engineering tasks, e.g. for aligning ontologies (Zhu et al., 2019) or for inferring plausible rules (Liu et al., 2020). In such applications, what matters is that concepts with similar word vectors have similar properties. We will focus on ontology completion in more detail in Section 5.3. In other applications, what matters is rather that clusters of word vectors are semantically coherent, e.g. when using word vectors for learning taxonomies (Zhu et al., 2019; Li et al., 2019) or for topic modelling (Li et al., 2019; Li et al., 2019; Li et al., 2019). Word vectors are much easier to train than language models, and can thus more easily be adapted. This advantage has been exploited to learn personal word embeddings, as part of a system for personalised search (Li et al., 2019), or for studying how word meaning changes over time (Li et al., 2019). Finally, some authors have found that even for tasks where we need to model the meaning of words in context, using static word vectors can sometimes be beneficial (Li et al., 2019; Li et al., 2019; Li et al., 2019).

## 3. Distilling Concept Embeddings

In this section, we recall the concept embedding strategy from Li et al. (Li et al., 2019), which uses a pre-trained BERT model. The aim of our paper is to analyse how better concept embeddings can be obtained by instead relying on a suitably fine-tuned BERT model. Our proposed fine-tuning strategies will be the focus of Section 4. Let \(s_{1},...,s_{n}\) be sentences in which some concept \(c\) is mentioned. To obtain a vector representation of \(c\) from the sentence \(s_{i}\), Li et al. (Li et al., 2019) replace \(c\) by the _<mask>_ token and take the final-layer contextualised representation of this token, using a BERT-based language model. By masking the concept \(c\), the resulting vector intuitively captures what the sentence \(s_{i}\) reveals about the meaning of \(c\), rather than any prior knowledge about the meaning of \(c\) that is encoded in the language model itself. They found that this masking strategy improves how well the resulting embeddings capture the semantic properties of concepts. Let \(\mathbf{x_{1}},...,\mathbf{x_{n}}\) be the vectors that are thus obtained from the available sentences. We refer to these vectors as the _mention vectors_ of concept \(c\). We write \(\mu(c)=\{\mathbf{x_{1}},...,\mathbf{x_{n}}\}\) for the set of mention vectors associated with \(c\). An embedding of concept \(c\) can be obtained by averaging these mention vectors:

\[\mathbf{c}=\frac{1}{|\mu(c)|}\sum_{\mathbf{x}\in\mu(c)}\mathbf{x}\]

However, not all sentences are equally informative. Li et al. (Li et al., 2019) in particular highlighted issues that arise when sentences use concepts in idiosyncratic ways. For instance, sentences about the children's song "Mary had a little lamb" are unlikely to be useful for learning a representation of the concept _lamb_. To reduce the impact of such idiosyncratic sentences, they proposed the following filtering strategy.
Let \(V\) be a vocabulary of concepts and let \(M=\bigcup_{v\in V}\mu(v)\) be the set of all mention vectors, across all words in the vocabulary. For each mention vector \(\mathbf{x}\) in \(\mu(c)\), we compute its \(k\) nearest neighbours among the vectors in \(M\). If all \(k\) of these neighbours belong to \(\mu(c)\), \(\mathbf{x}\) is deemed to be idiosyncratic. The embedding of concept \(c\) is then obtained by averaging the remaining mention vectors, after removing the idiosyncratic ones. The underlying intuition is based on the idea that the mention vectors in \(\mu(c)\) capture the properties of \(c\). If all the neighbours of such a mention vector \(\mathbf{x}\) are associated with \(c\), it suggests that the property which is captured by \(\mathbf{x}\) only applies to that concept and is thus unlikely to be important.
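To make this concrete, the following Python sketch shows how a mention vector can be obtained by masking, and how idiosyncratic mention vectors can be filtered before averaging. It is a minimal illustration under our own naming, not the original implementation, and the single-occurrence string replacement is a simplifying assumption.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
encoder = AutoModel.from_pretrained("bert-large-uncased").eval()

def mention_vector(sentence: str, concept: str) -> torch.Tensor:
    # Mask the concept mention and return the final-layer vector of the mask token.
    masked = sentence.replace(concept, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    return hidden[mask_pos]

def concept_embedding(mentions: torch.Tensor, all_vectors: torch.Tensor,
                      all_concepts: list, concept: str, k: int = 50) -> torch.Tensor:
    # Drop mention vectors whose k nearest neighbours in M (assumed to contain
    # the vector itself) all belong to mu(concept), then average the rest.
    kept = []
    for x in mentions:
        sims = F.cosine_similarity(all_vectors, x.unsqueeze(0), dim=-1)
        neighbours = sims.topk(k + 1).indices[1:]  # skip the vector itself
        if any(all_concepts[i] != concept for i in neighbours):
            kept.append(x)
    # Falling back to the unfiltered average when everything is flagged is a
    # choice we make for this sketch only.
    return torch.stack(kept).mean(dim=0) if kept else mentions.mean(dim=0)
```

The number of neighbours \(k\) used for this filtering step in our experiments is specified in Section 5.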
## 4. Contrastive Learning Strategies

Each mention vector in \(\mu(c)\) intuitively encodes what the corresponding sentence reveals about the concept \(c\). It would thus be desirable if two mention vectors were similar if and only if the corresponding sentences reveal similar properties. Unfortunately, this is not always the case, given that contextualised vectors are affected by aspects such as word position, word frequency, and punctuation (Zhu et al., 2019; Li et al., 2019; Li et al., 2019), which are irrelevant to word meaning, as well as the syntactic role of a word (Li et al., 2019; Li et al., 2019), which is only loosely related. Our solution is to fine-tune the mention vectors using a contrastive learning strategy. While contrastive learning is a popular representation learning technique, it is usually applied in a purely unsupervised setting. For instance, to learn sentence embeddings using contrastive learning, one usually trains the model such that embeddings of corrupted versions of the same sentence are similar to each other, and dissimilar from embeddings of other sentences (Li et al., 2019; Li et al., 2019). The same strategy has been used in (Li et al., 2019) for obtaining word embeddings from BERT. While it leads to embeddings that perform well on word similarity benchmarks, as we will see in our experiments, they are less suitable for tasks such as ontology completion, where we need concept embeddings that capture the semantic properties of the corresponding concepts. In contrast to these existing approaches, our strategies will rely on weakly labelled training examples. Each example consists of two sentence-concept pairs, \((s_{1},c_{1})\) and \((s_{2},c_{2})\), where \(c_{i}\) is a concept that is mentioned in sentence \(s_{i}\). For positive training examples, the assumption is that the property that sentence \(s_{1}\) expresses about concept \(c_{1}\) is the same as what sentence \(s_{2}\) expresses about \(c_{2}\). For instance, if we write \(s_{(i)}\) for sentence (i) from Example 1, and similarly for \(s_{(ii)}\) and \(s_{(iii)}\), then \((s_{(i)},\mathit{submarines})\); \((s_{(iii)},\mathit{sharks})\) could be a positive training example, while \((s_{(i)},\mathit{submarines})\); \((s_{(ii)},\mathit{submarines})\) could be a negative example. To implement our strategy, we thus first need to find a way to obtain such weakly labelled training examples. In Section 4.1 we propose two solutions for this problem: an unsupervised strategy which relies on the neighbourhood structure of the mention vectors, and a distantly supervised strategy which is based on ConceptNet. In Section 4.2 we then describe how the resulting training examples can be used for fine-tuning the model.

### 4.1. Constructing Weakly Labelled Examples

We propose two strategies for obtaining weakly labelled training examples. These examples will then be used in Section 4.2 for fine-tuning the mention vectors.

#### 4.1.1. Neighbourhood Structure

Consider sentences (i) and (iii) from Example 1. Even though these sentences express a similar property (i.e. being located under water), the resulting mention vectors are not actually similar, even after masking the target concepts. In fact, this is precisely our motivation for fine-tuning the mention vectors. To discover sentences which are likely to express a similar property, it is thus not sufficient to directly compare the corresponding mention vectors. Let us write \(\phi(s,c)\) for the mention vector which is obtained after masking concept \(c\) in sentence \(s\). Essentially, two mention vectors \(\phi(s_{1},c_{1})\) and \(\phi(s_{2},c_{2})\) are similar if the following two conditions are satisfied for the sentences \(s_{1}\) and \(s_{2}\): (i) they express a similar property about their target concepts (i.e. \(c_{1}\) and \(c_{2}\)) and (ii) they have a similar structure, with \(c_{1}\) and \(c_{2}\) moreover occurring in a similar syntactic role. In particular, if two mention vectors are similar, it is likely that they capture a similar property, even if the converse is not true. This insight can be used to compare the mention vectors \(\phi(s_{1},c_{1})\) and \(\phi(s_{2},c_{2})\) in an indirect way: we obtain the set \(X_{1}\) of mention vectors which are most similar to \(\phi(s_{1},c_{1})\) and the set \(X_{2}\) of mention vectors which are most similar to \(\phi(s_{2},c_{2})\). If the concepts associated with the mention vectors in \(X_{1}\) are broadly the same as the concepts associated with the mention vectors in \(X_{2}\), it intuitively means that the property expressed by the vector \(\phi(s_{1},c_{1})\) applies to the same set of concepts as the property expressed by the vector \(\phi(s_{2},c_{2})\). In such a case, it is likely that \(\phi(s_{1},c_{1})\) and \(\phi(s_{2},c_{2})\) express the same property. We now describe the proposed method more formally. Let \(V\) be the vocabulary of all concepts and let \(M=\bigcup_{c\in V}\mu(c)\) be the set of available mention vectors. In the following, we will assume that \(\mu(c)\cap\mu(d)=\emptyset\) for \(c\neq d\), i.e. we never have the exact same mention vector for different concepts. This assumption simplifies the formulations and is satisfied in practice. In particular, we can then link each mention vector \(\mathbf{x}\in M\) to its unique corresponding concept, which we denote by \(\omega(\mathbf{x})\), i.e. we have \(\omega(\mathbf{x})=c\) if \(\mathbf{x}\in\mu(c)\). For a mention vector \(\mathbf{x}\in M\), we write \(\mathit{neigh}(\mathbf{x})\) for its \(k\) nearest neighbours from \(M\), in terms of cosine similarity. Our central assumption is that when two mention vectors \(\mathbf{x}\) and \(\mathbf{y}\) express a similar property, then the concepts associated with the mention vectors in \(\mathit{neigh}(\mathbf{x})\) and \(\mathit{neigh}(\mathbf{y})\) will be similar.
Formally, we define the compatibility degree \(\pi(\mathbf{x},\mathbf{y})\) between \(\mathbf{x}\) and \(\mathbf{y}\) as follows:

\[\pi(\mathbf{x},\mathbf{y})=\frac{\sum_{c\in V}\min(\mathit{freq}(c,\mathit{neigh}(\mathbf{x})),\mathit{freq}(c,\mathit{neigh}(\mathbf{y})))}{\sum_{c\in V}\max(\mathit{freq}(c,\mathit{neigh}(\mathbf{x})),\mathit{freq}(c,\mathit{neigh}(\mathbf{y})))}\]

where \(\mathit{freq}(c,X)=|\{\mathbf{x}\in X:\omega(\mathbf{x})=c\}|\) is the number of mention vectors in \(X\) that are associated with concept \(c\). The following toy example provides an illustration of how \(\pi(\mathbf{x},\mathbf{y})\) is computed.

**Example 2**: _Figure 1 focuses on mention vectors \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{z}\), along with their \(k=4\) nearest neighbours. While \(\mathbf{x}\) and \(\mathbf{y}\) are not similar, their neighbours correspond to similar words. We have \(\mathit{neigh}(\mathbf{x})=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4}\}\) and \(\mathit{neigh}(\mathbf{y})=\{\mathbf{y}_{1},\mathbf{y}_{2},\mathbf{y}_{3},\mathbf{y}_{4}\}\). We thus find:_

\[\begin{array}{ll}\mathit{freq}(\mathit{diver},\mathit{neigh}(\mathbf{x}))=1&\mathit{freq}(\mathit{diver},\mathit{neigh}(\mathbf{y}))=1\\ \mathit{freq}(\mathit{shark},\mathit{neigh}(\mathbf{x}))=1&\mathit{freq}(\mathit{shark},\mathit{neigh}(\mathbf{y}))=1\\ \mathit{freq}(\mathit{submarine},\mathit{neigh}(\mathbf{x}))=1&\mathit{freq}(\mathit{submarine},\mathit{neigh}(\mathbf{y}))=0\\ \mathit{freq}(\mathit{coral},\mathit{neigh}(\mathbf{x}))=1&\mathit{freq}(\mathit{coral},\mathit{neigh}(\mathbf{y}))=2\end{array}\]

_with the frequencies for all other concepts being 0. We thus obtain:_

\[\pi(\mathbf{x},\mathbf{y})=\frac{1+1+0+1}{1+1+1+2}=\frac{3}{5}\]

_As \(\pi(\mathbf{x},\mathbf{y})\) is rather high, we will aim to move \(\mathbf{x}\) and \(\mathbf{y}\) closer together. In particular, \(\mathbf{x}\) should be closer to \(\mathbf{y}\) than to \(\mathbf{z}\), despite the fact that \(\mathbf{x}\) and \(\mathbf{z}\) correspond to the same word._

Figure 1. Illustration of the neighbourhood-based selection of positive examples.

In the following, we write \(\mathit{Pos}\subseteq(S\times V)\times(S\times V)\) to denote the resulting set of positive examples. Note that the elements of \(\mathit{Pos}\) are pairs of sentence-concept pairs. In particular, we have:

\[\mathit{Pos}=\{((s_{1},c_{1}),(s_{2},c_{2}))\mid\pi(\phi(s_{1},c_{1}),\phi(s_{2},c_{2}))\geq\theta,s_{1}\neq s_{2}\}\]

for some threshold \(\theta>0\).
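The compatibility degree is straightforward to compute once the neighbourhoods are known; the following sketch, with our own data structures, reproduces the value from Example 2.

```python
from collections import Counter

def compatibility(neigh_x, neigh_y):
    # neigh_x, neigh_y: the concepts omega(.) associated with the k nearest
    # neighbours of two mention vectors (a concept may occur several times).
    fx, fy = Counter(neigh_x), Counter(neigh_y)
    concepts = set(fx) | set(fy)
    numerator = sum(min(fx[c], fy[c]) for c in concepts)
    denominator = sum(max(fx[c], fy[c]) for c in concepts)
    return numerator / denominator

# Example 2: the k = 4 neighbours of x and y, listed by associated concept.
neigh_x = ["diver", "shark", "submarine", "coral"]
neigh_y = ["diver", "shark", "coral", "coral"]
assert compatibility(neigh_x, neigh_y) == 3 / 5
```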
#### 4.1.2. Distant Supervision from ConceptNet

We now consider a strategy which uses ConceptNet (Sutton et al., 2017) as a distant supervision signal to identify positive training examples. ConceptNet contains a large number of triples of the form ([concept], _HasProperty_, [property]). We first collected all the concept-property pairs that appear in such triples. We then removed those concept-property pairs for which the property appears for at most two concepts. Let \(T\) be the resulting set of concept-property pairs. For each pair \((c,p)\in T\), we identified all sentences in Wikipedia that mention both the concept \(c\) and the property \(p\). We rely on the simplifying assumption that such sentences express the knowledge that concept \(c\) has property \(p\), similar to the standard assumption underpinning distant supervision strategies for relation extraction (Sutton et al., 2017). Let \(S_{p}\) be the resulting set of sentence-concept pairs for property \(p\), i.e. \((s,c)\in S_{p}\) if sentence \(s\) mentions both the concept \(c\) and some property \(p\) such that \((c,p)\in T\). The set of positive examples is then defined as follows:

\[\mathit{Pos}=\{((s_{1},c_{1}),(s_{2},c_{2}))\mid\exists p\,.\,(s_{1},c_{1})\in S_{p},(s_{2},c_{2})\in S_{p},s_{1}\neq s_{2}\}\]

In other words, \((s_{1},c_{1})\) and \((s_{2},c_{2})\) are treated as a positive example if (i) the sentences \(s_{1}\) and \(s_{2}\) mention the same property \(p\) and (ii) the corresponding target concepts \(c_{1}\) and \(c_{2}\) have \(p\) in ConceptNet.

### 4.2. Fine-tuning Strategies

We now describe how the positive examples that were identified in Section 4.1 can be used for fine-tuning the mention vectors. The most straightforward strategy, which we discuss in Section 4.2.2, is based on fine-tuning the language model itself. The main drawback of this method is that it is computationally expensive. For this reason, in Section 4.2.1 we first discuss a simpler strategy, which simply learns a linear projection of the standard mention vectors.

#### 4.2.1. Projection Method

Our aim is to learn a projection matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) such that the vectors \(\mathbf{A}\phi(s_{1},c_{1})\) and \(\mathbf{A}\phi(s_{2},c_{2})\) are similar iff \(((s_{1},c_{1}),(s_{2},c_{2}))\in\mathit{Pos}\). Here \(n\) is the dimension of the mention vectors while \(m\) is the dimension of the resulting vectors. We can think of \(\mathbf{A}\) as selecting the subspace of the mention vector space that is focused on semantic properties. We use the supervised contrastive loss from Khosla et al. (2016) to learn \(\mathbf{A}\). Let \(B\subseteq S\times V\) be the set of sentence-concept pairs that are considered in a given mini-batch. Let \(X_{(s,c)}=\{(s^{\prime},c^{\prime})\mid((s,c),(s^{\prime},c^{\prime}))\in\mathit{Pos}\cap(B\times B)\}\) be the set of positive examples for \((s,c)\) in the mini-batch. The loss is as follows:

\[\sum_{(s,c)\in B}\frac{-1}{|X_{(s,c)}|}\sum_{(s^{\prime},c^{\prime})\in X_{(s,c)}}\log\frac{e^{\cos(\mathbf{A}\phi(s,c),\mathbf{A}\phi(s^{\prime},c^{\prime}))/\tau}}{\sum_{(s^{\prime\prime},c^{\prime\prime})}e^{\cos(\mathbf{A}\phi(s,c),\mathbf{A}\phi(s^{\prime\prime},c^{\prime\prime}))/\tau}}\]

where the summation in the denominator ranges over \((s^{\prime\prime},c^{\prime\prime})\in B\setminus\{(s,c)\}\), and the temperature \(\tau>0\) is a hyperparameter.

#### 4.2.2. Fine-Tuning BERT

We now consider a variant in which the contrastive loss is used to fine-tune a BERT encoder. This should allow us to learn more informative mention vectors, but at a higher computational cost. Let us write \(\psi(s,c)\) for the encoding of sentence-concept pair \((s,c)\) according to the fine-tuned BERT encoder (to distinguish it from \(\phi\), which uses the pre-trained language model). Let \(B\) and \(X_{(s,c)}\) be defined as before. We use the following loss:

\[\sum_{(s,c)\in B}\frac{-1}{|X_{(s,c)}|}\sum_{(s^{\prime},c^{\prime})\in X_{(s,c)}}\log\frac{e^{\cos(\psi(s,c),\psi(s^{\prime},c^{\prime}))/\tau}}{\sum_{(s^{\prime\prime},c^{\prime\prime})}e^{\cos(\psi(s,c),\psi(s^{\prime\prime},c^{\prime\prime}))/\tau}}\]

where the summation in the denominator ranges over \((s^{\prime\prime},c^{\prime\prime})\in B\setminus\{(s,c)\}\), as before, and \(\tau>0\) is again a hyperparameter.
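Both losses have the same form and differ only in the encoder: \(\mathbf{A}\phi\) for the projection method and \(\psi\) for the fine-tuned BERT variants. A minimal PyTorch sketch of this mini-batch loss is given below; the bookkeeping for the in-batch positives is our own simplification.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z: torch.Tensor, positives: list, tau: float = 0.05):
    # z: (B, m) encodings of the mini-batch, e.g. projected mention vectors
    #    (A @ phi) for the projection method, or fine-tuned BERT encodings psi.
    # positives[i]: in-batch indices j such that ((s_i, c_i), (s_j, c_j)) is in Pos.
    z = F.normalize(z, dim=-1)                 # cosine similarity = dot product
    sim = z @ z.t() / tau
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # denominator excludes the anchor
    log_prob = sim - torch.logsumexp(sim, dim=-1, keepdim=True)
    loss = z.new_zeros(())
    for i, pos in enumerate(positives):
        if pos:                                # anchors without in-batch positives are skipped
            loss = loss - log_prob[i, list(pos)].mean()
    return loss
```

For the projection method, only the projection matrix receives gradients, with the mention vectors precomputed and frozen; in the BERT fine-tuning variant, gradients flow into the encoder itself.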
## 5. Experiments

We present an evaluation of our proposed strategies2. We will in particular focus on the following variants:

Footnote 2: Datasets and code at [http://github.com/fina-luck/semantic_concept_embeddings](http://github.com/fina-luck/semantic_concept_embeddings).

**ConProj**: uses the projection method for fine-tuning and the neighbourhood structure for obtaining positive examples.

**ConFT**: fine-tunes the BERT encoder and uses the neighbourhood structure for obtaining positive examples.

**ConCN**: fine-tunes the BERT encoder and uses the distant supervision strategy based on ConceptNet for obtaining positive examples.

By comparing these variants we are particularly interested in answering the following two research questions: (i) is learning a linear projection sufficient or do we need to fine-tune the language model, and (ii) how effective are the two proposed strategies for obtaining weakly labelled positive examples. The primary focus of our experiments is on word classification benchmarks (Section 5.1), as these allow us to directly evaluate the extent to which our embeddings capture different kinds of semantic properties. This is motivated by the observation that this is precisely what matters in most applications where static concept embeddings are still needed. For instance, tasks such as ontology completion or zero-shot learning directly use concept embeddings to link concepts to their semantic properties. We also evaluate the quality of the clusters that arise from our embeddings (Section 5.2). To verify that the concept embeddings are indeed useful in downstream applications, we present an evaluation on the downstream task of ontology completion (Section 5.3). We conclude with an analysis of the main results (Section 5.4).

_Baselines_. We compare our embeddings with Skip-gram (Skiap and GloVe, 2017) and GloVe (2017), as representative examples of traditional word embeddings3, and with SynGCN4 (Shi et al., 2018) and Word2Sense5 (Kong et al., 2019), as examples of more recent static word embeddings. We furthermore compare with the Numberbatch6 embeddings from Speer et al. (2019), as these were also fine-tuned based on ConceptNet. Beyond traditional word embeddings, we compare with the method from Li et al. (2019), as we use their mention vectors as our starting point. We include two variants: one version where all mention vectors are averaged (_Mask_) and one version where their filtering strategy is applied first (_Mask+filtering_). In addition, we consider a variant in which mention vectors are obtained without masking the target concept (_No-Mask_). In this case, for words that consist of more than one token, the contextualised token representations are averaged. Rather than taking the final layer representation, which has been found to be sub-optimal (Shi et al., 2018), in this case we select the optimal layer based on a validation split. Finally, we include results for MirrorBERT7 (Shi et al., 2018) and MirrorWiC8 (Shi et al., 2018), both of which also use a contrastively fine-tuned BERT model.

Footnote 3: We used the Skip-gram embeddings trained on Google News ([https://code.google.com/archive/p/word2vec](https://code.google.com/archive/p/word2vec)) and GloVe embeddings trained on Common Crawl ([https://nlp.stanford.edu/projects/glove](https://nlp.stanford.edu/projects/glove)).
Footnote 4: [https://drive.google.com/file/id/1v9KgJJlllCnlC-Nc28zbByAGoWnt8RSfHh/view](https://drive.google.com/file/id/1v9KgJJlllCnlC-Nc28zbByAGoWnt8RSfHh/view)

Footnote 5: [https://drive.google.com/file/id/1v9KgJlllCnlC-Nc28zbByAGoWnt18gJllWjview](https://drive.google.com/file/id/1v9KgJlllCnlC-Nc28zbByAGoWnt18gJllWjview)

Footnote 6: [https://conceptnet.s3.amazonaws.com/downloads/2019/numberbatch/numberbatch-en-10.81.txt](https://conceptnet.s3.amazonaws.com/downloads/2019/numberbatch/numberbatch-en-10.81.txt)

Footnote 7: [https://huggingface.co/cambridgelHml/mirror-bert-base-uncased-word](https://huggingface.co/cambridgelHml/mirror-bert-base-uncased-word)

Footnote 8: [https://huggingface.co/cambridgelHml/mirror-bert-base-uncased](https://huggingface.co/cambridgelHml/mirror-bert-base-uncased)

_Training Details_. To obtain mention vectors, for each concept, we randomly sample up to 500 sentences mentioning that concept from Wikipedia. We use the same sentences for our methods, for the baseline methods from Li et al. (2019) and for MirrorWiC. Unless specified otherwise, we use BERT-large-uncased as the pre-trained language model. The learning rate for our models was set to 2e-4, with cosine warm-up for the first 2 epochs. We use early stopping with a patience of 10 and a minimum difference of 1e-10. We used the AdamW optimizer. We set the temperature parameter in the contrastive loss to 0.05 and the number of neighbours \(k\) for evaluating the compatibility degree to 5. The threshold \(\theta\) on compatibility degrees to be considered a positive example was set to 0.5. For implementing the contrastive loss, we relied on the PyTorch Metric Learning library9. Based on the values reported by Li et al. (Li et al., 2020), we set the number of neighbours for the filtering strategy to 50 for X-McRae, WNSS and BabelDomains, and 5 for CSLB, Morrow, BM and AP. The dimension \(m\) of the transformed vectors, for the projection-based fine-tuning method, is 256. For _ConProj_, we obtain the sentence-concept pairs for a given mini-batch by sampling 1024 such pairs from the set \(\mathit{Pos}\). For _ConFT_, we proceed similarly, but limit the number of pairs to 512 due to memory constraints. For _ConCN_, the set of sentence-concept pairs for a given mini-batch is obtained by repeatedly (i) sampling a property \(p\) and (ii) sampling 50 sentences from \(S_{p}\).

Footnote 9: [https://keviningsuerg.github.io/pytorch-metric-learning/](https://keviningsuerg.github.io/pytorch-metric-learning/)

Footnote 10: [https://github.com/mdfheys/physical-commonsense](https://github.com/mdfheys/physical-commonsense)

### 5.1. Word Classification

We consider a number of benchmarks which involve predicting whether a given concept belongs to some class, where the classes of interest correspond to different kinds of semantic properties, namely commonsense properties (e.g. being made of wood), taxonomic categories (e.g. being an animal) and thematic domains (e.g. related to music). We evaluate the extent to which these classes can be predicted from different kinds of concept embeddings. We have included the five benchmarks that were used by Li et al. (Li et al., 2020):

* the extension of the McRae feature norms (McRae, 2019) that was introduced by Forbes et al.
(Forbes et al., 2019) (X-McRae), covering 513 words and 50 classes (being commonsense properties);

* the CSLB Concept Property Norms11,12, with 635 words and 395 classes (being commonsense properties);

Footnote 11: [https://chb.psychol.cam.ac.uk/proonyms](https://chb.psychol.cam.ac.uk/proonyms)

Footnote 12: It should be noted that the annotations in CSLB are not complete, i.e. some properties which are not asserted to hold for a given concept are nonetheless valid (Mirror, 2019). This means that care is needed when drawing conclusions from the absolute performance of models on this dataset. As we are mostly interested in the relative performance of different embeddings in this paper, this should not affect the analysis.

* the Morrow dataset (Mirror, 2019), covering 888 words and 13 classes (being broad taxonomic categories such as _animals_);

* WordNet supersenses13 (WNSS), with 18200 words and 25 classes (being broad taxonomic categories);

Footnote 13: [https://wordnet.princeton.edu/download](https://wordnet.princeton.edu/download)

* BabelDomains14 (BabelDomains, 2019), covering 12477 words and 28 classes (being thematic domains).

Footnote 14: [http://kel.uniroma1.it/babelDomains/](http://kel.uniroma1.it/babelDomains/)

For these datasets, we use the same training-tuning-test splits as Li et al. (Li et al., 2020)15. We also include two additional benchmarks16:

Footnote 16: [https://github.com/watch-a/word-benchmarks](https://github.com/watch-a/word-benchmarks)

* the Battig and Montague norms (Battig and Montague, 2019), with 5321 words and 56 classes (being fine-grained taxonomic categories such as _weapon_ or _unit of time_);

* the dataset from Almuhareb and Poesio (2019), with 402 words and 21 classes (being WordNet hypernyms).

For both datasets, we randomly split the positive examples, for each category, into 60% for training, 20% for tuning and 20% for testing. As these datasets only specify positive examples, for each concept, we generate 5 negative examples by randomly selecting categories to which the concept does not belong.

_Methodology._ For each class, we train a linear SVM to classify concepts based on their embedding. We report the results in terms of F1 score, macro-averaged across all classes from a given benchmark. We furthermore experiment with a simple Convolutional Neural Network (CNN), which takes the individual mention vectors as input, rather than their average. In particular, each mention vector is first fed through a dense layer and the resulting vectors are aggregated using max-pooling. This aggregated vector is then fed to a classification layer. For the SVM, we used the standard scikit-learn implementation. The C parameter is tuned from {0.1, 1, 10, 100}. For the CNN model, we have used the standard PyTorch implementation, setting the kernel size and stride to 1. We used 64 filters with ReLU activation, a batch size of 32 and a learning rate of 1e-3. The CNN is trained with binary cross-entropy, using Adam.
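A minimal sketch of this architecture is as follows (the input dimension matches BERT-large; class and variable names are our own):

```python
import torch
import torch.nn as nn

class MentionCNN(nn.Module):
    # A Conv1d with kernel size and stride 1 amounts to a dense layer applied to
    # each mention vector independently; max-pooling then aggregates over mentions.
    def __init__(self, dim: int = 1024, filters: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(dim, filters, kernel_size=1, stride=1)
        self.out = nn.Linear(filters, 1)

    def forward(self, mentions: torch.Tensor) -> torch.Tensor:
        # mentions: (batch, n_mentions, dim)
        h = torch.relu(self.conv(mentions.transpose(1, 2)))  # (batch, filters, n_mentions)
        pooled = h.max(dim=-1).values                        # max-pool over mentions
        return self.out(pooled).squeeze(-1)                  # one logit per concept

model = MentionCNN()
logits = model(torch.randn(32, 100, 1024))  # a batch of 32 concepts, 100 mentions each
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (32,)).float())
```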
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**X-McRae**} & \multicolumn{2}{c}{**CSLB**} & \multicolumn{2}{c}{**Morrow**} & \multicolumn{2}{c}{**WNSS**} & \multicolumn{2}{c}{**BabelDom**} & \multicolumn{2}{c}{**BM**} & \multicolumn{2}{c}{**AP**} \\ \cline{2-15} & SVM & CNN & SVM & CNN & SVM & CNN & SVM & CNN & SVM & CNN & SVM & CNN & SVM & CNN \\ \hline GloVe & 63.6 & - & 42.7 & - & 57.1 & - & 48.6 & - & 41.9 & - & 59.4 & - & 60.7 & - \\ Skip-Gram & 61.3 & - & 50.2 & - & 64.7 & - & 55.9 & - & 49.3 & - & 60.3 & - & 61.7 & - \\ Word2Sense & 52.3 & - & 50.3 & - & 69.2 & - & 43.9 & - & 32 & - & 63.8 & - & 62.1 & - \\ SynGCN & 56.5 & - & 50.9 & - & 71.4 & - & 42.3 & - & 34.2 & - & 76.2 & - & 75.6 & - \\ Numberbatch & 63.5 & - & 57.8 & - & 71.1 & - & 63.4 & - & 41.5 & - & 80.7 & - & 82.3 & - \\ MirrorBERT & 63.3 & - & 51.6 & - & 69.8 & - & 59.1 & - & 50.3 & - & 79.2 & - & 82.8 & - \\ \hline MirrorWiC & 64.2 & 67.6 & 52.7 & 60.1 & 70.6 & 79.3 & 59.1 & 63.4 & 50.4 & 56.2 & 80.1 & 81.6 & 81.4 & 82.6 \\ No-Mask & 55.9 & 57.3 & 45.6 & 46.8 & 67.5 & 68.2 & 50.9 & 51.8 & 40.3 & 42.4 & 67.2 & 68.4 & 62.5 & 64.1 \\ Mask & 62.8 & 66.8 & 44.8 & 47.2 & 57.8 & 59.3 & 56.5 & 57.3 & 49.3 & 51.1 & 78.6 & 80.1 & 79.3 & 81.9 \\ Mask + filtering & 64.1 & 67.7 & 51.4 & 54.3 & 73.5 & 75.4 & 58.5 & 61.3 & 50.9 & 53.6 & 79.6 & 82.6 & 81.9 & 82.3 \\ \hline ConProj & 66.6 & 69.3 & 53.6 & 61.4 & 75.5 & 81.1 & 63.2 & 65.8 & 54.7 & 58.4 & 80.6 & 82.7 & 82.9 & 83.8 \\ ConFT & 67.4 & 69.8 & 55.7 & 63.6 & 76.9 & 82.4 & 65.7 & 67.2 & 55.8 & 59.6 & 81.1 & 82.9 & 83.3 & 84.2 \\ ConCN & 68.3 & 70.9 & 56.2 & 65.1 & 77.5 & 83.8 & 67.1 & 69.4 & 57.3 & 61.7 & 81.8 & 83.6 & 84.1 & 85.3 \\ ConProj + filt. & 70.1 & 73.2 & 56.3 & 68.8 & 78.8 & 83.7 & 65.2 & 68.6 & 59.1 & 63.9 & 81.2 & 83.3 & 83.4 & 84.6 \\ ConFT + filt. & 71.9 & 74.4 & 57.3 & 69.3 & 78.5 & 86.2 & 67.1 & 69.3 & 60.7 & 64.8 & 82.1 & 83.8 & 84.1 & 85.1 \\ ConCN + filt. & **73.7** & **75.2** & **59.4** & **71.8** & **81.1** & **87.5** & **68.9** & **70.8** & **62.5** & **67.1** & **83.2** & **84.7** & **84.7** & **85.9** \\ \hline \hline \end{tabular} \end{table} Table 1. Results for BERT-large-uncased on the lexical classification tasks, in terms of F1 (%).

_Results_. The results are summarised in Table 1. A number of clear observations can be made. First, all three of the proposed methods (_ConProj_, _ConFT_, _ConCN_) outperform the baselines16. The main exception is CSLB, where Numberbatch outperforms all SVM-based models apart from _ConCN_ with filtering. Among our proposed methods, _ConCN_ performs best, showing the effectiveness of the ConceptNet-based distant supervision strategy, while _ConFT_ outperforms _ConProj_, as expected. As a second observation, the filtering strategy from Li et al. (2019) is highly effective, offering improvements that are complementary to those of our proposed methods. Third, the CNN consistently outperforms the SVM model, with the margin being particularly large for CSLB.

Footnote 16: Note that _MirrorBERT_ and _MirrorWiC_ use BERT-base, whereas our models and those from Li et al. (2019) rely on BERT-large. However, as we will see below, the outperformance of our model remains after changing the encoder to BERT-base. We use BERT-large for the main experiments, as the methods from Li et al. (2019), which are our primary baselines, achieve substantially weaker results for BERT-base.
### 5.2. Clustering

The BM (Bordes and Goyal, 2017) and AP (Bordes and Goyal, 2017) datasets, which we used for word classification, have also been used as clustering benchmarks in previous work (Bordes and Goyal, 2017). Specifically, the aim is to organise the words from the dataset into semantically meaningful clusters. The clusters are evaluated using cluster purity, using the categories which are provided in the dataset as the ground truth. The main aim of this experiment is to analyse the quality of our embeddings in an unsupervised setting, to test their suitability for tasks such as topic modelling (Bordes and Goyal, 2017; Li et al., 2019; Li et al., 2019). We use \(k\)-means to obtain the clusters, choosing \(k\) as the number of categories from the dataset. Since the quality of the clusters is sensitive to the random initialisation of the clusters, we repeat the experiment 10 times and report the average purity. The results are shown in Table 2. As can be seen, our method outperforms all baselines. As for word classification, we can see that ConCN is the best variant and that the filtering strategy consistently improves the results. Among the baselines, the strong performance of Numberbatch is also notable.
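This purity evaluation can be reproduced along the following lines (a sketch; `embeddings` holds one vector per word and `labels` the integer-coded gold categories):

```python
import numpy as np
from sklearn.cluster import KMeans

def average_purity(embeddings: np.ndarray, labels: np.ndarray, n_runs: int = 10) -> float:
    # Purity assigns each cluster its majority ground-truth category and
    # measures the fraction of items consistent with that assignment.
    k = len(np.unique(labels))
    scores = []
    for seed in range(n_runs):  # average over random initialisations
        pred = KMeans(n_clusters=k, random_state=seed).fit_predict(embeddings)
        correct = sum(np.bincount(labels[pred == c]).max()
                      for c in range(k) if (pred == c).any())
        scores.append(correct / len(labels))
    return float(np.mean(scores))
```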
### 5.3. Ontology Completion

An ontology can be viewed as a set of rules. A simple rule takes the following form:

\[A_{1}(x)\wedge\dots\wedge A_{n}(x)\to B(x)\]

It expresses the knowledge that whenever some entity \(x\) belongs to the concepts \(A_{1},...,A_{n}\) then it also belongs to the concept \(B\). In general, rules may also contain constructs of the form \(\exists y\;R(x,y)\wedge A(y)\), which expresses that \(x\) is related, via relation \(R\), to some instance of \(A\). The key principle underpinning the ontology completion benchmarks from (Zhou et al., 2019) is that real-world ontologies often contain sets of closely related rules, which only differ in a single concept. Consider, for instance, an ontology containing the following rules:

\[\textit{AppleJuice}(x)\wedge\textit{Small}(x) \rightarrow\textit{SuitableForKids}(x)\]
\[\textit{PineappleJuice}(x)\wedge\textit{Small}(x) \rightarrow\textit{SuitableForKids}(x)\]
\[\textit{MangoJuice}(x)\wedge\textit{Small}(x) \rightarrow\textit{SuitableForKids}(x)\]

For instance, the first rule intuitively captures the knowledge that a small portion of apple juice is suitable for kids to drink. From these rules, we may infer that the following rule should also be considered valid within the context of this ontology, even if it is not actually provided:

\[\textit{OrangeJuice}(x)\wedge\textit{Small}(x)\rightarrow\textit{SuitableForKids}(x)\]

The underlying principle is that _orange juice_ satisfies all the properties that are common to _apple juice_, _pineapple juice_ and _mango juice_. To infer such plausible rules, we often need to combine prior knowledge about the meaning of the concepts (e.g. that orange juice has similar properties to apple juice and pineapple juice) with the knowledge that is inferred from the structure of the ontology itself (e.g. to deal with concepts whose name is not descriptive). To this end, (Zhou et al., 2019) introduced a graph neural network, in which the nodes correspond to concepts. Concepts that co-occur in the same rule are connected with an edge. The input representation of each node is a pre-trained concept embedding, which was taken to be the skip-gram embedding of the concept name in (Zhou et al., 2019). Ontology completion has a number of practical applications. For instance, apart from suggesting plausible missing knowledge to ontology engineers, the ability to predict plausible rules also plays an important role in ontology alignment (Zhou et al., 2019). Following (Zhou et al., 2019), we use ontology completion benchmarks for evaluating the quality of different types of concept embeddings, using the same methodology. In particular, we first tokenise concept names, based on the common naming conventions in ontologies. For instance, the concept _PastaWithWhiteSauce_ becomes "pasta with white sauce". If the resulting concept name does not appear in Wikipedia, we never predict this concept as a positive example. We use the same hyperparameters and training-test splits as (Zhou et al., 2019), and use their evaluation scripts17. The benchmark includes five different ontologies. First, the SUMO ontology was included as a prototypical example of a large open-domain ontology. The other four are well-known domain-specific ontologies: Wine, Economy, Olympics and Transport18.

Footnote 18: We used the training and test splits from [https://github.com/bzdt/GCN-based-Ontology-Completion](https://github.com/bzdt/GCN-based-Ontology-Completion).

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Clustering**} & \multicolumn{5}{c}{**Ontology Completion**} \\ \cline{2-3} \cline{4-8} & **BM** & **AP** & **Wine** & **Econ** & **Olym** & **Tran** & **SUMO** \\ \hline GloVe & 57.3 & 44.9 & 14.2 & 14.1 & 9.9 & 8.3 & 34.9 \\ Skip-Gram & 46.7 & 32.4 & 13.8 & 13.5 & 8.3 & 7.2 & 33.4 \\ Word2Sense & 25.5 & 16.6 & 13.4 & 13.2 & 8.1 & 7.2 & 33.1 \\ SynGCN & 56.9 & 39.2 & 13.9 & 13.8 & 9.4 & 8.1 & 33.9 \\ Numberbatch & 73.8 & 53.3 & 25.6 & 26.2 & 26.8 & 16.0 & 47.3 \\ MirrorBERT & 62.4 & 51.4 & 22.5 & 23.8 & 20.9 & 12.7 & 40.1 \\ MirrorWiC & 64.6 & 52.5 & 24.7 & 24.9 & 22.1 & 13.9 & 46.9 \\ Mask + filt. & 61.3 & 48.2 & 24.5 & 24.3 & 22.9 & 13.0 & 46.4 \\ \hline ConProj & 75.8 & 54.2 & 26.9 & 27.3 & 25.6 & 15.9 & 48.2 \\ ConFT & 76.1 & 56.9 & 27.5 & 29.2 & 26.5 & 17.4 & 48.6 \\ ConCN & 76.9 & 57.2 & 29.1 & 31.3 & 27.6 & 19.7 & 50.4 \\ ConProj + filt. & 76.3 & 54.9 & 27.2 & 28.6 & 26.2 & 17.1 & 49.3 \\ ConFT + filt. & 76.8 & 57.3 & 28.7 & 30.3 & 28.2 & 19.1 & 50.3 \\ ConCN + filt. & **77.4** & **57.9** & **31.3** & **32.4** & **29.7** & **20.9** & **52.6** \\ \hline \hline \end{tabular} \end{table} Table 2. Results for clustering and ontology completion using BERT-large-uncased. Clustering results are in terms of purity (%) while ontology completion results are in terms of F1 (%).

The results for ontology completion in Table 2 are broadly in line with those from the word classification and clustering experiments. Note in particular how the performance of _ConCN + filt_, our best-performing variant, is substantially higher than that of _Numberbatch_, _MirrorBERT_, _MirrorWiC_ and _Mask + filt_, which in turn substantially outperform the remaining baselines. Overall, these results clearly show that high-quality concept embeddings can be extracted from language models, which have significant benefits over traditional word embeddings. For instance, with the exception of SUMO, all our methods achieve F1 scores which at least double the F1 scores of skip-gram. Moreover, compared to earlier BERT-based methods such as _MirrorBERT_, _MirrorWiC_ and _Mask_, our vectors are more focused on the semantic properties of concepts, which gives them a clear advantage in this task.
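For completeness, the following sketch illustrates how the concept graph that such a GNN operates on can be built from a set of rules (the rule representation is our own simplification):

```python
from itertools import combinations

def build_concept_graph(rules):
    # rules: each rule is given as the set of concept names it mentions.
    # Nodes are concepts; two concepts are linked if they co-occur in a rule.
    nodes = sorted(set().union(*rules))
    edges = set()
    for rule in rules:
        edges.update(combinations(sorted(rule), 2))
    return nodes, edges

rules = [
    {"AppleJuice", "Small", "SuitableForKids"},
    {"PineappleJuice", "Small", "SuitableForKids"},
    {"MangoJuice", "Small", "SuitableForKids"},
]
nodes, edges = build_concept_graph(rules)
# Each node is then initialised with the corresponding concept embedding.
```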
### 5.4. Analysis

We now present some additional analysis of our models, focusing primarily on the results for word classification.

_Outperformance of the CNN_. The CNN is expected to outperform when the semantic properties we need to predict are only rarely mentioned in text. Indeed, such properties will only be captured by a small number of mention vectors, and this information will be largely lost after averaging them. CSLB focuses on commonsense properties, many of which are indeed rarely expressed in text (Krizhevsky et al., 2017), which explains the large outperformance of the CNN model for this benchmark (as well as the comparatively strong performance of Numberbatch) in Table 1. For instance, the categories for which the difference in F1 score between the SVM and CNN models is largest, for _ConCN+filtering_, are as follows: _grows on plants_, _is cool_, _has a top_, _is furry_, _has green leaves_, _is for soup_, _is ridden_, _is a body part_, _is found in America_, _has big eyes_, _has arms_, _has a blade/blades_. For X-McRae, the overall differences are smaller, which can be explained by the fact that several taxonomic properties are included in this dataset as well. However, for many commonsense properties, we similarly observe large differences in F1 score. The largest differences were observed for the following X-McRae properties: _loud_, _used for holding things_, _words on it_, _eaten in summer_, _worn for warmth_, _flies_, _used for killing_, _used for cleaning_, _worn on feet_.

_Comparing Language Models_. Table 3 analyses the impact of changing the language model encoder, showing word classification results for BERT-base-uncased, BERT-large-uncased, RoBERTa-base and RoBERTa-large (Zhu et al., 2018), for the SVM model. We can see that BERT-large and RoBERTa-large outperform the base models, as expected, but the differences for our methods are relatively small. In contrast, for the _No-Mask_, _Mask_ and _Mask+filtering_ baselines, switching to the base models is more detrimental. Across all language models, we find that our proposed methods outperform the baselines.

_Importance of the Compatibility Degree_. For the _ConProj_ and _ConFT_ variants, the set of positive examples is based on the neighbourhood structure of the mention vectors (see Section 4.1.1). Another possibility could be to simply assume that sentences mentioning the same word are likely to express the same property. In other words, we could define the set of positive examples as follows:

\[\mathit{Pos}=\{((s_{1},c),(s_{2},c))\mid s_{1},s_{2}\in S,c\in V,s_{1}\neq s_{2}\}\]

The effectiveness of this alternative strategy is analysed in Table 3, where it is referred to as _W-ConProj_ (when used in combination with the projection-based contrastive loss). While this alternative strategy also outperforms the baselines, it consistently underperforms our main neighbourhood-based strategy.

_Anisotropy_. As mentioned in the introduction, one of the reasons for the underperformance of the _Mask_ embeddings may be related to the high anisotropy of the BERT mention vectors. Figure 2 shows a histogram of the cosine similarities between randomly sampled concept embeddings, for the _Mask_ and _ConCN_ strategies. As can be seen, the cosine similarities are on average lower for _ConCN_, which shows that this contrastive learning strategy has indeed led to a reduction in anisotropy.

_Qualitative Analysis_. We now explore how the mention vectors are affected by the proposed fine-tuning strategy.
**Qualitative Analysis.** We now explore how the mention vectors are affected by the proposed fine-tuning strategy. Specifically, we consider pairs \((s_{1},c_{1})\), \((s_{2},c_{2})\) where the mention vector for \((s_{2},c_{2})\) is among the top-100 nearest neighbours of the mention vector for \((s_{1},c_{1})\) when using _ConCN_, while not being among the top-1000 nearest neighbours when using _Mask_ (for the full set of mention vectors \(M\) across all words). Table 4 contains some examples of such sentence pairs. The examples illustrate how fine-tuning allows the model to identify sentences that express similar properties, even when the sentences themselves are not similar, neither in syntactic structure nor in their overall meaning. In the first example, both sentences express that the target concept (which is masked) is some kind of building. Similarly, in the second example, the sentences express that the target concepts can be black. The third example illustrates a more abstract property, capturing the fact that country-specific versions of the target concept exist. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**BB**} & \multicolumn{2}{c}{**BL**} & \multicolumn{2}{c}{**RB**} & \multicolumn{2}{c}{**RL**} \\ \cline{2-9} & SVM & CNN & SVM & CNN & SVM & CNN & SVM & CNN \\ \hline No-Mask & 49.6 & 51.2 & 55.9 & 57.3 & 50.4 & 52.4 & 53.5 & 57.2 \\ Mask & 53.6 & 57.2 & 62.8 & 66.8 & 52.1 & 54.6 & 63.9 & 67.1 \\ Mask + filtering & 58.2 & 60.3 & 64.1 & 67.7 & 59.5 & 61.8 & 64.8 & 68.1 \\ \hline ConProj & 64.3 & 67.6 & 66.6 & 69.3 & 64.9 & 67.9 & 67.2 & 69.8 \\ ConFT & 65.2 & 68.1 & 67.4 & 69.8 & 65.3 & 68.2 & 67.4 & 70.1 \\ ConCN & 66.4 & 69.5 & 68.3 & 70.9 & 67.2 & 70.0 & 69.6 & 71.3 \\ ConProj + filt. & 66.3 & 68.3 & 70.1 & 73.2 & 67.7 & 69.4 & 70.5 & 73.9 \\ ConFT + filt. & 67.0 & 70.1 & 71.9 & 74.4 & 68.2 & 71.6 & 71.3 & 73.5 \\ ConCN + filt. & **68.3** & **72.5** & **73.7** & **75.2** & **69.1** & **73.3** & **73.9** & **75.8** \\ \hline W-ConProj & 61.2 & 65.9 & 64.9 & 68.7 & 62.1 & 66.2 & 65.9 & 69.2 \\ W-ConProj + filt. & 63.8 & 67.3 & 68.6 & 71.9 & 64.5 & 69.2 & 70.1 & 73.6 \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison of different language models and strategies for selecting positive examples, for X-McRae, in terms of F1 (%). Results are for BERT-base-uncased (BB), BERT-large-uncased (BL), RoBERTa-base (RB) and RoBERTa-large (RL). Finally, Table 5 shows the nearest neighbours of some selected target words, in terms of cosine similarity, for three different concept embeddings: _Numberbatch_, _MirrorBERT_ and _ConCN_ (with filtering). For this analysis, we considered the vocabulary from the WordNet supersenses dataset. A first observation is that the neighbours for _ConCN_ are often taxonomically closer. For instance, for _MirrorBERT_ we see _lemonwood_ as a top neighbour of _lemon_, which is typically related but not taxonomically close. Similarly, for both _Numberbatch_ and _MirrorBERT_ we see _milk_ as the second nearest neighbour of _milkman_. As another difference, for _ConCN_ we can see neighbours which involve some abstraction. For instance, a _button_ has a similar role as an _icon_ in graphical user interfaces. Another notable example is _cow_ as a neighbour of _milkman_, which are both related to the production/delivery of milk. However, this notion of abstraction sometimes also leads to sub-optimal neighbours. For instance, _contradiction_ is shown as one of the top neighbours of _paradox_ for both _Numberbatch_ and _MirrorBERT_ but does not appear as a neighbour for _ConCN_.
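The pair-mining procedure used for the qualitative analysis above (top-100 neighbours under _ConCN_ but outside the top-1000 under _Mask_) can be sketched as follows. This is an illustrative NumPy sketch written for this presentation, assuming two aligned matrices of mention vectors, and is practical only for moderately sized sets of mentions:

```python
import numpy as np

def contrast_pairs(concn: np.ndarray, mask: np.ndarray,
                   k_near: int = 100, k_far: int = 1000):
    """Yield index pairs (a, b) such that b is among the k_near nearest
    neighbours of a under the fine-tuned (ConCN) vectors, but not among
    its k_far nearest neighbours under the pre-trained (Mask) vectors."""
    def neighbour_ranking(m: np.ndarray) -> np.ndarray:
        normed = m / np.linalg.norm(m, axis=1, keepdims=True)
        sims = normed @ normed.T
        np.fill_diagonal(sims, -np.inf)   # ignore self-similarity
        return np.argsort(-sims, axis=1)  # most similar first

    near, far = neighbour_ranking(concn), neighbour_ranking(mask)
    for a in range(len(concn)):
        far_set = set(far[a, :k_far])
        for b in near[a, :k_near]:
            if b not in far_set:
                yield a, int(b)
```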
## 6. Conclusions

We have proposed a method for learning concept embeddings, based on contextualised representations of masked mentions of concepts in a text corpus. Our focus was on improving the contextualised representations that can be obtained from a pre-trained BERT model, using a number of strategies based on contrastive learning. The aim of these strategies is to ensure that two contextualised word embeddings are similar if and only if the corresponding sentences express similar properties. To implement this idea, we need examples of sentences that are likely to express the same property. We have proposed two methods for obtaining such examples: an unsupervised method which relies on the neighbourhood structure of contextualised word vectors, and a distantly supervised method which relies on ConceptNet. In our experimental results, we found the latter method to perform best. Our proposed strategy was also found to outperform a range of baselines, both in word classification experiments and for the task of ontology completion.

###### Acknowledgements. This work was supported by ANR-22-CE23-0002 ERIANA and EPSRC grant EP/V025961/1. Na Li is supported by Shanghai Big Data Management System Engineering Research Center Open Funding.

\begin{table} \begin{tabular}{l l l} \hline \hline & **Word** & **Neighbours** \\ \hline \multirow{10}{*}{Numberbatch} & lemon & citron, citrange, limeade, lime, lemonade \\ & deepening & broadening, deep, strengthening, deepness, worsening \\ & icon & iconobj, symbol, topography, iconoclasm, emblem \\ & stunt & trick, aerialist, jugglery, gimmickry, cartwined \\ & milkman & dairyman, milk, creamery, clabber, llectal \\ & paradox & antinomy, contradiction, duality, oxymoron, inconsistency \\ & desk & office, coprobylder, desktop, bookcase, table \\ & beer & ale, brewery, microbrewery, brewpub, keg \\ & steam & steamout, steamout, steamout, steamfitter, guafician, boiling \\ & razor & razorblade, shaver, blade, scissors, shergner \\ \hline \multirow{10}{*}{MirrorBERT} & lemon & lemonwood, lemonade, orangeade, limeade, dewberry \\ & deepening & deepness, broadening, deep, shallowness, diversification \\ & icon & iconoclast, iconography, iconoclasm, iconology, symbol \\ & stunt & trick, props, joyride, sabotage, leap \\ & milkman & dairyman, milk, aewire, grocer, milkword \\ & paradox & centralisation, ambiguity, perplexity, singularity, unreality \\ & desk & office, clerk, bookcase, counter, receptionist \\ & beer & ale, liquor, rum, hewph, brandy \\ & steam & steamout, boiler, turbine, generator, gasification \\ & razor & razorblade, shaver, scissors, needle, knife \\ \hline \multirow{10}{*}{ConCN} & lemon & lime, blueberry, tangerine, cranberry, lemonade \\ & deepening & weakening, wellbeing, narrowing, depths \\ & icon & button, plaque, emblem, display, iconography \\ & stunt & handstand, gimmickry, skydiver, fing, skydiving \\ & milkman & cheesemonger, dairyman, barmaid, paepepov, cow \\ & paradox & singularity, irony, doublelithus, unreality, perplexity \\ & desk & counter, sideboard, office, bookcase, drawer \\ & beer & mead, ale, vodka, brandy, tequila \\ & steam & electricity, furnace, turbine, vent, gasification \\ & razor & penknife, tool, scalpel, razorblade, shaver \\ \hline \hline \end{tabular} \end{table} Table 5. Nearest neighbours of selected target words, in terms of cosine similarity, for the _Numberbatch_, _MirrorBERT_ and _ConCN_ (with filtering) concept embeddings.
Table 4. The table shows pairs of sentences whose mention vectors are similar when using the model fine-tuned with the _ConCN_ strategy, while being dissimilar when using the pre-trained BERT model.

Figure 2. Histogram of cosine similarities between the embeddings of two randomly sampled concepts, chosen from those appearing in the X-McRae dataset, for the _Mask_ and _ConCN_ strategies.
2306.10784
On the minimum number of arcs in $4$-dicritical oriented graphs
The dichromatic number $\vec{\chi}(D)$ of a digraph $D$ is the minimum number of colours needed to colour the vertices of a digraph such that each colour class induces an acyclic subdigraph. A digraph $D$ is $k$-dicritical if $\vec{\chi}(D) = k$ and each proper subdigraph $H$ of $D$ satisfies $\vec{\chi}(H) < k$. For integers $k$ and $n$, we define $d_k(n)$ (respectively $o_k(n)$) as the minimum number of arcs possible in a $k$-dicritical digraph (respectively oriented graph). Kostochka and Stiebitz have shown that $d_4(n) \geq \frac{10}{3}n -\frac{4}{3}$. They also conjectured that there is a constant $c$ such that $o_k(n) \geq cd_k(n)$ for $k\geq 3$ and $n$ large enough. This conjecture is known to be true for $k=3$ (Aboulker et al.). In this work, we prove that every $4$-dicritical oriented graph on $n$ vertices has at least $(\frac{10}{3}+\frac{1}{51})n-1$ arcs, showing the conjecture for $k=4$. We also characterise exactly the $4$-dicritical digraphs on $n$ vertices with exactly $\frac{10}{3}n -\frac{4}{3}$ arcs.
Frédéric Havet, Lucas Picasarri-Arrieta, Clément Rambaud
2023-06-19T08:58:18Z
http://arxiv.org/abs/2306.10784v2
# On the minimum number of arcs in \(4\)-dicritical oriented graphs ###### Abstract The dichromatic number \(\vec{\chi}(D)\) of a digraph \(D\) is the minimum number of colours needed to colour the vertices of a digraph such that each colour class induces an acyclic subdigraph. A digraph \(D\) is \(k\)-dicritical if \(\vec{\chi}(D)=k\) and each proper subdigraph \(H\) of \(D\) satisfies \(\vec{\chi}(H)<k\). For integers \(k\) and \(n\), we define \(d_{k}(n)\) (respectively \(o_{k}(n)\)) as the minimum number of arcs possible in a \(k\)-dicritical digraph (respectively oriented graph). Kostochka and Stiebitz have shown [9] that \(d_{4}(n)\geqslant\frac{10}{3}n-\frac{4}{3}\). They also conjectured that there is a constant \(c\) such that \(o_{k}(n)\geqslant cd_{k}(n)\) for \(k\geqslant 3\) and \(n\) large enough. This conjecture is known to be true for \(k=3\) (Aboulker et al. [1]). In this work, we prove that every \(4\)-dicritical oriented graph on \(n\) vertices has at least \((\frac{10}{3}+\frac{1}{51})n-1\) arcs, showing the conjecture for \(k=4\). We also characterise exactly the \(4\)-dicritical digraphs on \(n\) vertices with exactly \(\frac{10}{3}n-\frac{4}{3}\) arcs. ## 1 Introduction Let \(G\) be a graph. We denote by \(V(G)\) its vertex set and by \(E(G)\) its edge set; we set \(n(G)=|V(G)|\) and \(m(G)=|E(G)|\). A \(k\)**-colouring** of \(G\) is a function \(\varphi:V(G)\to[k]\). It is **proper** if for every edge \(uv\in E(G)\), \(\varphi(u)\neq\varphi(v)\). The smallest integer \(k\) such that \(G\) has a proper \(k\)-colouring is the **chromatic number**, and is denoted by \(\chi(G)\). Since \(\chi\) is non-decreasing with respect to the subgraph relation, it is natural to consider the minimal graphs (for this relation) which are not \((k-1)\)-colourable. Following this idea, Dirac defined \(k\)**-critical** graphs as the graphs \(G\) with \(\chi(G)=k\) and \(\chi(H)<k\) for every proper subgraph \(H\) of \(G\). A first property of \(k\)-critical graphs is that their minimum degree is at least \(k-1\). Indeed, if a vertex \(v\) has degree at most \(k-2\), then a \((k-1)\)-colouring of \(G-v\) can be easily extended to \(G\), contradicting the fact that \(\chi(G)=k\). As a consequence, the number of edges in a \(k\)-critical graph is at least \(\frac{k-1}{2}n\). This bound is tight for complete graphs and odd cycles, but Dirac [3] proved an inequality of the form \(m\geqslant\frac{k-1+\varepsilon_{k}}{2}n-c_{k}\) for every \(n\)-vertex \(k\)-critical graph with \(m\) edges, for some \(c_{k}\) and \(\varepsilon_{k}>0\). This shows that, for \(n\) sufficiently large, the average degree of a \(k\)-critical graph is at least \(k-1+\varepsilon_{k}\). This initiated the quest for the best lower bound on the number of edges in \(n\)-vertex \(k\)-critical graphs. This problem was almost completely solved by Kostochka and Yancey in 2014 [10]. **Theorem 1** (Kostochka and Yancey [10]).: _Every \(k\)-critical graph on \(n\) vertices has at least \(\frac{1}{2}(k-\frac{2}{k-1})n-\frac{k(k-3)}{2(k-1)}\) edges. For every \(k\), this bound is tight for infinitely many values of \(n\)._ Kostochka and Yancey [11] also characterised \(k\)-critical graphs for which this inequality is an equality, and all of them contain a copy of \(K_{k-2}\), the complete graph on \(k-2\) vertices. This motivated the following conjecture of Postle [12].
**Conjecture 2** (Postle [12]).: For every integer \(k\geqslant 4\), there exists \(\varepsilon_{k}>0\) such that every \(k\)-critical \(K_{k-2}\)-free graph \(G\) on \(n\) vertices has at least \(\frac{1}{2}\left(k-\frac{2}{k-1}+\varepsilon_{k}\right)n-\frac{k(k-3)}{2(k-1)}\) edges. For \(k=4\), the conjecture trivially holds as there is no \(K_{2}\)-free \(4\)-critical graph. Moreover, this conjecture has been confirmed for \(k=5\) by Postle [12], for \(k=6\) by Gao and Postle [5], and for \(k\geqslant 33\) by Gould, Larsen, and Postle [6]. Let \(D\) be a digraph. We denote by \(V(D)\) its vertex set and by \(A(D)\) its arc set; we set \(n(D)=|V(D)|\) and \(m(D)=|A(D)|\). A \(k\)**-colouring** of \(D\) is a function \(\varphi:V(D)\to[k]\). It is a \(k\)**-dicolouring** if every directed cycle \(C\) in \(D\) is not monochromatic for \(\varphi\) (that is \(|\varphi(V(C))|>1\)). Equivalently, it is a \(k\)-dicolouring if every colour class induces an acyclic subdigraph. The smallest integer \(k\) such that \(D\) has a \(k\)-dicolouring is the **dichromatic number** of \(D\) and is denoted by \(\vec{\chi}(D)\). A **digon** in \(D\) is a pair of opposite arcs between two vertices. Such a pair of arcs \(\{uv,vu\}\) is denoted by \([u,v]\). We say that \(D\) is a **bidirected graph** if every pair of adjacent vertices forms a digon. In this case, \(D\) can be viewed as obtained from an undirected graph \(G\) by replacing each edge \(\{u,v\}\) of \(G\) by the digon \([u,v]\). We say that \(D\) is a bidirected \(G\), and we denote it by \(\overleftrightarrow{G}\). Observe that \(\chi(G)=\vec{\chi}(\overleftrightarrow{G})\). Thus every statement on proper colouring of undirected graphs can be seen as a statement on dicolouring of bidirected graphs. Exactly as in the undirected case, one can define \(k\)**-dicritical** digraphs to be digraphs \(D\) with \(\vec{\chi}(D)=k\) and \(\vec{\chi}(H)<k\) for every proper subdigraph \(H\) of \(D\). It is easy to check that if \(G\) is a \(k\)-critical graph, then \(\overleftrightarrow{G}\) is \(k\)-dicritical. Kostochka and Stiebitz [9] conjectured that the \(k\)-dicritical digraphs with the minimum number of arcs are bidirected graphs. Thus they conjectured the following generalisation of Theorem 1 to digraphs. **Conjecture 3** (Kostochka and Stiebitz [9]).: Let \(k\geqslant 2\). Every \(k\)-dicritical digraph on \(n\) vertices has at least \((k-\frac{2}{k-1})n-\frac{k(k-3)}{k-1}\) arcs. Moreover, equality holds only if the digraph is bidirected. In the case \(k=2\), this conjecture is easy and weak as it states that a \(2\)-dicritical digraph on \(n\) vertices has at least two arcs, while, for all \(n\geqslant 2\), the unique \(2\)-dicritical digraph of order \(n\) is the directed \(n\)-cycle which has \(n\) arcs. The case \(k=3\) of the conjecture has been confirmed by Kostochka and Stiebitz [9]. Using a Brooks-type result for digraphs due to Harutyunyan and Mohar [7], they proved the following: if \(D\) is a \(3\)-dicritical digraph of order \(n\geqslant 3\), then \(m(D)\geqslant 2n\), and equality holds if and only if \(n\) is odd and \(D\) is a bidirected odd cycle. The conjecture has also been proved for \(k=4\) by Kostochka and Stiebitz [9]. However, the conjecture is open for every \(k\geqslant 5\). Recently, this problem has been investigated by Aboulker and Vermande [2] who proved the weaker bound \((k-\frac{1}{2}-\frac{2}{k-1})n-\frac{k(k-3)}{k-1}\) for the number of arcs in an \(n\)-vertex \(k\)-dicritical digraph.
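As a concrete illustration of these definitions, the following sketch (ours, not part of the paper; assuming the networkx library) checks whether a given colouring is a dicolouring, i.e. whether every colour class induces an acyclic subdigraph:

```python
import networkx as nx

def is_dicolouring(D: nx.DiGraph, colouring: dict) -> bool:
    """Return True if every colour class of `colouring` induces an
    acyclic subdigraph of D (no monochromatic directed cycle)."""
    for colour in set(colouring.values()):
        cls = [v for v in D if colouring[v] == colour]
        if not nx.is_directed_acyclic_graph(D.subgraph(cls)):
            return False
    return True

# The directed 3-cycle needs two colours: one colour class must be split.
C3 = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
assert not is_dicolouring(C3, {0: 1, 1: 1, 2: 1})
assert is_dicolouring(C3, {0: 1, 1: 1, 2: 2})
```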
For integers \(k\) and \(n\), let \(d_{k}(n)\) denote the minimum number of arcs in a \(k\)-dicritical digraph of order \(n\). By the above observations, \(d_{2}(n)=n\) for all \(n\geqslant 2\), and \(d_{3}(n)\geqslant 2n\) for all possible \(n\), and equality holds if and only if \(n\) is odd and \(n\geqslant 3\). Moreover, if \(n\) is even then \(d_{3}(n)=2n+1\) (see [1]). Kostochka and Stiebitz [8] showed that if a \(k\)-critical graph \(G\) is triangle-free (that is, has no cycle of length \(3\)), then \(m(G)/n(G)\geqslant k-o(k)\) as \(k\to+\infty\). Informally, this means that the minimum average degree of a \(k\)-critical triangle-free graph is (asymptotically) twice the minimum average degree of a \(k\)-critical graph. Similarly to the undirected case, it is expected that the minimum number of arcs in a \(k\)-dicritical digraph of order \(n\) is larger than \(d_{k}(n)\) if we require this digraph to have no short directed cycles, and in particular if the digraph is an **oriented graph**, that is a digraph with no digon. Let \(o_{k}(n)\) denote the minimum number of arcs in a \(k\)-dicritical oriented graph of order \(n\) (with the convention \(o_{k}(n)=+\infty\) if there is no \(k\)-dicritical oriented graph of order \(n\)). Clearly \(o_{k}(n)\geqslant d_{k}(n)\). **Conjecture 4** (Kostochka and Stiebitz [9]).: For any \(k\geqslant 3\), there is a constant \(\alpha_{k}>0\) such that \(o_{k}(n)>(1+\alpha_{k})d_{k}(n)\) for \(n\) sufficiently large. For \(k=3\), this conjecture has been recently confirmed by Aboulker, Bellitto, Havet, and Rambaud [1] who proved that \(o_{3}(n)\geqslant(2+\frac{1}{3})n+\frac{2}{3}\). In view of Conjecture 2, Conjecture 4 can be generalised to \(\overleftrightarrow{K_{k-2}}\)-free digraphs. **Conjecture 5**.: For any \(k\geqslant 4\), there is a constant \(\beta_{k}>0\) such that every \(k\)-dicritical \(\overleftrightarrow{K_{k-2}}\)-free digraph \(D\) on \(n\) vertices has at least \((1+\beta_{k})d_{k}(n)\) arcs. Together with Conjecture 3, this conjecture would imply the following generalisation of Conjecture 2. **Conjecture 6**.: For every integer \(k\geqslant 4\), there exists \(\varepsilon_{k}>0\) such that every \(k\)-dicritical \(\overleftrightarrow{K_{k-2}}\)-free digraph \(D\) on \(n\) vertices has at least \((k-\frac{2}{k-1}+\varepsilon_{k})n-\frac{k(k-3)}{k-1}\) arcs. A \(\overleftrightarrow{K_{2}}\)-free digraph is an oriented graph, and there are infinitely many \(4\)-dicritical oriented graphs. Thus, while Conjecture 2 holds vacuously for \(k=4\), this is not the case for Conjecture 6. In this paper, we prove that Conjectures 4, 5, and 6 hold for \(k=4\). **Theorem 7**.: _If \(\vec{G}\) is a \(4\)-dicritical oriented graph, then_ \[m(\vec{G})\geqslant\left(\frac{10}{3}+\frac{1}{51}\right)n(\vec{G})-1.\] To prove Theorem 7, we use an approach similar to the proof of the case \(k=5\) of Conjecture 2 by Postle [12]. This proof is based on the potential method, which was first popularised by Kostochka and Yancey [10] when they proved Theorem 1. The idea is to prove a more general result on every \(4\)-dicritical digraph that takes into account the digons. With a slight abuse, we call **digon** a subdigraph isomorphic to \(\overleftrightarrow{K_{2}}\), the bidirected complete graph on two vertices. We also call **bidirected triangle** a subdigraph isomorphic to \(\overleftrightarrow{K_{3}}\), the bidirected complete graph on three vertices.
A **packing** of digons and bidirected triangles is a set of vertex-disjoint digons and bidirected triangles. To take into account the digons, we define a parameter \(T(D)\) as follows. \[T(D)=\max\{d+2t\mid\text{there exists a packing of $d$ digons and $t$ bidirected triangles}\}\] Clearly, \(T(D)=0\) if and only if \(D\) is an oriented graph. Let \(\varepsilon,\delta\) be fixed non-negative real numbers. We define the **potential** (with respect to \(\varepsilon\) and \(\delta\)) of a digraph \(D\) to be \[\rho(D)=\left(\frac{10}{3}+\varepsilon\right)n(D)-m(D)-\delta T(D).\] Thus Theorem 7 can be rephrased as follows. **Theorem 7**.: _Set \(\varepsilon=\frac{1}{51}\) and \(\delta=6\varepsilon=\frac{2}{17}\). If \(\vec{G}\) is a \(4\)-dicritical oriented graph, then \(\rho(\vec{G})\leqslant 1\)._ In fact, we prove a more general statement which holds for every \(4\)-dicritical digraph (with or without digons), except for some exceptions called the \(4\)**-Ore digraphs**. Those digraphs, which are formally defined in Section 2, are the bidirected graphs whose underlying graph is one of the \(4\)-critical graphs reaching equality in Theorem 1. In particular, every \(4\)-Ore digraph \(D\) has \(\frac{10}{3}n(D)-\frac{4}{3}\) arcs. Moreover, the statement holds for all non-negative constants \(\varepsilon\) and \(\delta\) satisfying the following inequalities: * \(\delta\geqslant 6\varepsilon\); * \(3\delta-\varepsilon\leqslant\frac{1}{3}\). **Theorem 8**.: _Let \(\varepsilon,\delta\geqslant 0\) be constants satisfying the aforementioned inequalities. If \(D\) is a \(4\)-dicritical digraph with \(n\) vertices, then_ * \(\rho(D)\leqslant\frac{4}{3}+\varepsilon n-\delta\frac{2(n-1)}{3}\) _if_ \(D\) _is_ \(4\)_-Ore, and_ * \(\rho(D)\leqslant 1\) _otherwise._ In order to provide some intuition to the reader, let us briefly describe the main ideas of our proof. We will consider a minimum counterexample \(D\) to Theorem 8, and show that every subdigraph of \(D\) must have large potential. To do so, we need to construct some smaller 4-dicritical digraphs to leverage the minimality of \(D\). These smaller 4-dicritical digraphs will be constructed by identifying some vertices of \(D\). This is why, in the definition of the potential, we consider \(T(D)\) instead of the number of digons: when identifying a set of vertices, the number of digons may be arbitrarily larger in the resulting digraph, but \(T(D)\) increases at most by 1. Using the fact that every subdigraph of \(D\) has large potential, we will prove that some subdigraphs are forbidden in \(D\). Using this, we get the final contradiction by a discharging argument. In addition to Theorem 7, Theorem 8 also has the following consequence when we take \(\varepsilon=\delta=0\). **Corollary 9**.: _If \(D\) is a \(4\)-dicritical digraph, then \(m(D)\geqslant\frac{10}{3}n(D)-\frac{4}{3}\). Moreover, equality holds if and only if \(D\) is \(4\)-Ore, otherwise \(m(D)\geqslant\frac{10}{3}n(D)-1\)._ This is a slight improvement on a result of Kostochka and Stiebitz [9] who proved the inequality \(m(D)\geqslant\frac{10}{3}n(D)-\frac{4}{3}\) without characterising the equality case. Another interesting consequence of our result is the following bound on the number of vertices in a \(4\)-dicritical oriented graph embedded on a fixed surface. Since a graph on \(n\) vertices embedded on a surface of Euler characteristic \(c\) has at most \(3n-3c\) edges, we immediately deduce the following from Theorem 7.
**Corollary 10**.: _If \(\vec{G}\) is a \(4\)-dicritical oriented graph embedded on a surface of Euler characteristic \(c\), then \(n(\vec{G})\leqslant\frac{17}{6}(1-3c)\)._ The previous best upper bound was \(n(\vec{G})\leqslant 4-9c\)[9]. In Section 2 we prove some first preliminary results on \(4\)-Ore digraphs, before proving Theorem 8 in Section 3. In Section 4, we show that \(o_{k}(n)\leqslant(2k-\frac{7}{2})n\) for every fixed \(k\) and infinitely many values of \(n\). The proof is strongly based on the proof of [1, Theorem 4.4], which shows \(o_{k}(n)\leqslant(2k-3)n\) for every fixed \(k\) and every sufficiently large \(n\). For \(k=4\), the construction implies in particular that there is a \(4\)-dicritical oriented graph with \(76\) vertices and \(330\) arcs, and there are infinitely many \(4\)-dicritical oriented graphs with \(m/n\leqslant 9/2\). ## 2 The \(4\)-Ore digraphs and their properties We start with a few notations. We denote by \(\llbracket x_{1},\ldots,x_{n}\rrbracket\) the bidirected path with vertex set \(\{x_{1},\ldots,x_{n}\}\) in this order. If \(x_{1}=x_{n}\), \(\llbracket x_{1},\ldots,x_{n}\rrbracket\) denotes the bidirected cycle of order \(n\) with cyclic order \(x_{1},\ldots,x_{n}\). If \(D\) is a digraph, for any \(X\subseteq V(D)\), \(D-X\) is the subdigraph induced by \(V(D)\setminus X\). We abbreviate \(D-\{x\}\) into \(D-x\). Moreover, for any \(F\subseteq V(D)\times V(D)\), \(D\setminus F\) is the subdigraph \((V(D),A(D)\setminus F)\) and \(D\cup F\) is the digraph \((V(D),A(D)\cup F)\). Let \(D_{1},D_{2}\) be two bidirected graphs, \([x,y]\subseteq A(D_{1})\), and \(z\in V(D_{2})\). An **Ore-composition** \(D\) of \(D_{1}\) and \(D_{2}\) with **replaced digon** \([x,y]\) and **split vertex** \(z\) is a digraph obtained by removing \([x,y]\) from \(D_{1}\) and \(z\) from \(D_{2}\), and adding the set of arcs \(\{xz_{1}\mid zz_{1}\in A(D_{2})\text{ and }z_{1}\in Z_{1}\}\), \(\{z_{1}x\mid z_{1}z\in A(D_{2})\text{ and }z_{1}\in Z_{1}\}\), \(\{yz_{2}\mid zz_{2}\in A(D_{2})\text{ and }z_{2}\in Z_{2}\}\), \(\{z_{2}y\mid z_{2}z\in A(D_{2})\text{ and }z_{2}\in Z_{2}\}\), where \((Z_{1},Z_{2})\) is a partition of \(N_{D_{2}}(z)\) into non-empty sets. We call \(D_{1}\) the **digon side** and \(D_{2}\) the **split side** of the Ore-composition. The class of the \(4\)**-Ore digraphs** is the smallest class containing \(\overleftrightarrow{K_{4}}\) which is stable under Ore-composition. See Figure 1 for an example of a 4-Ore digraph. Observe that all \(4\)-Ore digraphs are bidirected. **Proposition 11** (Dirac [4], see also [11]).: \(4\)_-Ore digraphs are \(4\)-dicritical._ Proof.: One can easily show that a bidirected digraph is \(4\)-dicritical if and only if its undirected underlying graph is \(4\)-critical. Then the result follows from the undirected analogue proved in [4]. **Lemma 12**.: _Let \(D\) be a \(4\)-dicritical bidirected digraph and \(v\in V(D)\). Let \((N_{1}^{+},N_{2}^{+})\) and \((N_{1}^{-},N_{2}^{-})\) be two partitions of \(N(v)\). Consider \(D^{\prime}\) the digraph with vertex set \((V(D)\setminus\{v\})\cup\{v_{1},v_{2}\}\) with \(N^{+}(v_{i})=N_{i}^{+},N^{-}(v_{i})=N_{i}^{-}\) for \(i=1,2\) and \(D^{\prime}\langle V(D)\setminus\{v\}\rangle=D-v\). Then \(D^{\prime}\) has a \(3\)-dicolouring with \(v_{1}\) and \(v_{2}\) coloured the same, except if \(N_{1}^{+}=N_{1}^{-}\) (that is, \(D^{\prime}\) is bidirected)._ Proof.: Suppose that \(D^{\prime}\) is not bidirected.
Consider a vertex \(u\in N_{D}(v)\) such that \(v_{1}u,uv_{2}\in A(D^{\prime})\) or \(v_{2}u,uv_{1}\in A(D^{\prime})\). Without loss of generality, suppose \(v_{1}u,uv_{2}\in A(D^{\prime})\). As \(D\) is \(4\)-dicritical, \(D\setminus[u,v]\) has a \(3\)-dicolouring \(\varphi\). We set \(\varphi(v_{1})=\varphi(v_{2})=\varphi(v)\) and claim that \(\varphi\) is a \(3\)-dicolouring of \(D^{\prime}\). To show that, observe that \(\varphi\) is a proper \(3\)-colouring of the underlying undirected graph of \(D^{\prime}\setminus\{v_{1}u,uv_{2}\}\), and so \(\varphi\) is a \(3\)-dicolouring of \(D^{\prime}\) as wanted. **Lemma 13**.: _Let \(D\) be a digraph. If \(v\) is a vertex of \(D\), then \(T(D-v)\geqslant T(D)-1\)._ Proof.: Let \(M\) be a packing of \(d\) digons and \(t\) bidirected triangles in \(D\) such that \(d+2t=T(D)\). If \(v\) belongs to a digon \([u,v]\) in \(M\), then \(M\setminus\{[u,v]\}\) witnesses the fact that \(T(D-v)\geqslant T(D)-1\). If \(v\) belongs to a bidirected triangle \([\![u,v,w,u]\!]\), then \(M\setminus\{[\![u,v,w,u]\!]\}\cup\{[u,w]\}\) witnesses the fact that \(T(D-v)\geqslant T(D)-2+1\). Otherwise \(T(D-v)\geqslant T(D)\). **Lemma 14**.: _If \(D_{1},D_{2}\) are two digraphs, and \(D\) is an Ore-composition of \(D_{1}\) and \(D_{2}\), then \(T(D)\geqslant T(D_{1})+T(D_{2})-2\). Moreover, if \(D_{1}\) or \(D_{2}\) is isomorphic to \(\overleftrightarrow{K_{4}}\), then \(T(D)\geqslant T(D_{1})+T(D_{2})-1\)._ Proof.: Let \(D\) be the Ore-composition of \(D_{1}\) (the digon side with replaced digon \([x,y]\)) and \(D_{2}\) (the split side with split vertex \(z\)). One can easily see that \(T(D)\geqslant T(D_{1}-x)+T(D_{2}-z)\geqslant T(D_{1})+T(D_{2})-2\) by Lemma 13. Moreover, if \(D_{1}\) (resp. \(D_{2}\)) is a copy of \(\overleftrightarrow{K_{4}}\), then \(T(D_{1}-x)=2=T(D_{1})\) (resp. \(T(D_{2}-z)=2=T(D_{2})\)) and therefore \(T(D)\geqslant T(D_{1})+T(D_{2})-1\). Figure 1: An example of a 4-Ore digraph obtained by an Ore-composition of two smaller 4-Ore digraphs, with replaced digon \([x,y]\) and split vertex \(z\). **Lemma 15**.: _If \(D\) is \(4\)-Ore, then \(T(D)\geqslant\frac{2}{3}(n(D)-1)\)._ Proof.: If \(D\) is \(\overleftrightarrow{K_{4}}\), then the result is clear. Suppose now that \(D\) is an Ore-composition of \(D_{1}\) and \(D_{2}\). Then \(n(D)=n(D_{1})+n(D_{2})-1\) and, by Lemma 14, \(T(D)\geqslant T(D_{1})+T(D_{2})-2\). By induction, \(T(D_{1})\geqslant\frac{2}{3}(n(D_{1})-1)\) and \(T(D_{2})\geqslant\frac{2}{3}(n(D_{2})-1)\), and so \(T(D)\geqslant\frac{2}{3}(n(D_{1})+n(D_{2})-1-1)=\frac{2}{3}(n(D)-1)\). Let \(D\) be a digraph. A **diamond** in \(D\) is a subdigraph isomorphic to \(\overleftrightarrow{K_{4}}\) minus a digon \([u,v]\), with vertices different from \(u\) and \(v\) having degree \(6\) in \(D\). An **emerald** in \(D\) is a subdigraph isomorphic to \(\overleftrightarrow{K_{3}}\) whose vertices have degree \(6\) in \(D\). Let \(R\) be an induced subdigraph of \(D\) with \(n(R)<n(D)\). The **boundary** of \(R\) in \(D\), denoted by \(\partial_{D}(R)\), or simply \(\partial(R)\) when \(D\) is clear from the context, is the set of vertices of \(R\) having a neighbour in \(V(D)\setminus R\). We say that \(R\) is **Ore-collapsible** if the boundary of \(R\) contains exactly two vertices \(u\) and \(v\) and \(R\cup[u,v]\) is \(4\)-Ore.
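For small digraphs, the parameters \(T(D)\) and \(\rho(D)\) introduced above can be computed by brute force. The following sketch (ours, not part of the paper; assuming networkx, exponential-time, and intended only as an executable restatement of the definitions) enumerates digons and bidirected triangles and maximises \(d+2t\) over vertex-disjoint packings:

```python
from itertools import combinations
import networkx as nx

def T(D: nx.DiGraph) -> int:
    """Maximum of d + 2t over packings of d digons and t bidirected triangles."""
    digons = [frozenset(p) for p in combinations(D, 2)
              if D.has_edge(p[0], p[1]) and D.has_edge(p[1], p[0])]
    digon_set = set(digons)
    triangles = [frozenset(t) for t in combinations(D, 3)
                 if all(frozenset(p) in digon_set for p in combinations(t, 2))]

    def best(pieces, used):
        if not pieces:
            return 0
        head, rest = pieces[0], pieces[1:]
        value = best(rest, used)            # skip this piece
        if not (head & used):               # or take it, if vertex-disjoint
            gain = 1 if len(head) == 2 else 2
            value = max(value, gain + best(rest, used | head))
        return value

    return best(digons + triangles, frozenset())

def rho(D: nx.DiGraph, eps: float = 1/51, delta: float = 2/17) -> float:
    """Potential of D with respect to eps and delta."""
    return (10/3 + eps) * D.number_of_nodes() - D.number_of_edges() - delta * T(D)
```

For \(\overleftrightarrow{K_{4}}\), obtained for instance as `nx.complete_graph(4).to_directed()`, this gives \(T=2\) and \(\rho=\frac{4}{3}+4\varepsilon-2\delta\), the value used repeatedly in Section 3.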
**Lemma 16**.: _If \(D\) is \(4\)-Ore and \(v\in V(D)\), then there exists either an Ore-collapsible subdigraph of \(D\) disjoint from \(v\) or an emerald of \(D\) disjoint from \(v\)._ Proof.: If \(D\) is a copy of \(\overleftrightarrow{K_{4}}\), then \(D-v\) is an emerald. Otherwise, \(D\) is the Ore-composition of two \(4\)-Ore digraphs: \(D_{1}\) the digon side with replaced digon \([x,y]\), and \(D_{2}\) the split side with split vertex \(z\). If \(v\in V(D_{2}-z)\), then \(D_{1}\setminus[x,y]\) is an Ore-collapsible subdigraph with boundary \(\{x,y\}\), which is disjoint from \(v\). Otherwise \(v\in V(D_{1})\) and we apply induction on \(D_{2}\) to find an emerald or an Ore-collapsible subdigraph in \(D_{2}\) disjoint from \(z\). **Lemma 17**.: _If \(D\neq\overleftrightarrow{K_{4}}\) is \(4\)-Ore and \(T\) is a copy of \(\overleftrightarrow{K_{3}}\) in \(D\), then there exists either an Ore-collapsible subdigraph of \(D\) disjoint from \(T\) or an emerald of \(D\) disjoint from \(T\)._ Proof.: As \(D\) is not \(\overleftrightarrow{K_{4}}\), it is an Ore-composition of two \(4\)-Ore digraphs: \(D_{1}\) the digon side with replaced digon \([x,y]\), and \(D_{2}\) the split side with split vertex \(z\). As \(x\) and \(y\) are non-adjacent, we have either \(T\subseteq D_{1}\), \(T\subseteq D_{2}-z\), or \(T\) contains a vertex \(w\in\{x,y\}\) and two vertices in \(V(D_{2}-z)\). If \(T\subseteq D_{1}\), then by Lemma 16, in \(D_{2}\) there exists either an Ore-collapsible subdigraph \(O\) or an emerald \(E\) disjoint from \(z\). In the former case \(O\) is an Ore-collapsible subdigraph of \(D\) disjoint from \(T\), and in the latter one \(E\) is an emerald in \(D\) disjoint from \(T\). If \(T\subseteq D_{2}-z\), then \(D_{1}\setminus[x,y]\) is an Ore-collapsible subdigraph disjoint from \(T\). Assume now that \(T\) contains a vertex \(w\in\{x,y\}\) and two vertices in \(V(D_{2}-z)\). Without loss of generality, we may assume that \(y\not\in T\). Let \(z_{1}\) and \(z_{2}\) be the two vertices of \(T\) distinct from \(w\). Then \(\{z,z_{1},z_{2}\}\) induces a bidirected triangle \(T^{\prime}\) in \(D_{2}\). If \(D_{2}\neq\overleftrightarrow{K_{4}}\), then by induction in \(D_{2}\), there exists either an Ore-collapsible subdigraph \(O\) or an emerald \(E\) disjoint from \(T^{\prime}\). In the former case \(O\) is an Ore-collapsible subdigraph of \(D\) disjoint from \(T\), and in the latter one \(E\) is an emerald in \(D\) disjoint from \(T\) even if \(y\in V(E)\), because \(y\) has the same degree in \(D_{1}\) and \(D\). **Lemma 18**.: _If \(R\) is an Ore-collapsible induced subdigraph of a \(4\)-Ore digraph \(D\), then there exists a diamond or an emerald of \(D\) whose vertices lie in \(V(R)\)._ Proof.: Let \(R\) be a minimal counterexample to this lemma, and let \(\partial(R)=\{u,v\}\) and \(H=D\langle R\rangle\cup[u,v]\). If \(H=\overleftrightarrow{K_{4}}\), then \(R\) is a diamond in \(D\). Suppose now that \(H\) is the Ore-composition of two \(4\)-Ore digraphs \(H_{1}\) (the digon side with replaced digon \([x,y]\)) and \(H_{2}\) (the split side with split vertex \(z\)). If \(\{u,v\}\not\subset V(H_{2})\), then by Lemma 16 there exists an Ore-collapsible subdigraph in \(H_{2}\) disjoint from \(z\). As it is smaller than \(R\), by minimality it contains a diamond or an emerald of \(D\) whose vertices lie in \(V(R)\), a contradiction. Now assume that \(\{u,v\}\subset V(H_{2})\); then \(D\langle V(H_{1})\rangle\) is an Ore-collapsible subdigraph of \(D\) smaller than \(R\), and, by minimality, it contains a diamond or an emerald of \(D\) whose vertices lie in \(V(R)\), a contradiction.
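The Ore-composition manipulated in the preceding lemmas can likewise be made executable. The sketch below (ours, not part of the paper; assuming networkx, bidirected inputs with disjoint vertex sets, and leaving the choice of the partition \((Z_{1},Z_{2})\) to the caller) follows the definition from the beginning of this section:

```python
import networkx as nx

def ore_composition(D1: nx.DiGraph, x, y, D2: nx.DiGraph, z, Z1: set) -> nx.DiGraph:
    """Ore-composition of the digon side D1 (replaced digon [x, y]) with the
    split side D2 (split vertex z); Z1 is the part of N_{D2}(z) reattached
    to x, and the remaining neighbours Z2 are reattached to y."""
    neighbours = set(D2.predecessors(z)) | set(D2.successors(z))
    Z2 = neighbours - set(Z1)
    assert Z1 and Z2, "N(z) must be split into two non-empty parts"
    D = nx.DiGraph()
    D.add_nodes_from(D1)
    D.add_nodes_from(v for v in D2 if v != z)
    D.add_edges_from(a for a in D1.edges if set(a) != {x, y})  # drop the digon [x, y]
    D.add_edges_from(a for a in D2.edges if z not in a)        # drop the split vertex z
    for part, anchor in ((Z1, x), (Z2, y)):
        for w in part:
            if D2.has_edge(z, w):
                D.add_edge(anchor, w)
            if D2.has_edge(w, z):
                D.add_edge(w, anchor)
    return D
```

Composing two copies of \(\overleftrightarrow{K_{4}}\) in this way yields a \(7\)-vertex \(4\)-Ore digraph.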
**Lemma 19**.: _If \(D\) is a \(4\)-Ore digraph and \(v\) is a vertex in \(D\), then \(D\) contains a diamond or an emerald disjoint from \(v\)._ Proof.: Follows from Lemmas 16 and 18. **Lemma 20**.: _If \(D\) is a \(4\)-Ore digraph and \(T\) is a bidirected triangle in \(D\), then either \(D=\overleftrightarrow{K_{4}}\) or \(D\) contains a diamond or an emerald disjoint from \(T\)._ Proof.: Follows from Lemmas 17 and 18. The following theorem was formulated for undirected graphs, but by replacing every edge by a digon, it can be restated as follows: **Theorem 21** (Kostochka and Yancey [11], Theorem 6).: _Let \(D\) be a \(4\)-dicritical bidirected digraph._ _If \(\frac{10}{3}n(D)-m(D)>1\), then \(D\) is \(4\)-Ore and \(\frac{10}{3}n(D)-m(D)=\frac{4}{3}\)._ **Lemma 22**.: _If \(D\) is a \(4\)-Ore digraph with \(n\) vertices, then \(\rho(D)\leqslant\frac{4}{3}+\varepsilon n-\delta\frac{2(n-1)}{3}\)._ Proof.: Follows from Theorem 21 and Lemma 15. **Lemma 23** (Kostochka and Yancey [11], Claim 16).: _Let \(D\) be a \(4\)-Ore digraph. If \(R\subseteq D\) and \(0<n(R)<n(D)\), then \(\frac{10}{3}n(R)-m(R)\geqslant\frac{10}{3}\)._ **Lemma 24**.: _Let \(D\) be a \(4\)-Ore digraph obtained from a copy \(J\) of \(\overleftrightarrow{K_{4}}\) by successive Ore-compositions with \(4\)-Ore digraphs, vertices and digons in \(J\) being always on the digon side. Let \([u,v]\) be a digon in \(D\langle V(J)\rangle\). For every \(3\)-dicolouring \(\varphi\) of \(D\setminus[u,v]\), vertices in \(V(J)\) receive distinct colours except \(u\) and \(v\)._ Proof.: We proceed by induction on \(n(D)\), the result holding trivially when \(D\) is \(\overleftrightarrow{K_{4}}\). Now assume that \(D\) is the Ore-composition of \(D_{1}\), the digon side containing \(J\), and \(D_{2}\), with \(D_{1}\) and \(D_{2}\) being \(4\)-Ore digraphs. Let \([x,y]\subseteq A(D_{1})\) be the replaced digon in this Ore-composition, and let \(z\in V(D_{2})\) be the split vertex. Let \(\varphi\) be a \(3\)-dicolouring of \(D\setminus[u,v]\). Then \(\varphi\) induces a \(3\)-dicolouring of \(D\langle V(D_{2}-z)\cup\{x,y\}\rangle\). Necessarily \(\varphi(x)\neq\varphi(y)\), for otherwise \(\varphi_{2}\) defined by \(\varphi_{2}(w)=\varphi(w)\) for every \(w\in V(D_{2}-z)\) and \(\varphi_{2}(z)=\varphi(x)\) is a \(3\)-dicolouring of \(D_{2}\), contradicting the fact that \(4\)-Ore digraphs have dichromatic number \(4\) by Proposition 11. Hence \(\varphi\) induces a \(3\)-dicolouring of \(D_{1}\setminus[u,v]\). So, by the induction hypothesis, vertices in \(V(J)\) have distinct colours in \(\varphi\), except \(u\) and \(v\). **Lemma 25**.: _Let \(D\) be a \(4\)-Ore digraph obtained from a copy \(J\) of \(\overleftrightarrow{K_{4}}\) by successive Ore-compositions with \(4\)-Ore digraphs, vertices and digons in \(J\) being always on the digon side. Let \(v\) be a vertex in \(V(J)\). For every \(3\)-dicolouring \(\varphi\) of \(D-v\), vertices in \(J\) receive distinct colours._ Proof.: We proceed by induction on \(n(D)\), the result holding trivially when \(D\) is \(\overleftrightarrow{K_{4}}\). Now assume that \(D\) is the Ore-composition of \(D_{1}\), the digon side containing \(J\), and \(D_{2}\), with \(D_{1}\) and \(D_{2}\) being \(4\)-Ore digraphs. Let \([x,y]\subseteq A(D_{1})\) be the replaced digon in this Ore-composition, and let \(z\in V(D_{2})\) be the split vertex. Let \(\varphi\) be a \(3\)-dicolouring of \(D-v\). If \(v\in\{x,y\}\), then \(\varphi\) is a \(3\)-dicolouring of \(D_{1}-v\) and the result follows by induction.
Now assume \(v\not\in\{x,y\}\). Then \(\varphi\) induces a \(3\)-dicolouring of \(D\langle V(D_{2}-z)\cup\{x,y\}\rangle\). Necessarily \(\varphi(x)\neq\varphi(y)\), for otherwise \(\varphi_{2}\) defined by \(\varphi_{2}(w)=\varphi(w)\) for every \(w\in V(D_{2}-z)\) and \(\varphi_{2}(z)=\varphi(x)\) is a \(3\)-dicolouring of \(D_{2}\), contradicting the fact that \(4\)-Ore digraphs have dichromatic number \(4\) by Proposition 11. Hence \(\varphi\) induces a \(3\)-dicolouring of \(D_{1}-v\). So, by the induction hypothesis, vertices in \(V(J)\) have distinct colours in \(\varphi\). ## 3 Proof of Theorem 8 Let \(D\) be a \(4\)-dicritical digraph, \(R\) be an induced subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\) and \(\varphi\) a \(3\)-dicolouring of \(R\). The \(\varphi\)**-identification** of \(R\) in \(D\), denoted by \(D_{\varphi}(R)\), is the digraph obtained from \(D\) by identifying for each \(i\in[3]\) the vertices coloured \(i\) in \(V(R)\) to a vertex \(x_{i}\), adding the digons \([x_{i},x_{j}]\) for all \(1\leqslant i<j\leqslant 3\). Observe that \(D_{\varphi}(R)\) is not \(3\)-dicolourable. Indeed, assume for a contradiction that \(D_{\varphi}(R)\) has a \(3\)-dicolouring \(\varphi^{\prime}\). Since \(\{x_{1},x_{2},x_{3}\}\) induces a \(\overleftrightarrow{K_{3}}\), we may assume without loss of generality that \(\varphi^{\prime}(x_{i})=i\) for \(i\in[3]\). Consider the \(3\)-colouring \(\varphi^{\prime\prime}\) of \(D\) defined by \(\varphi^{\prime\prime}(v)=\varphi^{\prime}(v)\) if \(v\not\in R\) and \(\varphi^{\prime\prime}(v)=\varphi(v)\) if \(v\in R\). One easily checks that \(\varphi^{\prime\prime}\) is a \(3\)-dicolouring of \(D\), a contradiction to the fact that \(\vec{\chi}(D)=4\). Now let \(W\) be a \(4\)-dicritical subdigraph of \(D_{\varphi}(R)\) and \(X=\{x_{1},x_{2},x_{3}\}\). Then we say that \(R^{\prime}=D\langle(V(W)\setminus X)\cup V(R)\rangle\) is the **dicritical extension** of \(R\) with **extender** \(W\). We call \(X_{W}=X\cap V(W)\) the **core** of the extension. Note that \(X_{W}\) is not empty, because \(W\) is not a subdigraph of \(D\). Thus \(1\leqslant|X_{W}|\leqslant 3\). See Figure 2 for an example of a \(\varphi\)-identification and a dicritical extension. Let \(D\) be a counterexample to Theorem 8 with minimum number of vertices. By Lemma 22, \(D\) is not \(4\)-Ore. Thus \(\rho(D)>1\). **Claim 1**.: _If \(\tilde{D}\) is a \(4\)-dicritical digraph with \(n(\tilde{D})<n(D)\), then \(\rho(\tilde{D})\leqslant\frac{4}{3}+4\varepsilon-2\delta\)._ Proof of claim.: If \(\tilde{D}\) is not \(4\)-Ore, then \(\rho(\tilde{D})\leqslant 1\) by minimality of \(D\). Thus \(\rho(\tilde{D})\leqslant\frac{4}{3}+4\varepsilon-2\delta\) because \(4\varepsilon-2\delta\geqslant-\frac{1}{3}\). Otherwise, by Lemma 22, \(\rho(\tilde{D})\leqslant\frac{4}{3}+\varepsilon n(\tilde{D})-\delta\frac{2(n(\tilde{D})-1)}{3}\leqslant\frac{4}{3}+4\varepsilon-2\delta\) because \(\delta\geqslant\frac{3}{2}\varepsilon\) and \(n(\tilde{D})\geqslant 4\). **Claim 2**.: _Let \(R\) be a subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\)._
_If \(R^{\prime}\) is a dicritical extension of \(R\) with extender \(W\) and core \(X_{W}\), then_ \[\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-\left(\rho(\overleftrightarrow{K_{|X_{W}|}})+\delta\cdot T(\overleftrightarrow{K_{|X_{W}|}})\right)+\delta\cdot(T(W)-T(W-X_{W}))\] _and in particular_ \[\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-\frac{10}{3}-\varepsilon+\delta.\] Proof of claim.: We have * \(n(R^{\prime})=n(W)-|X_{W}|+n(R)\), * \(m(R^{\prime})\geqslant m(W)+m(R)-m(\overleftrightarrow{K_{|X_{W}|}})\), * \(T(R^{\prime})\geqslant T(W-X_{W})+T(R)\). By summing these inequalities, we get the first result. Now observe that \(T(W)-T(W-X_{W})\leqslant|X_{W}|\) by Lemma 13, and that the maximum of \(-\left(\rho(\overleftrightarrow{K_{|X_{W}|}})+\delta T(\overleftrightarrow{K_{|X_{W}|}})\right)+\delta|X_{W}|\) is reached when \(|X_{W}|=1\), in which case it is equal to \(-\frac{10}{3}-\varepsilon+\delta\). The second inequality follows. **Claim 3**.: _If \(R\) is a subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\), then \(\rho(R)\geqslant\rho(D)+2-3\varepsilon+\delta>3-3\varepsilon+\delta\)._ Proof of claim.: We proceed by induction on \(n-n(R)\). Let \(R^{\prime}\) be a dicritical extension of \(R\) with extender \(W\) and core \(X_{W}\). By Claim 2, we have \[\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-\frac{10}{3}-\varepsilon+\delta.\] Either \(V(R^{\prime})=V(D)\), and so \(\rho(R^{\prime})\geqslant\rho(D)\), or \(V(R^{\prime})\) is a proper subset of \(V(D)\) and, since \(R\) is a proper subdigraph of \(R^{\prime}\), by induction \(\rho(R^{\prime})\geqslant\rho(D)+2-3\varepsilon+\delta\geqslant\rho(D)\). In both cases, \(\rho(R^{\prime})\geqslant\rho(D)\). Now \(W\) is smaller than \(D\) so \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1. Thus \[\rho(D)\leqslant\rho(R^{\prime})\leqslant\frac{4}{3}+4\varepsilon-2\delta+\rho(R)-\frac{10}{3}-\varepsilon+\delta.\] This gives \(\rho(R)\geqslant\rho(D)+2-3\varepsilon+\delta>3-3\varepsilon+\delta\), because \(\rho(D)>1\). \(\lozenge\) Figure 2: A \(4\)-dicritical digraph \(D\) together with an induced subdigraph \(R\) of \(D\) and \(\varphi\) a \(3\)-dicolouring of \(R\), the \(\varphi\)-identification \(D_{\varphi}(R)\) of \(R\) in \(D\) and the dicritical extension \(R^{\prime}\) of \(R\) with extender \(W\) and core \(X_{W}\). For clarity, the digons are represented by undirected edges. As a consequence of Claim 3, any subdigraph (proper or not) of size at least \(4\) has potential at least \(\rho(D)\). We say that an induced subdigraph \(R\) of \(D\) is **collapsible** if, for every \(3\)-dicolouring \(\varphi\) of \(R\), every dicritical extension \(R^{\prime}\) of \(R\) (with extender \(W\) and core \(X_{W}\)) is \(D\), has core of size \(1\) (i.e. \(|X_{W}|=1\)), and the boundary \(\partial_{D}(R)\) of \(R\) is monochromatic in \(\varphi\). **Claim 4**.: _Let \(R\) be an induced subdigraph of \(D\) and \(\varphi\) a 3-dicolouring of \(R\) such that \(\partial(R)\) is not monochromatic in \(\varphi\). If \(D\) is a dicritical extension of \(R\) dicoloured by \(\varphi\) with extender \(W\) and core \(X_{W}\) with \(|X_{W}|=1\), then_ \[\rho(R)\geqslant\rho(D)+3-3\varepsilon+\delta.\] Proof of claim.: Assume \(D\) is a dicritical extension of \(R\) dicoloured by \(\varphi\) with extender \(W\) and core \(X_{W}\) with \(|X_{W}|=1\).
Observe that each of the following inequalities holds: * \(n(D)=n(W)-|X_{W}|+n(R)=n(W)+n(R)-1\), * \(m(D)\geqslant m(W)+m(R)+1\), because \(\partial(R)\) is not monochromatic in \(\varphi\), so at least one vertex of \(\partial(R)\) lies in a colour class distinct from the one identified into the core vertex, and the arcs between this vertex and \(V(D)\setminus V(R)\) are counted neither in \(m(W)\) nor in \(m(R)\), * \(T(D)\geqslant T(W-X_{W})+T(R)\geqslant T(W)+T(R)-1\), by Lemma 13. By summing these inequalities, we get \[\rho(D)\leqslant\rho(W)+\rho(R)-\frac{10}{3}-\varepsilon-1+\delta.\] Since \(W\) is smaller than \(D\), Claim 1 gives \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\), and thus \[\rho(D)\leqslant\rho(R)+\frac{4}{3}+4\varepsilon-2\delta-\frac{13}{3}-\varepsilon+\delta=\rho(R)-3+3\varepsilon-\delta,\] which yields \(\rho(R)\geqslant\rho(D)+3-3\varepsilon+\delta\). \(\Diamond\) **Claim 5**.: _If \(R\) is a subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\) that is not collapsible, then \(\rho(R)\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\)._ Proof of claim.: Let \(\varphi\) be a \(3\)-dicolouring of \(R\), and let \(R^{\prime}\) be a dicritical extension of \(R\) with extender \(W\) and core \(X_{W}\). We distinguish four cases. (i) If \(R^{\prime}\neq D\), then \(\rho(R^{\prime})\geqslant\rho(D)+2-3\varepsilon+\delta\) by Claim 3. Moreover, by Claims 1 and 2, \[\rho(R^{\prime})\leqslant\frac{4}{3}+4\varepsilon-2\delta+\rho(R)-\frac{10}{3}-\varepsilon+\delta.\] Hence \(\rho(R)\geqslant\rho(R^{\prime})+2-3\varepsilon+\delta\geqslant\rho(D)+4-6\varepsilon+2\delta\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\), because \(5\varepsilon-3\delta\leqslant\frac{4}{3}\). (ii) If \(R^{\prime}=D\) and \(|X_{W}|=2\), then \(\rho(\overleftrightarrow{K_{2}})+\delta T(\overleftrightarrow{K_{2}})=\frac{14}{3}+2\varepsilon\), and, by Lemma 13, \(T(W)-T(W-X_{W})\leqslant|X_{W}|=2\). Thus, by Claim 2, \[\rho(D)\leqslant\rho(W)+\rho(R)-\frac{14}{3}-2\varepsilon+2\delta.\] Now, since \(W\) is smaller than \(D\), \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1. Thus \[\rho(D)\leqslant\rho(R)+\frac{4}{3}+4\varepsilon-2\delta-\frac{14}{3}-2\varepsilon+2\delta=\rho(R)-\frac{10}{3}+2\varepsilon\] and so \(\rho(R)\geqslant\rho(D)+\frac{10}{3}-2\varepsilon\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\). (iii) If \(R^{\prime}=D\) and \(|X_{W}|=3\), then \(\rho(\overleftrightarrow{K_{3}})+\delta T(\overleftrightarrow{K_{3}})=4+3\varepsilon\), and, by Lemma 13, \(T(W)-T(W-X_{W})\leqslant|X_{W}|=3\). Thus, by Claim 2, \[\rho(D)\leqslant\rho(W)+\rho(R)-4-3\varepsilon+3\delta.\] Now, since \(W\) is smaller than \(D\), \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1. Thus \[\rho(D)\leqslant\rho(R)+\frac{4}{3}+4\varepsilon-2\delta-4-3\varepsilon+3\delta=\rho(R)-\frac{8}{3}+\varepsilon+\delta\] and so \(\rho(R)\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\).
(iv) If \(R^{\prime}=D\), \(|X_{W}|=1\) and \(\partial(R)\) is not monochromatic in \(\varphi\), then, by Claim 4, we have \(\rho(R)\geqslant\rho(D)+3-3\varepsilon+\delta\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\). If \(R\) is not collapsible, then, by definition, it has a dicritical extension \(R^{\prime}\) satisfying the hypothesis of one of the cases (i)-(iv). In any case, \(\rho(R)\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta\). \(\Diamond\) Recall that a \(k\)**-cutset** in a graph \(G\) is a set \(S\) of \(k\) vertices such that \(G-S\) is not connected. A graph is \(k\)**-connected** if it has more than \(k\) vertices and has no \((k-1)\)-cutset. A \(k\)**-cutset** in a digraph is a \(k\)-cutset in its underlying graph, and a digraph is \(k\)**-connected** if its underlying graph is \(k\)-connected. **Claim 6**.: \(D\) _is \(2\)-connected._ Proof of claim.: Suppose for contradiction that \(\{x\}\) is a \(1\)-cutset in \(D\). Let \((A_{0},B_{0})\) be a partition of \(V(D-x)\) into non-empty sets such that there is no edge between \(A_{0}\) and \(B_{0}\), and set \(A=A_{0}\cup\{x\}\) and \(B=B_{0}\cup\{x\}\). Since \(D\) is 4-dicritical, there exist a \(3\)-dicolouring \(\varphi_{A}\) of \(D\langle A\rangle\) and a \(3\)-dicolouring \(\varphi_{B}\) of \(D\langle B\rangle\). Free to swap the colours, we may assume \(\varphi_{A}(x)=\varphi_{B}(x)\). Let \(\varphi\) be defined by \(\varphi(v)=\varphi_{A}(v)\) if \(v\in A\) and \(\varphi(v)=\varphi_{B}(v)\) if \(v\in B\). Since \(\vec{\chi}(D)=4\), \(D\), coloured with \(\varphi\), must contain a monochromatic directed cycle. Such a directed cycle must be contained in \(D\langle A\rangle\) or \(D\langle B\rangle\), a contradiction. \(\Diamond\) **Claim 7**.: \(D\) _is \(3\)-connected. In particular, \(D\) contains no diamond._ Proof of claim.: Suppose for contradiction that \(\{x,y\}\) is a \(2\)-cutset of \(D\). Let \((A_{0},B_{0})\) be a partition of \(V(D)\setminus\{x,y\}\) into non-empty sets such that there is no edge between \(A_{0}\) and \(B_{0}\), and set \(A=A_{0}\cup\{x,y\}\) and \(B=B_{0}\cup\{x,y\}\). Assume for a contradiction that there exist a \(3\)-dicolouring \(\varphi_{A}\) of \(D\langle A\rangle\) and a \(3\)-dicolouring \(\varphi_{B}\) of \(D\langle B\rangle\) such that \(\varphi_{A}(x)\neq\varphi_{A}(y)\) and \(\varphi_{B}(x)\neq\varphi_{B}(y)\). Free to swap the colours, we may assume \(\varphi_{A}(x)=\varphi_{B}(x)\) and \(\varphi_{A}(y)=\varphi_{B}(y)\). Let \(\varphi\) be defined by \(\varphi(v)=\varphi_{A}(v)\) if \(v\in A\) and \(\varphi(v)=\varphi_{B}(v)\) if \(v\in B\). Every directed cycle either lies in \(D\langle A\rangle\), or lies in \(D\langle B\rangle\), or contains both \(x\) and \(y\). Therefore it cannot be monochromatic with \(\varphi\) because \(\varphi_{A}\) and \(\varphi_{B}\) are \(3\)-dicolourings of \(D\langle A\rangle\) and \(D\langle B\rangle\) respectively, and \(\varphi(x)\neq\varphi(y)\). Thus \(\varphi\) is a \(3\)-dicolouring of \(D\), a contradiction. Hence either \(D\langle A\rangle\) or \(D\langle B\rangle\) has no \(3\)-dicolouring \(\varphi\) such that \(\varphi(x)\neq\varphi(y)\). Suppose without loss of generality that it is \(D\langle A\rangle\). Let \(D_{A}=D\langle A\rangle\cup[x,y]\). \(D_{A}\) is not \(3\)-dicolourable because in every \(3\)-dicolouring of \(D\langle A\rangle\), \(x\) and \(y\) are coloured the same. Let \(D_{B}\) be the digraph obtained from \(D\langle B\rangle\) by identifying \(x\) and \(y\) into a vertex \(z\).
Assume for a contradiction that \(D_{B}\) has a \(3\)-dicolouring \(\psi_{B}\). Set \(\psi(x)=\psi(y)=\psi_{B}(z)\), and \(\psi(u)=\psi_{B}(u)\) for every \(u\in B\setminus\{x,y\}\). Then consider a \(3\)-dicolouring \(\psi_{A}\) of \(D\langle A\rangle\) such that \(\psi_{A}(x)=\psi(x)=\psi_{A}(y)=\psi(y)\) (such a colouring exists because \(D\langle A\rangle\) is a proper subdigraph of \(D\)) and we set \(\psi(u)=\psi_{A}(u)\) for every \(u\in V(A)\setminus\{x,y\}\). As \(D\) is not \(3\)-dicolourable, it contains a monochromatic directed cycle \(C\) (with respect to \(\psi\)). The cycle \(C\) is included neither in \(D\langle A\rangle\) nor in \(D\langle B\rangle\). As a consequence, there is a monochromatic directed path from \(\{x,y\}\) to \(\{x,y\}\) in \(B\), and so there is a monochromatic directed cycle in \(D_{B}\) for \(\psi_{B}\), a contradiction. Therefore \(D_{B}\) is not \(3\)-dicolourable. Now \(D_{A}\) has a \(4\)-dicritical subdigraph \(W_{A}\) which necessarily contains \(\{x,y\}\), and \(D_{B}\) has a \(4\)-dicritical subdigraph \(W_{B}\) which necessarily contains \(z\). As \(W_{A}\) and \(W_{B}\) are \(4\)-dicritical digraphs smaller than \(D\), we have \(\rho(W_{A}),\rho(W_{B})\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1. Let \(H\) be the subdigraph of \(D\) induced by \(V(W_{A})\cup V(W_{B}-z)\). Note that \(n(H)=n(W_{A})+n(W_{B})-1\) and \(m(H)\geqslant m(W_{A})+m(W_{B})-2\). Moreover \(T(H)\geqslant T(W_{A}-x)+T(W_{B}-z)\geqslant T(W_{A})+T(W_{B})-2\), by Lemma 13. Hence we have \[\rho(H) \leqslant \rho(W_{A})+\rho(W_{B})-\left(\frac{10}{3}+\varepsilon\right)+(m(W_{A})+m(W_{B})-m(H))+2\delta \tag{1}\] \[\leqslant \rho(W_{A})+\rho(W_{B})-\frac{10}{3}-\varepsilon+2+2\delta\] \[= \rho(W_{A})+\rho(W_{B})-\frac{4}{3}-\varepsilon+2\delta\] \[\leqslant 2\left(\frac{4}{3}+4\varepsilon-2\delta\right)-\frac{4}{3}-\varepsilon+2\delta\] \[= \frac{4}{3}+7\varepsilon-2\delta\] By Claim 3, if \(n(H)<n(D)\) then \(\rho(H)>3-3\varepsilon+\delta\). As \(10\varepsilon-3\delta\leqslant\frac{5}{3}\), we deduce that \(H=D\). Hence \(1<\rho(D)=\rho(H)\leqslant\frac{4}{3}+7\varepsilon-2\delta+(m(W_{A})+m(W_{B})-m(H)-2)\) and so \(m(H)=m(W_{A})+m(W_{B})-2\) because \(2\delta-7\varepsilon\leqslant\frac{2}{3}\). In particular, there is no arc between \(x\) and \(y\) in \(D\). Moreover, no arc was suppressed when identifying \(x\) and \(y\) into \(z\) to obtain \(D_{B}\), so \(x\) and \(y\) have no common out-neighbour (resp. in-neighbour) in \(B_{0}\). We first show that either \(W_{A}\) or \(W_{B}\) is not \(4\)-Ore. Assume for contradiction that both \(W_{A}\) and \(W_{B}\) are \(4\)-Ore. If \(H=D\) is not bidirected, then by Lemma 12, \(D\langle B\rangle\) admits a \(3\)-dicolouring \(\varphi_{B}\) such that \(\varphi_{B}(x)=\varphi_{B}(y)\). Now let \(\varphi_{A}\) be a \(3\)-dicolouring of \(D\langle A\rangle\). We have \(\varphi_{A}(x)=\varphi_{A}(y)\). Free to exchange colours, we may assume \(\varphi_{A}(x)=\varphi_{A}(y)=\varphi_{B}(x)=\varphi_{B}(y)\). Hence we can define the \(3\)-colouring \(\varphi\) of \(D\) by \(\varphi(v)=\varphi_{A}(v)\) if \(v\in A\), and \(\varphi(v)=\varphi_{B}(v)\) if \(v\in B\). Observe that, since \(D\langle A\rangle\) is bidirected, all neighbours of \(x\) and \(y\) in \(D\langle A\rangle\) have a colour distinct from \(\varphi(x)\). Therefore there is no monochromatic directed cycle in \(D\) coloured by \(\varphi\). Thus \(\varphi\) is a \(3\)-dicolouring of \(D\), a contradiction.
Therefore, \(H=D\) is bidirected, and so \(H\) is an Ore-composition of \(W_{A}\) and \(W_{B}\) (because \(D\) is \(2\)-connected by Claim 6), and so \(D\) is \(4\)-Ore, a contradiction. Hence, we may assume that either \(W_{A}\) or \(W_{B}\) is not \(4\)-Ore. If neither \(W_{A}\) nor \(W_{B}\) is \(4\)-Ore, then by minimality of \(D\), \(\rho(W_{A})\leqslant 1\) and \(\rho(W_{B})\leqslant 1\). Together with Equation (1), this yields \[\rho(H)\leqslant\frac{2}{3}-\varepsilon+2\delta\leqslant 1\] because \(2\delta-\varepsilon\leqslant\frac{1}{3}\), a contradiction. If neither \(W_{A}\) nor \(W_{B}\) is \(\overleftrightarrow{K_{4}}\), then \(\rho(W_{A})+\rho(W_{B})\leqslant 1+(\frac{4}{3}+7\varepsilon-4\delta)\) (recall that if a digraph is \(4\)-Ore but not \(\overleftrightarrow{K_{4}}\), then it has potential at most \(\frac{4}{3}+7\varepsilon-4\delta\) by Lemma 22). Thus, with Equation (1), we get \[\rho(H)\leqslant 1+\left(\frac{4}{3}+7\varepsilon-4\delta\right)-\frac{4}{3}-\varepsilon+2\delta=1+6\varepsilon-2\delta\leqslant 1\] because \(\delta\geqslant 3\varepsilon\). Finally, if exactly one of \(W_{A}\) or \(W_{B}\) is isomorphic to \(\overleftrightarrow{K_{4}}\), then either \(T(W_{A}-x)=T(W_{A})=2\) (if \(W_{A}=\overleftrightarrow{K_{4}}\)) or \(T(W_{B}-z)=T(W_{B})=2\) (if \(W_{B}=\overleftrightarrow{K_{4}}\)). Therefore \(T(H)\geqslant T(W_{A}-x)+T(W_{B}-z)\geqslant T(W_{A})+T(W_{B})-1\) by Lemma 13, and so \[\rho(H)\leqslant\rho(W_{A})+\rho(W_{B})-\left(\frac{10}{3}+\varepsilon\right)+2+\delta.\] Now the non \(4\)-Ore digraph among \(W_{A}\), \(W_{B}\) has potential at most \(1\) and the other has potential \(\rho(\overleftrightarrow{K_{4}})=\frac{4}{3}+4\varepsilon-2\delta\). Thus \[\rho(H)\leqslant 1+\left(\frac{4}{3}+4\varepsilon-2\delta\right)-\left(\frac{10}{3}+\varepsilon\right)+2+\delta=1+3\varepsilon-\delta\leqslant 1\] because \(\delta\geqslant 3\varepsilon\). In all three cases, \(\rho(D)=\rho(H)\leqslant 1\), which is a contradiction. Hence \(D\) is \(3\)-connected. \(\Diamond\) **Claim 8**.: _If \(R\) is a collapsible subdigraph of \(D\), \(u,v\) are in the boundary of \(R\) and \(D\langle R\rangle\cup[u,v]\) is \(4\)-Ore, then there exists \(R^{\prime}\subseteq R\) such that_ (i) _either \(R^{\prime}\) is an Ore-collapsible subdigraph of \(D\), or_ (ii) _\(R^{\prime}\) is an induced subdigraph of \(R\), \(n(R^{\prime})<n(R)\), and there exist \(u^{\prime},v^{\prime}\) in \(\partial_{D}(R^{\prime})\) such that \(R^{\prime}\cup[u^{\prime},v^{\prime}]\) is \(4\)-Ore._ _Proof of claim._ If \(\partial(R)=\{u,v\}\), then \(R\) is Ore-collapsible and we are done. Suppose now that there exists \(w\in\partial(R)\) distinct from \(u\) and \(v\). Let \(H=D\langle R\rangle\cup[u,v]\). Observe that \(H\neq\overleftrightarrow{K_{4}}\) as \(u,v\) and \(w\) receive the same colour in any \(3\)-dicolouring of \(D\langle R\rangle\) because \(R\) is collapsible. Hence \(H\) is the Ore-composition of two \(4\)-Ore digraphs \(H_{1}\) (the digon side with replaced digon \([x,y]\)) and \(H_{2}\) (the split side with split vertex \(z\)). If \(u\) or \(v\) is in \(V(H_{2})\), then \(R^{\prime}=D\langle V(H_{1})\rangle\) with \(u^{\prime}=x,v^{\prime}=y\) satisfies (ii). Now we assume that \(u,v\in V(H_{1})\setminus V(H_{2})\).
By repeating this argument successively on \(H_{1}\), and then on the digon-side of \(H_{1}\), etc., either we find a subdigraph \(R^{\prime}\) satisfying (ii) or \(u\) and \(v\) are in a copy \(J\) of \(\overleftrightarrow{K_{4}}\) such that \(H\) is obtained by Ore-compositions between \(J\) and some \(4\)-Ore digraphs with \(J\) being always in the digon side. Observe that \(w\not\in V(J)\) because in any \(3\)-dicolouring of \(H\setminus[u,v]\), vertices in \(J\) receive pairwise different colours by Lemma 24, except \(u\) and \(v\) which receive the same colour. Hence at one step in the succession of Ore-compositions, \(w\) was in the split-side \(S\) when a digon \(e\) in \(J\) was replaced. However \(e\neq[u,v]\), so either \(u\) or \(v\) is not in \(e\). Suppose without loss of generality that \(e\) is not incident to \(v\). We claim that \(H^{\prime}=R-v\cup[u,w]\) is not \(3\)-dicolourable. Otherwise, let \(\varphi\) be a \(3\)-dicolouring of \(H^{\prime}\). Then \(\varphi\) is a \(3\)-dicolouring of \(H-v\) with \(H\) \(4\)-Ore, so vertices in \(J-v\) must receive pairwise different colours by Lemma 25. Let \(\varphi^{\prime}\) be a \(3\)-dicolouring of \(R\). Without loss of generality, we may assume that \(\varphi(s)=\varphi^{\prime}(s)\) for every \(s\in V(J-v)\). For every vertex \(s\) of \(R\), let \(\varphi^{\prime\prime}(s)=\varphi(s)\) if \(s\in S\), and let \(\varphi^{\prime\prime}(s)=\varphi^{\prime}(s)\) if \(s\not\in S\). Then \(\varphi^{\prime\prime}\) is a \(3\)-dicolouring of \(R\) but with \(\varphi^{\prime\prime}(u)\neq\varphi^{\prime\prime}(w)\), contradicting the fact that \(R\) is collapsible. This shows that \(H^{\prime}=R-v\cup[u,w]\) is not \(3\)-dicolourable. Hence \(R-v\cup[u,w]\) contains a \(4\)-dicritical digraph \(K\). By Lemma 23, \(R^{\prime}=D\langle V(K)\rangle\), as a subdigraph of \(H\), which is \(4\)-Ore, satisfies \(\frac{10}{3}n(R^{\prime})-m(R^{\prime})\geqslant\frac{10}{3}\). This implies that \(\frac{10}{3}n(K)-m(K)\geqslant\frac{4}{3}\). Note also that \(K\) is bidirected because \(R-v\) is bidirected. Thus, by Theorem 21, \(K\) is \(4\)-Ore. Hence \(R^{\prime}\) with \(u,w\) satisfies (ii). \(\Diamond\) **Claim 9**.: _If \(R\) is a subdigraph of \(D\) with \(n(R)<n(D)\) and \(u,v\in V(R)\), then \(R\cup[u,v]\) is \(3\)-dicolourable. As a consequence, there is no collapsible subdigraph in \(D\)._ _Proof of claim._ Assume for a contradiction that the statement is false. Consider a smallest induced subdigraph \(R\) for which the statement does not hold. Then \(K=R\cup[u,v]\) is 4-vertex-dicritical, that is, for every vertex \(w\in V(K)\), \(\vec{\chi}(K-w)<4=\vec{\chi}(K)\). Note that \(4\)-vertex-dicritical digraphs smaller than \(D\) satisfy the outcome of Theorem 8 since adding arcs does not increase the potential. Note that \(\rho(R)\leqslant\rho(K)+2+\delta\). If \(R\) is not collapsible, then, by Claim 5, \(\rho(R)\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta>\frac{11}{3}-\varepsilon-\delta\). But we also have \(\rho(R)\leqslant\rho(K)+2+\delta\leqslant\frac{10}{3}+4\varepsilon-\delta\) by Claim 1, which is a contradiction because \(5\varepsilon\leqslant\frac{1}{3}\). Hence \(R\) is collapsible. Let \(\varphi\) be a \(3\)-dicolouring of \(R\). Observe that \(\varphi(u)=\varphi(v)\) for otherwise \(R\cup[u,v]\) would be \(3\)-dicolourable. Let \(R^{\prime}\) be the dicritical extension of \(R\) with extender \(W\) and core \(X_{W}\). We have \(R^{\prime}=D\) and \(|X_{W}|=1\). Since \(R\) is collapsible, for every two vertices \(u^{\prime},v^{\prime}\) on the boundary of \(R\), \(R\cup[u^{\prime},v^{\prime}]\) is not \(3\)-dicolourable. 
Hence, free to consider \(u^{\prime},v^{\prime}\) instead of \(u,v\), we can suppose that \(u\) and \(v\) are on the boundary of \(R\). If \(K\) is \(4\)-Ore, then, by Claim 8 and by minimality of \(R\), we have that \(R\) is Ore-collapsible, and so has boundary of size \(2\). This contradicts the fact that \(D\) is \(3\)-connected. Hence \(K\) is not \(4\)-Ore. By Claim 2, we have \[1<\rho(D)=\rho(R^{\prime}) \leqslant \rho(W)+\rho(R)-\frac{10}{3}-\varepsilon+\delta\] \[\leqslant \rho(W)+(\rho(K)+2+\delta)-\frac{10}{3}-\varepsilon+\delta\] and as \(\rho(K)\leqslant 1\) (because it is not \(4\)-Ore and by minimality of \(D\)) we get \[1<1+\rho(W)-\left(\frac{4}{3}+\varepsilon-2\delta\right)\] that is \(\rho(W)>\frac{4}{3}+\varepsilon-2\delta\). But as \(W\) is smaller than \(D\), it satisfies Theorem 8. Thus, since \(\varepsilon-2\delta\geqslant\frac{-1}{3}\), \(W\) must be 4-Ore. Moreover, \(W\) must be isomorphic to \(\overleftrightarrow{K_{4}}\), for otherwise \(\rho(W)\) would be at most \(\frac{4}{3}+7\varepsilon-4\delta\), and \(\frac{4}{3}+7\varepsilon-4\delta\geqslant\rho(W)>\frac{4}{3}+\varepsilon-2\delta\) would contradict \(\delta\geqslant 3\varepsilon\). Hence \(\rho(W)=\rho(\overleftrightarrow{K_{4}})=\frac{4}{3}+4\varepsilon-2\delta\) and \(T(W-X_{W})=2=T(W)\). Thus, by Claim 2 and because \(\delta\geqslant 3\varepsilon\), we have \[1<\rho(D)\leqslant\rho(W)+\rho(K)+2+\delta-\frac{10}{3}-\varepsilon\leqslant \rho(K)+3\varepsilon-\delta\leqslant\rho(K)\leqslant 1,\] a contradiction. This implies that \(D\) does not contain any collapsible subdigraph. Indeed, assume for a contradiction that \(D\) contains a collapsible subdigraph \(R\), and let \(u,v\) be two vertices in its boundary. Then there exists a 3-dicolouring \(\varphi\) of \(R\cup[u,v]\), for which \(\partial(R)\) is not monochromatic, a contradiction. \(\Diamond\) **Claim 10**.: _If \(R\) is a subdigraph of \(D\) with \(n(R)<n(D)\) and \(u,v,u^{\prime},v^{\prime}\in V(R)\), then \(R\cup\{uv,u^{\prime}v^{\prime}\}\) is \(3\)-dicolourable. In particular, \(D\) contains no copy of \(\overleftrightarrow{K_{4}}\) minus two arcs._ Proof of claim.: Assume for a contradiction that the statement is false. Consider a smallest subdigraph \(R\) for which the statement does not hold. Then \(K=R\cup\{uv,u^{\prime}v^{\prime}\}\) is \(4\)-dicritical and smaller than \(D\), so \(\rho(K)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1. By Claim 9, \(R\) is not collapsible, so, by Claim 5, we have \(\rho(R)\geqslant\rho(D)+\frac{8}{3}-\varepsilon-\delta>\frac{11}{3}-\varepsilon-\delta\). But \(\rho(R)\leqslant\rho(K)+2+2\delta\leqslant\frac{10}{3}+4\varepsilon\), which is a contradiction as \(5\varepsilon+\delta\leqslant\frac{1}{3}\). \(\Diamond\) For any \(v\in V(D)\), we denote by \(n(v)\) its number of neighbours, that is \(n(v)=|N^{+}(v)\cup N^{-}(v)|\), and by \(d(v)\) its number of incident arcs, that is \(d(v)=d^{+}(v)+d^{-}(v)\). **Claim 11**.: _Vertices of degree \(6\) in \(D\) have either three or six neighbours._ Proof of claim.: Let \(x\) be a vertex of degree \(6\). If \(n(x)=4\), then let \(a,b,c,d\) be its neighbours such that \(N^{+}(x)=\{a,b,c\}\) and \(N^{-}(x)=\{a,b,d\}\). Consider \(D^{\prime}=D-x\cup dc\). By Claim 10, \(D^{\prime}\) has a \(3\)-dicolouring \(\varphi\). If \(|\varphi(N^{-}(x))|<3\), then choosing \(\varphi(x)\) in \(\{1,2,3\}\setminus\varphi(N^{-}(x))\), we obtain a \(3\)-dicolouring of \(D\), a contradiction. Hence \(\varphi(N^{-}(x))=\{1,2,3\}\). We set \(\varphi(x)=\varphi(d)\). 
As \(D\) is not \(3\)-dicolourable, \(D\) contains a monochromatic directed cycle \(C\). This cycle \(C\) must contain the arc \(dx\), and an out-neighbour \(z\) of \(x\). Since \(\varphi(a)\), \(\varphi(b)\) and \(\varphi(d)\) are all distinct, necessarily \(z=c\). But then \(C-x\cup dc\) is a monochromatic directed cycle in \(D^{\prime}\), a contradiction. Similarly, if \(n(x)=5\), let \(N^{+}(x)=\{a,b,c\}\) and \(N^{-}(x)=\{a,d,e\}\), and consider \(D^{\prime}=D-x\cup db\cup dc\). By Claim 10, \(D^{\prime}\) has a \(3\)-dicolouring \(\varphi\). If \(|\varphi(N^{-}(x))|<3\), then choosing \(\varphi(x)\) in \(\{1,2,3\}\setminus\varphi(N^{-}(x))\), we obtain a \(3\)-dicolouring of \(D\), a contradiction. Hence \(\varphi(N^{-}(x))=\{1,2,3\}\). We set \(\varphi(x)=\varphi(d)\). As \(D\) is not \(3\)-dicolourable, there is a monochromatic directed cycle \(C\), which must contain the arc \(dx\) and an out-neighbour \(z\) of \(x\). Note that \(z\) must be \(b\) or \(c\) because \(\varphi(a)\neq\varphi(d)\). Then \(C-x\cup dz\) is a monochromatic directed cycle in \(D^{\prime}\), a contradiction. **Claim 12**.: _There is no bidirected triangle containing two vertices of degree \(6\). In particular, \(D\) contains no emerald._ Proof of claim.: Suppose that \(D\langle\{x,y,z\}\rangle=\overleftrightarrow{K_{3}}\) and \(d(x)=d(y)=6\). By Claim 11, \(x\) and \(y\) have exactly three neighbours, and \(N[x]\neq N[y]\) because \(D\) contains no copy of \(\overleftrightarrow{K_{4}}\) minus two arcs by Claim 10. Let \(u\) (resp. \(v\)) be the unique neighbour of \(x\) (resp. \(y\)) distinct from \(y\) and \(z\) (resp. \(x\) and \(z\)). Consider \(D^{\prime}=D-\{x,y\}\cup[u,v]\). By Claim 9, \(D^{\prime}\) has a \(3\)-dicolouring \(\varphi\). Without loss of generality, suppose that \(\varphi(u)=1\) and \(\varphi(v)=2\). If \(\varphi(z)=1\) (resp. \(\varphi(z)=2\), \(\varphi(z)=3\)), we set \(\varphi(x)=2\) and \(\varphi(y)=3\) (resp. \(\varphi(x)=3\) and \(\varphi(y)=1\), \(\varphi(x)=2\) and \(\varphi(y)=1\)). In each case, this yields a \(3\)-dicolouring of \(D\), a contradiction. So now we know that \(D\) contains no emerald, and no diamond by Claim 7. **Claim 13**.: _If \(R\) is an induced subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\), then \(\rho(R)\geqslant\rho(D)+3+3\varepsilon-3\delta\), except if \(D-R\) contains a single vertex which has degree \(6\) in \(D\)._ Proof of claim.: Let \(R\) be an induced subdigraph of \(D\) with \(4\leqslant n(R)<n(D)\). By Claim 9, \(R\) is not collapsible. Let \(\varphi\) be a 3-dicolouring of \(R\), and let \(R^{\prime}\) be a dicritical extension of \(R\) with extender \(W\) and core \(X_{W}\) (with respect to \(\varphi\)). By (the consequence of) Claim 3, we know that \(\rho(R^{\prime})\geqslant\rho(D)\). Assume first that \(R^{\prime}\neq D\). Then, by Claims 3 and 2, \[\rho(D)+2-3\varepsilon+\delta\leqslant\rho(R^{\prime})\leqslant\rho(W)+\rho( R)-\frac{10}{3}-\varepsilon+\delta.\] Since \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1, we have \(\rho(R)\geqslant\rho(D)+4-6\varepsilon+2\delta\geqslant\rho(D)+3+3 \varepsilon-3\delta\), because \(1\geqslant 9\varepsilon-5\delta\). In the following we suppose that \(R^{\prime}=D\). We distinguish three cases depending on the cardinality of \(|X_{W}|\). * Assume first that \(|X_{W}|=2\). 
Then, by Claim 2 and Lemma 13, \[\rho(D)\leqslant\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-\frac{20}{3}-2\varepsilon +2+2\delta\] and, as \(\rho(W)\leqslant\frac{4}{3}+4\varepsilon-2\delta\) by Claim 1, we have \(\rho(R)\geqslant\rho(D)+\frac{10}{3}-2\varepsilon\geqslant\rho(D)+3+3 \varepsilon-3\delta\) because \(5\varepsilon-3\delta\leqslant\frac{1}{3}\). * Assume now that \(|X_{W}|=3\). If there is a vertex \(v\in V(D-R)\) with two out-neighbours (resp. two in-neighbours) in \(V(R)\) with the same colour for \(\varphi\), then * \(m(R^{\prime})\geqslant m(W)+m(R)-m(\overleftrightarrow{K_{|X_{W}|}})+1\) because \(v\) has two in- or out-neighbours in \(V(R)\) with the same colour for \(\varphi\), * \(T(R^{\prime})\geqslant T(W-X_{W})+T(R)\). It follows that \[\rho(D)\leqslant\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-(10+3\varepsilon-6) +3\delta-1\] and so \(\rho(R)\geqslant\rho(D)-\frac{4}{3}-4\varepsilon+2\delta+5+3\varepsilon-3\delta \geqslant\rho(D)+\frac{11}{3}-\varepsilon-\delta\geqslant\rho(D)+3+3 \varepsilon-3\delta\) because \(4\varepsilon-2\delta\leqslant\frac{2}{3}\). Now we assume that there is no vertex with two out-neighbours (resp. two in-neighbours) in \(R\) with the same colour for \(\varphi\). In other words, the in-degrees and out-degrees of vertices in \(D-R\) are the same in \(D\) and in \(W\). If \(W\) is not \(4\)-Ore, then by Claim 2 \[\rho(D)\leqslant\rho(R^{\prime})\leqslant\rho(W)+\rho(R)-(10+3\varepsilon-6) +3\delta\] and, as \(\rho(W)\leqslant 1\), we have \(\rho(R)\geqslant\rho(D)+3+3\varepsilon-3\delta\). Now suppose \(W\) is \(4\)-Ore. If \(W\neq\overleftrightarrow{K_{4}}\), then, by Lemma 20, \(W\) contains a diamond or an emerald disjoint from \(X_{W}\), and this gives a diamond or an emerald in \(D\) because the degrees of vertices in \(D-R\) are the same in \(D\) and in \(W\), which is a contradiction. Now suppose that \(W=\overleftrightarrow{K_{4}}\). Then \(D-R\) has a single vertex of degree \(6\) in \(D\). * Assume finally that \(|X_{W}|=1\). Since \(R\) is not collapsible by Claim 9, \(\varphi\) can be chosen so that \(\partial(R)\) is not monochromatic in \(\varphi\). Then, by Claim 4, \(\rho(R)\geqslant\rho(D)+3-3\varepsilon+\delta\geqslant\rho(D)+3+3 \varepsilon-3\delta\), because \(6\varepsilon-4\delta\leqslant 0\). \(\Diamond\) In \(D\), we say that a vertex \(v\) is a **simple in-neighbour** (resp. **simple out-neighbour**) of a vertex \(u\) if \(v\) is an in-neighbour (resp. out-neighbour) of \(u\) and \([u,v]\) is not a digon in \(D\). If \(v\) is a simple in-neighbour or simple out-neighbour of \(u\), we simply say that \(v\) is a **simple neighbour** of \(u\). **Claim 14**.: _Vertices of degree \(7\) have seven neighbours. In other words, every vertex of degree \(7\) has only simple neighbours._ Proof of claim.: Let \(x\) be a vertex of degree \(7\). We suppose, without loss of generality, that \(d^{-}(x)=3\) and \(d^{+}(x)=4\). If \(n(x)=4\), then \(x\) has a unique simple out-neighbour \(a\). As \(D\) is \(4\)-dicritical, \(D\setminus xa\) has a \(3\)-dicolouring \(\varphi\). But then every directed cycle is either in \(D\setminus xa\) or it contains \(xa\) and thus an in-neighbour \(t\) of \(x\). In the first case, it is not monochromatic because \(\varphi\) is a \(3\)-dicolouring of \(D\setminus xa\), and in the second case, it is not monochromatic because \([t,x]\) is a digon and so \(\varphi(t)\neq\varphi(x)\). Hence \(\varphi\) is a \(3\)-dicolouring of \(D\), a contradiction. If \(n(x)=5\), let \(N^{-}(x)=\{a,b,c\}\) and \(N^{+}(x)=\{a,b,d,e\}\). 
By Claim 10, \(D^{\prime}=D-x\cup\{cd,ce\}\) has a \(3\)-dicolouring \(\varphi\). If \(|\varphi(N^{-}(x))|<3\), then choosing \(\varphi(x)\) in \(\{1,2,3\}\setminus\varphi(N^{-}(x))\) gives a \(3\)-dicolouring of \(D\), a contradiction. If \(|\varphi(N^{-}(x))|=3\), then we set \(\varphi(x)=\varphi(c)\). Suppose for a contradiction that there is a monochromatic directed cycle \(C\) in \(D\) (with \(\varphi\)). Necessarily \(C\) contains \(x\) (since \(\varphi\) is a \(3\)-dicolouring of \(D-x\)) and so it must contain \(c\) and one vertex \(y\) in \(\{d,e\}\) because \(\varphi(a)\), \(\varphi(b)\), and \(\varphi(c)\) are all distinct. Then \(C-x\cup cy\) is a monochromatic directed cycle in \(D^{\prime}\), a contradiction. Therefore \(\varphi\) is a \(3\)-dicolouring of \(D\), a contradiction. If \(n(x)=6\), let \(N^{-}(x)=\{a,b,c\}\) and \(N^{+}(x)=\{a,d,e,f\}\). Consider \(D^{\prime}=D-x\cup\{bd,be,bf\}\). We first show that \(D^{\prime}\) is not \(3\)-dicolourable. Assume for a contradiction that there is a \(3\)-dicolouring \(\varphi\) of \(D^{\prime}\). If \(|\varphi(N^{-}(x))|<3\), then choosing \(\varphi(x)\) in \(\{1,2,3\}\setminus\varphi(N^{-}(x))\) gives a \(3\)-dicolouring of \(D\), a contradiction. Hence \(|\varphi(N^{-}(x))|=3\). We set \(\varphi(x)=\varphi(b)\). Since \(D\) is not \(3\)-dicolourable, there exists a monochromatic directed cycle \(C\) in \(D\) (with \(\varphi\)). Necessarily \(C\) contains \(x\) (since \(\varphi\) is a \(3\)-dicolouring of \(D-x\)) and so it must contain \(b\) and one vertex \(y\) in \(\{d,e,f\}\) because \(\varphi(a)\), \(\varphi(b)\), and \(\varphi(c)\) are all distinct. Then \(C-x\cup by\) is a monochromatic directed cycle in \(D^{\prime}\), a contradiction. Hence \(\varphi\) is a \(3\)-dicolouring of \(D\), again a contradiction. Therefore \(D^{\prime}\) is not \(3\)-dicolourable, and so it contains a \(4\)-dicritical digraph \(\tilde{D}\), smaller than \(D\). If \(\tilde{D}\) does not contain all three arcs \(bd,be,bf\), then it can be obtained from a proper induced subdigraph of \(D\) by adding at most two arcs, and so it is \(3\)-dicolourable by Claim 10, a contradiction. Hence \(\{b,d,e,f\}\subseteq V(\tilde{D})\). Now consider \(U=D\langle V(\tilde{D})\cup\{x\}\rangle\). \(\bullet\) Assume first that \(a\not\in V(U)\) or \(c\not\in V(U)\). Then we have * \(n(U)=n(\tilde{D})+1\), * \(m(U)\geqslant m(\tilde{D})+1\) and * \(T(U)\geqslant T(\tilde{D}-b)\geqslant T(\tilde{D})-1\) by Lemma 13. Hence \[\rho(U) \leqslant\rho(\tilde{D})+\frac{10}{3}+\varepsilon-1+\delta\] \[\leqslant\frac{4}{3}+4\varepsilon-2\delta+\frac{10}{3}+\varepsilon -1+\delta\quad\text{by Claim 1,}\] \[=1+\frac{8}{3}+5\varepsilon-\delta\] \[<\rho(D)+\frac{8}{3}+5\varepsilon-\delta\] \[\leqslant\rho(D)+3+3\varepsilon-3\delta\qquad\qquad\qquad\text{ because }\frac{1}{3}\geqslant 2\delta+2\varepsilon.\] Hence by Claim 13, \(D-U\) has a single vertex of degree \(6\) (in \(D\)), which must be either \(a\) or \(c\). Then we have * \(n(D)=n(\tilde{D})+2\), * \(m(D)\geqslant m(\tilde{D})-3+11\) and * \(T(D)\geqslant T(\tilde{D}-b)\geqslant T(\tilde{D})-1\). Thus \[\rho(D) \leqslant\rho(\tilde{D})+2\left(\frac{10}{3}+\varepsilon\right)-8+\delta\] \[\leqslant\left(\frac{4}{3}+4\varepsilon-2\delta\right)-\frac{4}{ 3}+2\varepsilon+\delta\quad\text{by Claim 1,}\] \[\leqslant 1\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{because }6\varepsilon-\delta\leqslant 1.\] This is a contradiction. 
\(\bullet\) Assume now that \(a,c\in V(U)\). Then we have * \(n(U)=n(\tilde{D})+1\), * \(m(U)\geqslant m(\tilde{D})+4\) and * \(T(U)\geqslant T(\tilde{D}-b)\geqslant T(\tilde{D})-1\) by Lemma 13. Thus \[\rho(U) \leqslant\rho(\tilde{D})+\frac{10}{3}+\varepsilon-4+\delta\] \[\leqslant\left(\frac{4}{3}+4\varepsilon-2\delta\right)+\frac{10}{ 3}+\varepsilon-4+\delta\quad\text{by Claim 1,}\] \[\leqslant 1\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{because }5\varepsilon-\delta\leqslant\frac{1}{3}.\] Together with the consequence of Claim 3, we get that \(\rho(D)\leqslant\rho(U)\leqslant 1\), a contradiction. \(\Diamond\) The \(8^{+}\)**-valency** of a vertex \(v\), denoted by \(\nu(v)\), is the number of arcs incident to \(v\) and a vertex of degree at least \(8\). Let \(D_{6}\) be the subdigraph of \(D\) induced by the vertices of degree \(6\) incident to digons. Let us describe the connected components of \(D_{6}\) and their neighbourhoods. Remember that vertices of degree \(7\) are incident to no digon by Claim 14, and so they do not have neighbours in \(V(D_{6})\). If \(v\) is a vertex in \(D_{6}\), we define its **neighbourhood valency** to be the sum of the \(8^{+}\)-valencies of its neighbours of degree at least \(8\). We denote the neighbourhood valency of \(v\) by \(\nu_{N}(v)\). **Claim 15**.: _If \([x,y]\) is a digon and both \(x\) and \(y\) have degree \(6\), then either_ (i) _the two neighbours of_ \(y\) _distinct from_ \(x\) _have degree at least_ \(8\)_, or_ (ii) _the two neighbours of_ \(x\) _distinct from_ \(y\) _have degree at least_ \(8\) _and_ \(\nu_{N}(x)\geqslant 4\)_._ Proof of claim.: Let \([x,y]\) be a digon in \(D\) with \(d(x)=d(y)=6\). By Claim 11, \(n(x)=n(y)=3\). Let \(u\) and \(v\) be the two neighbours of \(x\) different from \(y\). By Claim 14, \(u\) and \(v\) have degree \(6\) or at least \(8\). If \(u\) and \(v\) are linked by a digon, then by Claim 12, \(u\) and \(v\) do not have degree \(6\), so they have degree at least \(8\). Moreover \(\nu(u)\geqslant 2\) and \(\nu(v)\geqslant 2\). Thus \(\nu_{N}(x)=\nu(u)+\nu(v)\geqslant 4\) and (ii) holds. Henceforth, we may assume that \(u\) and \(v\) are not linked by a digon. Let \(D^{\prime}\) be the digraph obtained by removing \(x\) and \(y\) and identifying \(u\) and \(v\) into a single vertex \(u\star v\). We claim that \(D^{\prime}\) is not \(3\)-dicolourable. To see that, suppose for contradiction that there exists a \(3\)-dicolouring \(\varphi\) of \(D^{\prime}\). Then set \(\varphi(u)=\varphi(v)=\varphi(u\star v)\), choose \(\varphi(y)\) in \(\{1,2,3\}\setminus\varphi(N(y)\setminus\{x\})\), and finally choose \(\varphi(x)\) in \(\{1,2,3\}\setminus\{\varphi(u\star v),\varphi(y)\}\). One can easily see that \(\varphi\) is now a \(3\)-dicolouring of \(D\), a contradiction. This proves that \(D^{\prime}\) is not \(3\)-dicolourable and so it contains a \(4\)-dicritical digraph \(\tilde{D}\), which must contain \(u\star v\) because every proper subdigraph of \(D\) is \(3\)-dicolourable. Let \(R\) be the subdigraph of \(D\) induced by \((V(\tilde{D})\setminus\{u\star v\})\cup\{u,v,x\}\). We have * \(n(R)=n(\tilde{D})+2\), * \(m(R)\geqslant m(\tilde{D})+4\) and * \(T(R)\geqslant T(\tilde{D}-u\star v)+1\geqslant T(\tilde{D})\) because \([x,u]\) is a digon, and by Lemma 13. 
If \(\tilde{D}\) is not \(4\)-Ore, then \(\rho(\tilde{D})\leqslant 1\) by minimality of \(D\), and so \[\rho(R) \leqslant\rho(\tilde{D})+2\left(\frac{10}{3}+\varepsilon\right)-4\] \[\leqslant 1+\frac{8}{3}+2\varepsilon\] \[<\rho(D)+3+3\varepsilon-3\delta\qquad\qquad\text{because } \varepsilon-3\delta\geqslant-\frac{1}{3}.\] Similarly, if \(\tilde{D}\) is \(4\)-Ore but not \(\overleftrightarrow{K_{4}}\), then \[\rho(R) \leqslant\rho(\tilde{D})+2\left(\frac{10}{3}+\varepsilon\right)-4\] \[\leqslant\left(\frac{4}{3}+7\varepsilon-4\delta\right)+\frac{8}{3}+2\varepsilon\quad\text{by Lemma 22,}\] \[=1+3+9\varepsilon-4\delta\] \[<\rho(D)+3+9\varepsilon-4\delta\] \[\leqslant\rho(D)+3+3\varepsilon-3\delta\qquad\qquad\text{because } \delta\geqslant 6\varepsilon.\] In both cases (that is when \(\tilde{D}\) is not \(\overleftrightarrow{K_{4}}\)), by Claim 13, \(D-R\) is a single vertex of degree \(6\), namely \(y\). Then every neighbour \(w\) of \(y\) different from \(x\) has degree at least \(6\) in \(\tilde{D}\) (because \(\tilde{D}\) is \(4\)-dicritical) and so has degree at least \(8\) in \(D\) and (i) holds. Assume now that \(\tilde{D}\) is a copy of \(\overleftrightarrow{K_{4}}\). Let us denote by \(a,b,c\) the vertices of \(\tilde{D}\) different from \(u\star v\). Suppose for a contradiction that \(u\) has degree \(6\). Then \(u\) has exactly three neighbours by Claim 11. If \(|N(u)\cap\{a,b,c\}|=2\), then \(D\langle\{u,a,b,c\}\rangle\) is a copy of \(\overleftrightarrow{K_{4}}\) minus a digon, contradicting Claim 9. If \(|N(u)\cap\{a,b,c\}|\leqslant 1\), then \(v\) must be adjacent to at least two vertices of \(\{a,b,c\}\) with a digon, and so \(D\langle\{v,a,b,c\}\rangle\) contains a copy of \(\overleftrightarrow{K_{4}}\) minus a digon, contradicting Claim 9. Hence \(u\) has degree at least \(8\), and by symmetry so does \(v\). Moreover \(D\langle\{a,b,c\}\rangle\) is a bidirected triangle, and so by Claim 12, at least two of these vertices have degree at least \(8\) (remember that vertices of degree \(7\) are in no digon by Claim 14). Hence at least four arcs between \(\{u,v\}\) and \(\{a,b,c\}\) are incident to two vertices of degree at least \(8\). In other words, \(\nu_{N}(x)=\nu(u)+\nu(v)\geqslant 4\), so (ii) holds. \(\Diamond\) **Claim 16**.: _Let \(C\) be a connected component of \(D_{6}\). Then \(C\) is one of the following (see Figure 3):_ 1. _a single vertex, or_ 2. _a bidirected path on two vertices, or_ 3. _a bidirected path on three vertices, whose extremities have neighbourhood valency at least_ \(4\)_, or_ 4. _a bidirected star on four vertices, whose non-central vertices have neighbourhood valency at least_ \(4\)_._ Proof of claim.: First observe that \(C\) does not contain a bidirected path \(\llbracket x,y,z,w\rrbracket\) on four vertices, because otherwise, by Claim 15 applied on \([y,z]\), either \(y\) or \(z\) has two neighbours of degree at least \(8\), a contradiction. Observe also that \(C\) contains no bidirected triangle by Claim 12. Moreover, if \(\llbracket x,y,z\rrbracket\) is a bidirected path in \(C\) on three vertices, then by Claim 15 applied on \([x,y]\) and on \([z,y]\), both \(x\) and \(z\) have neighbourhood valency at least \(4\). The statement of the claim follows. An arc \(xy\) is said to be **out-chelou** if 1. \(yx\not\in A(D)\), and 2. \(d^{+}(x)=3\), and 3. \(d^{-}(y)=3\), and 4. there exists \(z\in N^{-}(y)\setminus N^{+}(y)\) distinct from \(x\). 
Symmetrically, we say that an arc \(xy\) is **in-chelou** if \(yx\) is out-chelou in the digraph obtained from \(D\) by reversing every arc. See Figure 4 for an example of an out-chelou arc.

Figure 3: The possible connected components of \(D_{6}\).

Figure 4: An example of an out-chelou arc \(xy\).

**Claim 17**.: _There is no out-chelou arc and no in-chelou arc in \(D\)._ Proof of claim.: By directional duality, it suffices to prove that \(D\) has no out-chelou arcs. Let \(xy\) be an out-chelou arc with \(z\in N^{-}(y)\setminus(N^{+}(y)\cup\{x\})\). Consider \(D^{\prime}=D-\{x,y\}\cup\{zz^{\prime}\mid z^{\prime}\in N^{+}(y)\setminus N^ {-}(y)\}\). We claim that \(D^{\prime}\) is not \(3\)-dicolourable. To see that, suppose for contradiction that there is a \(3\)-dicolouring \(\varphi\) of \(D^{\prime}\). As \(d^{+}(x)=3\), we can choose \(\varphi(x)\) in \(\{1,2,3\}\setminus\varphi(N^{+}(x)\setminus\{y\})\) to obtain a \(3\)-dicolouring of \(D-y\). If \(|\varphi(N^{-}(y))|<3\), then choosing \(\varphi(y)\) in \(\{1,2,3\}\setminus\varphi(N^{-}(y))\) gives a \(3\)-dicolouring of \(D\), a contradiction. Hence \(|\varphi(N^{-}(y))|=3\). Set \(\varphi(y)=\varphi(z)\). Suppose there is a monochromatic directed cycle \(C\) in \(D\). It must contain \(y\) and thus \(z\), its unique in-neighbour with its colour. Let \(z^{\prime}\) be the out-neighbour of \(y\) in \(C\). It must be in \(N^{+}(y)\setminus N^{-}(y)\), so \(zz^{\prime}\) is an arc in \(D^{\prime}\). Thus \(C-y\cup zz^{\prime}\) is a monochromatic directed cycle in \(D^{\prime}\), a contradiction. Therefore \(\varphi\) is a \(3\)-dicolouring of \(D\), a contradiction. Hence \(D^{\prime}\) is not \(3\)-dicolourable. Consequently, \(D^{\prime}\) contains a \(4\)-dicritical digraph \(\tilde{D}\), which is smaller than \(D\) and contains \(z\), for otherwise \(\tilde{D}\) would be a proper subdigraph of \(D\) and hence \(3\)-dicolourable. Consider \(U=D\langle V(\tilde{D})\cup\{y\}\rangle\). We have * \(n(U)=n(\tilde{D})+1\), * \(m(U)\geqslant m(\tilde{D})+1\) and * \(T(U)\geqslant T(\tilde{D}-z)\geqslant T(\tilde{D})-1\) by Lemma 13. First if \(\tilde{D}\) is not 4-Ore, then by minimality of \(D\) we have \(\rho(\tilde{D})\leqslant 1\), so \[\rho(U)\leqslant\rho(\tilde{D})+\frac{10}{3}+\varepsilon-1+\delta\leqslant \frac{10}{3}+\varepsilon+\delta\leqslant\frac{11}{3}-\varepsilon-\delta\] because \(2\varepsilon+2\delta\leqslant\frac{1}{3}\). Next if \(\tilde{D}\) is 4-Ore, but not isomorphic to \(\overleftrightarrow{K_{4}}\), then \(\rho(\tilde{D})\leqslant\frac{4}{3}+7\varepsilon-4\delta\) by Lemma 22, and \[\rho(U)\leqslant\rho(\tilde{D})+\frac{10}{3}+\varepsilon-1+\delta\leqslant \frac{11}{3}+8\varepsilon-3\delta\leqslant\frac{11}{3}-\varepsilon-\delta\] because \(9\varepsilon-2\delta\leqslant 0\). Finally if \(\tilde{D}\) is isomorphic to \(\overleftrightarrow{K_{4}}\), then we have \(T(U)\geqslant T(\tilde{D}-z)\geqslant T(\tilde{D})\) and \(\rho(\tilde{D})=\frac{4}{3}+4\varepsilon-2\delta\). So the same computation yields \[\rho(U)\leqslant\rho(\tilde{D})+\frac{10}{3}+\varepsilon-1\leqslant\frac{11}{3}+5\varepsilon-2\delta\leqslant\frac{11}{3}-\varepsilon-\delta\] because \(6\varepsilon-\delta\leqslant 0\). In all cases, we have \(\rho(U)\leqslant\frac{11}{3}-\varepsilon-\delta\). This contradicts Claim 5 because \(U\) is not collapsible by Claim 9. \(\Diamond\) We now use the discharging method. For every vertex \(v\), let \(\sigma(v)=\frac{\delta}{|C|}\) if \(v\) has degree \(6\) and is in a component \(C\) of \(D_{6}\) of size at least \(2\), and \(\sigma(v)=0\) otherwise. 
Clearly \(T(D)\) is at least the number of connected components of size at least \(2\) of \(D_{6}\), so \(\sum_{v\in V(D)}\sigma(v)\leqslant\delta T(D)\). We define the **initial charge** of \(v\) to be \(w(v)=\frac{10}{3}+\varepsilon-\frac{d(v)}{2}-\sigma(v)\). We have \[\rho(D)\leqslant\sum_{v\in V(D)}w(v).\] We now redistribute this total charge according to the following rules: * **(R1)** A vertex of degree \(6\) incident to no digon sends \(\frac{1}{12}-\frac{\varepsilon}{8}\) to each of its neighbours. * **(R2)** A vertex of degree \(6\) incident to digons sends \(\frac{2}{d(v)-\nu(v)}(-\frac{10}{3}+\frac{d(v)}{2}-\varepsilon)\) to each neighbour \(v\) of degree at least \(8\) (so \(\frac{1}{d(v)-\nu(v)}(-\frac{10}{3}+\frac{d(v)}{2}-\varepsilon)\) via each arc of the digon). * **(R3)** A vertex of degree \(7\) with \(d^{-}(v)=3\) (resp. \(d^{+}(v)=3\)) sends \(\frac{1}{12}-\frac{\varepsilon}{8}\) to each of its in-neighbours (resp. out-neighbours). For every vertex \(v\), let \(w^{*}(v)\) be the final charge of \(v\). **Claim 18**.: _If \(v\) has degree at least \(8\), then \(w^{*}(v)\leqslant 0\)._ Proof of claim.: Let \(v\) be a vertex of degree at least \(8\). If \(v\) is not adjacent to a vertex of degree at most \(7\), then \(w^{*}(v)=w(v)=\frac{10}{3}+\varepsilon-\frac{d(v)}{2}\leqslant 0\) (because \(\varepsilon\leqslant\frac{2}{3}\)). Otherwise, \(d(v)-\nu(v)\geqslant 1\) and \[\frac{1}{d(v)-\nu(v)}\left(-\frac{10}{3}+\frac{d(v)}{2}-\varepsilon\right) \geqslant\frac{1}{d(v)}\left(-\frac{10}{3}+\frac{d(v)}{2}- \varepsilon\right)\] \[\geqslant\frac{1}{12}-\frac{\varepsilon}{8}.\] Thus \(v\) receives at most \(\frac{1}{d(v)-\nu(v)}(-\frac{10}{3}+\frac{d(v)}{2}-\varepsilon)\) per arc incident with a vertex of degree \(6\) or \(7\). Since there are \(d(v)-\nu(v)\) such arcs, \(w^{*}(v)\leqslant w(v)-\frac{10}{3}-\varepsilon+\frac{d(v)}{2}=0\). **Claim 19**.: _If \(v\) has degree \(7\), then \(w^{*}(v)\leqslant 0\)._ Proof of claim.: By Claim 14, \(v\) has seven neighbours. Without loss of generality, let us suppose that \(d^{-}(v)=3\) and \(d^{+}(v)=4\). By Claim 17, the in-neighbours of \(v\) cannot have out-degree \(3\). In particular, they do not have degree \(6\), and if they have degree \(7\), they do not send anything to \(v\) by Rule (R3). Hence \(v\) receives at most four times the charge \(\frac{1}{12}-\frac{\varepsilon}{8}\) by (R1) or (R3), and it sends three times this charge by (R3). Hence \[w^{*}(v) \leqslant w(v)+\frac{1}{12}-\frac{\varepsilon}{8}\] \[=-\frac{1}{12}+\frac{7}{8}\varepsilon\] and the result comes because \(\varepsilon\leqslant\frac{2}{21}\). **Claim 20**.: _If \(v\) is a vertex of degree \(6\) incident to no digon, then \(w^{*}(v)\leqslant 0\)._ Proof of claim.: The vertex \(v\) sends \(\frac{1}{12}-\frac{\varepsilon}{8}\) to each of its neighbours by (R1), and it receives no charge as all its in-neighbours (resp. out-neighbours) have out-degree (resp. in-degree) at least \(4\), by Claim 17. As a consequence, \[w^{*}(v)=w(v)-6\left(\frac{1}{12}-\frac{\varepsilon}{8}\right)=-\frac{1}{6}+ \frac{7\varepsilon}{4}\] and the result comes because \(\varepsilon\leqslant\frac{2}{21}\). **Claim 21**.: _Let \(v\) be a vertex in \(D_{6}\) having at least two neighbours of degree at least \(8\). Then \(w^{*}(v)\leqslant 0\). 
Moreover, if \(v\) is not an isolated vertex in \(D_{6}\) and \(\nu_{N}(v)\geqslant 4\), then \(w^{*}(v)\leqslant-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\)._ Proof of claim.: Observe that \(v\) receives no charge and sends the following charge to each of its neighbours \(u\) with degree at least \(8\): \[\frac{2}{d(u)-\nu(u)}\left(-\frac{10}{3}-\varepsilon+\frac{d(u)}{2}\right) \geqslant\frac{2}{d(u)}\left(-\frac{10}{3}-\varepsilon+\frac{d(u)} {2}\right)\] \[=1-\frac{2}{d(u)}\left(\frac{10}{3}+\varepsilon\right)\] \[\geqslant\frac{2}{8}\left(-\frac{10}{3}-\varepsilon+4\right)\] \[=\frac{1}{6}-\frac{\varepsilon}{4}.\] Assume first that \(v\) is isolated in \(D_{6}\). By Claim 14, its three neighbours do not have degree \(7\), and, since \(v\) is isolated in \(D_{6}\), they do not have degree \(6\) either; so they have degree at least \(8\). Thus \(v\) sends three times at least \(\frac{1}{6}-\frac{\varepsilon}{4}\), and so \[w^{*}(v)\leqslant w(v)-3\left(\frac{1}{6}-\frac{\varepsilon}{4}\right)=-\frac {1}{6}+\frac{7}{4}\varepsilon\] and the result comes because \(\varepsilon\leqslant\frac{2}{21}\). Assume now that \(v\) is in a connected component \(C\) of \(D_{6}\) of size at least \(2\). By Claim 16, \(\sigma(v)\geqslant\frac{\delta}{4}\), so \(w(v)\leqslant\frac{1}{3}+\varepsilon-\frac{\delta}{4}\). Moreover it sends two times at least \(\frac{1}{6}-\frac{\varepsilon}{4}\). Hence \[w^{*}(v)\leqslant\left(\frac{1}{3}+\varepsilon-\frac{\delta}{4}\right)-2 \left(\frac{1}{6}-\frac{\varepsilon}{4}\right)=\frac{3}{2}\varepsilon-\frac{ \delta}{4}\] and the result comes because \(\delta\geqslant 6\varepsilon\). This shows the first part of the statement. We will now prove the second part of the statement. Assume that \(v\) is not an isolated vertex in \(D_{6}\) and \(\nu_{N}(v)\geqslant 4\). Let \(u_{1}\) and \(u_{2}\) be the two neighbours of \(v\) with degree at least 8. For every \(i\in\{1,2\}\) we have \[\frac{2}{d(u_{i})-\nu(u_{i})}\left(-\frac{10}{3}-\varepsilon+\frac{d(u_{i})}{ 2}\right)=1-\frac{1}{d(u_{i})-\nu(u_{i})}\left(\frac{20}{3}+2\varepsilon-\nu( u_{i})\right)\] **Case 1:** \(\nu(u_{i})\geqslant 7\) for some \(i\in\{1,2\}\). Without loss of generality suppose \(i=1\). Then we have \[1-\frac{1}{d(u_{1})-\nu(u_{1})}\left(\frac{20}{3}+2\varepsilon-\nu(u_{1}) \right)\geqslant 1\] because \(\nu(u_{1})\geqslant 7\geqslant\frac{20}{3}+2\varepsilon\) as \(\varepsilon\leqslant\frac{1}{6}\). Then the total charge sent by \(v\) is at least \(1\), and thus \[w^{*}(v)\leqslant w(v)-1\leqslant\left(\frac{1}{3}+\varepsilon-\frac{\delta}{ 4}\right)-1=-\frac{2}{3}+\varepsilon-\frac{\delta}{4}.\] Thus, we have \(w^{*}(v)\leqslant-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\) because \(\varepsilon,\delta\geqslant 0\). **Case 2:** \(\nu(u_{1}),\nu(u_{2})\leqslant 6\). Let \(f:[0,6]\to\mathbb{R}\) be the function defined by \[f(x)=\frac{2}{8-x}\left(-\frac{10}{3}-\varepsilon+\frac{8}{2}\right)=1-\frac{1} {8-x}\left(\frac{20}{3}+2\varepsilon-x\right)\] for every \(x\in[0,6]\). Observe that \(f\) is non decreasing and convex on \([0,6]\) because \(-\frac{10}{3}-\varepsilon+\frac{8}{2}\geqslant 0\). For \(i=1,2\), we have \[\frac{2}{d(u_{i})-\nu(u_{i})}\left(-\frac{10}{3}-\varepsilon+\frac{d(u_{i})}{2 }\right)\geqslant f(\nu(u_{i}))\] because the function \(d\mapsto 1-\frac{1}{d-\nu(u_{i})}\left(\frac{20}{3}+2\varepsilon-\nu(u_{i})\right)\) is non decreasing on \([8,+\infty[\) as \(\nu(u_{i})\leqslant 6\leqslant\frac{20}{3}+2\varepsilon\). Hence the charge sent by \(v\) to \(u_{i}\) is at least \(f(\nu(u_{i}))\). 
By hypothesis we have \(\nu_{N}(v)=\nu(u_{1})+\nu(u_{2})\geqslant 4\). It follows that the total charge sent by \(v\) is at least \[f(\nu(u_{1}))+f(\nu(u_{2})) \geqslant 2f\left(\frac{\nu(u_{1})+\nu(u_{2})}{2}\right)\] by convexity of \(f\) \[\geqslant 2f(2)\] because \(f\) is non decreasing \[=\frac{4}{9}-\frac{2}{3}\varepsilon.\] Hence \[w^{*}(v)\leqslant w(v)-\left(\frac{4}{9}-\frac{2}{3}\varepsilon\right) \leqslant\left(\frac{1}{3}+\varepsilon-\frac{\delta}{4}\right)-\frac{4}{9}+ \frac{2}{3}\varepsilon=-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4},\] showing the second part of the statement. \(\Diamond\) **Claim 22**.: _If \(C\) is a connected component of \(D_{6}\), then \(\sum_{v\in V(C)}w^{*}(v)\leqslant 0\)._ Proof of claim.: If \(C\) has a unique vertex \(v\), then, by Claim 21, we have \(w^{*}(v)\leqslant 0\) as wanted. If \(C\) has two vertices \(x\) and \(y\), then, again by Claim 21, \(w^{*}(x),w^{*}(y)\leqslant 0\), and so \(w^{*}(x)+w^{*}(y)\leqslant 0\). If \(C\) is a bidirected path \(\llbracket x,y,z\rrbracket\), then, by Claim 16, both \(x\) and \(z\) have neighbourhood valency at least \(4\), and so by Claim 21, \(w^{*}(x),w^{*}(z)\leqslant-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\). Moreover, \(y\) sends at least \(\frac{2}{8}(-\frac{10}{3}+4-\varepsilon)=\frac{1}{6}-\frac{\varepsilon}{4}\) to its neighbour out of \(C\). Hence \[w^{*}(y)\leqslant w(y)-\left(\frac{1}{6}-\frac{\varepsilon}{4}\right) \leqslant\frac{1}{3}+\varepsilon-\frac{\delta}{3}-\frac{1}{6}+\frac{ \varepsilon}{4}=\frac{1}{6}+\frac{5}{4}\varepsilon-\frac{\delta}{3}.\] Altogether, we get that \[w^{*}(x)+w^{*}(y)+w^{*}(z)\leqslant\frac{1}{6}+\frac{5}{4}\varepsilon-\frac{ \delta}{3}+2\left(-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\right)=- \frac{1}{18}+\frac{55}{12}\varepsilon-\frac{5}{6}\delta\leqslant 0\] because \(\delta\geqslant 6\varepsilon\). Finally, if \(C\) is a bidirected star with centre \(x\) and three other vertices \(y,z,w\), then \(w^{*}(x)\leqslant w(x)=\frac{1}{3}+\varepsilon-\frac{\delta}{4}\). Moreover, each of \(y,z,w\) has neighbourhood valency at least \(4\) by Claim 16 and so has final charge at most \(-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\) by Claim 21. Hence \[w^{*}(x)+w^{*}(y)+w^{*}(z)+w^{*}(w)\leqslant\frac{1}{3}+\varepsilon-\frac{ \delta}{4}+3\left(-\frac{1}{9}+\frac{5}{3}\varepsilon-\frac{\delta}{4}\right) \leqslant 6\varepsilon-\delta\leqslant 0\] because \(\delta\geqslant 6\varepsilon\). \(\Diamond\) As a consequence of these last claims, we have \(\rho(D)\leqslant\sum_{v\in V(D)}w(v)=\sum_{v\in V(D)}w^{*}(v)\leqslant 0\), contradicting \(\rho(D)>1\). This proves Theorem 8. ## 4 An upper bound on \(o_{k}(n)\) In this section, we show that, for every fixed \(k\), there are infinitely many values of \(n\) such that \(o_{k}(n)\leqslant(2k-\frac{7}{2})n\). The proof is strongly based on the proof of [1, Theorem 4.4], which shows \(o_{k}(n)\leqslant(2k-3)n\) for every \(k,n\) (with \(n\) large enough). For \(k=4\), the construction implies in particular that there is a \(4\)-dicritical oriented graph with \(76\) vertices and \(330\) arcs, and there are infinitely many \(4\)-dicritical oriented graphs with \(m/n\leqslant 9/2\). **Proposition 26**.: _Let \(k\geqslant 3\) be an integer. For infinitely many values of \(n\in\mathbb{N}\), there exists a \(k\)-dicritical oriented graph \(\vec{G}_{k}\) on \(n\) vertices with at most \((2k-\frac{7}{2})n\) arcs._ Proof.: Let us fix \(n_{0}\in\mathbb{N}\). 
We will show, by induction on \(k\), that there exists a \(k\)-dicritical oriented graph \(\vec{G}_{k}\) on \(n\) vertices with at most \((2k-\frac{7}{2})n\) arcs, such that \(n\geqslant n_{0}\). When \(k=3\), the result is known ([1, Corollary 4.3]). We briefly describe the construction for completeness. Start from any orientation of an odd cycle on \(2n_{0}+1\) vertices. Then for each arc \(xy\) in this orientation, add a directed triangle \(\vec{C}_{3}\) and every arc from \(y\) to \(V(\vec{C}_{3})\) and every arc from \(V(\vec{C}_{3})\) to \(x\) (see Figure 5). This gadget forces \(x\) and \(y\) to have different colours in every \(2\)-dicolouring. Since we started from an orientation of an odd cycle, the result is a 3-dicritical oriented graph on \(4(2n_{0}+1)\) vertices and \(10(2n_{0}+1)\) arcs. Let us fix \(k\geqslant 4\) and assume that there exists such a \((k-1)\)-dicritical oriented graph \(\vec{G}_{k-1}\) on \(n_{k-1}\geqslant n_{0}\) vertices with \(m_{k-1}\leqslant(2(k-1)-\frac{7}{2})n_{k-1}\) arcs. We start from any tournament \(T\) on \(k\) vertices. Then we add, for each arc \(xy\) of \(T\), a copy \(\vec{G}_{k-1}^{xy}\) of \(\vec{G}_{k-1}\), all arcs from \(y\) to \(\vec{G}_{k-1}^{xy}\) and all arcs from \(\vec{G}_{k-1}^{xy}\) to \(x\). Figure 6 illustrates a possible construction of \(\vec{G}_{4}\), where \(T\) is the transitive tournament on \(4\) vertices.

Figure 5: A 3-dicritical oriented graph with \(\frac{5}{2}n\) arcs.

Figure 6: A 4-dicritical oriented graph with at most \(\frac{9}{2}n\) arcs.

Let \(\vec{G}_{k}\) be the resulting oriented graph. By construction, \(n_{k}=|V(\vec{G}_{k})|\) and \(m_{k}=|A(\vec{G}_{k})|\) satisfy: \[n_{k} =k+\binom{k}{2}n_{k-1}\] \[m_{k} =\binom{k}{2}+\binom{k}{2}\times 2\times n_{k-1}+\binom{k}{2}\times m _{k-1}\] \[\leqslant\binom{k}{2}+\binom{k}{2}\left(2+2(k-1)-\frac{7}{2} \right)n_{k-1}\] \[=\binom{k}{2}+\binom{k}{2}\left(2k-\frac{7}{2}\right)n_{k-1}\] \[=\binom{k}{2}+\left(2k-\frac{7}{2}\right)(n_{k}-k)\] \[\leqslant\left(2k-\frac{7}{2}\right)n_{k}\] where in the last inequality we used \(k\left(2k-\frac{7}{2}\right)\geqslant\binom{k}{2}\), which holds when \(k\geqslant 2\). We will now prove that \(\vec{G}_{k}\) is indeed \(k\)-dicritical. We first prove that \(\vec{\chi}(\vec{G}_{k})\geqslant k\). Assume that there exists a \((k-1)\)-dicolouring \(\alpha\) of \(\vec{G}_{k}\). Then there exist \(x,y\in V(T)\) with \(xy\in A(T)\) such that \(\alpha(x)=\alpha(y)\). Since \(\vec{\chi}(\vec{G}_{k-1})=k-1\), there exists \(z\in V(\vec{G}_{k-1}^{xy})\) such that \(\alpha(z)=\alpha(x)\). But then \((x,y,z,x)\) is a monochromatic directed triangle in \(\alpha\): a contradiction. Let us now prove that \(\vec{\chi}(\vec{G}_{k}\setminus\{uv\})\leqslant k-1\) for every arc \(uv\in A(\vec{G}_{k})\). This implies immediately that \(\vec{\chi}(\vec{G}_{k})=k\) and shows the result. Consider first an arc \(uv\) in \(A(T)\). We colour each copy \(\vec{G}_{k-1}^{xy}\) of \(\vec{G}_{k-1}\) with a \((k-1)\)-dicolouring of \(\vec{G}_{k-1}\). We then choose a distinct colour for every vertex in \(T\), except \(u\) and \(v\) which receive the same colour. This results in a \((k-1)\)-dicolouring of \(\vec{G}_{k}\setminus\{uv\}\). Consider now an arc \(uv\) of \(\vec{G}_{k-1}^{xy}\) for some \(xy\in A(T)\). Because \(\vec{G}_{k-1}\) is \((k-1)\)-dicritical, there exists a \((k-2)\)-dicolouring \(\xi\) of \(\vec{G}_{k-1}^{xy}\setminus\{uv\}\). 
Hence we colour \(\vec{G}_{k-1}^{xy}\setminus\{uv\}\) with \(\xi\), every other copy of \(\vec{G}_{k-1}\) with a \((k-1)\)-dicolouring of \(\vec{G}_{k-1}\), and we choose a distinct colour for every vertex in \(T\), except \(x\) and \(y\) which both receive colour \(k-1\). This results in a \((k-1)\)-dicolouring of \(\vec{G}_{k}\setminus\{uv\}\). Consider finally an arc \(uv\) from \(u\in V(T)\) to \(v\in V(\vec{G}_{k-1}^{uy})\) (the case of \(u\in V(\vec{G}_{k-1}^{xv})\) and \(v\in V(T)\) being symmetric). Because \(\vec{G}_{k-1}\) is dicritical, there exists a \((k-1)\)-dicolouring \(\gamma\) of \(\vec{G}_{k-1}^{uy}\) in which \(v\) is the only vertex coloured \(k-1\). Hence, we colour \(\vec{G}_{k-1}^{uy}\) with \(\gamma\), every other copy of \(\vec{G}_{k-1}\) with a \((k-1)\)-dicolouring of \(\vec{G}_{k-1}\), and we choose a distinct colour for every vertex in \(T\), except \(u\) and \(y\) which both receive colour \(k-1\). This results in a \((k-1)\)-dicolouring of \(\vec{G}_{k}\setminus\{uv\}\).
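As a small sanity check of the arithmetic in the construction above (not part of the proof), the following Python sketch iterates the recurrences \(n_{k}=k+\binom{k}{2}n_{k-1}\) and \(m_{k}=\binom{k}{2}(1+2n_{k-1}+m_{k-1})\) from the base case; the helper name `construction_size` and the choice of starting values are ours. For \(k=4\) and \(n_{0}=1\) it reproduces the \(4\)-dicritical oriented graph with \(76\) vertices and \(330\) arcs mentioned at the beginning of this section, and it confirms the bound \(m_{k}\leqslant(2k-\frac{7}{2})n_{k}\) for small values of \(k\).

```python
from math import comb

def construction_size(k: int, n0: int = 1) -> tuple[int, int]:
    """Vertex/arc counts of the recursive construction in Proposition 26.

    Base case k = 3: an orientation of an odd cycle on 2*n0 + 1 vertices,
    with a directed-triangle gadget (3 vertices, 9 arcs) added per arc,
    giving 4*(2*n0 + 1) vertices and 10*(2*n0 + 1) arcs.
    """
    n, m = 4 * (2 * n0 + 1), 10 * (2 * n0 + 1)
    for j in range(4, k + 1):
        # n_j = j + C(j,2) * n_{j-1}
        # m_j = C(j,2) + C(j,2) * 2 * n_{j-1} + C(j,2) * m_{j-1}
        n, m = j + comb(j, 2) * n, comb(j, 2) * (1 + 2 * n + m)
    return n, m

# The 4-dicritical instance mentioned in this section (n0 = 1).
assert construction_size(4) == (76, 330)
assert 330 <= (2 * 4 - 7 / 2) * 76  # 330 <= 4.5 * 76 = 342

# The bound m_k <= (2k - 7/2) n_k for a few small values of k and n0.
for k in range(3, 9):
    for n0 in range(1, 4):
        n, m = construction_size(k, n0)
        assert m <= (2 * k - 7 / 2) * n
```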
2310.02466
Parameterized Model-checking of Discrete-Timed Networks and Symmetric-Broadcast Systems
We study the complexity of the model-checking problem for parameterized discrete-timed systems with arbitrarily many anonymous and identical processes, with and without a distinguished "controller", and communicating via synchronous rendezvous. Our framework extends the seminal work from German and Sistla on untimed systems by adding discrete-time clocks to processes. For the case without a controller, we show that the systems can be efficiently simulated -- and vice versa -- by systems of untimed processes that communicate via rendezvous and symmetric broadcast, which we call "RB-systems". Symmetric broadcast is a novel communication primitive that allows all processes to synchronize at once; however, it does not distinguish between sending and receiving processes. We show that the parameterized model-checking problem for safety specifications is pspace-complete, and for liveness specifications it is decidable in exptime. The latter result is proved using automata theory, rational linear programming, and geometric reasoning for solving certain reachability questions in a new variant of vector addition systems called "vector rendezvous systems". We believe these proof techniques are of independent interest and will be useful in solving related problems. For the case with a controller, we show that the parameterized model-checking problems for RB-systems and systems with asymmetric broadcast as a primitive are inter-reducible. This allows us to prove that for discrete timed-networks with a controller the parameterized model-checking problem is undecidable for liveness specifications. Our work exploits the intimate connection between parameterized discrete-timed systems and systems of processes communicating via broadcast, providing a rare and surprising decidability result for liveness properties of parameterized timed-systems, as well as extending work from untimed systems to timed systems.
Benjamin Aminof, Sasha Rubin, Francesco Spegni, Florian Zuleger
2023-10-03T22:26:44Z
http://arxiv.org/abs/2310.02466v2
# Parameterized model-checking of discrete-timed networks and symmetric-broadcast systems ###### Abstract. We study the complexity of the model-checking problem for parameterized discrete-timed systems with arbitrarily many anonymous and identical contributors, with and without a distinguished "controller" process. Processes communicate via synchronous rendezvous. Our work extends the seminal work on untimed systems [German, Sistla: Reasoning about Systems with Many Processes. J. ACM 39(3), 1992] by the addition of discrete-time clocks, thus allowing one to model more realistic protocols. For the case without a controller, we show that the systems can be efficiently simulated -- and vice versa -- by systems of untimed processes that communicate via rendezvous and symmetric broadcast, which we call "RB-systems". Symmetric broadcast is a novel communication primitive that, like ordinary asymmetric broadcast, allows all processes to synchronize; however, it has no distinction between sender/receiver processes. We show that the complexity of the parameterized model-checking problem for safety specifications is pspace-complete, and for liveness specifications it is decidable and in exptime. The latter result is proved using automata theory, rational linear programming, and geometric reasoning for solving certain reachability questions in a new variant of vector addition systems called "vector rendezvous systems". We believe these proof techniques are of independent interest and will be useful in solving related problems. For the case with a controller, we show that the parameterized model-checking problems for RB-systems and systems with asymmetric broadcast as a primitive are inter-reducible. This allows us to prove that for discrete timed-networks with a controller the parameterized model-checking problem is undecidable for liveness specifications. Our work exploits the intimate and fruitful connection between parameterized discrete-timed systems and systems of processes communicating via broadcast. This allows us to provide a rare and surprising decidability result for liveness properties of parameterized timed-systems, as well as extending work from untimed systems to timed systems. _Key words and phrases:_ Parameterized systems, timed-systems, broadcast communication, decidability, formal languages ## 1. Introduction We systematically study the complexity of the model-checking problem for parameterized discrete-timed systems that communicate via synchronous rendezvous. These systems consist of arbitrarily many anonymous and identical contributors, with and without a distinguished "controller" process. The parameterized model-checking problem asks whether a given specification holds no matter the number of identical contributors. This is in contrast to traditional model-checking that considers a fixed number of contributors. Our model subsumes the classic case of untimed systems [10] -- processes are finite-state programs with (discrete-time) clocks that guard transitions. Timed processes can be used to model more realistic circuits and protocols than untimed processes [1, 1]. We study the computational complexity of the parameterized model-checking problem (PMCP) for safety and liveness specifications. 
Safety properties are specified by formulas of linear-temporal logic over finite traces (\(\mathsf{LTLf}\)) and nondeterministic finite word automata (NFW), and liveness properties are specified by formulas of linear-temporal logic (\(\mathsf{LTL}\)) and nondeterministic Büchi automata (NBW). We show that without a controller safety is pspace-complete while liveness is in exptime; and with a controller safety is non-elementary while liveness is undecidable. In more detail: 1. For systems without a controller, we prove that the PMCP for safety specifications is pspace-complete -- in fact, pspace-hardness even holds for a fixed specification (known as program complexity) and for a fixed program (known as specification complexity). 2. For systems without a controller, we prove that the PMCP for liveness specifications can be solved in exptime. This is a rare decidability result for liveness properties of any non-trivial model of parameterized timed systems. The algorithms presented make use of an interesting and intricate combination of automata theory, rational linear programming, and geometric reasoning for solving certain reachability questions in a new variant of vector addition systems called 'vector rendezvous systems'. We believe these techniques are of independent interest and will be useful in solving related problems. 3. For systems with a controller, we prove that the PMCP for liveness specifications is undecidable. This follows from a new reduction between timed-systems with a controller and systems with asymmetric broadcast, and the known undecidability of the PMCP of the latter for liveness properties. The novel reduction also allows us to recover the known result that for discrete-time systems with a controller and safety specifications, the PMCP has non-elementary complexity. Although this does not completely close the picture (the complexity without a controller for liveness properties is pspace-hard and in exptime), we remark that the parameterized verification problem for liveness properties is notoriously hard to analyze: apart from a single simple cutoff result which deals with processes communicating using conjunctive and disjunctive guards [10], no decidability result for liveness specifications of timed systems was known before this work. To solve the PMCP for these systems we introduce _rendezvous-broadcast systems (RB-systems)_ -- systems of finite-state processes communicating via _rendezvous_ and _symmetric broadcast_. Unlike asymmetric broadcast, which can distinguish between the sender and the receivers, with symmetric broadcast there is no designated sender, and thus it can naturally model the passage of discrete time, i.e., every symmetric broadcast can be thought of as a tick of the discrete-time clocks. We show that RB-systems and timed-networks with the same number of processes can efficiently simulate each other. Thus, in particular, the PMCP of RB-systems and timed networks are polynomial-time inter-reducible. Furthermore, we show that for the case with a controller, RB-systems (and thus timed-networks) are polynomial-time inter-reducible to systems with asymmetric broadcasts. We remark that this equivalence does not hold for the case without a controller (indeed, we show that without a controller PMCP for liveness specifications is decidable, whereas it is known to be undecidable for systems with asymmetric broadcast [1]). We thus consider the introduction of the notion of a symmetric broadcast to be an interesting communication primitive in itself. 
We then study the PMCP for RB-systems and in fact establish the itemized results above for RB-systems. Our work is motivated by the program of establishing the complexity of verification of parameterized systems. Consequently, our pspace and exptime bounds suggest potential implementations using SMT-solvers [1]. ### Techniques The bulk of the work concerns the case without a controller, in which we first establish the decidability of the PMCP for safety properties and prove this problem to be pspace-complete (already for a fixed specification). The decidability of the PMCP for safety properties of timed networks has already been known [1]; however, it was obtained using well-structured transition systems and only gives a _non-elementary_ upper bound, which we improve to pspace. We obtain the result for safety properties by constructing a reachability unwinding of the states of the processes of the parameterized system, where we compute precisely those states the system can be in after exactly \(n\) broadcasts; we show that the reachability unwinding has a lasso-shape and can be constructed in pspace, which allows us to obtain the upper complexity bound. We provide a matching lower-bound by reducing the termination problem of Boolean programs to non-reachability in RB-systems. We then prove an exptime upper-bound for the PMCP for liveness properties. This result is considerably more challenging than the upper-bound for safety properties. One source of difficulty is the need to be able to tell whether some rendezvous transition can be executed a bounded (as opposed to unbounded) number of times between two broadcasts -- a property which is not \(\omega\)-regular. In order to deal with this issue we work with B-automata [1], which generalize Büchi automata by equipping them with counters. A key step in constructing a B-automaton that recognizes the computations of the system that satisfy a liveness specification requires establishing the existence (or lack thereof) of certain cycles in the runs of the parameterized system. Alas, the intricate interaction between broadcasts and rendezvous transitions makes this problem very complicated. In particular, known classical results concerning pairwise rendezvous without broadcast [10] do not extend to our case. We solve this problem in two steps: we first obtain a precise characterization (in terms of a set of linear equations) for reachability of configurations between two broadcasts; we then use this characterization in an iterative procedure for establishing the existence of cycles with broadcasts. To obtain the characterization for reachability mentioned above we introduce vector rendezvous systems (VRS) and their continuous relaxation, called continuous vector rendezvous systems (CVRS). These systems are counter abstractions -- which view configurations as vectors of counters that only store the number of processes at every process state, but not the identity of the processes -- and constitute a new variant of the classical notion of vector addition systems [13]. ### Related Work In this work we parameterize by the number of processes (other choices are possible, e.g., by the spatial environment in which mobile agents move [1]). Our model assumes arbitrarily many finite-state anonymous identical processes, possibly with a distinguished controller. Finite-state programs are commonly used to model processes, especially in the parameterized setting [1, 2, 3, 4, 5]. 
The parameterized model-checking problem (PMCP) has been studied for large-scale distributed systems such as swarm robotics [1, 2, 10], hardware design [11], and multi-threaded programs [13]. The PMCP easily becomes undecidable for many types of communication primitives because of the unbounded number of processes (even for safety properties and untimed processes) [14]. Techniques for solving the general case include identifying decidable/tractable subcases (as we do) and designing algorithms that are not guaranteed to terminate, e.g., with acceleration and approximation techniques [1, 2]. The border between decidability and undecidability for various models is surveyed in [1], including token-passing systems [1, 2], rendezvous and broadcast [1, 3, 4], guarded protocols [10], ad hoc networks [11]. The seminal work [12] shows that the PMCP of (untimed) systems that communicate via rendezvous is in ptime without a controller, and is expspace-complete with a controller. We now compare our work against others that explored the PMCP for timed processes (and timed or untimed specifications). In [1], Bertrand and Fournier consider the case of dynamic networks of timed Markov decision processes (TMDPs), a model for agents mixing probabilistic and timed behavior, whose transitions can be guarded by simple conditions on a global real-valued clock variable, and which can communicate through broadcast messages (a broadcaster is distinguished from the receivers) that can force all processes to transition in one computational step. They explore the decidability of several variants of the parameterized probabilistic reachability problem in networks of TMDPs. Interestingly, they observe that in some settings the problem is undecidable if the number of processes is fixed but unknown, while it becomes decidable (but not primitive recursive) by allowing processes to join and leave the network with given probabilities along the executions of the system. Their undecidability results are based on the possibility, using message broadcasting, of distinguishing a process that acts as controller of the network. In [2] Abdulla and Jonsson prove that model checking safety properties for timed networks with a controller process is decidable, provided that each process has at most one (real-valued) clock variable. They assume that several processes can take synchronous transitions at once by using a rendezvous primitive, and that a distinguished process can act as controller of the network. The decidability carries over to discrete timed networks with one clock variable per process, and in case all processes are equal, i.e. the controller process is just another copy of the user processes. Our work proves decidability of safety and liveness properties independently from the number of clock variables in the network, provided that clocks range over a discrete domain and there is no controller process in the network. In [1] Abdulla et al. extend the decidability result of [2] to timed networks with rendezvous and a controller process, assuming each process has any finite number of discrete clocks. They prove decidability by exhibiting a non-elementary complexity upper bound for the problem, and of course this also proves decidability of the same problem for timed networks in the absence of a controller process. In our setting, without a controller process, we are able to prove a much smaller upper bound for the complexity of the model checking problem restricted to safety properties, as well as the decidability of liveness properties. 
In case there is a controller process, our results show that the PMCP for liveness properties for timed networks is undecidable. In [1] Abdulla et al. consider the PMCP for a model of timed processes communicating through ad hoc wireless networks. In this model, an arbitrary number of processes use either real- or discrete-valued clocks and can connect among themselves in topologies that are defined by some given family of graphs such as bounded path graphs or cliques. Given a family of graphs, each timed process can communicate with its direct neighbors using broadcast or rendezvous messages. In the context of timed networks using rendezvous on a clique graph topology, which is the setting closest to our work, they focus on rendezvous communications and rephrase the decidability results already presented in [1, 1].

In [11] Isenberg provides techniques for finding invariants for safety properties of timed networks consisting of processes with continuous clocks, a controller, shared global variables and broadcast communication. In contrast, our decision procedures are based on automata theory. In [1], Abdulla et al. study the PMCP for reachability specifications for Timed Petri Nets. There the authors prove that the problem is pspace-complete provided that each process carries only one clock variable. Interestingly, this is the same complexity we prove for the discrete case; thus the extension of the problem to the continuous time setting (under the limitation of one clock variable per process) falls in the same complexity class as the discrete time setting. In this work we go a step further by providing an upper bound on the complexity of the PMCP for liveness specifications in the discrete time setting.

In [12] Spalazzi and Spegni study the parameterized model-checking problem of Metric Interval Temporal Logic formulae against networks of conjunctive or disjunctive timed automata of arbitrary size. They prove that, in the case of timed networks with either all conjunctive or all disjunctive Boolean guards and a controller process, a cutoff exists, allowing one to reduce controlled timed networks of arbitrary size to timed networks of some known size, provided that process locations have no time invariants forcing progress. This implies that the PMCP is decidable under such conditions. In contrast, in our setting time is discrete, there is no controller process, communication is by rendezvous, and specifications are qualitative (i.e., LTL).

Finally, we remark that simulations between parameterized systems with different communication primitives, including asymmetric broadcast and rendezvous, are systematically studied in [1]. We also contribute to that line of work, involving the newly introduced symmetric-broadcast primitive. Our work establishes for the first time an intimate two-way connection between discrete-timed systems and systems communicating via broadcast: symmetric broadcast when there is no controller, and asymmetric broadcast when there is a controller. We are certain that this intimate connection will prove useful in transferring results between these two types of systems, and in discovering new results also when one considers communication primitives other than rendezvous.

## 2. Definitions and Preliminaries

For the sake of self-containedness, let us now recall various notions from automata theory that will be used throughout this work.
**Notation.** Let \(\mathbb{N}\) denote the set of positive integers, let \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\), let \(\mathbb{Q}\) denote the set of rational numbers, \(\mathbb{Q}_{>0}\) the set of positive rational numbers, and \(\mathbb{Q}_{\geq 0}\) the non-negative ones. Let \([n,m]\), for \(n<m\in\mathbb{N}\), denote the set \(\{n,n+1,\ldots,m\}\), and if \(m=\infty\) then \([n,m]=\{n,n+1,\ldots\}\). Let \([n]\) denote the set \([1,n]\). Finally, for \(m\leq n\in\mathbb{N}\), we call \(\mu:[m]\to 2^{[n]}\) a _partition_ of \([n]\) if \(i\neq j\) implies \(\mu(i)\cap\mu(j)=\emptyset\), and \([n]=\cup_{i\in[m]}\mu(i)\). For an _alphabet_ \(\Sigma\) we denote by \(\Sigma^{*}\) (resp. \(\Sigma^{\omega}\)) the set of all finite (resp. infinite) _words_ over \(\Sigma\). The _concatenation_ of two words \(u\) and \(w\) is written \(uw\) or \(u\cdot w\). The _length_ of a word \(u\) is denoted by \(|u|\), and if \(u\) is infinite then we write \(|u|=\infty\).

### Transition Systems

A _labeled transition system (LTS)_ is a tuple \[L=\langle\mathit{AP},\Sigma,S,I,R,\lambda\rangle\] where

* \(\mathit{AP}\) is a finite set of _atomic propositions_ (also called _state labels_),
* \(\Sigma\) is an alphabet of _edge-labels_,
* \(S\) is a set of _states_ (in the following we assume that \(S\subseteq\mathbb{N}\)),
* \(I\subseteq S\) is a set of _initial states_,
* \(R\subseteq S\times\Sigma\times S\) is an _edge relation_,
* and \(\lambda\subseteq S\times\mathit{AP}\) is a _labeling relation_ that associates with each state the atomic propositions that hold in it.

We will often use functional notation and write \(\lambda(s)\) for the set of atoms \(p\) such that \((s,p)\in\lambda\). In case all components of \(L\) are finite, we say that \(L\) is a _finite LTS_; and otherwise we say that it is an _infinite LTS_. An edge \(e=(s,a,s^{\prime})\in R\) is also called a _transition_, and may be written \(s\xrightarrow{a}s^{\prime}\). The element \(s\) is called the _source_ (denoted \(\mathsf{src}(e)\)) of \(e\), and \(s^{\prime}\) is called its _destination_ (denoted \(\mathsf{dst}(e)\)), and \(a\) is called the _label_ of \(e\). Given \(\sigma\in\Sigma\) and a state \(s\in S\), we say that \(\sigma\) is _enabled_ in \(s\) if there is some \(s^{\prime}\in S\) such that \(s\xrightarrow{\sigma}s^{\prime}\). A _path_ \(\pi\) is a (finite or infinite) sequence \(e_{1}e_{2}\dots\) of transitions such that for every \(1\leq i<|\pi|\) we have that \(\mathsf{dst}(e_{i})=\mathsf{src}(e_{i+1})\), where \(|\pi|\in\mathbb{N}\cup\{\infty\}\) is the _length_ of \(\pi\). We extend the notations \(\mathsf{src}(\pi)\) and \(\mathsf{dst}(\pi)\) to paths (the latter only for finite paths) in the natural way. Extend \(\lambda\) to paths as follows: if \(\pi=e_{1}e_{2}\dotsm e_{k}\) is finite then \(\lambda(\pi)=\lambda(\mathsf{src}(e_{1}))\lambda(\mathsf{src}(e_{2}))\dotsm\lambda(\mathsf{src}(e_{k-1}))\lambda(\mathsf{src}(e_{k}))\lambda(\mathsf{dst}(e_{k}))\), and if \(\pi=e_{1}e_{2}\dots\) is infinite then \(\lambda(\pi)=\lambda(\mathsf{src}(e_{1}))\lambda(\mathsf{src}(e_{2}))\dotsm\). A _run_ is a path whose source is an initial state. The set of runs of an LTS \(M\) is written _runs_\((M)\). A state \(s\in S\) is _reachable_ if it is the destination of some run. The _size_ of a finite LTS is defined to be the sum of the number of states and the number of transitions.
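To keep the notation concrete, the following is a minimal Python sketch of a finite LTS together with the computation of its reachable states; the encoding (tuples for edges, a dict for the labeling) is ours and is not part of the formal development.

```python
from dataclasses import dataclass

@dataclass
class LTS:
    """A finite LTS <AP, Sigma, S, I, R, lambda> as in Section 2.1."""
    atoms: set        # AP: atomic propositions
    alphabet: set     # Sigma: edge labels
    states: set       # S
    initial: set      # I, a subset of S
    edges: set        # R: triples (source, label, destination)
    labeling: dict    # lambda: state -> set of atoms holding in it

def reachable(lts):
    """States reachable from the initial states, i.e., the initial
    states together with the destinations of runs."""
    seen, frontier = set(lts.initial), list(lts.initial)
    while frontier:
        s = frontier.pop()
        for (src, _a, dst) in lts.edges:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen
```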
Let \(L=\langle\mathit{AP},\Sigma,S,I,R,\lambda\rangle\) and \(L^{\prime}=\langle\mathit{AP},\Sigma,S^{\prime},I^{\prime},R^{\prime},\lambda^{\prime}\rangle\) be two LTSs over the same set of atomic propositions \(\mathit{AP}\) and the same set of edge-labels \(\Sigma\). We now define a few notions of equivalence relating such LTSs. A relation \(M\subseteq S\times S^{\prime}\) is a _simulation_ if (i) for every \(q\in I\) there is \(q^{\prime}\in I^{\prime}\) such that \((q,q^{\prime})\in M\), and (ii) \((q,q^{\prime})\in M\) implies \(\lambda(q)=\lambda^{\prime}(q^{\prime})\) and for every \((q,\sigma,r)\in R\) there exists \(r^{\prime}\) with \((q^{\prime},\sigma,r^{\prime})\in R^{\prime}\) such that \((r,r^{\prime})\in M\). In this case we say that \(L^{\prime}\) _simulates_ \(L\). Say that \(M\) is a _bisimulation_ if \(M\) is a simulation and \(\{(q^{\prime},q):(q,q^{\prime})\in M\}\subseteq S^{\prime}\times S\) is a simulation. We say that runs \(\pi,\pi^{\prime}\), of \(L\) and \(L^{\prime}\) respectively, of the same length are _equi-labeled_ if for every \(i<|\pi|\), if \(\pi_{i}=(s,\sigma,t)\) and \(\pi_{i}^{\prime}=(s^{\prime},\sigma^{\prime},t^{\prime})\) we have that \(\lambda(s)=\lambda^{\prime}(s^{\prime})\), \(\lambda(t)=\lambda^{\prime}(t^{\prime})\), and \(\sigma=\sigma^{\prime}\). It follows immediately from the definitions that if \(L^{\prime}\) simulates \(L\) then for every run in \(L\) there exists an equi-labeled run in \(L^{\prime}\).

We will use the following operations: Let \(\mathit{AP}\) be a set of atomic propositions. Given a proposition \(a\in\mathit{AP}\) and a (finite or infinite) sequence \(\pi\in(2^{AP})^{*}\cup(2^{AP})^{\omega}\), we denote by \((\pi)_{a}\) the subsequence of \(\pi\) that consists of all sets that contain \(a\). Given a subset \(\mathit{AP}^{\prime}\subseteq\mathit{AP}\) and a (finite or infinite) sequence \(\pi\in(2^{AP})^{*}\cup(2^{AP})^{\omega}\), we denote by \(\pi|_{\mathit{AP}^{\prime}}\) the sequence that we obtain from \(\pi\) by intersecting every set with \(\mathit{AP}^{\prime}\).

### Automata

We will use nondeterministic automata with three types of acceptance conditions, i.e., ordinary reachability acceptance (on finite input words), Büchi acceptance (on infinite words), and a boundedness condition on a single counter (on infinite words). Since automata are like LTSs (except that they include an acceptance condition, and exclude the state labeling function), we will use LTS terminology and notation that is independent of the labeling, e.g., source, destination, path and run. We remark that inputs to the automata will be edge-labelings of paths in certain LTSs, and thus the input alphabet for automata is also denoted \(\Sigma\). A _nondeterministic finite word automaton (NFW)_ is a tuple \[\mathcal{A}=\langle\Sigma,S,I,R,F\rangle\] where

* \(\Sigma\) is the _input alphabet_,
* \(S\) is the finite set of _states_,
* \(I\subseteq S\) are the _initial states_,
* \(R\subseteq S\times\Sigma\times S\) is the _transition relation_, and
* \(F\subseteq S\) are the _final states_.

Given a finite word \(\alpha=\alpha_{1}\alpha_{2}\cdots\alpha_{k}\) over the alphabet \(\Sigma\), we say that \(\rho=\rho_{1}\rho_{2}\ldots\rho_{k}\) is a _run of \(\mathcal{A}\) over \(\alpha\)_ if, for all \(i\in[k]\), the label of the transition \(\rho_{i}\) is \(\alpha_{i}\). The run \(\rho\) is _accepting_ if \(\mathsf{dst}(\rho_{k})\in F\). A word is _accepted_ by \(\mathcal{A}\) if there is an accepting run of \(\mathcal{A}\) over it.
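As an illustration of the acceptance condition just defined, here is a small sketch that checks NFW membership by tracking the set of states reachable after each prefix of the input word; the names and the encoding of transitions are ours.

```python
def nfw_accepts(word, initial, final, edges):
    """Check NFW acceptance; `edges` is a set of (src, letter, dst) triples.
    We track all states reachable from `initial` after each prefix of
    `word`; the word is accepted iff some final state is reached at the end."""
    current = set(initial)
    for letter in word:
        current = {dst for (src, a, dst) in edges
                   if src in current and a == letter}
        if not current:
            return False
    return bool(current & set(final))

# A two-state NFW over {p, q} accepting exactly the words matching p*q.
edges = {("s0", "p", "s0"), ("s0", "q", "s1")}
assert nfw_accepts(["p", "p", "q"], {"s0"}, {"s1"}, edges)
assert not nfw_accepts(["p", "q", "q"], {"s0"}, {"s1"}, edges)
```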
The _language_ of \(\mathcal{A}\) is the set of words that it accepts. A _nondeterministic Büchi word automaton (NBW)_ is a tuple \(\mathcal{A}=\langle\Sigma,S,I,R,G\rangle\), which is like an NFW except that \(F\) is replaced by a _Büchi set_ \(G\). Unlike NFW, which run over finite words, an NBW runs over infinite words. Hence, given an infinite word \(\alpha=\alpha_{1}\alpha_{2}\cdots\) over the alphabet \(\Sigma\), we say that \(\rho=\rho_{1}\rho_{2}\ldots\) is a _run of \(\mathcal{A}\) over \(\alpha\)_ if, for all \(i\in\mathbb{N}\), the label of the transition \(\rho_{i}\) is \(\alpha_{i}\). The run \(\rho\) induces a set \(inf(\rho)\) consisting of those states \(q\in S\) such that \(q=\mathsf{src}(\rho_{i})\) for infinitely many \(i\). The run \(\rho\) is _accepting_ if \(inf(\rho)\cap G\neq\emptyset\). The definitions of when a word is _accepted_, and of the _language_ of \(\mathcal{A}\), are as for NFW.

An _NBW with one counter_, or _B-automaton_ for short, is a tuple \[\langle\Sigma,S,I,R,G,cc\rangle\] which is like an NBW except that it has an additional _counter command function_ \(cc:R\to\{\mathsf{inc},\mathsf{reset},\mathsf{skip}\}\) which associates with each transition a counter-update operation. An infinite run \(\rho=\rho_{1}\rho_{2}\cdots\) of a B-automaton induces a set \(ctr(\rho)=\{c_{i}:i\in\mathbb{N}\}\), where \(c_{1}=0\) and \[c_{i+1}=\begin{cases}c_{i}&\text{if }cc(\rho_{i})=\mathsf{skip}\\ c_{i}+1&\text{if }cc(\rho_{i})=\mathsf{inc}\\ 0&\text{if }cc(\rho_{i})=\mathsf{reset}.\end{cases}\] The run \(\rho\) is _accepting_ if it satisfies the Büchi condition and its counter values are bounded, i.e., if \(inf(\rho)\cap G\neq\emptyset\) and \(\exists n\in\mathbb{N}\) s.t. \(c<n\) for all \(c\in ctr(\rho)\). If \(G=S\) (i.e., if there is effectively no Büchi acceptance condition), then we say that the Büchi set is _trivial_. B-automata were defined in [1], and in the general case may have multiple counters, some of which should be bounded and some of which should be unbounded. Since one can easily simulate a Büchi acceptance condition with a single counter (see [1]), our definition of a B-automaton given above is a special case of the B-automata of [1] with two counters. The proof of Lemma 2.1 below, which also applies to these general multi-counter automata, was communicated to us by Nathanaël Fijalkow (as far as we know it is a "folk theorem" for which we could not find a clear statement or proof in the literature).

**Lemma 2.1**.: _Deciding whether the language of a B-automaton is not empty can be solved in ptime._

Proof.: We reduce the problem to the emptiness problem for Streett automata1, which is in ptime [1].

Footnote 1: We remind the reader that a Streett automaton is like an NBW except that its acceptance condition is not a single Büchi set, but a family of pairs of sets \(\{(B_{1},G_{1}),(B_{2},G_{2}),\ldots,(B_{k},G_{k})\}\), and a run \(\rho\) is accepting if for all \(i\in[k]\) we have that \(inf(\rho)\cap B_{i}\neq\emptyset\) implies \(inf(\rho)\cap G_{i}\neq\emptyset\).

Given a B-automaton \(\mathcal{A}=\langle\Sigma,S,I,R,G,cc\rangle\), build a Streett automaton \(\mathcal{A}^{\prime}\) whose transition relation is like that of \(\mathcal{A}\) except that it also stores the most recent counter command in the state, and whose acceptance condition encodes the following properties: 'infinitely often see a state in \(G\)' and 'infinitely many increments implies infinitely many resets'.
Formally, \(\mathcal{A}^{\prime}\) has states \(S\times\{\mathsf{skip},\mathsf{inc},\mathsf{reset}\}\); initial states \(I\times\{\mathsf{reset}\}\); transitions of the form \(((s,c),\alpha,(s^{\prime},c^{\prime}))\) where \((s,\alpha,s^{\prime})\in R\) and \(cc(s,\alpha,s^{\prime})=c^{\prime}\); and the acceptance condition containing the two pairs \((S,G)\) and \((S\times\{\mathsf{inc}\},S\times\{\mathsf{reset}\})\). Then, the language of \(\mathcal{A}\) is non-empty if and only if the language of \(\mathcal{A}^{\prime}\) is non-empty. To see this, note that accepting runs in \(\mathcal{A}\) induce accepting runs in \(\mathcal{A}^{\prime}\), since a run with infinitely many "increments" that also has a bound on the counter must have infinitely many "resets". On the other hand, an accepting run \(\rho_{1}\) of \(\mathcal{A}^{\prime}\) can be transformed, by "pumping out" loops, into another accepting run \(\rho_{2}\) of \(\mathcal{A}^{\prime}\) in which the distance between two successive reset transitions is bounded (one only needs to ensure that in each infix starting and ending in a reset, if there is a Büchi state in the infix, then there is still one after pumping out). Thus, the counter of \(\rho_{2}\) is bounded, and so \(\rho_{2}\) is also an accepting run of \(\mathcal{A}\).2

Footnote 2: Observe that the Streett automaton does not, in general, accept the same language as the B-automaton. Indeed the latter's language may not even be \(\omega\)-regular.

### Linear Temporal Logic

For a set \(AP\) of atomic propositions, _formulas of LTL over AP_ are defined by the following BNF (where \(p\in\mathit{AP}\)): \[\varphi::=p\mid\varphi\vee\varphi\mid\neg\varphi\mid\mathsf{X}\,\varphi\mid\varphi\,\mathsf{U}\,\varphi\] We use the usual abbreviations: \(\varphi\rightarrow\varphi^{\prime}=\neg\varphi\lor\varphi^{\prime}\), \(\mathsf{true}=p\vee\neg p\), \(\mathsf{F}\,\varphi=\mathsf{true}\,\mathsf{U}\,\varphi\) (read "eventually \(\varphi\)"), and \(\mathsf{G}\,\varphi=\neg\,\mathsf{F}\,\neg\varphi\) (read "always \(\varphi\)"). The _size_ \(|\varphi|\) of a formula \(\varphi\) is the number of symbols in it. A _trace_ \(\tau\) is an infinite sequence over the alphabet \(\Sigma=2^{AP}\), i.e., an infinite sequence of valuations of the atoms. For \(n\geq 0\), write \(\tau_{n}\) for the valuation at position \(n\). Given a trace \(\tau\), an integer \(n\), and an LTL formula \(\varphi\), the satisfaction relation \((\tau,n)\models\varphi\), stating that \(\varphi\) holds at step \(n\) of the sequence \(\tau\), is defined as follows:

* \((\tau,n)\models p\) iff \(p\in\tau_{n}\);
* \((\tau,n)\models\varphi_{1}\vee\varphi_{2}\) iff \((\tau,n)\models\varphi_{1}\) or \((\tau,n)\models\varphi_{2}\);
* \((\tau,n)\models\neg\varphi\) iff it is not the case that \((\tau,n)\models\varphi\);
* \((\tau,n)\models\mathsf{X}\,\varphi\) iff \(n<|\tau|-1\) and \((\tau,n+1)\models\varphi\);
* \((\tau,n)\models\varphi_{1}\,\mathsf{U}\,\varphi_{2}\) iff \((\tau,m)\models\varphi_{2}\) for some \(n\leq m<|\tau|\), and \((\tau,j)\models\varphi_{1}\) for all \(n\leq j<m\).

Write \(\tau\models\varphi\) if \((\tau,0)\models\varphi\), read \(\tau\) _satisfies_ \(\varphi\). We consider the variant LTLf known as "LTL over finite traces" [1, 1, 1]. It has the same syntax and semantics as LTL except that \(\tau\) is a finite sequence.
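For concreteness, the following is a minimal sketch of an LTLf evaluator that directly follows the satisfaction clauses above, with \(|\tau|\) finite; the tuple encoding of formulas is our own choice, not a notation from the paper.

```python
def holds(trace, n, phi):
    """Evaluate an LTLf formula at position n of a finite trace.
    A trace is a list of sets of atoms; formulas are nested tuples:
    ("atom", p), ("or", f, g), ("not", f), ("X", f), ("U", f, g)."""
    op = phi[0]
    if op == "atom":
        return phi[1] in trace[n]
    if op == "or":
        return holds(trace, n, phi[1]) or holds(trace, n, phi[2])
    if op == "not":
        return not holds(trace, n, phi[1])
    if op == "X":  # pessimistic: X fails at the last position of the trace
        return n < len(trace) - 1 and holds(trace, n + 1, phi[1])
    if op == "U":  # the eventuality must be fulfilled before the trace ends
        return any(holds(trace, m, phi[2]) and
                   all(holds(trace, j, phi[1]) for j in range(n, m))
                   for m in range(n, len(trace)))
    raise ValueError(op)

# F q == true U q, with "true" encoded as q or not q.
true = ("or", ("atom", "q"), ("not", ("atom", "q")))
assert holds([{"p"}, {"p"}, {"q"}], 0, ("U", true, ("atom", "q")))
```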
Observe that the satisfaction of \(\mathsf{X}\) and \(\mathsf{U}\) on finite traces is defined "pessimistically", i.e., a trace cannot end before the promised eventuality holds. The following states that one can convert LTL/LTLf formulas to NBW/NFW with at most an exponential blowup:

**Theorem 2.2**.: _[12, 13] Let \(\varphi\) be an LTL (resp. LTLf) formula. One can build an NBW (resp. NFW), whose size is at most exponential in \(|\varphi|\), accepting exactly the models of \(\varphi\)._

## 3. Parameterized systems

We first introduce _systems with rendezvous and symmetric broadcast_ (or _RB-systems_, for short), a general formalism suitable for describing the parallel composition of \(n\in\mathbb{N}\) copies of a process _template_. We identify two special cases: rendezvous systems (or R-systems, for short) and discrete timed systems.

### RB-systems

An RB-system is a certain LTS which evolves nondeterministically: either a \(k\)-wise rendezvous action is taken, i.e., \(k\) different processes instantaneously synchronize on some rendezvous action \(a\), or the symmetric broadcast action is taken, i.e., all processes take an edge labeled by \(\mathfrak{b}\). Systems without the broadcast action are called R-systems. We will show that RB-systems (strictly) subsume discrete timed networks [1], a formalism allowing one to describe parameterized networks of timed processes with discrete-valued clocks. A discrete timed network also evolves nondeterministically: either a \(k\)-wise rendezvous action is taken by \(k\) processes of the network, or all the clocks of all the processes advance their value by the same (discrete) amount. In the rest of the paper the number of processes participating in a rendezvous will be denoted by \(k\), we let \(\Sigma_{\mathsf{actn}}\) denote a finite set of _rendezvous actions_, and we let \(\Sigma_{\mathsf{rdz}}=\cup_{a\in\Sigma_{\mathsf{actn}}}\{a_{1},\ldots,a_{k}\}\), which we call the _rendezvous alphabet_.

**Definition 3.1** (**Process Template, RB-Template, R-Template**).: _A process template is a finite LTS \(P=\langle\text{AP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\). A process template \(P\) is an RB-template if for every state \(s\in S\), we have that \(\mathfrak{b}\) is enabled in \(s\). We call edges labeled by \(\mathfrak{b}\) broadcast edges, and the rest rendezvous edges. A process template \(P\) is an R-template if \(P\) does not contain any broadcast edges._

We now define the system \(P^{n}\) consisting of \(n\) copies of a given template \(P\):

**Definition 3.2** (**RB-System, R-System**).: _Given an integer \(n\in\mathbb{N}\) and an RB-Template (resp. R-Template) \(P=\langle\text{AP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\), the RB-system \(P^{n}\) (resp. R-system \(P^{n}\)) is defined as the finite LTS \(\langle\text{AP}^{n},\Sigma_{\mathsf{com}}^{n},S^{n},I^{n},R^{n},\lambda^{n}\rangle\) where:_

1. _The set of atomic propositions_ \(\text{AP}^{n}\) _is_ \(\text{AP}\times[n]\)_; intuitively, the atom_ \((p,i)\) _denotes the fact that atom_ \(p\) _is currently true of process_ \(i\)_._
2. _The_ communication alphabet _\(\Sigma_{\mathsf{com}}^{n}\) consists of \(\mathfrak{b}\) and every tuple of the form \(((i_{1},a_{1}),\ldots,(i_{k},a_{k}))\) where \(a\in\Sigma_{\mathsf{actn}}\) and \(i_{1},i_{2},\cdots,i_{k}\) are \(k\) different elements in \([n]\); intuitively, taking this action means that simultaneously, for each \(j\in[k]\), process \(i_{j}\) transitions along an \(a_{j}\)-edge._

3. _\(S^{n}\) is the set of functions (called_ configurations_) of the form \(f:[n]\to S\). We call \(f(i)\) the_ state of process \(i\) in \(f\)_. Note that we sometimes find it convenient to consider a more flexible naming of processes, in which we let \(S^{n}\) be the set of functions \(f:X\to S\), where \(X\subseteq\mathbb{N}\) is some set of size \(n\)._

4. _The set of_ initial configurations _\(I^{n}=\{f\in S^{n}\mid f(i)\in I\text{ for all }i\in[n]\}\) consists of all configurations which map all processes to initial states of \(P\)._

5. _The set of_ global transitions _\(R^{n}\subseteq S^{n}\times\Sigma^{n}_{\mathsf{com}}\times S^{n}\) contains transitions \(f\xrightarrow{\sigma}g\) where one of the following two conditions holds:_
   * _(broadcast)_ \(\sigma=\mathfrak{b}\)_, and_ \(f(i)\xrightarrow{\mathfrak{b}}g(i)\) _in_ \(R\)_, for every_ \(i\in[n]\)_;_
   * _(rendezvous)_ \(\sigma=((i_{1},a_{1}),\dots,(i_{k},a_{k}))\)_, and_ \(f(i_{j})\xrightarrow{a_{j}}g(i_{j})\) _in_ \(R\) _for every_ \(1\leq j\leq k\)_; and_ \(f(i)=g(i)\) _for every_ \(i\not\in\{i_{1},\dots,i_{k}\}\)_. In this case we say that_ \(a\in\Sigma_{\mathsf{actn}}\) _is the_ action taken_._

6. _The labeling relation_ \(\lambda^{n}\subseteq S^{n}\times AP^{n}\) _consists of the pairs_ \((f,(p,i))\) _such that_ \((f(i),p)\in\lambda\)_._

For every transition \(t=(f,\sigma,g)\in R^{n}\), we define the set of _active processes_, denoted by \(\mathsf{active}(t)\), as follows:

* if \(\sigma=\mathfrak{b}\) define \(\mathsf{active}(t)=[n]\),
* if \(\sigma=((i_{1},a_{1}),\dots,(i_{k},a_{k}))\) define \(\mathsf{active}(t)=\{i_{1},\dots,i_{k}\}\).

Given a process template \(P\), define the _RB-system_ \(P^{\infty}\) as the following LTS: \[\langle AP^{\infty},\Sigma^{\infty}_{\mathsf{com}},S^{\infty},I^{\infty},R^{\infty},\lambda^{\infty}\rangle\] where \(AP^{\infty}=\cup_{n\in\mathbb{N}}AP^{n}\), \(S^{\infty}=\cup_{n\in\mathbb{N}}S^{n}\), \(I^{\infty}=\cup_{n\in\mathbb{N}}I^{n}\), \(R^{\infty}=\cup_{n\in\mathbb{N}}R^{n}\), \(\Sigma^{\infty}_{\mathsf{com}}=\cup_{n\in\mathbb{N}}\Sigma^{n}_{\mathsf{com}}\) and \(\lambda^{\infty}=\cup_{n\in\mathbb{N}}\lambda^{n}\).

### Discussion of our modeling choices

Our definition of RB-systems allows one to model finitely many different process templates, because a single process template \(P\) can have multiple initial states (representing the disjoint union of the different process templates). We can easily transform a rendezvous action \(a\) involving \(j<k\) processes (in particular where \(j=1\), representing an internal transition taken by a single process) into a \(k\)-wise rendezvous action by simply adding, for every \(j<i\leq k\) and every state in \(P\), a self-loop labeled \(a_{i}\). This transformation works when there are at least \(k\) processes in the system. This is not a real restriction, since all the systems with fewer than \(k\) processes yield a single finite-state system which can be easily model-checked. In any case, all our results hold also if one specifically allows rendezvous actions involving \(j<k\) processes.
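The global transitions of Definition 3.2 are easy to phrase operationally. Below is a minimal Python sketch, with our own encoding: configurations are dicts from process ids to template states, template edges are triples, and the broadcast label \(\mathfrak{b}\) is written as the string "b".

```python
from itertools import product

def broadcast_successors(f, edges):
    """All g with f --b--> g: every process i takes some b-labeled edge
    f(i) -> g(i).  By the RB-template assumption, every state has at least
    one outgoing broadcast edge, so the iterator is never trivially empty."""
    options = {i: [t for (s, a, t) in edges if s == q and a == "b"]
               for i, q in f.items()}
    ids = sorted(f)
    for choice in product(*(options[i] for i in ids)):
        yield dict(zip(ids, choice))

def rendezvous_successor(f, edges, moves):
    """Apply one k-wise rendezvous; `moves` is a tuple ((i1,a1),...,(ik,ak)).
    Non-active processes stay put; returns None if some required edge is
    missing.  For simplicity we pick the first matching edge per process,
    one of the nondeterministic choices."""
    g = dict(f)
    for (i, a) in moves:
        targets = [t for (s, lbl, t) in edges if s == f[i] and lbl == a]
        if not targets:
            return None
        g[i] = targets[0]
    return g
```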
The assumption that every state in an RB-template is the source of a broadcast edge means that for every configuration \(f\) there is a broadcast global transition with source \(f\).

### Executions and Specifications

Let \(t\) be a global transition \(f\xrightarrow{\sigma}g\), and let \(i\) be a process. We say that \(i\) _moved_ in \(t\) if \(i\in\mathsf{active}(t)\). We write \(edge_{i}(t)\) for the edge of \(P\) taken by process \(i\) in the transition \(t\), i.e., if \(\sigma=\mathfrak{b}\) then \(edge_{i}(t)\) denotes \(f(i)\xrightarrow{\mathfrak{b}}g(i)\); if \(\sigma=((i_{1},a_{1}),\dots,(i_{k},a_{k}))\) then \(edge_{i}(t)\) denotes \(f(i)\xrightarrow{a_{j}}g(i)\) if \(\sigma(j)=(i,a_{j})\) for some \(j\in[k]\); otherwise \(edge_{i}(t):=\bot\). In case \(edge_{i}(t)\neq\bot\) we say that \(edge_{i}(t)\) is _taken_ or _involved_ or _occurs_ or is _used_ in \(t\). Given a run \(\pi\) of \(P^{n}\) and an edge \(e\) of \(P\), we say that \(e\) _appears_ on \(\pi\) if it is taken by some active process on some transition of \(\pi\).

Take an RB-system \(P^{n}=\langle AP^{n},\Sigma^{n}_{\mathsf{com}},S^{n},I^{n},R^{n},\lambda^{n}\rangle\), a path \(\pi=t_{1}t_{2}\dots\) in \(P^{n}\), and a process \(i\) in \(P^{n}\). Define \(\mathit{proj}_{\pi}(i):=edge_{i}(t_{j_{1}})edge_{i}(t_{j_{2}})\dots\), where \(j_{1}<j_{2}<\dots\) are all the indices \(j\) for which \(edge_{i}(t_{j})\neq\bot\). Thus, \(\mathit{proj}_{\pi}(i)\) is the path in \(P\) taken by process \(i\) during the path \(\pi\). Define the set of _executions_ of \(P^{\infty}\), denoted by \(\textsc{exec}(P^{\infty})\), to be the set of the runs of \(P^{\infty}\) projected onto the state labels of a single process. Note that, due to symmetry, we can assume w.l.o.g. that the runs are projected onto process \(1\). Formally, \[\textsc{exec}(P^{\infty})=\{\lambda(\mathit{proj}_{\pi}(1))\mid\pi\in\mathit{runs}(P^{\infty})\},\] where \(\lambda\) is the labeling of the process template \(P\). We denote by \(\textsc{exec-fin}(P^{\infty})\) (resp. \(\textsc{exec-inf}(P^{\infty})\)) the finite (resp. infinite) executions in \(\textsc{exec}(P^{\infty})\).

We present two examples. Note that the letter in a state is both the name of that state as well as the (unique) atom that holds in that state.

**Example 1**.: _Consider the R-template \(P\) in Figure 1. Note that \(\textsc{exec-fin}(P^{\infty})\) consists of all prefixes of words that match the regular expression \(p^{*}q\), and \(\textsc{exec-inf}(P^{\infty})=\emptyset\)._

**Example 2**.: _Consider the RB-template \(P\) in Figure 2. Note that \(\textsc{exec-fin}(P^{\infty})\) consists of all prefixes of words that match \((r^{*}(p^{*}+q)r)^{*}\). Regarding \(\textsc{exec-inf}(P^{\infty})\), it is not hard to see that in every infinite run of \(P^{n}\), every process is involved in at most \(n-1\) consecutive rendezvous transitions before a broadcast transition is taken, which resets all processes to the initial state. Thus, \(\textsc{exec-inf}(P^{\infty})\) is the set of words of the form \((r^{*}(p^{*}+q)r)^{*}r^{\omega}\) and \(r^{n_{1}}(p^{x_{1}}+q)r^{n_{2}}(p^{x_{2}}+q)\dots\), for some \(n_{1},n_{2},\dots\in\mathbb{N}\) and \(x_{1},x_{2},\dots\in\mathbb{N}\) such that \(\limsup x_{i}<\infty\)._

The definitions above imply the following easy lemma.

**Lemma 3.3**.: _Let \(P,P^{\prime}\) be two RB-templates with the same atomic propositions and edge-labels alphabet._
1. _If \(\pi\in P^{\infty}\) and \(\pi^{\prime}\in P^{\prime\infty}\) are equi-labeled then \(\lambda(\mathit{proj}_{\pi}(j))=\lambda^{\prime}(\mathit{proj}_{\pi^{\prime}}(j))\) for every process \(j\)._
2. _If \(P\) and \(P^{\prime}\) each simulates the other then \(\textsc{exec}(P^{\infty})=\textsc{exec}(P^{\prime\infty})\)._

Proof.: Let \(\pi=e_{1}e_{2}\cdots\) and \(\pi^{\prime}=e_{1}^{\prime}e_{2}^{\prime}\cdots\). For the first item note that for every \(i<|\pi|\) the label of the edge \(e_{i}\) is equal to the label of the edge \(e_{i}^{\prime}\). It follows that the same processes are active in \(e_{i}\) and \(e_{i}^{\prime}\). Thus, the run \(\mathit{proj}_{\pi}(i)\) of the template \(P\) is equi-labeled with the run \(\mathit{proj}_{\pi^{\prime}}(i)\) of the template \(P^{\prime}\), and in particular they induce the same sequence of sets of atomic propositions. For the second item, suppose \(P^{\prime}\) simulates \(P\) via \(M\subseteq S\times S^{\prime}\). Derive the relation \(C\subseteq S^{\infty}\times(S^{\prime})^{\infty}\) from \(M\) point-wise, i.e., \((f,f^{\prime})\in C\) iff there is \(n\in\mathbb{N}\) such that i) \(f\in S^{n},f^{\prime}\in(S^{\prime})^{n}\) and ii) for every \(i\leq n\), \((f(i),f^{\prime}(i))\in M\). It is routine to check that \(C\) is a simulation, and thus \(P^{\prime\infty}\) simulates \(P^{\infty}\). By a symmetric argument, \(P^{\infty}\) simulates \(P^{\prime\infty}\). Thus, for every run \(\pi\) in \(P^{\infty}\) there is an equi-labeled run \(\pi^{\prime}\) in \(P^{\prime\infty}\), and vice versa. Now apply the first item.

Figure 1. R-template

Figure 2. RB-template

### Parameterized Model-Checking Problem

Specifications represent sets of finite or infinite sequences over the alphabet \(2^{AP}\). In this work we will consider specifications of finite executions to be given by nondeterministic finite word automata (NFW) and specifications of infinite executions to be given by nondeterministic Büchi word automata (NBW). Standard translations allow us to present specifications in linear temporal logics such as LTL and LTLf, see Theorem 2.2. We now define the main decision problem of this work.

**Definition 3.4** (**PMCP**).: _Let \(\mathcal{F}\) be a specification formalism for sets of infinite (resp. finite) words over the alphabet \(2^{AP}\). The Parameterized Model Checking Problem for \(\mathcal{F}\), denoted \(PMCP(\mathcal{F})\), is to decide, given a process-template \(P\), and a set \(W\) of infinite (resp. finite) words specified in \(\mathcal{F}\), if all executions in the set \(\textsc{exec-inf}(P^{\infty})\) (resp. \(\textsc{exec-fin}(P^{\infty})\)) are in \(W\)._

Just as for model-checking [20], we have three ways to measure the complexity of the PMCP problem. If we measure the complexity in the size of the given template and specification, we have the (usual) complexity, sometimes called _combined complexity_. If we fix the template and measure the complexity with respect to the size of the specification we get the _specification complexity_. If we fix the specification and measure the complexity with respect to the size of the template we get the _program complexity_ (we use "program complexity" instead of "template complexity" in order to be consistent with the model-checking terminology).
Moreover, if \(C\) is a complexity class, we say that the specification complexity of the PMCP-problem is _\(C\)-hard_ if there is a fixed template such that the induced PMCP problem (that only takes a specification as input) is \(C\)-hard in the usual sense; and it is _\(C\)-complete_ if it is in \(C\) and \(C\)-hard. Symmetric definitions hold for program complexity.

**Results.** Our main results solve PMCP for RB-templates for specifications of finite and infinite executions. In both cases we use the automata-theoretic approach: given an RB-template \(P\), we show how to build an automaton \(\mathcal{M}\) accepting exactly the executions of the RB-system \(P^{\infty}\). Model checking of a specification given by another automaton \(\mathcal{M}^{\prime}\) is thus reduced to checking if the language of \(\mathcal{M}\) is contained in the language of \(\mathcal{M}^{\prime}\). The automaton \(\mathcal{M}\) will be based on what we call the reachability-unwinding of the given RB-template. In the finite-execution case \(\mathcal{M}\) will be an NFW that is almost identical to the reachability-unwinding. In the infinite-execution case \(\mathcal{M}\) will be more complicated. Indeed, classic automata over infinite words (e.g., NBW) will not be powerful enough to capture the system (see Lemma 3.6 below), and so we use B-automata; the automaton \(\mathcal{M}\) will be based on three copies of the reachability-unwinding (instead of one copy), where from each copy certain edges will be removed based on a classification of edges into different types. Our first result classifies the complexity for NFW/LTLf specifications:

**Theorem 3.5**.: _Let \(\mathcal{F}\) be specifications of sets of finite executions expressed as NFW or LTLf formulas. Then the complexity of \(PMCP(\mathcal{F})\) for RB-systems is pspace-complete, as is the program complexity and the specification complexity._

To see that classical acceptance conditions (e.g., Büchi, parity) are not strong enough for the case of infinite executions, consider the following lemma.

**Lemma 3.6**.: _The process template \(P\) in Figure 2 has the property that the set \(\textsc{exec-inf}(P^{\infty})\) is not \(\omega\)-regular._

Proof.: The following pumping argument shows that this language is not \(\omega\)-regular. Assume by way of contradiction that an NBW \(\mathcal{A}\) accepts \(\textsc{exec-inf}(P^{\infty})\), and consider an accepting run of \(\mathcal{A}\) on the word \((rp^{n+1})^{\omega}\), where \(n\) is the number of states of \(\mathcal{A}\). It follows that for each \(i\in\mathbb{N}\), while reading the \(i\)'th block of \(p\)'s, \(\mathcal{A}\) traverses some cycle \(c_{i}\). Hence, by suitably pumping the cycle \(c_{i}\), e.g., \(i\) times for every \(i\in\mathbb{N}\), we can obtain an accepting run of \(\mathcal{A}\) on a word \(w^{\prime}\) which is not in \(\textsc{exec-inf}(P^{\infty})\), since it contains blocks of consecutive \(p\)'s of ever increasing length, contradicting our assumption.

On the other hand, there is a B-automaton (with a trivial Büchi set) recognizing this language (the counter is incremented whenever \(p\) is seen and reset whenever \(r\) is seen). This is no accident: we will prove (Theorem 6.5) that for every RB-template \(P\) one can build a B-automaton (with a trivial Büchi set) recognizing the infinite executions of \(P^{\infty}\). Combining this with an NBW for the specification, we reduce the model-checking problem to the emptiness problem of a B-automaton.
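To see the counter mechanism of this B-automaton at work, here is a tiny sketch (our own encoding) that computes the counter values along a finite prefix of an input word: bounded \(p\)-blocks keep the values bounded, while ever-growing blocks drive them arbitrarily high. Infinite words of course cannot be checked directly in code; the sketch only illustrates the update rule.

```python
def counter_values(prefix):
    """Counter of the B-automaton for Example 2: inc on 'p', reset on 'r'.
    Returns the sequence of counter values along the given finite prefix."""
    c, values = 0, [0]
    for letter in prefix:
        if letter == "p":
            c += 1
        elif letter == "r":
            c = 0
        values.append(c)  # 'q' leaves the counter unchanged (skip)
    return values

# A prefix of (r p^3)^omega keeps the counter bounded by 3 ...
assert max(counter_values(list("rppprppp"))) == 3
# ... while the bound on a prefix grows with its longest p-block.
assert max(counter_values(list("rprpprppp"))) == 3
```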
Hence, our second main result provides an exptime upper bound for NBW/LTL specifications:

**Theorem 3.7**.: _Let \(\mathcal{F}\) be specifications of sets of infinite executions expressed as NBW or LTL formulas. Then \(PMCP(\mathcal{F})\) of RB-systems can be solved in exptime._

### Variants with a controller and asymmetric broadcast

We now give two variants of RB-Systems, i.e., one that incorporates a distinguished "controller" process, and another that allows for asymmetric broadcasts [1]. Given two process templates \(P_{C}\) and \(P\), the RB-System with a controller (RBC-System) \(P_{C}\cup P^{n}\) is the finite LTS \(\left\langle AP^{n+1},\Sigma_{\mathsf{com}}^{n+1},S^{n+1},I^{n+1},R^{n+1},\lambda^{n+1}\right\rangle\), which is defined exactly as in Definition 3.2, with the only difference that for process \(1\) we use the process template \(P_{C}\) and for processes \(2\) to \(n+1\) we use the process template \(P\). The RBC-System \(P_{C}\cup P^{\infty}\) is then defined analogously. We now need to adjust the definitions of executions to differentiate between the projection to the controller resp. to a non-controller process; we set \[\textsc{exec}(P_{C}\cup P^{\infty})_{C}=\{\lambda(\mathit{proj}_{\pi}(1))\mid\pi\in\mathit{runs}(P_{C}\cup P^{\infty})\},\] and \[\textsc{exec}(P_{C}\cup P^{\infty})=\{\lambda(\mathit{proj}_{\pi}(2))\mid\pi\in\mathit{runs}(P_{C}\cup P^{\infty})\},\] where, because of symmetry, we can always project to process \(2\) for a non-controller process.

In order to be able to consider asymmetric broadcasts we need to consider an alphabet of broadcast actions \(\Sigma_{\mathsf{bcts}}\) that is disjoint from \(\Sigma_{\mathsf{actn}}\). The communication alphabet is then given by \(\Sigma_{\mathsf{rdz}}=\cup_{a\in\Sigma_{\mathsf{actn}}}\{a_{1},\ldots,a_{k}\}\cup\cup_{b\in\Sigma_{\mathsf{bcts}}}\{b_{\mathsf{snd}},b_{\mathsf{rcv}}\}\). We now extend the definition of an RB-template to asymmetric broadcasts. A process template \(P\) over the (extended) communication alphabet \(\Sigma_{\mathsf{rdz}}\) is an _RBA-template_ if for every state \(s\in S\), we have that \(b_{\mathsf{rcv}}\) is enabled in \(s\) for every \(b\in\Sigma_{\mathsf{bcts}}\). Given an RBA-template \(P\), the RBA-System \(P^{n}\) is the finite LTS \(\left\langle AP^{n},\Sigma_{\mathsf{com}}^{n},S^{n},I^{n},R^{n},\lambda^{n}\right\rangle\), which is defined as in Definition 3.2 except for the definition of the global transition relation, where we support asymmetric broadcasts instead of symmetric broadcasts as follows:

* _(asymmetric broadcast)_ \(\sigma=\langle c_{1},\ldots,c_{n}\rangle\) is an \(n\)-tuple such that there is some \(b\in\Sigma_{\mathsf{bcts}}\) and some \(i\) such that \(c_{i}=b_{\mathsf{snd}}\) and \(c_{j}=b_{\mathsf{rcv}}\) for all \(j\neq i\), and \(f(j)\xrightarrow{c_{j}}g(j)\) in \(P\), for every \(j\in[n]\).

The RBA-System \(P^{\infty}\) is then defined analogously. The set of executions is defined as for RB-Systems. We remark that we define RBA-Systems without a controller for technical convenience. It would be straightforward to define an RBA-System with a controller in the same way as we did above. However, it is easy to verify that RBA-Systems with a controller are not more powerful than RBA-Systems that lack a controller.
That is because having a controller can be simulated through an initial asymmetric broadcast that makes the sender process take over the role of the controller, with the receiver processes continuing as non-controller processes. We now make this statement precise. Recall the notation \((\pi)_{a}\) and \(\pi|_{AP}\) from Section 2.1.

**Theorem 3.8**.: _RBC-Systems and RBA-Systems are equally powerful; more precisely,_

* _for each RBC-System, given by process templates_ \(P_{C}\) _and_ \(P\) _over atomic propositions_ \(\mathit{AP}\)_, we can construct in linear time an RBA-System_ \(P^{\prime}\) _over atomic propositions_ \(\mathit{AP}\cup\{c,p\}\)_, with_ \(c,p\not\in\mathit{AP}\)_, such that_ \(\textsc{exec}(P_{C}\cup P^{\infty})_{C}=\{(\pi)_{c}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime\infty})\}\) _and_ \(\textsc{exec}(P_{C}\cup P^{\infty})=\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime\infty})\}\)_, and_
* _for each RBA-System, given by process template_ \(P\) _over atomic propositions_ \(\mathit{AP}\)_, we can construct in linear time an RBC-System, given by process templates_ \(P^{\prime}_{C}\) _and_ \(P^{\prime}\) _over atomic propositions_ \(\mathit{AP}\cup\{p\}\)_, such that_ \(\textsc{exec}(P^{\infty})=\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\}\) _(the executions of the controller are not important for this statement)._

Proof.: The "simulation" of RBC-Systems by RBA-Systems is easy to establish: the symmetric broadcast can easily be implemented by an asymmetric broadcast. Hence, the only difference consists in the missing controller. However, an initial asymmetric broadcast can be used to elect a controller. We now give the details. We consider some RBC-System, given by process templates \(P_{C}\) and \(P\) over atomic propositions \(\mathit{AP}\). We now define the RBA-template \(P^{\prime}\) as the disjoint union of \(P_{C}\) and \(P\) to which we add a fresh initial state. \(\Sigma_{\mathsf{bcts}}\) consists of two symbols: \(\mathfrak{b}\), for modeling the symmetric broadcast of the RBC-System, and \(b\), for the initial controller election. We add a transition \(b_{\mathsf{snd}}\) from the new initial state to the initial state of \(P_{C}\), and a transition \(b_{\mathsf{rcv}}\) from the new initial state to the initial state of \(P\); \(P^{\prime}\) further has \(b_{\mathsf{rcv}}\)-self-loops for all other states (these transitions can never be used but are added to satisfy the definition of RBA-templates). We now define the labeling of \(P^{\prime}\): Let \(c,p\not\in\mathit{AP}\) be two fresh atomic propositions. The states of \(P^{\prime}\) that originate from \(P_{C}\) and \(P\) keep their labels, except that we add \(c\) to the set of labels for all states of \(P_{C}\) and we add \(p\) to the set of labels for all states of \(P\). The new initial state is labeled by the empty set. It is now easy to verify that \(\textsc{exec}(P_{C}\cup P^{\infty})_{C}=\{(\pi)_{c}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime\infty})\}\) and \(\textsc{exec}(P_{C}\cup P^{\infty})=\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime\infty})\}\).

We now establish that RBC-Systems can "simulate" RBA-Systems. The idea is to use rendezvous transitions in combination with the symmetric broadcast to simulate asymmetric broadcasts. For this, the controller acts as a bookkeeper ensuring that the simulation is done correctly. We now give the details of the construction. We consider some RBA-System given by process template \(P\).
Let \(\Sigma_{\mathsf{rdz}}=\cup_{a\in\Sigma_{\mathsf{actn}}}\{a_{1},\ldots,a_{k}\}\cup\cup_{b\in\Sigma_{\mathsf{bcts}}}\{b_{\mathsf{snd}},b_{\mathsf{rcv}}\}\) be the communication alphabet of \(P\). We will now construct an RBC-System, given by process templates \(P^{\prime}_{C}\) and \(P^{\prime}\), over the communication alphabet \(\Sigma^{\prime}_{\mathsf{rdz}}=\cup_{a\in\Sigma_{\mathsf{actn}}}\{a_{1},\ldots,a_{k}\}\cup\cup_{b\in\Sigma_{\mathsf{bcts}}}\{b_{\mathsf{snd},1},b_{\mathsf{snd},2},b_{\mathsf{rcv},1},b_{\mathsf{rcv},2}\}\). That is, the RBC-System keeps all rendezvous transitions of the RBA-System, to which we add the binary rendezvous transitions \(b_{\mathsf{snd},1},b_{\mathsf{snd},2}\) (resp. \(b_{\mathsf{rcv},1},b_{\mathsf{rcv},2}\)) that we will use for simulating the sending \(b_{\mathsf{snd}}\) (resp. receiving \(b_{\mathsf{rcv}}\)) of an asymmetric broadcast. We first define the controller process template \(P^{\prime}_{C}\): The controller begins in the waiting state \(w\), and contains states \(b\) for every asymmetric broadcast transition \(b\in\Sigma_{\mathsf{bcts}}\), plus a special dead state. For every asymmetric broadcast transition \(b\in\Sigma_{\mathsf{bcts}}\), there is a rendezvous transition \((w,b_{\mathsf{snd},2},b)\), a rendezvous transition \((b,b_{\mathsf{rcv},2},b)\) (note the self-loop), and a symmetric broadcast transition \((b,\mathfrak{b},w)\); in addition, there is a symmetric broadcast transition from \(w\) to a 'dead' state that only has a symmetric broadcast transition to itself. We now obtain the non-controller process template \(P^{\prime}\) from \(P\) as follows: \(P^{\prime}\) has the same states as \(P\), plus the four copies \(s^{1}_{\mathsf{snd}}\), \(s^{2}_{\mathsf{snd}}\), \(s^{1}_{\mathsf{rcv}}\) and \(s^{2}_{\mathsf{rcv}}\) for every state \(s\) of \(P\). Also, \(P^{\prime}\) keeps all the rendezvous transitions of \(P\). Additionally, for all asymmetric broadcast transitions \((s,b_{\mathsf{snd}},t)\) and \((s,b_{\mathsf{rcv}},t)\) of \(P\) we add rendezvous transitions \((s,b_{\mathsf{snd},1},t^{\prime})\) and \((s,b_{\mathsf{rcv},1},t^{\prime})\) to \(P^{\prime}\). Further, for every state \(s\), we add the symmetric broadcast transitions \((s^{\prime},\mathfrak{b},s)\), as well as a symmetric broadcast from \(s\) to a 'dead' state that only has a symmetric broadcast transition to itself. We now define the labeling of \(P^{\prime}\) (the labeling of \(P^{\prime}_{C}\) is not important): We choose some fresh atomic proposition \(p\not\in AP\). All states of \(P^{\prime}\) that are also states of \(P\) are labeled by the same set of propositions plus the proposition \(p\), and all new states are labeled by the empty set \(\emptyset\).

We now argue that \(\textsc{exec}(P^{\infty})=\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\}\). We first argue the inclusion \(\textsc{exec}(P^{\infty})\subseteq\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\}\): We fix some \(\pi^{\prime}\in\textsc{exec}(P^{\infty})\) and show that there is a \(\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\) such that \((\pi)_{p}|_{AP}=\pi^{\prime}\). By definition there is some run of \(P^{\infty}\) whose projection is \(\pi^{\prime}\). We now construct some run of \(P^{\prime}_{C}\cup P^{\prime\infty}\) whose projection is \(\pi\).
This proof is by a straightforward induction on the length of the run: Every rendezvous transition of \(P^{\infty}\) can directly be repeated by the corresponding rendezvous transition of \(P^{\prime}_{C}\cup P^{\prime\infty}\). Every asymmetric broadcast transition \(b\in\Sigma_{\mathsf{bcts}}\) of \(P^{\infty}\) can be simulated by a sequence of transitions in \(P^{\prime}_{C}\cup P^{\prime\infty}\): First, the process that is sending the broadcast \(b\) rendezvouses with the controller, i.e., the sender process moves from the current state \(s\) to some state \(t^{\prime}\) using rendezvous transition \(b_{\mathsf{snd},1}\) and the controller moves from the waiting state \(w\) to state \(b\) using rendezvous transition \(b_{\mathsf{snd},2}\). Then, the controller process rendezvouses with all receiver processes using the rendezvous transition \(b_{\mathsf{rcv},2}\) (which is a self-loop for the controller), and the receiver processes each move to some state \(t^{\prime}\). Then, a symmetric broadcast happens and every non-controller process arrives at a state that concludes the simulation of the asymmetric broadcast, and the controller process is again in its starting state \(w\). We note that the two runs agree for every projection to a single process when the copied states are removed. In particular, the executions of both systems agree when the copied states are removed, i.e., we have \((\pi)_{p}|_{AP}=\pi^{\prime}\).

We now argue \(\textsc{exec}(P^{\infty})\supseteq\{(\pi)_{p}|_{AP}\mid\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\}\): We fix some \(\pi\in\textsc{exec}(P^{\prime}_{C}\cup P^{\prime\infty})\) and show that there is some \(\pi^{\prime}\in\textsc{exec}(P^{\infty})\) such that \((\pi)_{p}|_{AP}=\pi^{\prime}\). By definition there is some run of \(P^{\prime}_{C}\cup P^{\prime\infty}\) whose projection is \(\pi\). W.l.o.g. we can assume that process \(2\) in this run did not move to a dead state (this is because dead states are sinks that are not labeled by \(p\), and the projection \((\cdot)_{p}\) removes the part of the trace not labeled by \(p\); hence we can simply consider the prefix of the run in which process \(2\) did not move to the dead state yet). We now construct some run of \(P^{\infty}\) whose projection is \(\pi^{\prime}\). We iteratively construct a run of \(P^{\infty}\) based on the number of symmetric broadcasts: In every step, we consider all rendezvous transitions between broadcasts. We first append all rendezvous transitions of the run of \(P^{\prime}_{C}\cup P^{\prime\infty}\) that do not involve the controller to the run of \(P^{\infty}\); we add those transitions in the same order. We then add a single asymmetric broadcast to the run of \(P^{\infty}\), where the target states of the receiving processes are determined by the rendezvous transitions that involve the controller in the run of \(P^{\prime}_{C}\cup P^{\prime\infty}\). After this asymmetric broadcast the processes in the run of \(P^{\infty}\) will be in the same states as the processes in the run of \(P^{\prime}_{C}\cup P^{\prime\infty}\) after the next symmetric broadcast. However, some processes in the run of \(P^{\prime}_{C}\cup P^{\prime\infty}\) may move to a dead state after the symmetric broadcast, and we also have to define their behavior in the run of \(P^{\infty}\). We now observe that processes that moved to a dead state in \(P^{\prime}_{C}\cup P^{\prime\infty}\) will never again participate in any rendezvous transitions.
Hence, we are free to define the target states of these processes during the asymmetric broadcasts (note that by assumption at least one target state exists for every asymmetric broadcast). It is now easy to verify that the two runs agree for every projection to a single process, when the copied states are removed and this process does not move to a dead state. In particular, we get \((\pi)_{p}|_{AP}=\pi^{\prime}\).

The PMCP for RBA-Systems is quite well understood [1], i.e., it is undecidable for liveness specifications and decidable, in fact non-elementary [13], for safety specifications. We thus get:

**Theorem 3.9**.:

1. _Let \(\mathcal{F}\) be specifications of sets of infinite executions expressed as NBW or LTL formulas. Then \(PMCP(\mathcal{F})\) of RBC-systems is undecidable._
2. _Let \(\mathcal{F}\) be specifications of sets of finite executions expressed as NFW or LTLf formulas. Then \(PMCP(\mathcal{F})\) of RBC-systems is decidable and non-elementary (upper and lower bounds)._

Proof.: By Theorem 3.8, one can transfer verification tasks between RBA- and RBC-Systems: This can be achieved by a formula/automaton transformation, and the verification of the transformed formula/automaton on the transformed system. We exemplify how to transfer LTL/LTLf specifications: Given an LTL/LTLf specification \(\phi\), we implement the projection operation \((\cdot)_{p}\) by replacing every occurrence of an atomic proposition \(X\) in \(\phi\) by \(\neg p\,\mathsf{U}(p\wedge X)\), resulting in a formula \(\phi^{\prime}\); in order to only consider executions that do not reach the dead state we can then consider the specification \((\mathsf{G}\,\neg p\,\mathsf{U}\,p)\to\phi^{\prime}\). Similar ideas can be implemented through automata transformations. In particular, the undecidability of liveness properties for RBA-systems [1] transfers to RBC-systems, and the non-elementary decidability of safety properties for RBA-systems [13] transfers to RBC-systems.

### Large systems simulating small systems

We provide a simple but useful property of RB-systems that will be used throughout the rest of this paper. Intuitively, a large RB-system can, using a single run, partition its processes into several groups, each one simulating a run of a smaller RB-system, _as long as all the simulated runs have the same number of broadcasts_. In order to state and prove this result, we need the following.

**Notation**.: Let \(X\subseteq[n]\) be a set of processes. For a configuration \(f:[n]\to S\) of \(P^{n}\) define \(f\big|_{X}\) to be the restriction of \(f\) to the domain \(X\). Similarly, for a global transition \(t\) of \(P^{n}\), say \(f\xrightarrow{\sigma}g\), if \(t\) is a broadcast transition (i.e., \(\sigma=\mathfrak{b}\)), or a rendezvous transition whose active processes are all in \(X\) (i.e., \(\sigma\neq\mathfrak{b}\) and \(\mathsf{active}(t)\subseteq X\)), then we define \(t\big|_{X}\) to be \(f\big|_{X}\xrightarrow{\sigma}g\big|_{X}\); otherwise (i.e., if \(\sigma\neq\mathfrak{b}\) and \(\mathsf{active}(t)\not\subseteq X\)), then \(t\big|_{X}\) is undefined.
Finally, given a path \(\pi\) in \(P^{n}\), if for every \(1\leq i\leq|\pi|\) we have that \(\mathsf{active}(\pi_{i})\subseteq X\) or \(\mathsf{active}(\pi_{i})\subseteq[n]\setminus X\), then the restriction \(\pi\big|_{X}:=\pi_{i_{1}}\big|_{X}\pi_{i_{2}}\big|_{X}\dots\) is defined by taking \(i_{1}<i_{2}<\dots\) to be exactly the indices \(1\leq j\leq|\pi|\) for which \(\pi_{j}\big|_{X}\) is defined; otherwise (i.e., if there is a transition on \(\pi\) in which some of the active processes are in \(X\) and some are not in \(X\)) \(\pi\big|_{X}\) is undefined. We will implicitly rename processes as follows. Let \(rename:X\to[|X|]\) be a bijection. Consider configurations \(f\), transitions \(t\), and paths \(\pi\) of \(P^{n}\). By renaming the processes using \(rename\) we can think of \(f\big|_{X}\) as a configuration of \(P^{|X|}\), and \(t\big|_{X}\) (if defined) as the transition of \(P^{|X|}\) obtained by restricting the configurations \(f\) and \(g\) in \(t\) to \(X\), and \(\pi\big|_{X}\) (if defined) as a path of \(P^{|X|}\).

For a process template \(P\), paths \(\pi_{1},\pi_{2},\cdots,\pi_{h}\) in \(P^{\infty}\) (possibly using different numbers of processes), and pairwise disjoint subsets \(X_{1},X_{2},\cdots,X_{h}\) of \(\mathbb{N}\), we say that a path \(\pi\) in \(P^{\infty}\) _simulates_ \(\pi_{1},\cdots,\pi_{h}\) (with \(X_{1},X_{2},\cdots,X_{h}\)) if \(\pi\big|_{X_{i}}=\pi_{i}\) for every \(i\). Observe that, if \(\pi_{1},\cdots,\pi_{h}\) do not have the same number of broadcasts then there is no \(\pi\) that can simulate them. The next lemma shows that this condition is not only necessary but also sufficient.

**Lemma 3.10** (Composition).: _Given an integer \(b\) and paths (resp. runs) \(\pi_{1},\ldots,\pi_{h}\) in RB-systems \(P^{n_{1}},\ldots,P^{n_{h}}\), each with exactly \(b\) broadcast transitions: for every \(n\geq\sum_{i=1}^{h}n_{i}=m\), every configuration \(f\) in \(P^{n}\) and pairwise disjoint subsets \(X_{1},X_{2},\cdots,X_{h}\) of \(\mathbb{N}\) such that \(f\big|_{X_{i}}=\text{src}(\pi_{i})\) for every \(i\), there exists a path (resp. run) \(\pi\) in \(P^{n}\) starting in \(f\) that simulates \(\pi_{1},\cdots,\pi_{h}\) with \(X_{1},X_{2},\cdots,X_{h}\)._

Proof.: We begin by proving the lemma in the special case of R-systems. For every \(j\in[h]\), we will have the \(n_{j}\) processes in the set \(X_{j}\) simulate \(\pi_{j}=e_{j,1}e_{j,2}\ldots\). The extra processes (between \(m+1\) and \(n\)) do not move. Note that all transitions on \(\pi_{1},\ldots,\pi_{h}\) are rendezvous involving \(k\) processes. Whenever a rendezvous appearing on \(\pi_{j}\) is performed in \(P^{n}\), only \(k\) processes in \(X_{j}\) move, leaving the others unaffected. Thus, \(\pi\) can be obtained by any interleaving of the rendezvous appearing on \(\pi_{1},\ldots,\pi_{h}\), as long as the relative internal ordering of rendezvous on each of these paths is maintained (e.g., round-robin \(e_{1,1}e_{2,1}\ldots e_{h,1}\,e_{1,2}e_{2,2}\ldots e_{h,2}\ldots\)).

Now, we consider the case of general RB-systems. As before, for every \(j\in[h]\), we will have the \(n_{j}\) processes in the set \(X_{j}\) simulate \(\pi_{j}\). If \(n>m\), the extra processes are ignored (however, they do move whenever there is a broadcast). Each path \(\pi_{1},\ldots,\pi_{h}\) is cut into \(b+1\) segments (numbered \(0,\ldots,b\)), each containing only rendezvous transitions and each but the last followed by a broadcast transition.
Thus, for \(i<b\), the \(i\)'th segment of each path is followed by the \((i+1)\)'th broadcast. The path \(\pi\) is constructed in \(b+1\) phases: in phase \(i\), the \(i\)'th segments of all the paths \(\pi_{1},\ldots,\pi_{h}\) are simulated as was done in the R-systems case, followed (if \(i<b\)) by a single broadcast transition that forces the simulation of the \(i\)'th broadcast on all of these paths at once.

We now present a more flexible form of simulation in which the processes that are assigned to simulate a given path are not fixed throughout the simulation (this will be used in the proof of Theorem 6.5).

**Definition 3.11**.: _We say that \(\pi_{0}\) weakly-simulates \(\pi_{1},\cdots,\pi_{h}\) if there exists an integer \(l\) and a decomposition of each of these paths into \(l\) segments, where the \(i\)'th segment of \(\pi_{j}\) is denoted \(\pi_{j}^{i}\) for \(1\leq i\leq l,0\leq j\leq h\), and pairwise disjoint sets \(X_{1}^{i},\cdots,X_{h}^{i}\) for \(1\leq i\leq l\), such that for every \(i\) we have that \(\pi_{0}^{i}\) simulates \(\pi_{1}^{i},\cdots,\pi_{h}^{i}\)._

The difference between weak-simulation and simulation is that the set of processes simulating each path may be changed at the end of each segment.

**Remark 1**.: _If \(\pi_{0}\) weakly-simulates cycles \(\pi_{1},\cdots,\pi_{h}\), then \(\text{dst}(\pi_{0})\big|_{X_{i}^{l}}=\text{dst}(\pi_{i})=\text{src}(\pi_{i})\) for every \(i\)._

## 4. Discrete Timed Networks

In this section we give the formal definition of a _discrete timed network_, with minor changes compared with [1]. In particular, we first describe the form of a process template and later the operational semantics defining how networks of such processes evolve. In this work, unless stated otherwise, we only consider timed networks _without_ a controller, and always assume a discrete time model \(\mathbb{N}_{0}\).

**Definition 4.1**.: _A timed-network (TN) template is a tuple \(\langle A,C,\mathit{grd},\mathit{rst},\mathit{CP}\rangle\) where \(A=\langle\mathit{AP},\Sigma_{\mathsf{rdz}},S,I,R,\lambda\rangle\) is a finite LTS, \(C\) is a finite set of clock variables (also called clocks), each transition \(t\in R\) is associated with a_ guard _\(\mathit{grd}(t)\) and a_ reset command _\(\mathit{rst}(t)\), and \(\mathit{CP}\) is a set of_ clock predicates_, i.e., predicates of the form \(x\bowtie c\) where \(x\in C\), \(c\in\mathbb{N}_{0}\) is a constant, and \(\bowtie\in\{>,=\}\). A guard is a Boolean combination of clock predicates. A reset command is a subset of \(C\)._

The _size_ of a TN template is the size of the LTS \(A\) (i.e., the number of states plus the number of transitions) plus the sizes of all the guards, reset commands, and clock predicates, where the constants in the clock predicates are represented in _unary_.3

Footnote 3: The unary representation is chosen in order to elicit the relation to RB-systems, i.e., this representation allows us to show that the PMCP for timed-networks and RB-systems is polynomial-time inter-reducible.

A timed network \(T^{n}\) consists of the parallel composition of \(n\in\mathbb{N}\) template processes, each running a copy of the template. Each copy has a local configuration \((q,K)\), where \(q\in S\) and \(K:C\rightarrow\mathbb{N}_{0}\) is a clock evaluation mapping each clock to its (discrete) value. We say that an evaluation \(K:C\rightarrow\mathbb{N}_{0}\) _satisfies_ a Boolean combination of clock predicates \(\phi\) if \(\phi\) evaluates to true when every occurrence of a clock \(x\) in \(\phi\) is replaced by the value \(K(x)\).
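Guard satisfaction is purely syntactic substitution; the following minimal sketch mirrors the definition, with our own encoding of clock predicates as triples and guards as nested tuples.

```python
def pred_holds(K, pred):
    """A clock predicate (x, op, c) with op in {'>', '='} under evaluation K."""
    x, op, c = pred
    return K[x] > c if op == ">" else K[x] == c

def guard_holds(K, guard):
    """Boolean combinations of clock predicates, encoded as
    ('pred', p), ('and', g1, g2), ('or', g1, g2), or ('not', g)."""
    tag = guard[0]
    if tag == "pred":
        return pred_holds(K, guard[1])
    if tag == "and":
        return guard_holds(K, guard[1]) and guard_holds(K, guard[2])
    if tag == "or":
        return guard_holds(K, guard[1]) or guard_holds(K, guard[2])
    if tag == "not":
        return not guard_holds(K, guard[1])
    raise ValueError(tag)

# The guard "x > 2 and y = 0" under the evaluation K = {x: 3, y: 0}.
K = {"x": 3, "y": 0}
assert guard_holds(K, ("and", ("pred", ("x", ">", 2)),
                              ("pred", ("y", "=", 0))))
```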
A rendezvous action \(a\) is _enabled_ if there are \(k\) processes \(i_{1},\cdots,i_{k}\) such that for every \(j\in[k]\) process \(i_{j}\) is in a local configuration \((q_{j},K_{j})\) for which there is an edge \(q_{j}\xrightarrow{a_{j}}q^{\prime}_{j}\), say \(t_{j}\), and the clock evaluation \(K_{j}\) satisfies the guard \(\mbox{grd}(t_{j})\). When the rendezvous action is _taken_, the \(k\) processes change their local configurations to \((q^{\prime}_{j},K^{\prime}_{j})\), where \(K^{\prime}_{j}\) is obtained from \(K_{j}\) after resetting the clocks named in \(\mbox{rst}(t_{j})\). Besides these rendezvous transitions, the system can evolve by taking timed-transitions in which all clocks of all processes advance by one time unit (so every \(K(x)\) increases by one).4 Runs of \(T^{n}\) projected onto a single process induce sequences over the alphabet \(2^{\mathit{AP}\cup\mathit{CP}}\) of the atomic predicates and clock predicates that hold at each local configuration. Specifications (for the behavior of a single process) can be given as automata or linear-temporal properties over the alphabet \(2^{\mathit{AP}\cup\mathit{CP}}\). Footnote 4: Alternatively, as in [1], one can let time advance by any amount. To formally define a timed network as an LTS, its executions, and its corresponding PMCP, one can proceed by instantiating the intuitive description given above, along the lines of, e.g., [1]. Alternatively, one can give an equivalent definition (in the sense that it yields exactly the same LTS for the timed network, and thus also the same set of executions and PMCP) by observing that timed networks are essentially RB-systems whose RB-template \(P_{T}\) is induced by the given TN-template \(T\) by viewing local configurations as states of \(P_{T}\), and thinking of timed transitions as symmetric broadcast transitions. Notice that, following this approach, the obtained RB-template \(P_{T}\) would be infinite, due to clock values increasing unboundedly. In order to make it finite, one can simply truncate clock values up to an appropriate upper bound. In the following we give a detailed construction. **Defining Timed systems as RB-systems.** 5 Let \(T\) be a TN-template \(\langle A,C,\mathit{grd},\mathit{rst},\mathit{CP}\rangle\) where \(A=\langle\mathit{AP},\Sigma_{\mathsf{rdz}},S,I,R,\lambda\rangle\). Define the infinite RB-template Footnote 5: While Definition 3.1 requires an RB-template to be finite, for the purpose of this section we lift this restriction. \[P_{T}=\langle\mathit{AP}\cup\mathit{CP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S_{T},I_{T},R_{T},\lambda_{T}\rangle\] where * \(S_{T}=S\times(\mathbb{N}_{0})^{C}\) is the set of _local configurations_, * \(I_{T}\) consists of all pairs \((q,K)\) where \(q\in I\) and \(K(x)=0\) for all \(x\in C\), * \(R_{T}\) consists of two types of transitions: * _timed transitions_ of the form \((q,K)\xrightarrow{\mathfrak{b}}(q,K^{\prime})\), for every \(q\in S\) and \(K:C\to\mathbb{N}_{0}\), and such that \(K^{\prime}(x)=K(x)+1\) for all \(x\in C\); or * _rendezvous transitions_ of the form \((q,K)\xrightarrow{\sigma}(q^{\prime},K^{\prime})\), for every transition \(t=(q,\sigma,q^{\prime})\in R\), for every evaluation \(K:C\to\mathbb{N}_{0}\) satisfying the guard \(\mathit{grd}(t)\), and such that for every \(x\in C\), if \(x\in\mathit{rst}(t)\) then \(K^{\prime}(x)=0\) and otherwise \(K^{\prime}(x)=K(x)\).
* \(\lambda_{T}\subseteq S_{T}\times(\mathit{AP}\cup\mathit{CP})\) consists of all pairs \(((q,K),p)\) such that either \(p\in\mathit{AP}\) and \(p\in\lambda(q)\), or \(p\in\mathit{CP}\) and \(K\) satisfies the clock predicate \(p\). Given a TN-template \(T=\langle A,C,\mathit{grd},\mathit{rst},\mathit{CP}\rangle\) with \(A=\langle\mathit{AP},\Sigma_{\mathsf{rdz}},S,I,R,\lambda\rangle\), and \(n\in\mathbb{N}\), we define the _timed network_ \(T^{n}\), composed of \(n\) processes, to be the RB-system \((P_{T})^{n}\), and the _timed network_ \(T^{\infty}\) to be the RB-system \((P_{T})^{\infty}\). **Definition 4.2** (PMCP for Timed-Networks).: _Let \(\mathcal{F}\) be a specification formalism for sets of infinite (resp. finite) words over the alphabet \(2^{\mathit{AP}\cup\mathit{CP}}\). The Parameterized Model Checking Problem for Timed-Networks for \(\mathcal{F}\), denoted \(PMCP(\mathcal{F})\), is to decide, given a TN-template \(T\), and a set \(L\) of infinite (resp. finite) words specified in \(\mathcal{F}\), if all executions in the set \(\textsc{exec-INF}(T^{\infty})\) (resp. \(\textsc{exec-FIN}(T^{\infty})\)) are in \(L\)._ In Sections 5 and 6 we show how to solve PMCP for finite RB-templates for specifications of finite and infinite executions respectively. This cannot be used directly to solve the PMCP for timed networks since given a timed template \(T\) the RB-template \(P_{T}\) is infinite. However, the next lemma shows that given \(T\) there is a finite RB-template \(U\) such that \(\textsc{exec}(T^{\infty})=\textsc{exec}(U^{\infty})\). The template \(U\) is obtained from \(P_{T}\) by clipping the clock values to be no larger than \(1\) plus the maximal constant appearing in the clock predicates \(\mathit{CP}\). **Lemma 4.3**.: _Let \(T\) be a TN-template and let \(d=\max\{c:x\bowtie c\in\mathit{CP}\}+1\). One can construct, in time polynomial in the size of \(T\), a finite RB-template \(U\) such that \(\textsc{exec}(T^{\infty})=\textsc{exec}(U^{\infty})\)._ Proof.: We use the following clipping operation: for \(d\in\mathbb{N}\) and \(K:C\to\mathbb{N}_{0}\) let \(clip_{d}(K):C\to\{0,\ldots,d\}\) map \(x\) to \(\min\{K(x),d\}\). For a local configuration \((q,K)\) define \(clip_{d}(q,K)\) to be \((q,clip_{d}(K))\), and extend this to sets of configurations point-wise. Let \(t\) be any transition \((q,K)\xrightarrow{\sigma}(q^{\prime},K^{\prime})\); define \(clip_{d}(t)\) to be \(clip_{d}(q,K)\xrightarrow{\sigma}clip_{d}(q^{\prime},K^{\prime})\), and extend this to sets of transitions point-wise. Note that, by choice of \(d\), an evaluation \(K\) satisfies a Boolean combination of clock predicates \(\phi\) iff the evaluation \(clip_{d}(K)\) satisfies \(\phi\). Let \(T\) be a TN-template \(\langle A,C,\mathit{grd},\mathit{rst},\mathit{CP}\rangle\) where \(A=\langle\mathit{AP},\Sigma_{\mathsf{rdz}},S,I,R,\lambda\rangle\), and \(P_{T}=\langle\mathit{AP}\cup\mathit{CP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S_{T},I_{T},R_{T},\lambda_{T}\rangle\). Then let \[U=\left\langle\mathit{AP}\cup\mathit{CP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S^{\prime},I^{\prime},R^{\prime},\lambda^{\prime}\right\rangle\] where * \(S^{\prime}=clip_{d}(S_{T})\), * \(I^{\prime}=clip_{d}(I_{T})\), * \(R^{\prime}=clip_{d}(R_{T})\), and * \(\lambda^{\prime}=\{((q,clip_{d}(K)),p):((q,K),p)\in\lambda_{T}\}\). We claim that \(\textsc{exec}(T^{\infty})=\textsc{exec}(U^{\infty})\).
To see this, note that \(P_{T}\) and \(U\) are bisimilar using the relation \(B\) defined by letting \(((q,K),(q^{\prime},K^{\prime}))\in B\) iff \((q^{\prime},K^{\prime})=clip_{d}(q,K)\). It is not hard to see by following the definitions that \(B\) is a bisimulation relation. To finish, apply Lemma 3.3, item 2. The construction used in Lemma 4.3 is illustrated in Figure 3. Figure 3. Construction from Lemma 4.3. A TN-template \(T\) (left) with one clock \(x\), and the RB-template \(U\) (right) with \(d=3\). For readability, atomic predicates and clock predicates are not drawn. We note that the polynomial-time result crucially depends on constants being represented in unary (note that the construction polynomially depends on \(d=\max\{c:x\bowtie c\in\mathit{CP}\}+1\)). We leave the investigation of complexity-theoretic considerations when numbers are represented in binary for future work. We next show that RB-systems are not more powerful than timed networks. We show that, by allowing for operations that take subsequences and remove atomic propositions, RB-systems and timed networks can define the same languages of (finite and infinite) executions. Recall the notation \((\pi)_{a}\) and \(\pi|_{\mathit{AP}}\) from Section 2.1. **Lemma 4.4**.: _Let \(P\) be a process template with atomic propositions \(\mathit{AP}\). Then, one can construct in linear time a TN-template \(T\) with a singleton set of clocks \(C=\{c\}\), clock predicates \(\mathit{CP}=\{c=0,c=1\}\) and atomic propositions \(\mathit{AP}\cup\mathit{CP}\) such that \(\textsc{exec}(P^{\infty})=\{(\pi)_{c=0}|_{\mathit{AP}}\mid\pi\in\textsc{exec}(T^{\infty})\}\)._ Proof.: We consider the process template \(P=\langle\mathit{AP},\Sigma_{\textsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\) and construct the TN-template \(T=\langle A,C,\mathit{grd},\mathit{rst},\mathit{CP}\rangle\), where \(C=\{c\}\) and \(\mathit{CP}=\{c=0,c=1\}\). We obtain the LTS \(A\) as a modification of the LTS \(P\) as follows: 1. \(A\) contains every rendezvous transition \(t\) of \(P\), where we set \(\mathit{grd}(t):=c=0\) for the guard and \(\mathit{rst}(t):=\{\}\) for the reset. 2. \(A\) contains an internal transition \(t=q\to q^{\prime}\) for every broadcast transition \(q\xrightarrow{\mathfrak{b}}q^{\prime}\) of \(P\), where we set \(\mathit{grd}(t):=c=1\) for the guard and \(\mathit{rst}(t):=\{c\}\) for the reset (internal transitions are \(0\)-rendezvous transitions, and hence do not require a rendezvous symbol).
Every broadcast transition \(\mathfrak{b}\) of \(\pi\) is simulated in \(\pi^{\prime}\) by a sequence of transitions, exactly one step that increases the time by one, followed by an internal transition of each process (note that the internal transition can be taken because the clock value is \(1\); further the internal transition reset the clock value to \(0\) ensuring the above mentioned invariant). It is easy to verify that \(\lambda(\mathit{proj}_{\pi}(i))=(\lambda^{\prime}(\mathit{proj}_{\pi^{\prime}}( i)))_{c=0}|_{\mathit{AP}}\) for all processes \(i\). Hence, \(\textsc{exec}(P^{\infty})\subseteq\{(\pi)_{c=0}|_{\mathit{AP}}\mid\pi\in \textsc{exec}(T^{\infty})\}\). We now show \(\textsc{exec}(P^{\infty})\supseteq\{(\pi)_{c=0}|_{\mathit{AP}}\mid\pi\in \textsc{exec}(T^{\infty})\}\): We consider an instantiation \(T^{n}\) of \(T\) with \(n\) processes and some run \(\pi\) of \(T^{n}\). We will exploit the property that once the clock value of some process \(i\) in \(\pi\) has a value \(>1\), process \(i\) cannot take rendezvous transitions anymore (because all rendezvous transitions are guarded by \(c=0\) or \(c=1\)) and the clock value of process \(i\) will stay \(>1\) for the rest of the run. The correctness of the construction now hinges on the following reasoning: All processes whose clock value is \(\leq 1\) faithfully simulate a run of \(P^{n}\). Once the clock value of a process becomes \(>1\) the simulation by this process is not faithful anymore, however, this is not problematic because the process also does not interact with the other processes anymore (recall that all rendezvous transitions are blocked). Now, we construct a run \(\pi^{\prime}\) of \(P^{n}\) such that the transitions of each process \(i\) correspond to the transitions of process \(i\) in \(\pi\) as long as the clock value is \(\leq 1\), once the clock value becomes \(>1\) we append an arbitrary broadcast transition for each time step transition (recall that we assume there is a broadcast transition from every state in the process template). It is easy to verify that \(\pi^{\prime}\) is indeed a run of \(P^{n}\). W.l.o.g. we can assume that the clock value of process \(i=1\) does never become \(>1\); otherwise, let \(j\) be the point in time of \(\pi\) where the clock value of process \(i=1\) becomes \(>1\), and consider the run \(\pi_{j}\), which is the prefix of \(\pi\) until time point \(j\). We now observe that by construction of \(\pi^{\prime}\) from \(\pi\) we have \(\lambda(\mathit{proj}_{\pi}(1))=(\lambda^{\prime}(\mathit{proj}_{\pi^{\prime}}( 1)))_{c=0}|_{\mathit{AP}}\). Hence, we get \(\textsc{exec}(P^{\infty})\supseteq\{(\pi)_{c=0}|_{\mathit{AP}}\mid\pi\in \textsc{exec}(T^{\infty})\}\). We note that Lemma 4.4 implies that for every specification \(\varphi\) (given as LTL formula resp. finite automaton) one can construct a specification \(\varphi^{\prime}\) such that all executions \(\textsc{exec-inf}(P^{\infty})\) (resp. \(\textsc{exec-fin}(P^{\infty})\)) satisfy \(\varphi\) iff all executions \(\textsc{exec-inf}(T^{\infty})\) (resp. \(\textsc{exec-fin}(T^{\infty})\)) satisfy the specification \(\varphi^{\prime}\); this is because the operation \((\pi)_{a}\) can be implemented as a formula resp. automaton transformation. 
From Lemma 4.3 and Lemma 4.4 we immediately obtain the main result of this section: **Theorem 4.5**.: _The Parameterized Model Checking Problems for RB-Systems and Timed-Networks are polynomial-time inter-reducible; in particular, (lower as well as upper) bounds on the program/specification/combined complexity transfer._ #### Timed Networks with a Controller The inter-reducibility between RB-Systems and Timed Networks extends to systems with a controller as we sketch in the following. Given two TN-templates \(T_{C}\) and \(T\), the TN-System with a controller (TNC-System) \(T_{C}\cup T^{n}\) is then defined as the RBC-System \(P_{T_{C}}\cup P_{T}^{n}\), where \(P_{T_{C}}\) and \(P_{T}\) are the induced (infinite) RB-templates introduced above. The TNC-System \(T_{C}\cup T^{\infty}\) is then defined analogously. Lemma 4.3 and Lemma 4.4 can then straightforwardly be extended to the relationship between TNC-Systems and RBC-Systems. This gives us the following results: **Theorem 4.6**.: _The Parameterized Model Checking Problems for RBC- and TNC-Systems are polynomial-time inter-reducible; in particular, (lower as well as upper) bounds on the program/specification/combined complexity transfer._ **Corollary 4.7**.: 1. _Let_ \(\mathcal{F}\) _be specifications of sets of infinite executions expressed as NBW or LTL formulas. Then_ \(PMCP(\mathcal{F})\) _of TNC-Systems is undecidable._ 2. _Let_ \(\mathcal{F}\) _be specifications of sets of finite executions expressed as NFW or LTLf formulas. Then_ \(PMCP(\mathcal{F})\) _of TNC-Systems is decidable and non-elementary._ ## 5. Solving PMCP for Specifications over Finite Executions In this section we solve the PMCP problem for specifications given as nondeterministic finite word automata (NFW), and prove that it is pspace-complete. Following the automata-theoretic approach outlined in Section 3.4, given an RB-template \(P\) we will build an NFW \(\mathcal{A}\) that accepts exactly the executions in exec-fin\((P^{\infty})\). Model checking of a regular specification given by an NFW \(\mathcal{A}^{\prime}\) is thus reduced to checking containment of the language of \(\mathcal{A}\) in that of \(\mathcal{A}^{\prime}\). The structure of \(\mathcal{A}\) is based on the reachability-unwinding of \(P\), which we now introduce. Given a template \(P=\langle AP,\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\), we show how to construct a new process template \(P^{\rightharpoonup}=\langle AP,\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S^{ \rightharpoonup},I^{\rightharpoonup},R^{\rightharpoonup},\lambda^{ \rightharpoonup}\rangle\), called the _reachability-unwinding_ of \(P\), see Figure 4. Intuitively, \(P^{\rightharpoonup}\) is obtained by alternating the following two operations: _(i)_ taking a copy of \(P\) and removing from it all the unreachable rendezvous edges, i.e., all transitions of \(P\) that cannot be taken by any process in any run of \(P^{\infty}\); and _(ii)_ unwinding on broadcast edges. This is repeated until a copy is created which is equal to a previous one; we then stop and close the unwinding back into the old copy, forming a high-level lasso structure. Figure 4. A high level view of the reachability-unwinding lasso. **Intuition.** Technically, it is more convenient to first calculate all the desired copies and then to arrange them in the lasso. 
Thus, for \(0\leq i\leq m\) (for an appropriate \(m\)), we first compute an R-template \(P_{i}=\langle AP,\Sigma_{\mathsf{rdz}},S_{i},I_{i},R_{i},\lambda_{i}\rangle\) which is a copy of \(P\) with initial states redesignated and all broadcast edges, plus some rendezvous edges, removed. Second, we take \(P_{0},\ldots,P_{m}\) and combine them, to create the single process template \(P^{\mbox{\tiny$-$\circ$}}\). We do this by connecting, for \(i<m\), the states in \(P_{i}\) with the initial states of \(P_{i+1}\) by means of broadcast edges, as induced by transitions in \(P\). In the case \(i=m\), \(P_{m}\) is connected to the copy \(P_{n}\), for some \(n\leq m\), as determined by the lasso structure. **Constructing \(P_{i}\) via a Saturation Algorithm.** Construct the R-template \(P_{i}=\langle AP,\Sigma_{\mathsf{rdz}},S_{i},I_{i},R_{i},\lambda_{i}\rangle\) (called the \(i\)'th _component_ of \(P^{\mbox{\tiny$-$\circ$}}\)) recursively: for \(i=0\), we let \(I_{0}:=I\); and for \(i>0\) we let \(I_{i}:=\{s\in S\mid(h,\mathfrak{b},s)\in R\mbox{ for some }h\in S_{i-1}\}\) be the set of states reachable from \(S_{i-1}\) by a broadcast edge. The elements \(S_{i}\) and \(R_{i}\) are obtained using the following _saturation_ algorithm: start with \(S_{i}:=I_{i}\) and \(R_{i}:=\emptyset\); at each round of the algorithm, consider in turn each edge \(e\in R\setminus R_{i}\) of the form \(s\xrightarrow{a_{h}}t\) with \(s\in S_{i}\): if for every \(l\in[k]\setminus\{h\}\) there is some other edge \(s^{\prime}\xrightarrow{a_{l}}t^{\prime}\) with \(s^{\prime}\in S_{i}\), then add \(e\) to \(R_{i}\) and add \(t\) (if not already there) to \(S_{i}\). The algorithm ends when a fixed-point is reached. Finally, let \(\lambda_{i}\) be the restriction of \(\lambda\) to \(S_{i}\times AP\). Observe the following property of this algorithm: **Remark 2**.: _If \(s\xrightarrow{a_{h}}t\) in \(R_{i}\) then for all \(l\in[k]\) there exist \(s^{\prime},t^{\prime}\in S_{i}\) such that \(s^{\prime}\xrightarrow{a_{l}}t^{\prime}\) in \(R_{i}\)._ Now, \(P_{i}\) is completely determined by \(I_{i}\) (and \(P\)), and so there are at most \(2^{|S|}\) possible values for it. Hence, there must exist \(n,m\) with \(n\leq m<2^{|S|}\) such that \(P_{n}=P_{m+1}\). We stop calculating \(P_{i}\)'s when this happens since for every \(i\geq n\) it must be that \(P_{i}=P_{n+((i-n)\mod r)}\), where \(r=m+1-n\). We call \(n\) the _prefix length_ of \(P^{\mbox{\tiny$-$\circ$}}\) and call \(r\) its _period_, i.e., \(n\) is the number of components on the prefix of the lasso and \(r\) is the number of components on the noose of the lasso. For \(i\in\mathbb{N}_{0}\), let \(\mathit{comp}(i)\) denote the associated component number of \(i\), i.e., the position in the lasso after \(i\) moves between components. Formally, \(comp(i)=\min(i,n+((i-n)\mod r))\). Before constructing \(P^{\mbox{\tiny$-$\circ$}}\), we first give a short code sketch of the saturation computation.
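The following Python sketch mirrors the saturation loop; encoding a rendezvous edge as \((s,(a,h),t)\), i.e., role \(h\) of the \(k\)-rendezvous action \(a\), is our own assumption rather than the paper's notation.

```python
# Sketch of the saturation algorithm computing (S_i, R_i) from I_i (encoding assumed).
# R is a set of rendezvous edges (s, (a, h), t): role h of the k-rendezvous action a.

def saturate(I_i, R, k):
    """Return (S_i, R_i): states and edges usable by jointly enabled rendezvous."""
    S_i, R_i = set(I_i), set()
    changed = True
    while changed:                          # repeat rounds until a fixed point
        changed = False
        for (s, (a, h), t) in R - R_i:
            if s not in S_i:                # the edge's own source must be reachable
                continue
            # every other role l of action a must be enabled from some state in S_i
            if all(any(s2 in S_i for (s2, (a2, l2), t2) in R
                       if a2 == a and l2 == l)
                   for l in range(1, k + 1) if l != h):
                R_i.add((s, (a, h), t))
                S_i.add(t)
                changed = True
    return S_i, R_i

# Example: a 2-rendezvous action "a" whose two roles start from s0 and s1.
R = {("s0", ("a", 1), "t0"), ("s1", ("a", 2), "t1")}
print(saturate({"s0", "s1"}, R, k=2))       # both edges become usable
```

An edge is added only when all of the other roles of its action are simultaneously enabled from \(S_{i}\), which is exactly the property recorded in Remark 2. We now construct \(P^{\mbox{\tiny$-$\circ$}}\).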
**Definition 5.1** (Reachability-unwinding).: _Given \(P_{0},\ldots,P_{m}\), define the RB-template_ \[P^{\mbox{\tiny$-$\circ$}}=\left\langle AP,\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S^{\mbox{\tiny$-$\circ$}},I^{\mbox{\tiny$-$\circ$}},R^{\mbox{\tiny$-$\circ$}},\lambda^{\mbox{\tiny$-$\circ$}}\right\rangle\] _as follows:_ * \(S^{\mbox{\tiny$-$\circ$}}:=\cup_{i=0}^{m}\{(s,i)\mid s\in S_{i}\}\)_;_ * \(I^{\mbox{\tiny$-$\circ$}}:=\{(s,0)\mid s\in I\}\)_;_ * \(R^{\mbox{\tiny$-$\circ$}}\) _contains the following transitions:_ * _the rendezvous transitions_ \(\cup_{i=0}^{m}\{(s,i)\xrightarrow{\varsigma}(t,i)\mid s\xrightarrow{\varsigma}t\in R_{i}\}\)_, and_ * _the broadcast transitions_ \(\cup_{i=0}^{m-1}\{(s,i)\xrightarrow{\mathfrak{b}}(t,i+1)\mid s\xrightarrow{\mathfrak{b}}t\in R\mbox{ and }s\in S_{i}\}\) _and_ * \(\{(s,m)\xrightarrow{\mathfrak{b}}(t,n)\mid s\xrightarrow{\mathfrak{b}}t\in R\mbox{ and }s\in S_{m}\}\)_._ * \(\lambda^{\mbox{\tiny$-$\circ$}}=\cup_{i=0}^{m}\{((s,i),p):(s,p)\in\lambda_{i}\}\)_._ For \(0\leq i\leq m\), we denote by \(P_{i}^{\mbox{\tiny$-$\circ$}}\) the restriction of \(P^{\mbox{\tiny$-$\circ$}}\) to the states in \(\{(s,i)\mid s\in S_{i}\}\) and the rendezvous transitions between them. We call \(P_{i}^{\mbox{\tiny$-$\circ$}}\) the \(i\)'th _component_ of \(P^{\mbox{\tiny$-$\circ$}}\). Observe that \(P_{i}^{\mbox{\tiny$-$\circ$}}\) can be written as an R-template \(P_{i}^{\mbox{\tiny$-$\circ$}}=\langle AP,\Sigma_{\mathsf{rdz}},S_{i}^{\mbox{\tiny$-$\circ$}},I_{i}^{\mbox{\tiny$-$\circ$}},R_{i}^{\mbox{\tiny$-$\circ$}},\lambda_{i}^{\mbox{\tiny$-$\circ$}}\rangle\) which is obtainable from the component \(P_{i}\) by simply attaching \(i\) to every state. We will sometimes find it convenient to speak of the component \(P_{i}^{\mbox{\tiny$-$\circ$}}\), for \(i>m\), in which case we identify \(P_{i}^{\mbox{\tiny$-$\circ$}}\) with \(P_{comp(i)}^{\mbox{\tiny$-$\circ$}}\). We say that a configuration \(f\) of \((P^{\mbox{\tiny$-$\circ$}})^{\infty}\) is _in_ \(P_{i}^{\mbox{\tiny$-$\circ$}}\) iff all its processes are in states of the component \(P_{i}^{\mbox{\tiny$-$\circ$}}\), i.e., iff \(f(j)\in S_{i}^{\mbox{\tiny$-$\circ$}}\) for all \(j\) in the domain of \(f\). **Example 3**.: \(\bullet\) _If \(P\) is the template in Figure 1 then \(P_{1}=P\), and \(P_{2}\) is the empty process (because there are no broadcast edges). Thus \(P^{\mbox{\tiny$-$\circ$}}\) is a copy of \(P\)._ \(\bullet\) _If \(P\) is the template in Figure 2 then \(P_{1}\) is equal to \(P\) without the broadcast edges, and \(P_{2}=P_{1}\). Thus \(P^{\mbox{\tiny$-$\circ$}}\) is a copy of \(P\)._ \(\bullet\) _If \(P\) is the template on the right hand side of Figure 3, then \(P^{\mbox{\tiny$-$\circ$}}\) contains four components (prefix length \(3\) and period \(1\)). The four components are drawn in Figure 5._ **Definition 5.2** (Legal configuration/path).: _A configuration \(f\) of \((P^{\mbox{\tiny$-$\circ$}})^{\infty}\) is legal iff it is in \(P_{i}^{\mbox{\tiny$-$\circ$}}\) for some \(i\); a path in \((P^{\mbox{\tiny$-$\circ$}})^{\infty}\) is legal iff its source configuration is._ **Remark 3**.: _If \(\pi\) is a finite path of \((P^{\mbox{\tiny$-$\circ$}})^{\infty}\), with \(b\) broadcast transitions, with source \(f\) and destination \(f^{\prime}\), then if \(f\) is in \(P_{i}^{\mbox{\tiny$-$\circ$}}\) then \(f^{\prime}\) is in \(P_{i+b}^{\mbox{\tiny$-$\circ$}}\), and if \(\pi\) is a run then \(f^{\prime}\) is in \(P_{b}^{\mbox{\tiny$-$\circ$}}\).
In particular, any configuration of \((P^{\mbox{\tiny$-$\circ$}})^{\infty}\) that is reachable from an initial configuration is legal._ Recall that we introduced \(P^{\rightsquigarrow}\) in order to define an automaton recognizing \(\textsc{exec-fin}(P^{\infty})\). Before doing so, we have to understand the relationship between \(P^{\rightsquigarrow}\) and \(P\). On the relation between \(P^{\rightsquigarrow}\) and \(P\). Observe that by projecting out the component numbers from states in \(P^{\rightsquigarrow}\) (i.e., by replacing \((s,i)\in S^{\rightsquigarrow}\) with \(s\in S\)), states and transitions in \(P^{\rightsquigarrow}\) are transformed into states and transitions of \(P\). Similarly, paths and runs in \((P^{\rightsquigarrow})^{\infty}\) can be transformed into paths and runs in \(P^{\infty}\). Note, however, that this operation does not induce a bisimulation between \(P^{\rightsquigarrow}\) and \(P\), nor does it induce a bisimulation relation between \((P^{\rightsquigarrow})^{\infty}\) and \(P^{\infty}\), since not all states and transitions of \(P\) appear in every component of \(P^{\rightsquigarrow}\). These missing states and edges are also the reason that a path in \(P^{\infty}\) that is not a run (i.e., that does not start at an initial configuration) may not always be lifted to a path in \((P^{\rightsquigarrow})^{\infty}\). Nonetheless, our construction of \(P^{\rightsquigarrow}\) is such that runs of \(P\) (resp. \(P^{\infty}\)) can be lifted to runs of \(P^{\rightsquigarrow}\) (resp. \((P^{\rightsquigarrow})^{\infty}\)) by simply adding the correct component numbers (based on the number of preceding broadcasts) to the states of the transitions of the run. Winding and Unwinding Notation. More formally, projecting out component numbers (which we call "winding" and denote by \(\otimes\)) is defined as follows: if \((s,j)\) is a state of \(P^{\rightsquigarrow}\) define \((s,j)^{\otimes}:=s\), which is a state of \(P\); if \(t\) is a transition \(s\xrightarrow{\varsigma}s^{\prime}\) of \(P^{\rightsquigarrow}\) define \(t^{\otimes}:=(s^{\otimes},\varsigma,s^{\prime\otimes})\), which is a transition of \(P\); if \(\pi=t_{1}t_{2}\cdots\in\mathit{runs}(P^{\rightsquigarrow})\) define \(\pi^{\otimes}:=t_{1}^{\otimes}t_{2}^{\otimes}\cdots\in\mathit{runs}(P)\). Similarly, if \(\mathfrak{f}\) is a configuration in \((P^{\rightsquigarrow})^{\infty}\) define \(\mathfrak{f}^{\otimes}\), a configuration of \(P^{\infty}\), by \(\mathfrak{f}^{\otimes}(i):=\mathfrak{f}(i)^{\otimes}\) where \(i\) is in the domain of \(\mathfrak{f}\); if \(e\) is a global transition \(\mathfrak{f}\xrightarrow{\sigma}\mathfrak{g}\) of \((P^{\rightsquigarrow})^{\infty}\) then define \(e^{\otimes}:=(\mathfrak{f}^{\otimes},\sigma,\mathfrak{g}^{\otimes})\); and if \(\rho\in\mathit{runs}((P^{\rightsquigarrow})^{\infty})\) define \(\rho^{\otimes}=\rho_{1}^{\otimes}\rho_{2}^{\otimes}\cdots\in\mathit{runs}(P^{\infty})\). Finally, we apply this to sets: if \(X\subseteq\mathit{runs}((P^{\rightsquigarrow})^{\infty})\) then \(X^{\otimes}=\{\rho^{\otimes}:\rho\in X\}\).
We define the reverse transformation of "unwinding" only with respect to runs of \(P^{\infty}\) (a similar definition can be given for the unwinding of runs of \(P\)) as follows: given a configuration \(f\) of \(P^{\infty}\), and a component number \(j\), denote by \(f^{j}\) the function defined by \(f^{j}(i):=(f(i),j)\) for every \(i\) in the domain of \(f\); given \(\pi\in\mathit{runs}(P^{\infty})\), for \(i\in\mathbb{N}\) let \(\mathfrak{b}^{<i}\) be the number of broadcast transitions on \(\pi\) preceding \(\pi_{i}\). The _unwinding_ \(\pi^{\rightsquigarrow}\) of \(\pi\) is defined to be the sequence \(\pi_{1}^{\rightsquigarrow}\pi_{2}^{\rightsquigarrow}\cdots\) obtained by taking for every \(1\leq i\leq|\pi|\) the transition \(\pi_{i}^{\rightsquigarrow}:=(f^{comp(\mathfrak{b}^{<i})},\sigma,g^{comp(\mathfrak{b}^{<i})})\) if \(\pi_{i}=(f,\sigma,g)\) is a rendezvous transition, and otherwise taking \(\pi_{i}^{\rightsquigarrow}:=(f^{comp(\mathfrak{b}^{<i})},\mathfrak{b},g^{comp(\mathfrak{b}^{<i+1})})\) if \(\pi_{i}=(f,\mathfrak{b},g)\) is a broadcast transition. The next lemma says that we may work with template \(P^{\rightsquigarrow}\) instead of \(P\). **Lemma 5.3**.: _For every \(n\in\mathbb{N}\), we have that \(\mathit{runs}(P^{n})=\mathit{runs}((P^{\rightsquigarrow})^{n})^{\otimes}\)._ Proof.: Let us fix any \(n\in\mathbb{N}\). The direction \(\{\rho^{\otimes}\mid\rho\in\mathit{runs}((P^{\rightsquigarrow})^{n})\}\subseteq\mathit{runs}(P^{n})\) follows from the fact that \(P^{\rightsquigarrow}\) is obtained from \(P\) by an unwinding process. The reverse inclusion requires more care as \(P^{\rightsquigarrow}\) misses edges and states of \(P\). Let \(\pi\in\mathit{runs}(P^{n})\). We prove that \(\pi^{\rightsquigarrow}\in\mathit{runs}((P^{\rightsquigarrow})^{n})\) by induction on the length \(i\) of each prefix of \(\pi\). Let \(b\) be the number of broadcast edges on \(\xi:=\pi_{1}\ldots\pi_{i-1}\). For the base case \(i=0\), there is nothing to prove. For the induction step, observe that by the inductive hypothesis the unwinding of \(\xi\) is a run of \((P^{\rightsquigarrow})^{n}\). It remains to show that \(\pi_{i}^{\rightsquigarrow}\) is a transition of \((P^{\rightsquigarrow})^{n}\). Observe that by Remark 3 \(f=\mathsf{dst}(\pi_{i-1})=\mathsf{src}(\pi_{i})\) is in \(P^{\rightsquigarrow}_{comp(b)}\). Consider first the case that \(\pi_{i}\) is the broadcast transition \(f\xrightarrow{\mathfrak{b}}g\). By the definition of the broadcast edges in \(P^{\rightsquigarrow}\) we have that \(\pi_{i}^{\rightsquigarrow}\) is a transition \(f^{comp(b)}\xrightarrow{\mathfrak{b}}g^{comp(b+1)}\) of \((P^{\rightsquigarrow})^{n}\). Consider now the case that \(\pi_{i}\) is the rendezvous edge \(f\xrightarrow{\sigma}g\), and let \(\sigma=((j_{1},a_{1}),\ldots,(j_{k},a_{k}))\). Since \(f\) is in \(P^{\rightsquigarrow}_{comp(b)}\), for every \(h\in[k]\), the algorithm used to construct the states and transitions of the component \(P_{comp(b)}\) must have added the edge \(f(j_{h})\xrightarrow{a_{h}}g(j_{h})\) to \(R_{comp(b)}\). It follows that \(\pi_{i}^{\rightsquigarrow}=(f^{comp(b)},\sigma,g^{comp(b)})\) is a transition of \((P^{\rightsquigarrow})^{n}\). The state labeling of a run \(\rho\in\mathit{runs}((P^{\rightsquigarrow})^{\infty})\) and its winding \(\rho^{\otimes}\) are equal.
Thus we have the following: **Corollary 5.4**.: _For every template \(P\), we have that \(\textsc{exec}(P^{\infty})=\textsc{exec}((P^{-\circ})^{\infty})\)._ The following lemma says that for every component \(P_{b}^{-\circ}\), there is a run of \((P^{-\circ})^{\infty}\) that "loads" arbitrarily many processes into every state of it. **Lemma 5.5** (Loading).: _For all \(b,n\in\mathbb{N}\) there is a finite run \(\pi\) of \((P^{-\circ})^{\infty}\) with \(b\) broadcasts, s.t., \(|\mathsf{dst}(\pi)^{-1}(s)|\geq n\) for every state \(s\) of \(P_{b}^{-\circ}\)._ Proof.: By Lemma 3.10 (Composition) applied to \(P^{-\circ}\) it is sufficient to prove the following: for every \(b\in\mathbb{N}\), and every state \(q\) in \(P_{b}^{-\circ}\), there exists a finite run \(\pi\) of \((P^{-\circ})^{\infty}\), with \(b\) broadcast transitions, such that \(|\mathsf{dst}(\pi)^{-1}(q)|\geq 1\). Recall that, by definition, \(P_{b}^{-\circ}=P_{comp(b)}^{-\circ}\). The proof is by induction on \(b\). For the base case \(b=0\), proceed by induction on the round \(j\geq 1\) of the saturation algorithm at which \(q\) is added to \(S_{comp(b)}\) (i.e., \(S_{0}\)). The state \(q\) is added at round \(j\), due to some edge \((s_{h},a_{h},q)\) of \(R\), only if for every \(l\in[k]\setminus\{h\}\) there are edges \((s_{l},a_{l},q_{l})\) of \(R\) and, either (i) \(j=1\) and \(s_{l}\in I_{comp(b)}\) or, (ii) \(j>1\) and \(s_{l}\) is already in \(S_{comp(b)}\) (i.e., it was added to \(S_{comp(b)}\) at a round before \(j\)). By the inductive hypothesis on round \(j\), for every \(l\in[k]\setminus\{h\}\) there exists \(\rho_{l}\in\mathit{runs}((P^{-\circ})^{\infty})\), with \(b\) broadcasts, which ends with at least one process in the state \(s_{l}\). By Lemma 3.10 (Composition) there exists \(\rho\in\mathit{runs}((P^{-\circ})^{\infty})\) in which there are \(k\) different processes \(i_{1},\ldots,i_{k}\) such that, for every \(l\in[k]\), the process \(i_{l}\) ends in the state \(s_{l}\). Extend \(\rho\) by a global rendezvous transition in which, for \(l\neq h\), process \(i_{l}\) takes the edge \((s_{l},a_{l},q_{l})\), and process \(i_{h}\) takes the edge \((s_{h},a_{h},q)\). This extended run has \(b\) broadcast transitions, and at least one process in state \(q\), as required. For the inductive step \((b>0)\), suppose it holds for all values \(\leq b\), and let us prove it for \(b+1\) (i.e. take \(q\in S_{comp(b+1)}\)). First consider the case \(q\in I_{comp(b+1)}\): there is an edge \((s,\mathfrak{b},q)\) in \(P^{-\circ}\) and by the inductive hypothesis (on \(b\)) there is a run of \((P^{-\circ})^{\infty}\) with \(b\) broadcasts in which some process \(i\) ends in state \(s\). Extend this run by a global broadcast transition in which process \(i\) takes the edge \((s,\mathfrak{b},q)\). This extended run has \(b+1\) broadcast transitions, and at least one process in state \(q\). Second, suppose \(q\in S_{comp(b+1)}\setminus I_{comp(b+1)}\). Then proceed as in the base case. The first part of the following proposition states that the set of finite executions of the RB-system \(P^{\infty}\) is equal to the set of state labels of the finite runs of \(P^{-\circ}\). This is very convenient since \(P^{-\circ}\) is finite, whereas \(P^{\infty}\) is infinite. The second part of the proposition gives a weaker result for the infinite case. **Proposition 5.6**.: _For every template \(P\), the following holds:_ 1. 
\(\textsc{exec-FIN}(P^{\infty})=\{\lambda^{-\circ}(\pi)\mid\pi\in\mathit{runs}(P^{-\circ}),|\pi|\in\mathbb{N}\}\)_._ 2. \(\textsc{exec-INF}(P^{\infty})\subseteq\{\lambda^{-\circ}(\pi)\mid\pi\in\mathit{runs}(P^{-\circ}),|\pi|=\infty\}\)_._ Proof.: We first prove the inclusion \(\textsc{exec}(P^{\infty})\subseteq\{\lambda^{-\circ}(\pi)\mid\pi\in\mathit{runs}(P^{-\circ})\}\). Every execution of \(P^{\infty}\) is, by definition, of the form \(\lambda(\mathit{proj}_{\xi}(1))\) for some \(\xi\in\mathit{runs}(P^{n})\) and some \(n\). By Lemma 5.3, \(\xi=\rho^{\otimes}\) for some \(\rho\in\mathit{runs}((P^{-\circ})^{n})\). Observe that \(\xi\) and \(\rho\) are equi-labeled. Thus, by Lemma 3.3 part 1 we have that \(\lambda(\mathit{proj}_{\xi}(1))=\lambda^{-\circ}(\mathit{proj}_{\rho}(1))\). We now prove the inclusion \[\{\lambda^{-\circ}(\pi)\mid\pi\in\mathit{runs}(P^{-\circ}),|\pi|\in\mathbb{N}\}\subseteq\textsc{exec-FIN}(P^{\infty}).\] Observe that since \(\lambda^{\rightsquigarrow}(\pi)=\lambda(\pi^{\otimes})\) it is enough to prove the following by induction on the length \(i\) of \(\pi\): there is a run \(\rho\in\mathit{runs}(P^{\infty})\) such that \(\mathit{proj}_{\rho}(1)=\pi^{\otimes}\). For the base case \(i=0\) there is nothing to prove. For the inductive step \(i>0\): first apply the inductive hypothesis to get \(\rho\in\mathit{runs}(P^{\infty})\) such that \(\mathit{proj}_{\rho}(1)=(\pi_{1}\pi_{2}\cdots\pi_{i-1})^{\otimes}\). There are two cases depending on \(\pi_{i}\). If \(\pi_{i}\) is a broadcast edge then extend \(\rho\) by a global broadcast transition \(t\) in which process \(1\) takes \(\pi_{i}^{\otimes}\), i.e., \(edge_{1}(t)=\pi_{i}^{\otimes}\), to obtain the run \(\rho\cdot t\in\mathit{runs}(P^{\infty})\) whose projection on process \(1\) equals \(\pi^{\otimes}\). If \(\pi_{i}=(s,a_{h},t)\) is a rendezvous edge then proceed as follows. Let \(b\) be the number of broadcast transitions in \(\pi_{1}\cdots\pi_{i-1}\). So \(\pi_{i}\), being a rendezvous edge, is in \(P_{b}^{\rightsquigarrow}\). Thus, by Remark 2, after the saturation algorithm, for all \(l\in[k]\) there exists an edge \((s_{l},a_{l},t_{l})\) in \(P_{b}^{\rightsquigarrow}\). By Lemma 5.5 (Loading) there exists \(\rho^{\prime}\in\mathit{runs}((P^{\rightsquigarrow})^{\infty})\) with \(b\) broadcast transitions that loads at least one process into every state \(s\) of \(P_{b}^{\rightsquigarrow}\). By Lemma 5.3, \(\rho^{\prime\otimes}\in\mathit{runs}(P^{\infty})\). By Lemma 3.10 (Composition) compose \(\rho\) and \(\rho^{\prime\otimes}\) to get \(\rho^{\prime\prime}\in\mathit{runs}(P^{\infty})\) such that \(\rho^{\prime\prime}\big{|}_{\{1\}}=\mathit{proj}_{\rho}(1)=(\pi_{1}\pi_{2}\cdots\pi_{i-1})^{\otimes}\) and at the end of \(\rho^{\prime\prime}\) there is at least one process (different from process \(1\)) in every state of \(P_{b}^{\rightsquigarrow}\). Now extend \(\rho^{\prime\prime}\) by the rendezvous transition \(t\) for which \(edge_{1}(t)=\pi_{i}^{\otimes}\) and for each \(l\in[k]\setminus\{h\}\) some process takes the transition \((s_{l},a_{l},t_{l})\) to obtain the run \(\rho^{\prime\prime}\cdot t\in\mathit{runs}(P^{\infty})\) whose projection on process \(1\) equals \(\pi^{\otimes}\).
Observe that \(P\) equals \(P^{\rightsquigarrow}\), and that \(p^{\omega}\) is the state label of the run of \(P^{\rightsquigarrow}\) that self-loops in the initial state forever, but \(p^{\omega}\) is not an execution of \(P^{\infty}\). In the next section we will use B-automata to capture \(\textsc{exec-inf}(P^{\infty})\)._ We introduced \(P^{\rightsquigarrow}\) in order to define an automaton recognizing \(\textsc{exec-fin}(P^{\infty})\). This automaton is formed from the LTS \(P^{\rightsquigarrow}\) by adding the input alphabet \(2^{AP}\) and having each transition read as input the label of its source state. **Definition 5.7** (NFW \(\mathcal{A}\)).: _Given an RB-template \(P=\langle AP,\Sigma_{\textsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\), consider the reachability-unwinding \(P^{\rightsquigarrow}=\langle AP,\Sigma_{\textsf{rdz}}\cup\{\mathfrak{b}\},S^{ \rightsquigarrow},I^{\rightsquigarrow},R^{\rightsquigarrow},\lambda^{\rightsquigarrow}\rangle\). Define \(\mathcal{A}\) to be the NFW \(\langle\Sigma,S^{\prime},I^{\prime},R^{\prime},F\rangle\) with_ * _input alphabet_ \(\Sigma=2^{AP}\)_,_ * _state set_ \(S^{\prime}=S^{\rightsquigarrow}\)_,_ * _initial-states set_ \(I^{\prime}=I^{\rightsquigarrow}\)_,_ * _transition relation_ \(R^{\prime}\) _consisting of transitions_ \((s,\lambda^{\rightsquigarrow}(s),t)\) _for which there is a_ \(\sigma\) _such that_ \((s,\sigma,t)\in R^{\rightsquigarrow}\)_,_ * _final-states set_ \(F=S^{\rightsquigarrow}\)_._ The following is immediate from Proposition 5.6 Part 1: **Corollary 5.8**.: _The automaton \(\mathcal{A}\) recognizes the language \(\textsc{exec-fin}(P^{\infty})\)._ Applying a standard automata-theoretic technique, we get the following upper bound: **Theorem 5.9**.: _Let \(\mathcal{F}\) be specifications of finite executions expressed as NFWs or \(\mathsf{LTLf}\) formulas. Then \(PMCP(\mathcal{F})\) for RB-systems is in \(\textsc{pspace}\)._ Proof.: Let \(P=\langle AP,\Sigma_{\textsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\) be a process template, and let \(\mathcal{A}\) be the NFW from Definition 5.7. The fact that words accepted by \(\mathcal{A}\) are exactly the executions in \(\textsc{exec-fin}(P^{\infty})\) is by Corollary 5.8. Analyzing the construction of the unwinding template \(P^{\rightsquigarrow}\) (before Definition 5.1), we get that \(\mathcal{A}\) is of size at most exponential in the size of \(P\). We describe a pspace algorithm for checking the containment of the language accepted by \(\mathcal{A}\) in the language of some specification NFW \(\mathcal{A}^{\prime}\). This is done by solving the non-containment problem in nondeterministic polynomial space, and using the fact that npspace = pspace = co-pspace. The algorithm constructs on the fly: (1) a finite word \(\rho\in(2^{AP})^{*}\), and an accepting run of \(\mathcal{A}\) on \(\rho\); and (2) checks that \(\rho\) is not accepted by \(\mathcal{A}^{\prime}\). Item (2) can be done, as usual, simply by storing the subset of states of \(\mathcal{A}^{\prime}\) that are reachable by reading the prefix of \(\rho\) constructed thus far, and validating that, at the end, this set does not contain an accepting state. For item (1), the algorithm does not store all of (the exponentially large \(\mathcal{A}\)). Instead, at each point in time, it only stores a single component \(P_{i}^{\rightsquigarrow}\) of \(P^{\rightsquigarrow}\) (which can be calculated in ptime for every \(i\)), from which it can deduce the part of \(\mathcal{A}\) corresponding to it. 
The algorithm starts by constructing \(P_{0}^{\rightsquigarrow}\), and sets \(\rho\) to be the empty word, and the run of \(\mathcal{A}\) on \(\rho\) to be the initial state of \(\mathcal{A}\). At each step, it can either declare the guess as finished (if the run constructed thus far ends in an accepting state of \(\mathcal{A}\)) or extend \(\rho\) and the run. Extending \(\rho\) is a trivial guess. Extending the guessed run is done by either guessing a transition of \(\mathcal{A}\) inside the component induced by the currently stored \(P_{i}^{\rightsquigarrow}\); or by guessing a transition that moves to the next component in the lasso, at which point the algorithm also discards \(P_{i}^{\rightsquigarrow}\) and replaces it with \(P_{i+1}^{\rightsquigarrow}\). In case the specification is given as an LTLf formula \(\varphi\), we let \(\mathcal{A}^{\prime}\) be the NFW from Theorem 2.2 corresponding to \(\neg\varphi\) and replace (2) above by a check that \(\rho\) is accepted by \(\mathcal{A}^{\prime}\), which can be done, as usual, simply by storing the subset of states of \(\mathcal{A}^{\prime}\) that are reachable by reading the prefix of \(\rho\) constructed thus far, and validating that, at the end, this set does contain an accepting state. The next theorem gives a corresponding lower bound. Interestingly, its proof shows that the problem is already pspace-hard for _safety_ specifications (i.e., that a bad state is never visited). **Theorem 5.10**.: _Let \(\mathcal{F}\) be specifications of finite executions expressed as NFW or LTLf formulas. Then \(PMCP(\mathcal{F})\) for RB-systems is pspace-hard. Moreover, this is true even for a fixed specification, and thus the program-complexity is pspace-hard._ Proof.: The proof proceeds by a reduction from the reachability problem for Boolean programs, known to be pspace-complete [10]. A Boolean program consists of \(m\) Boolean variables \(X_{1},\ldots,X_{m}\) (for some \(m\)) and \(n\) instructions (for some \(n\)), referred to by their _program location_ \(l\in[n]\), of two types: _(i) conditionals_ of the form \(l:\texttt{if}\ X_{i}\texttt{ then}\ l_{if}\texttt{ else}\ l_{else}\); _(ii) toggles_ of the form \(l:X_{i}:=\neg X_{i}\). The semantics of the first type of instruction is to move from location \(l\) to location \(l_{if}\) if \(X_{i}\) is true and to location \(l_{else}\) otherwise; instructions of this type are thus conditional jumps that do not change the values of any of the Boolean variables. The semantics of the second type of instruction is to negate the value of the Boolean variable \(X_{i}\); the execution continues from location \(l+1\) (unless that was the last instruction). All Boolean variables are initialized to false and execution begins with instruction 1. We remark that the Boolean programs considered here are deterministic. The reachability problem for Boolean programs is to decide whether the execution of a Boolean program ever reaches its last program location \(n\). Note that we can assume, without loss of generality, that the last instruction of a Boolean program is a conditional instruction. Given a Boolean program \(B\), we build a process template \(P\), and a specification NFW, such that \(P^{\infty}\) satisfies the specification iff the execution of \(B\) does not reach its last instruction.
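For intuition about what the reduction must simulate, here is a small Python interpreter for such Boolean programs (a sketch; the instruction encoding is our own). Since a deterministic program has finitely many configurations, its execution either reaches location \(n\) or eventually repeats a configuration.

```python
# Sketch: interpreter for the Boolean programs of the reduction (encoding assumed).
# Instructions: ("if", i, l_if, l_else) tests X_i; ("toggle", i) negates X_i.

def reaches_last(prog, m):
    """Return True iff the execution of prog (locations 1..n) reaches location n."""
    n = len(prog)
    X = [False] * (m + 1)               # variables X_1..X_m, all initially false
    loc, seen = 1, set()
    while loc != n:
        state = (loc, tuple(X))
        if state in seen:               # deterministic program: a repeat means a loop
            return False
        seen.add(state)
        instr = prog[loc - 1]
        if instr[0] == "if":
            _, i, l_if, l_else = instr
            loc = l_if if X[i] else l_else
        else:                           # ("toggle", i)
            _, i = instr
            X[i] = not X[i]
            loc += 1
    return True

# Example: 1: X_1 := not X_1;  2: if X_1 then 3 else 1;  3: last (conditional).
print(reaches_last([("toggle", 1), ("if", 1, 3, 1), ("if", 1, 3, 3)], m=1))  # True
```

The template \(P\) constructed next distributes exactly this state, namely the program location and the variable values, among the processes of the RB-system.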
Formally, \(P=\langle AP,\Sigma_{\texttt{rdz}}\cup\{\mathfrak{b}\},S,\{\iota\},R,\lambda\rangle\), where: * \(AP=\{\text{done}\}\), i.e., there is a single atom; * \(\Sigma_{\mathtt{rdz}}=\cup_{a\in\Sigma_{\mathtt{actn}}}\{a_{1},a_{2}\}\) where \[\Sigma_{\mathtt{actn}}=\cup_{i\in[m]}\{(\text{protect},i),(\text{if},i),( \text{else},i),(\text{toggle},i)\}\] * The set of states \(S:=\{\iota,\mathtt{sink}\}\cup S_{instr}\cup S_{var}\), where \(\iota\) is an initial state, \(\mathtt{sink}\) is a sink state, and: 1. \(S_{instr}:=\cup_{l\in[n]}\{l,l^{\prime}\}\), 2. \(S_{var}:=\cup_{i\in[m]}\{X_{i},\neg X_{i},X^{\prime}_{i},\neg X^{\prime}_{i}\}\), * \(R\) will be defined later; * \(\lambda=\{(n,\text{done})\}\), i.e., the atom is true in state \(n\), and false in all other states. The specification says that the last program location \(n\) is never visited. This can be expressed, for instance, by the LTLf formula \(\mathtt{G}\neg\)done (read "it is always the case that atom done does not occur"), and by an NFW consisting of a single state. In what follows we call the specification \(\varphi\). Before describing the transition relation \(R\), we briefly describe the way the states are used in the simulation of the Boolean program \(B\) by runs of \(P^{\infty}\). At the beginning of every run, every process nondeterministically moves (on a broadcast) from the initial state either to the state \(1\), or to one of the states \(\neg X_{1},\cdots,\neg X_{m}\). A process that moves to the state \(1\) will keep track of (i.e., encode) the program location of the Boolean program, and from this point on will only be in states from the set \(S_{instr}\); whereas a process that moves to a state of the form \(\neg X_{i}\) will keep track of (i.e., encode) the value of variable \(X_{i}\), and from this point on will only be in states from the set \(\{X_{i},\neg X_{i},X^{\prime}_{i},\neg X^{\prime}_{i}\}\). Observe that multiple processes may decide to encode the program location or the same variable. However, the transition relation \(R\) will be defined in such a way as to enforce the invariant that, right after every broadcast, the following holds: **(\(\dagger\)) :**: all processes that encode the same object (i.e., the program location or the value of some variable) agree on its value. Moreover, between every two broadcasts, at most one instruction of the Boolean program can be simulated, namely, the instruction referenced by the processes that track the program location. The primed versions of the states will be used in order to enforce this round structure, as well as the invariant \(\dagger\), as follows: in each round, every rendezvous transition moves a process from an unprimed state to a primed state, from which it can only move on a broadcast; whereas a broadcast takes a process in a primed state back to an unprimed state, and processes in an unprimed state to \(\mathtt{sink}\). Let \(var(l)\) denote the index \(i\) of the variable used (i.e., tested or toggled) in the instruction in program location \(l\). We now define the transition function \(R\) of the template \(P\). 
It consists of the following transitions: * \(\iota\xrightarrow{\mathfrak{b}}1\); \(\iota\xrightarrow{\mathfrak{b}}\neg X_{i}\) for \(i\in[m]\), * \(\mathtt{sink}\xrightarrow{\mathfrak{b}}\mathtt{sink}\); \(l\xrightarrow{\mathfrak{b}}\mathtt{sink}\), \(X_{i}\xrightarrow{\mathfrak{b}}\mathtt{sink}\), \(\neg X_{i}\xrightarrow{\mathfrak{b}}\mathtt{sink}\), for \(l\in[n],i\in[m]\), * \(l^{\prime}\xrightarrow{\mathfrak{b}}l\), \(X^{\prime}_{i}\xrightarrow{\mathfrak{b}}X_{i}\), and \(\neg X^{\prime}_{i}\xrightarrow{\mathfrak{b}}\neg X_{i}\). * \(l\xrightarrow{(\text{protect},i)_{1}}l\) for \(l\in[n]\) and \(i\in[m]\setminus\{var(l)\}\). * \(X_{i}\xrightarrow{(\text{protect},i)_{2}}X^{\prime}_{i}\) and \(\neg X_{i}\xrightarrow{(\text{protect},i)_{2}}\neg X^{\prime}_{i}\) for \(i\in[m]\). * \(l\xrightarrow{(\text{if},var(l))_{1}}l^{\prime}_{if}\) and \(l\xrightarrow{(\text{else},var(l))_{1}}l^{\prime}_{else}\) for all conditional instructions \(l\). * \(X_{i}\xrightarrow{(\text{if},i)_{2}}X^{\prime}_{i}\) and \(\neg X_{i}\xrightarrow{(\text{else},i)_{2}}\neg X^{\prime}_{i}\) for \(i\in[m]\). * \(l\xrightarrow{(\text{toggle},var(l))_{1}}(l+1)^{\prime}\) for all toggle instructions \(l\). * \(X_{i}\xrightarrow{(\text{toggle},i)_{2}}\neg X_{i}^{\prime}\) and \(\neg X_{i}\xrightarrow{(\text{toggle},i)_{2}}X_{i}^{\prime}\) for \(i\in[m]\). We now prove that the reduction is correct. Suppose that the infinite run \(\rho\) of the Boolean program visits its last instruction. We build a run \(\pi\) of \(P^{m+1}\) witnessing the fact that \(P^{\infty}\) does not satisfy \(\varphi\). The run \(\pi\) simulates \(\rho\) as follows. Start with a broadcast, which takes process \(m+1\) (called the _controller process_) to state \(1\), and for every \(i\in[m]\) takes process \(i\) (called the \(i\)_'th variable process_) to \(\neg X_{i}\). Repeatedly extend the run \(\pi\) by the following sequence of global transitions (below, \(l\) denotes the current state of the controller process): 1. For every \(i\neq var(l)\), the controller rendezvous with the \(i\)'th variable process on the action \((\text{protect},i)\). 2. The controller rendezvous with the \(var(l)\)'th variable process as follows: writing \(i\) for \(var(l)\), if \(l\) is a toggle instruction then the rendezvous action is \((\text{toggle},i)\); otherwise, it is \((\text{if},i)\) if the \(i\)'th variable process is in state \(X_{i}\), and it is \((\text{else},i)\) if it is in state \(\neg X_{i}\). 3. A broadcast. It is easy to see that \(\pi\) simulates \(\rho\). In particular, the state of the controller after \(z\geq 1\) broadcasts is equal to the program location of the Boolean program after \(z\) steps. For the other direction, we argue as follows. We say that a configuration \(f\) of \(P^{\infty}\) is _consistent_ if it satisfies the invariant \(\dagger\) stated earlier. For such an \(f\), let \(pl(f)\in[n]\) be the program location encoded by \(f\), or \(\bot\) if there are no processes in \(f\) tracking the program location; and for every \(i\in[m]\), let \(val_{i}(f)\in\{\texttt{true},\texttt{false}\}\) be the value of \(X_{i}\) encoded by \(f\), or \(\bot\) if there are no processes in \(f\) tracking the value of \(X_{i}\). Given \(z\in\mathbb{N}\), and a run \(\pi\) of \(P^{\infty}\) with at least \(z\) broadcasts, write \(\pi(z)\) for the configuration in \(\pi\) immediately following the \(z\)'th broadcast. Observe that it is enough to show that \(\pi\) simulates the run \(\rho\) of the Boolean program in the following sense: 1.
\(\pi(z)\) is consistent, 2. if \(pl(\pi(z))\neq\bot\) then the program location in \(\rho_{z}\) is equal to \(pl(\pi(z))\), 3. for every \(i\in[m]\), if \(val_{i}(\pi(z))\neq\bot\) then the value of variable \(X_{i}\) in \(\rho_{z}\) is equal to \(val_{i}(\pi(z))\). We prove the items above by induction on \(z\). For \(z=1\), i.e., after the first broadcast (which must be the first transition on any run), processes assume different roles. Any process that moves to state \(1\) is called a _controller_, and any process that moves to state \(\neg X_{i}\) (for some \(i\in[m]\)) is called an \(i\)_'th variable process_. Clearly the induction hypothesis holds. For the inductive step, note that by the inductive hypothesis \(\pi(z-1)\) is consistent, let \(l:=pl(\pi(z-1))\), and observe that the only rendezvous transitions on \(\pi\) between the \((z-1)\)'th and \(z\)'th broadcasts are of a controller process that rendezvous with a variable process on an action of the form described in items \(1\) and \(2\) in the proof of the first direction (in particular, if \(l=\bot\) then there are no rendezvous between the \((z-1)\)'th and \(z\)'th broadcasts). Thus, it must be that, just before the \(z\)'th broadcast, processes in a primed state that are encoding the same object are in the same state. Combining this with the fact that any process in an unprimed state will move to \(\mathtt{sink}\) on the \(z\)'th broadcast, one can see that the inductive hypothesis holds also after the \(z\)'th broadcast. **Remark 5**.: _We now show that the specification complexity of the PMCP for NFW and LTLf specifications is also pspace-hard. We do this by reducing from the standard model-checking problem._ _Recall that the standard model-checking problem is, given an LTS \(L\) without edge labels (aka 'finite state program' or 'Kripke structure') and a specification \(\varphi\), to decide if all finite executions of \(L\) satisfy \(\varphi\). The specification complexity of the model-checking problem for LTLf formulas or for NFW specifications is pspace-hard. To see that, given an alphabet \(\Sigma\), take a single state Kripke structure \(K\) that generates all words in \(\Sigma^{*}\), and note that model-checking \(K\) and a given NFW specification is equivalent to deciding the universality problem for NFWs which is (even over a fixed alphabet) pspace-hard [12]. Similarly, model-checking \(K\) and a given LTLf formula is equivalent to deciding LTLf satisfiability (using the negation of the original formula) which is again pspace-hard even for a fixed alphabet [13]._ _To reduce the model-checking problem (for a fixed LTS \(L\) without edge labels) to the PMCP problem with a fixed RB-template, simply build an RB-template \(P\) from \(L\) by adding the edge label \(\mathfrak{b}\) to every transition, i.e., every transition in \(L\) becomes a broadcast transition. Clearly, then, \(\textsc{exec-fin}(P^{\infty})\) consists exactly of the sequences of the form \(\lambda(\pi)\) where \(\pi\) is a finite run of \(L\). Thus, \(L\models\varphi\) iff all executions in \(\textsc{exec-fin}(P^{\infty})\) satisfy \(\varphi\)._ Theorem 3.5 from Section 3.4 now follows: the upper bound is in Theorem 5.9, the lower-bound on the program complexity (and thus the combined complexity) is in Theorem 5.10, and the lower-bound on the specification complexity is in Remark 5. ## 6.
Solving PMCP for Specifications over Infinite Executions The main step in our automata-theoretic approach to solve the PMCP for infinite executions is the construction, given an RB-template \(P\), of a B-automaton \(\mathcal{B}\) (with a trivial Buchi set) that accepts the language \(\textsc{exec-inf}(P^{\infty})\). In this section we describe the construction of this automaton. In order to construct the B-automaton \(\mathcal{B}\), it is helpful to recall the source of difficulty in dealing with infinite executions as opposed to finite executions. Recall from Section 5 that the finite executions were dealt with by simply turning the reachability-unwinding \(P^{\mbox{\tiny$-$}\!\circ}\) into a nondeterministic automaton \(\mathcal{A}\) (by having each transition read as input the label of its source state), and that this worked because of the equality between the state-labels of finite runs of \(P^{\mbox{\tiny$-$}\!\circ}\) and the finite executions of \(P^{\infty}\), as stated in the first part of Proposition 5.6. Also, recall that the second part of the same proposition, which deals with the infinite case, only states a containment (instead of equality), which may be strict -- as illustrated by the template \(P\) in Example 1. Indeed, looking at this template again, one can see that in order to allow process \(1\) to trace \(p^{z}\) for \(z\in\mathbb{N}\), we can use a system with \(z+1\) processes that rendezvous with process \(1\) one after the other. However, no finite number of processes can allow process \(1\) to trace \(p^{\omega}\), since once a process rendezvous with process \(1\) it cannot do so ever again. Thus, while the self loop on the initial state can be taken infinitely often in a path in the template \(P\) (and hence also in a run of the automaton \(\mathcal{A}\)), it cannot be taken infinitely often in a run of \(P^{\infty}\). The key to modifying \(\mathcal{A}\) to obtain \(\mathcal{B}\) is to treat edges of \(P^{\mbox{\tiny$-$}\!\circ}\) differently based on the conditions under which they can (or cannot) be taken infinitely often in runs of \((P^{\mbox{\tiny$-$}\!\circ})^{\infty}\). In particular, one has to distinguish between edges that can appear infinitely often on a run with finitely or infinitely many broadcasts, and among the latter between ones that can or cannot appear unboundedly many times between two consecutive broadcasts. Note that the fact that an edge is only used a bounded number of times between consecutive broadcasts can be naturally tracked by the acceptance condition of a single counter. The rest of this section is organized as follows. We formally present the classification of edges along the lines outlined above, and prove a couple of easy lemmas about this classification. We then give the definition of the automaton \(\mathcal{B}\) and prove the correctness of the construction.
**Definition 6.1** (Edge Types).: _An edge \(e\) of \(P^{\rightsquigarrow}\) is_ * _locally-reusable iff it appears infinitely many times on some run of_ \((P^{\rightsquigarrow})^{\infty}\) _with finitely many broadcasts._ * _green iff it appears infinitely many times on some run of_ \((P^{\rightsquigarrow})^{\infty}\) _with infinitely many broadcasts._ * _light green iff it appears unboundedly many times between broadcasts on some run_ \(\pi=\pi_{0}\pi_{1}\dots\) _of_ \((P^{\rightsquigarrow})^{\infty}\) _with infinitely many broadcasts, i.e., if for every_ \(n\in\mathbb{N}\) _there are_ \(i<j\in\mathbb{N}\) _such that_ \(\pi_{i}\dots\pi_{j}\) _contains_ \(n\) _transitions using this edge and no broadcast edges._ * _dark green iff it is_ _green but not_ _light green__._ Note that: * _light green edges are also_ _green__,_ * _dark green edges are exactly those_ _green edges which satisfy that for every run of_ \((P^{\rightsquigarrow})^{\infty}\) _there is a bound on the number of times they appear between any two consecutive broadcasts,_ * _green edges only belong to components of_ \(P^{\rightsquigarrow}\) _that are on the loop of the lasso,_ * _broadcast edges can only be_ _dark green__._ **Example 4**.: _Neither edge in \(P^{\rightsquigarrow}\) for the template \(P\) in Figure 1 is locally-reusable or green._ _On the other hand, every edge in \(P^{\rightsquigarrow}\) for the template \(P\) in Figure 2 is_ _dark green _(and none are locally-reusable)._ It turns out that determining the type of an edge is decidable; this is a non-trivial problem, and we dedicate Section 7 to solving it. We now characterize the edge types in terms of witnessing cycles in \((P^{\rightsquigarrow})^{\infty}\). Recall the definition of legal configuration and path (Definition 5.2). **Lemma 6.2**.: _An edge \(e\) of \(P^{\rightsquigarrow}\) is:_ * _locally-reusable iff it appears on a legal cycle_ \(C_{e}\) _of_ \((P^{\rightsquigarrow})^{\infty}\) _that has no broadcasts._ * _green iff it appears on a legal cycle_ \(C_{e}\) _of_ \((P^{\rightsquigarrow})^{\infty}\) _that has broadcasts._ * _light green iff it appears on a legal cycle_ \(D_{e}\) _of_ \((P^{\rightsquigarrow})^{\infty}\) _that has no broadcasts, that is contained in a legal cycle_ \(C_{e}\) _with broadcasts;_ * _dark green iff it appears on a legal cycle_ \(C_{e}\) _of_ \((P^{\rightsquigarrow})^{\infty}\) _that has broadcasts, but not on any cycle without broadcasts that is contained in a cycle with broadcasts._ Proof.: Observe that it is enough to prove the first three items. For the 'if' directions, let \(n\in\mathbb{N}\) be the number of processes in \(C_{e}\) (i.e., \(C_{e}\) is a cycle in \((P^{\rightsquigarrow})^{n}\)), and recall that since \(C_{e}\) is legal, its source configuration \(f\) is in \((P^{\rightsquigarrow}_{i})^{n}\) for some \(i\). Hence, by Lemma 5.5 (Loading), a configuration \(g\) such that \(g\big{|}_{[n]}=f\) (i.e., the first \(n\) processes of \(g\) form the configuration \(f\)) can be reached from an initial configuration of \((P^{\rightsquigarrow})^{\infty}\); then, for items (i) and (ii), we can simply pump \(C_{e}\) forever (with the extra processes in \(g\) moving only on broadcasts). For item (iii), \(C_{e}\) is pumped in the following way: for every \(i\in\mathbb{N}\), at the \(i\)'th repetition of the outer cycle \(C_{e}\) we pump the inner cycle \(D_{e}\) for \(i\) times. 
The 'only if' directions follow from the observation that every run in \((P^{\rightsquigarrow})^{\infty}\) involves only finitely many processes, and thus only finitely many distinct configurations, all of which are legal (by Remark 3).

Lemma 6.2 implies that every light green edge is locally-reusable, whereas a dark green edge may or may not be locally-reusable.6

Footnote 6: This overlap is the reason that we decided to use the term "locally-reusable" instead of naming these edges by another color.

The following lemma states that we can assume that the cycles in Lemma 6.2 that witness broadcasts all have the same number of broadcasts.

**Lemma 6.3**.: _There is a number \(K\) such that for every green, light green, or dark green edge \(e\), the cycle \(C_{e}\) mentioned in items (ii), (iii) and (iv) of Lemma 6.2 can be taken to contain exactly \(K\) broadcasts._

Proof.: Apply Lemma 6.2 to all the relevant edges in \(P^{\rightsquigarrow}\) and obtain cycles, say \(C_{e_{1}},C_{e_{2}},\cdots,C_{e_{l}}\). Suppose \(C_{e_{i}}\) has \(k_{i}\) broadcasts. Let \(K\) be the least common multiple of the \(k_{i}\)'s. By repeating the cycle \(C_{e_{i}}\) for \(K/k_{i}\) times, we obtain a witnessing cycle with exactly \(K\) broadcasts.

We now informally describe the structure of the automaton \(\mathcal{B}\). It is made up of three copies of \(\mathcal{A}\) (called \(\mathcal{B}^{init}\), \(\mathcal{B}^{grn}\), \(\mathcal{B}^{loc}\)) as follows: \(\mathcal{B}^{init}\) is an exact copy of \(\mathcal{A}\); the copy \(\mathcal{B}^{grn}\) has only the green edges left; and \(\mathcal{B}^{loc}\) has only the locally-reusable edges left (and in particular has no broadcast edges). Furthermore, for every edge \((s,\sigma,s^{\prime})\) in \(\mathcal{B}^{init}\) we add two new edges, both with the same source as the original edge, but one going to the copy of \(s^{\prime}\) in \(\mathcal{B}^{grn}\), and one to the copy of \(s^{\prime}\) in \(\mathcal{B}^{loc}\). The initial states of \(\mathcal{B}\) are the initial states of \(\mathcal{B}^{init}\). The single counter increments at every transition in \(\mathcal{B}^{init}\) and at every dark green rendezvous edge in \(\mathcal{B}^{grn}\), and resets at every broadcast edge in \(\mathcal{B}^{grn}\). In other words, the acceptance condition will enforce that the run eventually leaves \(\mathcal{B}^{init}\), and that boundedly many dark green rendezvous edges are taken between broadcasts in \(\mathcal{B}^{grn}\). Here is the formal definition.

**Definition 6.4** (B-Automaton \(\mathcal{B}\)).: _Let \(P=\langle\mathrm{AP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S,I,R,\lambda\rangle\) be a process template, and let \(P^{\rightsquigarrow}=\langle\mathrm{AP},\Sigma_{\mathsf{rdz}}\cup\{\mathfrak{b}\},S^{\rightsquigarrow},I^{\rightsquigarrow},R^{\rightsquigarrow},\lambda^{\rightsquigarrow}\rangle\) be its unwinding. Define the B-automaton \(\mathcal{B}=\langle\Sigma,S^{\prime},I^{\prime},R^{\prime},G,cc\rangle\) as follows:_

* \(\Sigma=2^{\mathrm{AP}}\)_,_
* \(S^{\prime}=\{init,grn,loc\}\times S^{\rightsquigarrow}\)_,_
* \(G=S^{\prime}\)_, i.e., the Büchi condition is always satisfied,_
* \(I^{\prime}=\{init\}\times I^{\rightsquigarrow}\)_,_
* _the transition relation \(R^{\prime}=\delta_{init}\cup\delta_{grn}\cup\delta_{loc}\) and the counter function \(cc\) are defined as follows. For every transition \(e=(s,\sigma,t)\in R^{\rightsquigarrow}\):_
1. _\(\delta_{init}\) contains the transitions \(\tau=((init,s),\lambda^{\rightsquigarrow}(s),(i,t))\) for every \(i\in\{init,grn,loc\}\); and \(cc(\tau)=\mathsf{inc}\)._
2. _\(\delta_{grn}\) contains the transition \(\tau=((grn,s),\lambda^{\rightsquigarrow}(s),(grn,t))\) only if \(e\) is green; \(cc(\tau)=\mathsf{inc}\) if \(e\) is a dark green rendezvous edge, \(cc(\tau)=\mathsf{reset}\) if \(e\) is a broadcast edge, and otherwise \(cc(\tau)=\mathsf{skip}\)._
3. _\(\delta_{loc}\) contains the transition \(\tau=((loc,s),\lambda^{\rightsquigarrow}(s),(loc,t))\) only if \(e\) is locally-reusable; \(cc(\tau)=\mathsf{skip}\)._

Since transitions of \(\mathcal{B}\) are induced by transitions of \(P^{\rightsquigarrow}\), we call transitions of \(\mathcal{B}\) broadcast, light green, etc., based on the classification of the corresponding transition of \(P^{\rightsquigarrow}\). The rest of this section is concerned with proving that the construction is correct:

**Theorem 6.5**.: _For every RB-template \(P\) the language of the B-automaton \(\mathcal{B}\) is \(\textsc{exec-inf}(P^{\infty})\)._

Right-to-left direction. Take \(\alpha\in\textsc{exec-inf}(P^{\infty})\). By Corollary 5.4 we have that \(\textsc{exec}(P^{\infty})=\textsc{exec}((P^{\rightsquigarrow})^{\infty})\). Thus, there is some infinite path \(\pi\) of \(P^{\rightsquigarrow}\) such that \(\alpha=\lambda^{\rightsquigarrow}(\pi)\). We now show that there is an accepting run of \(\mathcal{B}\) on \(\alpha\). First note that by Definition 5.7 and Proposition 5.6, for every prefix of \(\alpha\) there is a finite run on that prefix that remains in \(\mathcal{B}^{init}\). There are two cases: either \(\pi\) contains infinitely many broadcast transitions or not. If it does not, then from some point on all edges on \(\pi\) are locally-reusable. Thus, at that point, the automaton can move from \(\mathcal{B}^{init}\) to \(\mathcal{B}^{loc}\). The resulting run is accepting since the counter is never incremented in \(\mathcal{B}^{loc}\). On the other hand, if \(\pi\) has infinitely many broadcast transitions, then from some point on, all its edges are green. Thus, at that point, the automaton can move from \(\mathcal{B}^{init}\) to \(\mathcal{B}^{grn}\). Observe that the counter is reset on every broadcast edge and is only incremented on dark green edges, which, by Definition 6.1, appear only boundedly many times between broadcasts.

Outline of left-to-right direction. Let \(\Omega\) be an accepting run of \(\mathcal{B}\) on input \(\alpha\). By Corollary 5.4 it is enough to construct a run \(\pi\) in \((P^{\rightsquigarrow})^{\infty}\) whose projection on process \(1\) has labeling \(\alpha\). Let \(\beta\) be the run in \(P^{\rightsquigarrow}\) induced by \(\Omega\) (recall that every transition of the automaton is induced by a transition of \(P^{\rightsquigarrow}\)). The construction of \(\pi\) is guided by having process \(1\) trace \(\beta\). We decompose \(\beta=\beta^{\prime}\cdot\beta^{\prime\prime}\), where \(\beta^{\prime}\) corresponds to the finite prefix of the run \(\Omega\) that stays in \(\mathcal{B}^{init}\). In order to trace \(\beta^{\prime}\), we use the techniques in Section 5 for finite traces. This leaves us with the task of tracing \(\beta^{\prime\prime}\), which contains either only locally-reusable edges or only green edges. First, observe that tracing a broadcast edge is easy, since we can simply append a global broadcast transition to the run \(\pi\) being constructed.
On the other hand, for each rendezvous edge \(e\) we will assign multiple groups of processes to help process \(1\) traverse \(e\) (the number of groups is discussed later). Each group associated with \(e\) has the property that it can trace a _cycle_ \(C_{e}\) in which the edge \(e\) is taken at some point, say by process \(p_{e}\). So, if \(e\) is the next edge that process \(1\) should take, we progress some group along the cycle \(C_{e}\) to the point where \(e\) should be taken, then process \(1\) swaps places with process \(p_{e}\) (this is virtual, and merely re-assigns process ids); and then the group takes the next transition along the cycle \(C_{e}\), and so process \(1\) takes \(e\). Note that in order for a group to be available to assist process \(1\) again in the future, it has to be 'reset', i.e., put back to the same position just before the edge \(e\) was taken. This is done differently, depending on the type of \(e\). If \(e\) is locally-reusable, then so are all subsequent edges \(f,g,\dots\) that process \(1\) should take; so, since \(C_{e},C_{f},C_{g},\dots\) contain no broadcasts, the group can simply loop around \(C_{e}\) immediately after process \(1\) leaves \(C_{e}\); when process \(1\) does leave \(C_{e}\), it swaps with \(p_{f}\) in \(C_{f}\), and so on. If \(e\) is light green, then it is on an inner cycle \(D_{e}\), without broadcasts, of \(C_{e}\) (Lemma 6.2), so the group can loop around \(D_{e}\) after process \(1\) swaps out -- we call this _recharging_ -- thus enabling it to help process \(1\) again even though it has not yet completed the outer cycle \(C_{e}\). Finally, if \(e\) is dark green, this group will only be ready again after the whole cycle \(C_{e}\) is looped once more, which requires waiting for \(K\) broadcasts. Thus, until that happens, if process \(1\) needs to trace \(e\) it will need the help of another group associated with \(e\). The key observation is that the number of these groups is bounded. The reason for this is that \(\Omega\) is an accepting run, and thus one can deduce that there is a bound on the number of times a dark green edge is used until the \(K\) broadcasts needed to complete the cycle \(C_{e}\) are taken.

Detailed proof of left-to-right direction. Let \(\Omega\) be an accepting run of \(\mathcal{B}\) on input \(\alpha\). Since every transition in \(\mathcal{B}\) corresponds to a transition in \(P^{\rightsquigarrow}\), let \(\beta\) be the corresponding run in \(P^{\rightsquigarrow}\). Since \(\Omega\) is an accepting run, it either gets trapped in \(\mathcal{B}^{grn}\) or it gets trapped in \(\mathcal{B}^{loc}\). Decompose \(\Omega=\Omega^{\prime}\cdot\Omega^{\prime\prime}\) accordingly, i.e., the prefix \(\Omega^{\prime}\) corresponds to the run until it first enters \(\mathcal{B}^{loc}\) or \(\mathcal{B}^{grn}\). Decompose \(\alpha=\alpha^{\prime}\cdot\alpha^{\prime\prime}\) and \(\beta=\beta^{\prime}\cdot\beta^{\prime\prime}\) accordingly. We are required to construct a run of \(P^{\infty}\) whose projection on process \(1\) is labeled \(\alpha\). By Corollary 5.4 it is enough to construct a run \(\pi\) of \((P^{\rightsquigarrow})^{\infty}\) whose projection onto process \(1\) is labeled \(\alpha\). We first construct a finite run \(\rho^{\prime}\) of \((P^{\rightsquigarrow})^{\infty}\) whose projection on process \(1\) is \(\beta^{\prime}\). Since \(\Omega^{\prime}\) stays inside \(\mathcal{B}^{init}\) it is actually also an accepting run of \(\mathcal{A}\) on \(\alpha^{\prime}\).
Thus, by the correctness of \(\mathcal{A}\) (Section 5), \(\alpha^{\prime}\in\textsc{exec-fin}(P^{\infty})\), and by Corollary 5.4 it is also in \(\textsc{exec-fin}((P^{\rightsquigarrow})^{\infty})\), i.e., there is a run \(\rho^{\prime}\) of \((P^{\rightsquigarrow})^{t}\), for some number \(t\) of processes, whose projection onto process \(1\) is \(\alpha^{\prime}\). Let \(P_{l}^{\rightsquigarrow}\) be the component in which the run \(\rho^{\prime}\) ends.

To complete the proof, we will construct an infinite path \(\rho^{\prime\prime}\) of \((P^{\rightsquigarrow})^{n}\), for some number \(n\) of processes, satisfying the following: 1) its projection on process \(1\) is \(\beta^{\prime\prime}\) (note that this implies that \(\rho^{\prime\prime}\) starts in a configuration where process \(1\) is in the same state as when it ended \(\rho^{\prime}\)), and 2) it starts in a configuration in \(P_{l}^{\rightsquigarrow}\). To see why this is enough to complete the proof, proceed as follows in order to compose \(\rho^{\prime}\) and \(\rho^{\prime\prime}\). Apply Lemma 5.5 (Loading) to get a finite run \(\rho\) in \((P^{\rightsquigarrow})^{\infty}\) that has the same number of broadcasts as \(\rho^{\prime}\), and ends in a configuration that, when restricted to the first \(n\) processes, is the starting configuration of \(\rho^{\prime\prime}\). Note that \(\rho\) may use \(m>n\) processes in order to achieve that. By Lemma 3.10, we can simultaneously simulate both \(\rho\) and \(\rho^{\prime}\) in a run \(\pi^{\prime}\) of \((P^{\rightsquigarrow})^{t+m}\). Assume w.l.o.g. that the first \(t\) processes are simulating \(\rho^{\prime}\), and that the next \(n\) processes are simulating the first \(n\) processes of \(\rho\). Thus, at the final configuration of \(\pi^{\prime}\), these \(n\) processes are exactly in the states needed to start simulating \(\rho^{\prime\prime}\), and processes \(1\) and \(t+1\) are in the same state. Thus, we can extend the simulation by letting process \(1\) exchange roles with process \(t+1\) and having processes \(1,t+2,\ldots,t+n\) simulate \(\rho^{\prime\prime}\) (with all other processes doing nothing except responding to broadcasts). The resulting run \(\pi\) has the property that its projection onto process \(1\) is labeled \(\alpha\), as promised.

Constructing \(\rho^{\prime\prime}\). First, assume w.l.o.g. that \(P\) (and thus also \(P^{\rightsquigarrow}\)) has no self loops7 -- this is not essential, but simplifies some technicalities in the construction. Second, we differentiate between two cases, depending on which component \(\Omega^{\prime\prime}\) is trapped in. We treat the case that \(\Omega^{\prime\prime}\) is trapped in \(\mathcal{B}^{grn}\) (the case that \(\Omega^{\prime\prime}\) is trapped in \(\mathcal{B}^{loc}\) is simpler, and does not use any new ideas). Note that we will ignore the technicality of keeping track of process numbers, as we find that it distracts from, rather than helps, one's understanding of the proof.

Footnote 7: A template can be transformed, in linear time, to one without self loops (and with the same set of executions) as follows: for every state \(s\) that has a self loop, add a new state \(\hat{s}\) with the same labeling, replace every self loop \((s,\sigma,s)\) with \((s,\sigma,\hat{s})\), and for every outgoing transition \((s,\sigma^{\prime},t)\), including self-loops, add the transition \((\hat{s},\sigma^{\prime},t)\).

Let \(E_{\textsf{light green}}\) (resp. \(E_{\textsf{dark green}}\)) be the set of light green (resp.
dark green) edges that appear on \(\beta^{\prime\prime}\), and note that these are the only edges that appear on it (by the fact that \(\mathcal{B}^{grn}\) contains only green edges). For every edge \(e\in E_{\textsf{dark green}}\) (resp. \(e\in E_{\textsf{light green}}\)), let \(C_{e}\) (resp. \(C_{e},D_{e}\)) be the witnessing cycle(s) from Lemma 6.2, with exactly \(K\) broadcasts for the fixed \(K\) of Lemma 6.3, and assume w.l.o.g. that (a) every such cycle \(C_{e}\) starts in a configuration in the component \(P_{l}^{\rightsquigarrow}\) in which \(\rho^{\prime}\) ends, and (b) that if \(e^{\prime}\) is the first edge on \(\beta^{\prime\prime}\), then \(e^{\prime}\) appears in the first transition taken in the cycle \(C_{e^{\prime}}\). To see how to achieve (a), note that if \(e^{\prime}\) and each subsequent edge is locally-reusable, then because each \(C_{e}\) contains no broadcasts, each \(C_{e}\) is contained in \(P_{l}^{\rightsquigarrow}\). On the other hand, if the edges \(e\) are green, then \(e\) must be on the loop of the lasso, and so \(P_{l}^{\rightsquigarrow}\) is on the loop of the lasso (since \(e^{\prime}\) is), and so since \(C_{e}\) contains at least one broadcast, it must go through \(P_{l}^{\rightsquigarrow}\).

Since \(\Omega\) is an accepting run, the counter is bounded on it. Thus, since in \(\mathcal{B}^{grn}\) we increment the counter when reading a dark green edge, and reset it when reading a broadcast edge, we can pick \(\mathfrak{m}\in\mathbb{N}\) such that every dark green edge appears at most \(\mathfrak{m}\) times on any section of \(\beta\) that contains \(K\) broadcasts. Let the _designated occurrence_ of \(e\) on \(C_{e}\) be defined as follows: if \(e\) is dark green then it is the first transition in \(C_{e}\) in which \(e\) occurs, and if \(e\) is light green then it is the first transition of \(C_{e}\), that is also on the nested cycle \(D_{e}\), in which \(e\) occurs. For every \(h\in[K]\), let \(C_{e}(h)\) denote the portion of \(C_{e}\) just after the \((h-1)\)'th broadcast up to (and including) the \(h\)'th broadcast. For \(h\) such that \(C_{e}(h)\) contains the designated occurrence of \(e\) we divide \(C_{e}(h)\) further into three pieces: \(C_{e}(h,1)\) is the part up to the designated occurrence, \(C_{e}(h,2)\) is the designated occurrence, and \(C_{e}(h,3)\) is the remainder.

Take exactly enough processes to assign them to one copy \(G_{e}^{1}\) of \(C_{e}\) for every \(e\in E_{\textsf{light green}}\), and \(\mathfrak{m}\) copies \(G_{e}^{1},\ldots,G_{e}^{\mathfrak{m}}\) of \(C_{e}\) for every \(e\in E_{\textsf{dark green}}\). Given a group of processes \(G_{e}^{i}\), for some \(i\) and \(e\), we define the following operations:

* **flush**: \(G_{e}^{i}\) simulates (using Lemma 3.10) the portion of \(C_{e}(h)\) which it has not yet simulated, up to but not including the broadcast;
* **load**: in case \(C_{e}(h)\) contains the designated occurrence of \(e\), then \(G_{e}^{i}\) simulates (using Lemma 3.10) the path \(C_{e}(h,1)\);
* **swap**: we say that we _swap_ process \(1\) into \(G_{e}^{i}\) to mean that process \(1\) and process \(j\) in \(G_{e}^{i}\) (where \(j\) is a process that will take the edge \(e\) the next time the designated transition is reached by \(G_{e}^{i}\)) exchange their group associations.
I.e., process \(1\) joins \(G_{e}^{i}\), and process \(j\) takes its place in the former group of process \(1\);

* **recharge**: if \(e\) is light green, we say that we _recharge_ \(G_{e}^{i}\) to mean that the group \(G_{e}^{i}\) simulates (using Lemma 3.10) tracing \(D_{e}\) until reaching (but not executing) the designated occurrence of \(e\);
* **mark**: if \(e\) is dark green, we may mark a group \(G_{e}^{i}\) as _used_ or _fresh_ by setting or resetting a virtual flag.

Obviously, not every operation above can be taken at any time (flush and mark always can). For example, a swap is allowed only when process \(1\) and process \(j\) are in the same state (e.g., just after \(G_{e}^{i}\) is loaded).

We are now ready to construct \(\rho^{\prime\prime}\). The initial configuration of \(\rho^{\prime\prime}\) is obtained by having all processes at the beginning of the cycles they were assigned to, and process \(1\) assigned to the group \(G_{e^{\prime}}^{1}\), where \(e^{\prime}\) is the first edge on \(\beta^{\prime\prime}\) (this can be done because \(e^{\prime}\) appears in the first transition taken in the cycle \(C_{e^{\prime}}\)). Note that this satisfies the requirement that \(\rho^{\prime\prime}\) starts in a configuration in \(P_{l}^{\rightsquigarrow}\). The rest is done in blocks, where in the \(i\)'th block we extend \(\rho^{\prime\prime}\) with a path \(\xi_{i}\) containing \(K\) broadcasts and whose projection on process \(1\) is the portion of \(\beta^{\prime\prime}\) after the \((i-1)K\)'th broadcast up to and including the \(iK\)'th broadcast -- which we call \(\beta_{i}^{\prime\prime}\). The construction will ensure that \(\xi^{\prime}\), defined as the concatenation of the \(\xi_{i}\)'s, weakly-simulates all the cycles of all the groups, and thus by Remark 1, will maintain the following invariant (\(\dagger\)): at the configuration \(f\) at the start of each block, the processes in every group \(G_{e}^{i}\) are in states corresponding to the initial configuration \(g\) of \(C_{e}\) (i.e., \(f|_{G_{e}^{i}}=g\)). The invariant obviously holds for the first block by our choice of the initial configuration of \(\rho^{\prime\prime}\).

For \(i\in\mathbb{N}\), assume that blocks \(<i\) have been constructed. We now describe how to build block \(i\). First, **mark** all the groups as _fresh_, then proceed in \(K\) rounds by repeating the following algorithm for every \(1\leq h\leq K\). Let \(e_{1},\ldots,e_{x}\) be the prefix of \(\beta_{i}^{\prime\prime}\) not yet traced by process 1, up to and including the next broadcast (obviously, the length \(x\) of this prefix depends on \(h\)). For every \(j\in[x]\), if \(e_{j}\) is light green pick the group \(G_{e_{j}}^{1}\); and if it is dark green pick the first _fresh_ group from among the yet unpicked groups in \(G_{e_{j}}^{1},\ldots,G_{e_{j}}^{\mathfrak{m}}\), and **mark** this group as _used_ (we can always pick a fresh group since -- by our choice of \(\mathfrak{m}\), and since \(\beta_{i}^{\prime\prime}\) has exactly \(K\) broadcasts -- there are at most \(\mathfrak{m}\) occurrences of \(e_{j}\) in \(\beta_{i}^{\prime\prime}\)). Denote the group thus picked by \(\mathbb{G}_{e_{j}}\), and let \(\mathbb{G}_{e_{0}}\) denote whatever group process 1 is in at the beginning of the round.

1. For \(1\leq j\leq x\) repeat:
   (a) **Load** the group \(\mathbb{G}_{e_{j}}\) and **Swap** process 1 into it;
   (b) if \(j<x\) then have group \(\mathbb{G}_{e_{j}}\) simulate the transition \(C_{e_{j}}(h,2)\), with process 1 taking the edge \(e_{j}\);
   (c)
if \(j>1\), and \(e_{j-1}\) is a light green edge, then **recharge** \(\mathbb{G}_{e_{j-1}}\).
2. **Flush** all groups except \(\mathbb{G}_{e_{x}}\) (the group that process 1 is currently in). Note that since \(e_{x}\) is a broadcast, the loading of \(\mathbb{G}_{e_{x}}\) already put it in a flushed condition.
3. Perform a broadcast (with process 1 taking the broadcast edge \(e_{x}\)).

Since the invariant (\(\dagger\)) holds at the start of each block, it is easy to see that the algorithm can actually be executed. Indeed, the invariant ensures that the algorithm can **load** when needed (and thus, **swap** and **recharge** when needed). It is not hard to see that, as promised, the resulting path \(\xi_{i}\) weakly-simulates all the cycles of all the groups. To see that the projection of \(\xi_{i}\) on process 1 is \(\beta_{i}^{\prime\prime}\), observe that the moves performed by process 1 in lines 1(b) and 3 of the algorithm trace exactly \(\beta_{i}^{\prime\prime}\). Furthermore, process 1 is moved only in these lines since, by our assumption that \(P^{\rightsquigarrow}\) contains no self loops, every edge in \(\beta^{\prime\prime}\) is different from the edge just before and just after it, and thus process 1 is never in a group while it is being loaded (except at the very beginning of the first block, in which case the **load** in line 1(a) of the algorithm does nothing, since this group is already in a loaded position in the initial configuration of \(\rho^{\prime\prime}\)). This completes the proof of Theorem 6.5.

In Section 7 we will show how to decide the type of the edges in \(P^{\rightsquigarrow}\) (Theorem 7.1) in polynomial time in the size of \(P^{\rightsquigarrow}\). Thus, we can build the B-automaton \(\mathcal{B}\) in exponential time in the size of \(P\). Combining this with Theorem 6.5 we get Theorem 3.7, which says that the PMCP for NBW/LTL specifications of RB-systems can be solved in exptime.

Proof of Theorem 3.7.: We reduce the PMCP problem to the emptiness problem for B-automata. Given a process template \(P\), and the corresponding B-automaton \(\mathcal{B}\) (whose Büchi set is trivial), suppose the specification is given as an LTL formula \(\varphi\) (the case of NBW is treated afterwards). Let \(L(\mathcal{B})\) denote the language of \(\mathcal{B}\), and let \(L(\neg\varphi)\) denote the set of models of \(\neg\varphi\). Then, every execution in \(\textsc{exec-inf}(P^{\infty})\) satisfies \(\varphi\) if and only if \(L(\mathcal{B})\cap L(\neg\varphi)=\emptyset\). Using Theorem 2.2, let \(\mathcal{A}_{\neg\varphi}\) be an NBW accepting all models of \(\neg\varphi\), and denote its states by \(Q\) and its Büchi set by \(G\). Build the synchronous product of \(\mathcal{A}_{\neg\varphi}\) and \(\mathcal{B}\) to get an NBW with one counter, call it \(M\), whose language is equal to \(L(\mathcal{B})\cap L(\mathcal{A}_{\neg\varphi})\). By Lemma 2.1, one can test whether \(L(M)\) is empty in ptime. Thus, this PMCP algorithm is exponential in the size of the template \(P\) (since computing \(\mathcal{B}\) can be done in time exponential in the size of \(P\)) and exponential in the size of \(\varphi\) (since computing \(\mathcal{A}_{\neg\varphi}\) can be done in time exponential in the size of \(\varphi\)). For the case that the specification is an NBW \(\mathcal{A}\), we proceed in a similar way by noting that we can build an NBW \(\mathcal{A}^{\prime}\) for the complement of \(L(\mathcal{A})\) in time exponential in the size of \(\mathcal{A}\) (see for example [10]).
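To fix the shape of this decision procedure, the following Python sketch wires the pieces together. It is a schematic outline under stated assumptions, not a definitive implementation: the interface of `unwinding` (with fields `states`, `initial`, `edges` and a method `label`), the edge-type predicates, and the helpers `reachability_unwinding`, `classify_edges`, `nbw_of_negation`, `counter_product`, and `counter_emptiness` are all hypothetical names standing in for the constructions of Sections 5, 7, Theorem 2.2, and Lemma 2.1.

```python
INC, RESET, SKIP = "inc", "reset", "skip"

def build_b_automaton(unwinding, is_green, is_dark_green, is_broadcast,
                      is_locally_reusable):
    """Schematic rendering of Definition 6.4: three copies of A, with the
    single counter incremented in the init copy and on dark green
    rendezvous edges, and reset on broadcast edges of the grn copy."""
    states = [(c, s) for c in ("init", "grn", "loc") for s in unwinding.states]
    initial = [("init", s) for s in unwinding.initial]
    delta = []  # entries: (source, input letter, target, counter action)
    for (s, sigma, t) in unwinding.edges:
        letter = unwinding.label(s)          # transitions read the source label
        for copy in ("init", "grn", "loc"):  # delta_init, incl. copy switches
            delta.append((("init", s), letter, (copy, t), INC))
        if is_green((s, sigma, t)):          # delta_grn
            if is_dark_green((s, sigma, t)) and not is_broadcast(sigma):
                action = INC                 # dark green rendezvous edge
            elif is_broadcast(sigma):
                action = RESET               # broadcast edge
            else:
                action = SKIP
            delta.append((("grn", s), letter, ("grn", t), action))
        if is_locally_reusable((s, sigma, t)):  # delta_loc
            delta.append((("loc", s), letter, ("loc", t), SKIP))
    buchi = set(states)                      # trivial Buchi set
    return states, initial, delta, buchi

def pmcp_ltl(template, phi, reachability_unwinding, classify_edges,
             nbw_of_negation, counter_product, counter_emptiness):
    """Outline of the proof of Theorem 3.7 with all helpers passed in as
    assumptions: decide whether every infinite execution satisfies phi."""
    unwinding = reachability_unwinding(template)    # Section 5
    predicates = classify_edges(unwinding)          # Section 7, ptime
    b_aut = build_b_automaton(unwinding, *predicates)  # Definition 6.4
    a_neg = nbw_of_negation(phi)                    # Theorem 2.2
    m = counter_product(b_aut, a_neg)               # synchronous product
    return counter_emptiness(m)                     # PMCP holds iff L(M) empty
```

The exponential cost sits entirely in `reachability_unwinding` and `nbw_of_negation`; everything downstream of them is polynomial in its input, matching the complexity accounting in the proof above.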
## 7. Deciding Edge Types

This section is dedicated to proving the following result:

**Theorem 7.1**.: _Given a reachability-unwinding \(P^{\rightsquigarrow}\) of a process template \(P\), the type (light green, dark green, and locally-reusable) of each edge \(e\) in \(P^{\rightsquigarrow}\) can be decided in ptime (in the size of \(P^{\rightsquigarrow}\))._

We will develop the proof of Theorem 7.1 in several steps:

**1.** The starting point for the proof of Theorem 7.1 is the characterization of edge types in Lemma 6.2 through the existence (or lack thereof) of suitable cycles in \((P^{\rightsquigarrow})^{\infty}\). In Subsection 7.1, we weaken this characterization using the notion of _pseudo-cycles_. Pseudo-cycles are paths that start and end in configurations that are identical up to the renaming of processes, i.e., there are exactly the same number of processes in every state, though the identities of the processes in each of the states may differ. Pseudo-cycles can always be pumped to a cycle by iterating the pseudo-cycle until the initial configuration is reached again. Hence, pseudo-cycles can be seen as more compact representations of cycles. Importantly, we are able to obtain a bound on the number of broadcasts in pseudo-cycles, which we need for deciding edge types.

**2.** In order to be able to conveniently reason about pseudo-cycles we will work with counter abstractions of R-systems: In Subsection 7.2, we define vector rendezvous systems (VRSs) and their continuous relaxation, called continuous vector rendezvous systems (CVRSs). The notions of VRSs and CVRSs are inspired by the notion of Vector Addition Systems (VAS) [11], where configurations only store the number of processes in every process state but not the identity of the processes. VRSs are counter abstractions of R-systems; in particular, pseudo-cycles of R-systems correspond to cycles in VRSs and vice versa. CVRSs are a continuous relaxation of VRSs in which steps can be taken with a rational multiplicity. CVRSs have the advantage that we can characterize reachability and the existence of cycles in them by solving linear programming problems over the rationals. We will be able to work with CVRSs instead of VRSs because we will show that a CVRS cycle can be scaled to a VRS cycle (as we are interested in the parameterized verification problem, we can always scale the number of processes). We then reduce the existence of witnessing pseudo-cycles for the type of an edge to corresponding reachability statements for CVRSs, as outlined below.

**3.** In Subsection 7.3, we develop a characterization of reachability for CVRSs. This characterization will give rise to an equation system and a fixed point algorithm, which is the basis for our edge type computation procedure. The results in this subsection already allow us to compute the locally-reusable edges and the light green edges (under the assumption that the green edges are already known; the computation of the green edges is, however, only done in the next subsection).

**4.** In Subsection 7.4, we show how to decide whether an edge of \(P^{\rightsquigarrow}\) is green. This problem represents the main difficulty in deciding the type of an edge. We give an algorithm that computes all green edges of \(P^{\rightsquigarrow}\) and prove its correctness. We further remark that, though not actually needed, one can derive a pseudo-cycle that contains all green edges from our procedure.
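The order of the four steps above dictates the order of the computation. As a sketch (in Python, with all helpers hypothetical): `compute_green` stands for the fixed point algorithm of Subsection 7.4, and `lp_feasible`, `locally_reusable_system`, and `light_green_system` stand for the linear programs that Subsection 7.3 derives (Lemma 7.13); none of these names come from the paper itself.

```python
def classify_edges(unwinding, compute_green, lp_feasible,
                   locally_reusable_system, light_green_system):
    """Hypothetical driver reflecting the plan above: green edges first
    (Subsection 7.4), then locally-reusable and light green edges via
    linear programs over the rationals (Subsection 7.3)."""
    green = compute_green(unwinding)  # main difficulty; fixed point algorithm
    locally_reusable, light_green = set(), set()
    for e in unwinding.edges:
        if lp_feasible(locally_reusable_system(unwinding, e)):
            locally_reusable.add(e)
        # light green: the same LP, restricted to green edges (Lemma 7.13)
        if e in green and lp_feasible(light_green_system(unwinding, e, green)):
            light_green.add(e)
    dark_green = green - light_green  # Definition 6.1: green but not light green
    return green, light_green, dark_green, locally_reusable
```

Since each step is a polynomial number of polynomial-time LP feasibility checks plus one ptime green-edge computation, the whole classification runs in ptime in the size of \(P^{\rightsquigarrow}\), as Theorem 7.1 claims.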
### Pseudo-cycles

**Definition 7.2**.: _Let \(P\) be an RB-template with state set \(S\), and let \(n\in\mathbb{N}\). Two configurations \(f,f^{\prime}\) of \(P^{n}\) are called twins if every state is covered by the same number of processes in \(f\) and \(f^{\prime}\), i.e., \(|f^{-1}(q)|=|f^{\prime-1}(q)|\) for every \(q\in S\)._

Let \(g\circ h\) denote the composition \(g(h(\cdot))\). Observe that \(f,f^{\prime}\) are twins if and only if there is a (not necessarily unique) permutation \(\theta:[n]\to[n]\) such that \(f^{\prime}=f\circ\theta\). Intuitively, \(\theta\) maps each process in \(f^{\prime}\) to a matching process in \(f\) (i.e., one in the same state). Thus, given a transition \(f\xrightarrow{\sigma}g\), say \(t\), in \(P^{n}\), we denote by \(t[\theta]\) the transition \(f\circ\theta\xrightarrow{\sigma^{\prime}}g\circ\theta\) resulting from replacing process \(i\) with process \(\theta(i)\) in \(t\); and having \(\sigma^{\prime}=\mathfrak{b}\) if \(\sigma=\mathfrak{b}\), and \(\sigma^{\prime}=((\theta^{-1}(i_{1}),a_{1}),\ldots,(\theta^{-1}(i_{k}),a_{k}))\) if \(\sigma=((i_{1},a_{1}),\ldots,(i_{k},a_{k}))\). Note that the rendezvous or broadcast action of the transition taken in \(t\) and \(t[\theta]\) are the same -- it is only the identities of the processes involved that are different. Extend \(\theta\) to paths point-wise, i.e., if \(\pi=t_{1}t_{2}\ldots\) is a path then define \(\pi[\theta]=t_{1}[\theta]t_{2}[\theta]\ldots\).

**Definition 7.3**.: _A finite path \(\pi\) of an RB-system \(P^{n}\) is a pseudo-cycle if \(\mathsf{src}(\pi)\) and \(\mathsf{dst}(\pi)\) are twins._

Obviously, every cycle is also a pseudo-cycle, but not vice-versa. For example, for \(P\) in Figure 6, the following path in \(P^{4}\) is a pseudo-cycle that is not a cycle: \((p,q,q,r)\xrightarrow{((3,c_{1}),(4,c_{2}))}(p,q,r,p)\xrightarrow{((2,c_{1}),(3,c_{2}))}(p,r,p,p)\xrightarrow{((3,a_{1}),(4,a_{2}))}(p,r,q,q)\).

Figure 6. R-template. Figure 7. RB-template.

**Remark 6**.: _Similar to cycles, for which one can choose any point on the cycle as its start (and end) point, one can choose any point along a pseudo-cycle as the start point. Indeed, if \(C\) is a pseudo-cycle that starts in a configuration \(f\) and ends in a twin \(f^{\prime}\), then given any configuration \(g\) along \(C\) we can obtain a new pseudo-cycle, that uses exactly the same edges (but with possibly different processes taking these edges), as follows: start in \(g\) and traverse the suffix of \(C\) until \(f^{\prime}\), reassign process id's according to the permutation transforming \(f\) to \(f^{\prime}\), and traverse the prefix of \(C\) from \(f\) to \(g\) using these reassigned processes to reach a twin \(g^{\prime}\) of \(g\)._

The following immediate lemma states that every pseudo-cycle \(\pi\) can be pumped to a cycle.

**Lemma 7.4**.: _Given a pseudo-cycle (resp. legal pseudo-cycle) \(\pi\) in \(P^{n}\), and a permutation \(\theta\) such that \(\mathsf{dst}(\pi)=\mathsf{src}(\pi)\circ\theta\), there is a \(z\geq 1\) such that_
\[\pi[\theta^{0}]\pi[\theta^{1}]\pi[\theta^{2}]\ldots\pi[\theta^{z-1}]\]
_is a cycle (resp.
legal cycle) in \(P^{n}\), where \(\theta^{j}\) denotes the composition of \(\theta\) with itself \(j\) times._

Proof.: We can choose \(z\) such that the permutation \(\theta^{z}\) is the identity (recall that the set of permutations of a finite set forms a finite group, and that the order of every element in a finite group is finite, i.e., there is some \(z\in\mathbb{N}\) such that \(\theta^{z}\) is the identity permutation). We observe that \(\mathsf{dst}(\pi[\theta^{0}]\pi[\theta^{1}]\pi[\theta^{2}]\ldots\pi[\theta^{z-1}])=\mathsf{src}(\pi)\circ\theta^{z}=\mathsf{src}(\pi)\), and thus it is a cycle.

Recall that \(n,r\) denote the prefix-length and period of \(P^{\rightsquigarrow}\), respectively. The following lemma states that we can assume that if an edge of \(P^{\rightsquigarrow}\) appears on a pseudo-cycle with broadcasts then it also appears on one with exactly \(r\) broadcasts. Knowing this bound will be crucial for the decidability of edge types (in contrast, Lemma 6.2 only says that a bound exists), as well as for obtaining good complexity for deciding PMCP.

**Lemma 7.5** (Spiral).: _An edge \(e\) appears on a legal pseudo-cycle \(D\) in \((P^{\rightsquigarrow})^{\infty}\) which contains broadcasts, iff it appears on a legal pseudo-cycle \(C\) of \((P^{\rightsquigarrow})^{\infty}\) containing exactly \(r\) broadcasts and starting in a configuration in \(P^{\rightsquigarrow}_{n}\)._

Proof.: Assume \(D\) is a legal pseudo-cycle in \((P^{\rightsquigarrow})^{m}\), for some \(m\in\mathbb{N}\). By Remark 6 we can assume w.l.o.g. that \(D\) starts and ends in a configuration in \(P^{\rightsquigarrow}_{n}\). Observe that (by the lasso structure of \(P^{\rightsquigarrow}\)) \(D\) must have \(lr\) broadcast transitions for some \(l\in\mathbb{N}\), and that after every \(r\) broadcasts all processes are in \(P^{\rightsquigarrow}_{n}\). Let \(f_{0}\) be the initial configuration on \(D\). For every \(i\in[l]\), let \(f_{i}\) be the configuration in \(D\) just after \(ir\) broadcasts, and let \(\rho_{i}\) be the portion of \(D\) from \(f_{i-1}\) to \(f_{i}\). Observe that \(\rho_{i}\) contains exactly \(r\) broadcasts. By Lemma 3.10, we can compose the paths \(\rho_{1},\ldots,\rho_{l}\) into a single path \(C\) in the system \((P^{\rightsquigarrow})^{ml}\) with \(ml\) processes. It is not hard to see that \(C\) is a legal pseudo-cycle. Indeed, after \(r\) broadcasts the processes that were simulating \(\rho_{i}\) are in states that match the configuration \(f_{(i+1)\bmod l}\), i.e., the configuration that the processes that simulate \(\rho_{(i+1)\bmod l}\) started in. Obviously, \(e\) appears on \(C\), which completes the proof.

We now use the notion of a pseudo-cycle to give a characterization of the edge types that is easier to detect than the one given in Section 6:

**Lemma 7.6**.: _An edge \(e\) of \(P^{\rightsquigarrow}\) is:_

* _(i) locally-reusable iff it appears on a legal pseudo-cycle \(C_{e}\) of \((P^{\rightsquigarrow})^{\infty}\) without broadcasts;_
* _(ii) green iff it appears on a legal pseudo-cycle \(C_{e}\) of \((P^{\rightsquigarrow})^{\infty}\) with \(r\) broadcasts;_
* _(iii) light green iff it appears on a legal pseudo-cycle \(C_{e}\) of \((P^{\rightsquigarrow})^{\infty}\) without broadcasts that only uses green edges;_
* _(iv) dark green iff it is green and does not appear on a legal pseudo-cycle \(C_{e}\) of \((P^{\rightsquigarrow})^{\infty}\) without broadcasts that only uses green edges._

Proof.: Item (iv) follows from item (iii), since green edges are partitioned into light green and dark green edges. Items (i) and (ii)
follow immediately from Lemmas 6.2 and 7.4 and the fact that every cycle is also a pseudo-cycle (for the 'only if' direction of item (ii), use Lemma 7.5 to obtain a pseudo-cycle with exactly \(r\) broadcasts). The same fact, combined with Lemma 6.2, gives the 'only if' direction of item (iii). For the 'if' direction of item (iii), we claim that it is enough to show that: \((\dagger)\) there is a configuration \(f\) of \((P^{\rightsquigarrow})^{\infty}\) and two pseudo-cycles \(C,D\) starting in \(f\), such that \(D\) contains broadcasts, and \(C\) contains the edge \(e\) and no broadcasts. Indeed, by Lemma 7.4, one can pump \(C,D\) to obtain cycles \(C^{\prime},D^{\prime}\) starting and ending in \(f\), and the 'figure-eight' cycle obtained by concatenating \(C^{\prime}\) to \(D^{\prime}\) is (according to Lemma 6.2) a cycle witnessing that the edge \(e\) is light green.

We now show that if \(e\) appears on a pseudo-cycle \(C_{e}\) satisfying item (iii) then \((\dagger)\) holds. Let \(g\) be the initial configuration of \(C_{e}\) and let \(H:=\{s\mid g(s)>0\}\) be the set of states of \(P^{\rightsquigarrow}\) for which there is some process in that state in \(g\). Assume w.l.o.g. that there is no process that does not move on \(C_{e}\) (such processes can simply be removed), and for every \(s\in H\), let \(e^{s}\) be some edge in \(P^{\rightsquigarrow}\) appearing in \(C_{e}\) whose source is \(s\). Observe that by item (iii) the edge \(e^{s}\) is green, and apply Lemma 6.2 to obtain a witnessing cycle \(C_{e^{s}}\) starting with some process in \(s\). By Lemma 6.3, we can assume that the cycles \(C_{e^{s}}\) thus obtained for all states in \(H\) have the same number of broadcasts. Hence, by Lemma 3.10, we can run together \(g(s)\) copies of \(C_{e^{s}}\), for all \(s\) in \(H\), in one cycle \(D\). Let \(f\) be the starting configuration of \(D\). Note that \(f(s)\geq g(s)\) for every state \(s\) in \(P^{\rightsquigarrow}\), and obtain a pseudo-cycle \(C\) starting in \(f\) which simulates \(C_{e}\) (since \(C_{e}\) has no broadcasts, processes that \(f\) has in excess of \(g\) can simply not move). The proof is complete by noting that \(f,C\) and \(D\) satisfy \((\dagger)\).

### Vector Rendezvous Systems

We now formally introduce VRSs and CVRSs. We recall that \(k\) denotes the number of processes participating in a rendezvous, \(\Sigma_{\mathsf{actn}}\) denotes a finite set of _rendezvous actions_, and \(\Sigma_{\mathsf{rdz}}=\cup_{a\in\Sigma_{\mathsf{actn}}}\{a_{1},\ldots,a_{k}\}\) denotes the _rendezvous alphabet_. Given a finite set \(S\), we can think of \(\mathbb{Q}^{S}\) as the set of rational vectors of dimension \(|S|\), and we use the elements of \(S\) as indices into these vectors. We also use the standard operations of vector addition and scalar multiplication. Finally, we compare vectors point-wise, i.e., given \(c,c^{\prime}\in\mathbb{Q}^{S}\) we say that \(c\leq c^{\prime}\) iff \(c(s)\leq c^{\prime}(s)\) for all \(s\in S\).

**Continuous Vector Rendezvous System (CVRS)**.: A _continuous vector rendezvous system_ (CVRS) is a tuple \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\), where \(S\) is a finite set of states and \(R\subseteq S\times\Sigma_{\mathsf{rdz}}\times S\) is a finite set of _transitions_. The _configurations_ of \(\mathcal{V}\) are the vectors in \(\mathbb{Q}^{S}_{\geq 0}\).
For a transition \(t=(p,\sigma,q)\), we denote by \(\mathsf{rdz}(t)=\sigma\) its rendezvous symbol, by \(\mathsf{src}(t)=p\) the _source_ state of \(t\), and by \(\mathsf{dst}(t)=q\) the _destination_ state of \(t\). Also, we denote by \(\mathsf{out}(t)\in\{0,1\}^{S}\) the vector that has a \(1\) entry at index \(p\) and zero entries otherwise, and by \(\mathsf{in}(t)\in\{0,1\}^{S}\) the vector that has a \(1\) entry at index \(q\) and zero entries otherwise. We now define what it means for a CVRS to take a step.

**Definition 7.7** (Step of a CVRS).: _Given a \(k\)-tuple \(\mathsf{t}=(t_{1},\ldots,t_{k})\) of transitions in \(R\), the CVRS \(\mathcal{V}\) can step with multiplicity \(\alpha\in\mathbb{Q}_{>0}\) from a configuration \(c\) to a configuration \(c^{\prime}\) using the transitions \(\mathsf{t}=(t_{1},\ldots,t_{k})\), denoted \(c\xrightarrow{t_{1},\ldots,t_{k}:\alpha}c^{\prime}\), or \(c\xrightarrow{\mathsf{t}:\alpha}c^{\prime}\), if:_

* _(i) there is an action \(a\in\Sigma_{\mathsf{actn}}\) such that \(\mathsf{rdz}(t_{i})=a_{i}\) for all \(i\in[k]\),_
* _(ii) \(c\geq\alpha\sum_{i=1}^{k}\mathsf{out}(t_{i})\), and_
* _(iii) \(c^{\prime}=c+\alpha\sum_{i=1}^{k}\left(\mathsf{in}(t_{i})-\mathsf{out}(t_{i})\right)\)._

_A step is said to synchronize on \(a\). We say that a transition \(t\in R\) participates in a step \(c\xrightarrow{t_{1},\ldots,t_{k}:\alpha}c^{\prime}\) if \(t=t_{i}\) for some \(i\in[k]\)._

A _trace_ of \(\mathcal{V}\) is a sequence of steps \(c_{1}\xrightarrow{\mathsf{t}_{1}:\alpha_{1}}c_{2}\xrightarrow{\mathsf{t}_{2}:\alpha_{2}}\cdots c_{n}\). We say that a configuration \(c^{\prime}\) is _reachable_ from a configuration \(c\), denoted \(c\to^{\star}c^{\prime}\), if there is a trace \(c_{1}\xrightarrow{\mathsf{t}_{1}:\alpha_{1}}c_{2}\xrightarrow{\mathsf{t}_{2}:\alpha_{2}}\cdots c_{n}\) with \(c=c_{1}\) and \(c^{\prime}=c_{n}\).

**Vector Rendezvous System (VRS)**.: A _vector rendezvous system (VRS)_ is a restriction of a CVRS, where the set of configurations is \(\mathbb{N}_{0}^{S}\) and all steps are restricted to have multiplicity \(\alpha=1\). Given any VRS \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\), the _relaxation_ of \(\mathcal{V}\) is the same tuple interpreted as a CVRS. Since a natural number is also rational, it is obvious that any VRS \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) is also a CVRS. Note that every trace of a VRS is also a trace of its relaxation, and that the relaxation has more traces than the VRS, since every rational multiple of a step in the VRS is a step in its relaxation.

**Remark 7**.: _Every R-template \(P=\langle\mathrm{AP},\Sigma_{\mathsf{rdz}},S,I,R,\lambda\rangle\) defines a VRS \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) with the same rendezvous alphabet, states, and transitions. These two systems are closely related. Intuitively, \(\mathcal{V}\) is an abstraction of \(P^{\infty}\) in the sense that it does not keep track of the state of every individual process, but only keeps track of the number of processes in every state. More formally, every configuration \(f\in P^{\infty}\) induces a configuration \(c\) of \(\mathcal{V}\), called its counter representation, defined as \(c(s):=|f^{-1}(s)|\) for \(s\in S\), i.e., \(c(s)\) is the number of processes of \(f\) that are in state \(s\)._
_Furthermore, \(c\xrightarrow{t_{1},\ldots,t_{k}}c^{\prime}\) is a step of \(\mathcal{V}\) if and only if there is a global transition \((f,\sigma,f^{\prime})\) in \(P^{\infty}\) such that \(c\) and \(c^{\prime}\) are the counter representations of \(f\) and \(f^{\prime}\) respectively, and \(t_{1},\ldots,t_{k}\) are exactly the rendezvous edges taken by the \(k\) active processes in the transition \((f,\sigma,f^{\prime})\)._

We now define operations for manipulating traces. Given a trace \(\xi:=c_{1}\xrightarrow{\mathsf{t}_{1}:\alpha_{1}}c_{2}\xrightarrow{\mathsf{t}_{2}:\alpha_{2}}\cdots c_{n}\) of a CVRS \(\mathcal{V}\), we define the following two operations:

1. Multiplication by a scalar \(\gamma>0\): let \(\gamma\otimes\xi\) be the trace \(\gamma c_{1}\xrightarrow{\mathsf{t}_{1}:\gamma\alpha_{1}}\gamma c_{2}\xrightarrow{\mathsf{t}_{2}:\gamma\alpha_{2}}\cdots\gamma c_{n}\).
2. Addition of a constant configuration \(c\): let \(c\oplus\xi\) be the trace \(c+c_{1}\xrightarrow{\mathsf{t}_{1}:\alpha_{1}}c+c_{2}\xrightarrow{\mathsf{t}_{2}:\alpha_{2}}\cdots c+c_{n}\).

It is not hard to see, by consulting the definition of a step, that \(\gamma\otimes\xi\) and \(c\oplus\xi\) are indeed traces of \(\mathcal{V}\). Note, however, that multiplying by a scalar \(\gamma<0\) would not yield a trace, and that adding a vector \(c\) that is not a configuration (i.e., which has negative coordinates) may sometimes also not yield a trace -- either because intermediate points may not be configurations (due to having some negative coordinates), or since condition (ii) in the definition of a step (Definition 7.7) is violated.

Finally, traces in CVRSs have a _convexity property_ that states that by taking a fraction \(0<\gamma<1\) of each step of a trace \(c\to^{\star}c^{\prime}\) one obtains a trace from \(c\) to the convex combination \((1-\gamma)c+\gamma c^{\prime}\):

**Proposition 7.8** (convexity).: _Let \(\xi:=c_{1}\xrightarrow{\mathsf{t}_{1}:\alpha_{1}}c_{2}\xrightarrow{\mathsf{t}_{2}:\alpha_{2}}\cdots c_{n}\) be a trace of a CVRS \(\mathcal{V}\), and let \(0<\gamma<1\) be rational. Define configurations \(c^{\prime}_{i}:=\gamma c_{i}+(1-\gamma)c_{1}\) for \(1<i\leq n\). Then \(\xi^{\prime}:=c_{1}\xrightarrow{\mathsf{t}_{1}:\gamma\alpha_{1}}c^{\prime}_{2}\xrightarrow{\mathsf{t}_{2}:\gamma\alpha_{2}}\cdots c^{\prime}_{n}\) is a trace of \(\mathcal{V}\)._

Proof.: Simply observe that \(\xi^{\prime}\) is the trace \(((1-\gamma)c_{1})\oplus(\gamma\otimes\xi)\).

In the following two lemmas we combine Remark 7 with Lemma 7.6 in order to rephrase the characterization of edge types in terms of the existence of certain CVRS traces. We begin by characterizing the locally-reusable edges and the light green edges (relative to green edges):

**Lemma 7.9**.: _An edge \(e\) of \(P^{\rightsquigarrow}\) is locally-reusable iff \(e\) participates in a step of a cyclic trace of some component \(P_{i}^{\rightsquigarrow}\) considered as a CVRS, i.e., iff \(e\) participates in a step of a trace \(\xi:=c\to^{\star}c\) of the CVRS \(P_{i}^{\rightsquigarrow}\). Moreover, \(e\) is light green iff \(\xi\) uses only green edges._

Proof.: We start with the 'only if' direction. By Lemma 7.6 there is a legal pseudo-cycle \(C\) which contains \(e\) and which does not contain broadcasts. Because \(C\) does not contain broadcasts, all configurations of \(C\) are in some component \(P_{i}^{\rightsquigarrow}\) of \(P^{\rightsquigarrow}\).
Let \(f\) be the initial configuration of \(C\), and let \(c\) be its counter representation. By Remark 7, \(C\) induces a trace \(\xi:=c\to^{\star}c\) in the VRS corresponding to \(P_{i}^{\rightsquigarrow}\), and \(e\) participates in a step of \(\xi\). As every VRS is a CVRS, the claim follows. For the other direction, let \(e\) participate in a step of some trace \(\xi:=c\to^{\star}c\) of the CVRS corresponding to \(P_{i}^{\rightsquigarrow}\). Let \(x\) be the least common multiple of all the denominators that appear in the multiplicities of any step, or in any coordinate of any configuration, of \(\xi\). Consider the trace \(x\otimes\xi:=xc\to^{\star}xc\). Observe that all configurations on this trace are in \(\mathbb{N}_{0}^{S_{i}}\), and that all steps on it are taken with an integer multiplicity. By replacing every step that uses a multiplicity \(y\in\mathbb{N}\) with \(y\) consecutive steps, each with multiplicity \(1\), we obtain a trace \(\varrho:=xc\to^{\star}xc\) in \(P_{i}^{\rightsquigarrow}\) considered as a VRS. By Remark 7, there is a corresponding pseudo-cycle \(C\) of \(P_{i}^{\rightsquigarrow}\), and \(e\) appears on \(C\). The claim then follows from Lemma 7.6.

We now turn to characterizing the green edges. For the statement of the lemma, recall (see Section 5) that the template \(P^{\rightsquigarrow}\) is built from component templates arranged in a lasso structure. Let \(\mathcal{I}:=\{n,n+1,\ldots,n+r-1=m\}\) be the set of indices of the components on the noose of \(P^{\rightsquigarrow}\), and for every \(i\in\mathcal{I}\) define \(next(i)\) (resp. \(prev(i)\)) to be the component number immediately following \(i\) (resp. preceding \(i\)) along the noose. Note that, in particular, \(next(n+r-1)=n\) and \(prev(n)=n+r-1\).

**Lemma 7.10**.: _An edge \(e\) of \(P^{\rightsquigarrow}\) is green iff, for every \(i\in\mathcal{I}\): (i) there is a subset \(\mathcal{T}_{i}\) of the transitions of \(P_{i}^{\rightsquigarrow}\), and a subset \(B_{i}\) of the broadcast transitions from \(P_{i}^{\rightsquigarrow}\) to \(P_{next(i)}^{\rightsquigarrow}\); (ii) there are coefficients \(\alpha_{t}\in\mathbb{Q}_{>0}\) for every \(t\in B_{i}\); and (iii) there is a trace \(\xi_{i}:=c_{i}\to^{\star}c_{i}^{\prime}\) of the CVRS of \(P_{i}^{\rightsquigarrow}\) using exactly the transitions \(\mathcal{T}_{i}\); such that \(e\in\cup_{i\in\mathcal{I}}(B_{i}\cup\mathcal{T}_{i})\), and for all \(q\in S_{i}^{\rightsquigarrow}\) the following holds:_

1. \(c_{i}(q)=\sum_{t\in B_{prev(i)},\,\mathsf{dst}(t)=q}\alpha_{t}\)_;_
2. \(c_{i}^{\prime}(q)=\sum_{t\in B_{i},\,\mathsf{src}(t)=q}\alpha_{t}\)_._

Proof.: By Lemma 7.6, it is enough to show that \(e\) appears on a pseudo-cycle \(C\), with \(r\) broadcasts, of \((P^{\rightsquigarrow})^{\infty}\) iff the conditions of the lemma hold. First, assume that such a pseudo-cycle \(C\) exists. For every \(i\in\mathcal{I}\), let \(f_{i}\) (resp. \(f_{i}^{\prime}\)) be the configurations of \(C\) in \(P_{i}^{\rightsquigarrow}\) just after (resp. just before) a broadcast, let \(C_{i}\) be the sub-path of \(C\) from \(f_{i}\) to \(f_{i}^{\prime}\), and let \(\mathcal{T}_{i}\) be the process transitions appearing on \(C_{i}\). By Remark 7, \(C_{i}\) induces a trace \(\xi_{i}:=c_{i}\to^{\star}c_{i}^{\prime}\) of the CVRS of \(P_{i}^{\rightsquigarrow}\), using exactly the transitions in \(\mathcal{T}_{i}\), where \(c_{i},c_{i}^{\prime}\) are the counter representations of \(f_{i},f_{i}^{\prime}\).
Consider now the global broadcast transition of \(C\) from \(f_{i}^{\prime}\) to \(f_{next(i)}\), let \(B_{i}\) be the set of local broadcast transitions that participate in it, and for every \(t\in B_{i}\) let \(\alpha_{t}\) be the number of processes in \(f_{i}^{\prime}\) that take \(t\). It is easy to see that the conditions of the lemma are satisfied.

For the other direction, assume that sets \(\mathcal{T}_{i}\) and \(B_{i}\), coefficients \(\alpha_{t}\), and traces \(\xi_{i}\) satisfying the conditions of the lemma exist. Using the same argument as in the proof of Lemma 7.9 -- by taking \(x\) to be the least common multiple of all the denominators that appear in the multiplicities of any step or any coordinate of any configuration of these traces, as well as in any of the coefficients \(\alpha_{t}\) -- we can obtain from each trace \(\xi_{i}\) a trace \(\varrho_{i}:=xc_{i}\to^{\star}xc_{i}^{\prime}\) in the VRS corresponding to \(P_{i}^{\rightsquigarrow}\). We can now build the required pseudo-cycle \(C\) in \(r\) rounds. We start by (arbitrarily) picking some \(i_{1}\in\mathcal{I}\), and a configuration \(f_{i_{1}}\) whose counter representation is \(xc_{i_{1}}\). At each round \(j\), we extend \(C\) by concatenating a path (obtained from \(\varrho_{i_{j}}\) by Remark 7) from \(f_{i_{j}}\) to a configuration \(f_{i_{j}}^{\prime}\) whose counter representation is \(xc_{i_{j}}^{\prime}\); we then append a global broadcast transition in which each edge \(t\in B_{i_{j}}\) is taken by exactly \(x\alpha_{t}\) processes, resulting in a configuration \(f_{i_{j+1}}\) whose counter representation is \(xc_{next(i_{j})}\) (this is possible by equations (1) and (2) in the conditions of the lemma). It follows that, at the end of the last round, \(C\) is a path in \((P^{\rightsquigarrow})^{\infty}\) from \(f_{i_{1}}\) to \(f_{i_{r+1}}\), whose counter representations are \(xc_{i_{1}}\) and \(xc_{next^{r}(i_{1})}=xc_{i_{1}}\), respectively. Thus, \(C\) is a pseudo-cycle.

In Lemmas 7.9 and 7.10 we have related edge types to the existence of certain CVRS traces. Developing a criterion for the existence of CVRS traces is the subject of the next subsection. Having developed this characterization, we can replace the existence of CVRS traces with this criterion in the two lemmas mentioned. This will then allow us to present our algorithms for deciding edge types using linear programming.

### Characterization of Reachability in CVRSs

Our aim in this section is to develop a characterization of the existence of a trace between two configurations \(c,c^{\prime}\) in a CVRS which can be used as a basis for an efficient algorithm. Looking at Definition 7.7 of a step, one can see that the task is very simple if \(c^{\prime}\) is reachable from \(c\) in one step. However, in the general case, a trace from \(c\) to \(c^{\prime}\) may have a very large number of steps, so we have to find a way to avoid reasoning about each step individually. The following proposition is our starting point for summarizing a large number of steps by equations, and it follows immediately from the definitions of a step and a trace:

**Proposition 7.11**.: _Let \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) be a CVRS, and let \(\xi:=c\to^{\star}c^{\prime}\) be a trace in it. For every \(t\in R\), let \(\alpha_{t}\in\mathbb{Q}_{\geq 0}\) be the sum of the multiplicities of all the steps in \(\xi\) in which \(t\) participates. Then:_

1. \(c^{\prime}=c+\sum_{t\in R}\alpha_{t}\left(\mathsf{in}(t)-\mathsf{out}(t)\right)\)_;_
2.
_for every rendezvous action \(a\in\Sigma_{\mathsf{actn}}\), and every \(i,j\in[k]\), we have that \(\sum_{t\in R,\,\mathsf{rdz}(t)=a_{i}}\alpha_{t}=\sum_{t\in R,\,\mathsf{rdz}(t)=a_{j}}\alpha_{t}\)._

We will see that the converse of Proposition 7.11 also holds, provided one adds some suitable conditions, as follows. The equations of Proposition 7.11 ensure that the 'accounting' in a trace is done correctly, i.e., that the transitions can be allocated to steps -- item (2), corresponding to requirement (i) in Definition 7.7 -- and that, added to the vector \(c\), they yield the vector \(c^{\prime}\) -- item (1), corresponding to requirement (iii) in that definition. In order to extend Proposition 7.11 to a full characterization of reachability, we need to address the last element in the definition of a step (requirement (ii)), which states, for every state \(s\), that \(c(s)\) is at least the multiplicity of the step times the number of transitions that participate in the step whose source is \(s\). Observe that if \(c(s)>0\) this condition can always be satisfied if \(\alpha\) is small enough; however, if \(c(s)=0\), and there is some \(i\in[k]\) such that \(\mathsf{src}(t_{i})=s\), then the condition is necessarily violated. The characterization we now develop will thus make a distinction between those states for which \(c(s)>0\) and those for which \(c(s)=0\).

We will use the following terminology: for a vector \(c\in\mathbb{Q}_{\geq 0}^{S}\), let the _support_ of \(c\) be the set \((c)^{\neq 0}:=\{s\in S\mid c(s)>0\}\). We will also make use of the following notation: for a set of transitions \(R\), we lift \(\mathsf{src}\) and \(\mathsf{dst}\) as follows: \(\mathsf{src}(R)=\{\mathsf{src}(t)\mid t\in R\}\) and \(\mathsf{dst}(R)=\{\mathsf{dst}(t)\mid t\in R\}\). The following lemma states that we can summarize a large number of steps between two configurations in case the support of the two configurations is the same and no step increases the support:

**Lemma 7.12**.: _Let \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) be a CVRS. Let \(c,c^{\prime}\in\mathbb{Q}_{\geq 0}^{S}\) be configurations and let \(\alpha_{t}\in\mathbb{Q}_{\geq 0}\) be coefficients for every \(t\in R\) such that:_

1. \(c^{\prime}=c+\sum_{t\in R}\alpha_{t}\left(\mathsf{in}(t)-\mathsf{out}(t)\right)\)_;_
2. _for every rendezvous action \(a\in\Sigma_{\mathsf{actn}}\), and every \(i,j\in[k]\), we have that \(\sum_{t\in R,\,\mathsf{rdz}(t)=a_{i}}\alpha_{t}=\sum_{t\in R,\,\mathsf{rdz}(t)=a_{j}}\alpha_{t}\);_
3. \((c)^{\neq 0}=(c^{\prime})^{\neq 0}\)_;_
4. _for \(R^{\prime}:=\{t\in R\mid\alpha_{t}>0\}\) we have \(\mathsf{src}(R^{\prime})\subseteq(c^{\prime})^{\neq 0}\) and \(\mathsf{dst}(R^{\prime})\subseteq(c)^{\neq 0}\)._

_Then, we have \(c\to^{\star}c^{\prime}\) and all transitions in \(R^{\prime}\) participate in some step of this trace._

Proof.: Equations (1) and (2) give us a good starting point for showing the existence of a trace from \(c\) to \(c^{\prime}\). Indeed, (2) allows us to group the transitions in \(R^{\prime}\) into steps such that each transition \(t\in R^{\prime}\) is taken with a combined multiplicity of \(\alpha_{t}\); and (1) guarantees that concatenating all these steps will take us from \(c\) to \(c^{\prime}\). It remains to find a way to make sure each of these steps satisfies requirement (ii) in the definition of a step (Definition 7.7). We begin, however, by ignoring this problem. That is, we will consider _quasi-steps_ and _quasi-traces_.
Formally, quasi-steps are like steps except that they are between arbitrary vectors in \(\mathbb{Q}^{S}\) (as opposed to steps, which were defined only for vectors in \(\mathbb{Q}_{\geq 0}^{S}\)), and that they do not have to satisfy requirement (ii) of Definition 7.7. We will denote quasi-steps using \(\Longrightarrow\). A quasi-trace is simply a sequence of quasi-steps where the destination of each quasi-step in the sequence is the source of the next one. We complete the proof by first constructing a quasi-trace \(\varrho\) from \(c\) to \(c^{\prime}\), and then showing how to turn \(\varrho\) into a trace.

We construct \(\varrho\) using the following algorithm. If \(c=c^{\prime}\) then we are done. Otherwise, at round \(0\), set \(\varrho\) to the empty quasi-trace, and let \(v_{1}:=c\) be the source of the first quasi-step to be constructed. At every round \(\geq 1\), do the following:

1. pick a transition \(t\) whose \(\alpha_{t}\) is minimal among all edges in \(R^{\prime}\); let \(a_{i}\) be \(\mathsf{rdz}(t)\);
2. let \(t_{i}:=t\), and for every \(j\in[k]\setminus\{i\}\) pick some \(t_{j}\in R^{\prime}\) with \(\mathsf{rdz}(t_{j})=a_{j}\);
3. extend \(\varrho\) with the quasi-step \(v_{i}\xRightarrow{t_{1},\ldots,t_{k}:\alpha_{t}}v_{i+1}\);
4. for every \(j\in[k]\), subtract \(\alpha_{t}\) from \(\alpha_{t_{j}}\), and if the resulting \(\alpha_{t_{j}}\) is zero then remove \(t_{j}\) from \(R^{\prime}\). If \(R^{\prime}=\emptyset\) stop and output \(\varrho\).

Since at least one transition (namely the transition \(t\) picked in step 1) is removed at the end of each round, the algorithm stops after at most \(|R^{\prime}|\) rounds. It is also easy to see that item (2) of the lemma is an invariant of the algorithm (i.e., it holds using the updated values at the end of each round). This, together with our choice of \(t\), ensures that we are always able to find the required transitions in the second step of every round. Finally, observe that the resulting final quasi-trace \(\varrho\) uses every transition \(t\) in (the initial) \(R^{\prime}\) with a combined multiplicity which is exactly \(\alpha_{t}\) and thus, by item (1) of the lemma, we have that \(\varrho\) ends in \(c^{\prime}\) as promised.

It remains to show how to derive from \(\varrho\) a trace from \(c\) to \(c^{\prime}\). Let \(m\) be the number of quasi-steps in \(\varrho\). For \(j\in[m]\), denote the transitions used in the \(j\)-th quasi-step of \(\varrho\) by \(t_{1}^{j},\ldots,t_{k}^{j}\), denote the multiplicity used by \(\alpha^{j}\), and the destination reached by \(v_{j}\) (also set \(v_{0}:=c\)). Consider \(c=v_{0},v_{1},\ldots,v_{m}=c^{\prime}\) as points in the \(|S|\)-dimensional Euclidean space. Let \(H:=(c)^{\neq 0}=(c^{\prime})^{\neq 0}\) (by item (3) of the lemma). We note that every point on the line segment \(L\) between \(c\) and \(c^{\prime}\) also has support \(H\) (being a convex combination of \(c\) and \(c^{\prime}\)). It follows that \(L\) does not touch any of the hyperplanes corresponding to the states in \(H\) (i.e., the hyperplanes defined by the equations \(v(s)=0\), for \(s\in H\)). Let \(x>0\) be the minimum of the Euclidean distances between \(L\) and any of these hyperplanes8, let \(y\) be the maximum over \(j\in[m]\) of the multiplicities \(\alpha^{j}\), and let \(h\in\mathbb{N}\) be large enough to satisfy \(\frac{myk}{h}\leq x\).

Footnote 8: Recall that in a Euclidean space the distance between a hyperplane and a line segment that does not touch it is positive.
Construct a new quasi-trace \(\varrho^{\prime}\) from \(c\) to \(c^{\prime}\) by starting at \(c\), and repeating \(h\) times the following: for \(j=1,\ldots,m\), extend \(\varrho^{\prime}\) by taking a quasi-step using the transitions \(t_{1}^{j},\ldots,t_{k}^{j}\) with multiplicity \(\alpha^{j}/h\). This construction is illustrated in Figure 8.

Figure 8. A graphical representation in two dimensions of the replacement of quasi-steps (in dotted arrows) by steps (in dashed arrows). In this example \(h=3\) and \(m=2\).

We claim that \(\varrho^{\prime}\) is actually a trace, i.e., that for every \(s\in S\), and every point \(v_{lm+j}\) on \(\varrho^{\prime}\) (where \(0\leq l<h\), and \(0\leq j<m\)) we have that condition (ii) in Definition 7.7 is satisfied, namely, that \(v_{lm+j}(s)\geq o_{lm+j}(s)\), where \(o_{lm+j}\) is the vector \(\frac{\alpha^{j}}{h}\sum_{i=1}^{k}\mathsf{out}(t_{i}^{j})\). Observe that this not only guarantees that each quasi-step is a step, but also that all points on \(\varrho^{\prime}\) are configurations (i.e., have no negative coordinates). Consider first the case of \(s\in S\setminus H\). By item (4) of the lemma, \(\mathsf{src}(R^{\prime})\subseteq H\) and \(\mathsf{dst}(R^{\prime})\subseteq H\), and thus \(v_{lm+j}(s)=c(s)=0=o_{lm+j}(s)\). For \(s\in H\), note that \(o_{lm+j}(s)\leq\frac{\mu k}{h}\), and thus it is enough to show that \(v_{lm+j}(s)-\frac{\mu k}{h}\geq 0\). Observe that, for every \(0\leq l<h\), the point reached after taking \(lm\) quasi-steps of \(\varrho^{\prime}\) is \(c+\frac{l}{h}(c^{\prime}-c)\), which is a point on \(L\). Hence, by our choice of \(x\), we have that \(v_{lm}(s)\geq x\). Also note that each quasi-step on \(\varrho^{\prime}\) can decrease \(v_{lm}(s)\) by at most \(\frac{\mu k}{h}\); thus, for every \(0\leq j<m\), we have that \(v_{lm+j}(s)-\frac{\mu k}{h}\geq v_{lm}(s)-\frac{m\mu k}{h}\geq x-x=0\). The above lemma already allows us to characterize the locally-reusable edges, as well as the light green edges (relative to green edges): **Lemma 7.13**.: _Given a component \(P_{i}^{\rightharpoonup}\) of \(P^{\rightharpoonup}\), an edge \(e\) of \(P_{i}^{\rightharpoonup}\) is locally-reusable iff there are coefficients \(\alpha_{t}\in\mathbb{Q}_{\geq 0}\) for every \(t\in R_{i}^{\rightharpoonup}\), such that:_ 1. \(0=\sum_{t\in R_{i}^{\rightharpoonup}}\alpha_{t}\left(\mathsf{in}(t)-\mathsf{ out}(t)\right)\)_;_ 2. _For every rendezvous action_ \(a\in\Sigma_{\mathsf{actn}}\)_, and every_ \(j,h\in[k]\)_, we have that_ \(\sum_{t\in R_{i}^{\rightharpoonup},\mathsf{rdz}(t)=a_{j}}\alpha_{t}=\sum_{t \in R_{i}^{\rightharpoonup},\mathsf{rdz}(t)=a_{h}}\alpha_{t}\)_;_ 3. \(\alpha_{e}>0\)_._ Moreover, \(e\) is light green iff we further require that the set \(\{t\in R_{i}^{\rightharpoonup}\mid\alpha_{t}>0\}\) only contains green edges. Proof.: For the 'only if' direction, we observe that by Lemma 7.9 there is a trace \(\xi:=c\to^{\star}c\) of the CVRS \(P_{i}^{\rightharpoonup}\) such that \(e\) participates in a step of \(\xi\). Moreover, \(\xi\) uses only green edges in case \(e\) is light green. By Proposition 7.11, we can derive coefficients \(\alpha_{t}\in\mathbb{Q}_{\geq 0}\) that satisfy the conditions of the lemma. For the 'if' direction, let \(R^{\prime}=\{t\in R_{i}^{\rightharpoonup}\mid\alpha_{t}>0\}\) and \(H=\mathsf{src}(R^{\prime})\cup\mathsf{dst}(R^{\prime})\). Set \(c\in\mathbb{Q}_{\geq 0}^{S_{i}^{\rightharpoonup}}\) to the configuration defined by \(c(q)=1\), if \(q\in H\), and \(c(q)=0\), otherwise. 
Then, apply Lemma 7.12 and obtain a trace \(\xi:=c\to^{\star}c\) of the CVRS \(P_{i}^{\rightharpoonup}\), such that \(e\) participates in a step of \(\xi\). By Lemma 7.9, the type (locally-reusable, or light green) of \(e\) follows. Lemma 7.13 gives rise to a ptime algorithm for computing the locally-reusable and light green edges (in case we already know the green edges): **Corollary 7.14**.: _Let \(e\) be an edge of \(P^{\rightharpoonup}\). We can decide in ptime (in the size of \(P^{\rightharpoonup}\)) whether \(e\) is locally-reusable, and whether \(e\) is light green (assuming we already know which edges are green)._ Proof.: For every edge \(e\) of \(P^{\rightharpoonup}\), we need to find a solution that satisfies the equations of Lemma 7.13. Such a solution can be found by linear programming over the rationals, which is in ptime. 
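As a concrete rendering of the proof of Corollary 7.14, the following sketch (our own encoding, using `scipy`) checks the conditions of Lemma 7.13 for a given edge. Since the constraint system is homogeneous, its solutions form a cone, so the strict constraint \(\alpha_{e}>0\) may be replaced by \(\alpha_{e}\geq 1\) without loss of generality.

```python
import numpy as np
from scipy.optimize import linprog

def is_locally_reusable(e, transitions, flow_matrix, balance_rows):
    """Feasibility check for the system of Lemma 7.13 via an LP.

    transitions:   list of transition ids; e must be one of them.
    flow_matrix:   array with one column per transition, where column t
                   equals in(t) - out(t)  (one row per state).
    balance_rows:  one row per rendezvous action a and pair of roles
                   (j, h), encoding sum_{rdz=a_j} alpha - sum_{rdz=a_h}
                   alpha = 0 with +1/-1 entries.
    """
    m = len(transitions)
    A_eq = np.vstack([flow_matrix] + list(balance_rows))
    b_eq = np.zeros(A_eq.shape[0])
    # alpha_t >= 0 for all t, and alpha_e >= 1 (rescaling of alpha_e > 0).
    bounds = [(1, None) if t == e else (0, None) for t in transitions]
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.status == 0  # feasible iff e is locally-reusable
```

The light green check is identical after restricting the columns (and hence the support of \(\alpha\)) to green transitions.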
The rest of this section will be dedicated to addressing the problem of deciding whether an edge is green or not. Observe that -- unlike locally-reusable and light green edges, for which a witnessing pseudo-cycle is characterized by Lemma 7.9 using a cyclic trace in a single CVRS -- the characterization of the witnessing pseudo-cycle for a green edge in terms of CVRS traces (as given by Lemma 7.10) involves non-cyclic traces in multiple different CVRSs. Hence, our first step is to develop a general characterization of reachability in CVRSs which extends the characterization given in Lemma 7.12 to the case of traces whose source and destination configurations may have a different support. We begin with the following property of the support: **Lemma 7.15**.: _Let \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) be a CVRS and let \(c,c_{1},c_{2}\in\mathbb{Q}_{\geq 0}^{S}\) be configurations of it. Then:_ 1. _If_ \(c\to^{\star}c_{1}\) _and_ \(c\to^{\star}c_{2}\) _then there is a trace_ \(c\to^{\star}c_{3}\) _with_ \((c_{3})^{\neq 0}=(c_{1})^{\neq 0}\cup(c_{2})^{\neq 0}\)_;_ 2. _If_ \(c_{1}\to^{\star}c\) _and_ \(c_{2}\to^{\star}c\) _then there is a trace_ \(c_{3}\to^{\star}c\) _with_ \((c_{3})^{\neq 0}=(c_{1})^{\neq 0}\cup(c_{2})^{\neq 0}\)_._ Proof.: Let \(c_{3}=\frac{1}{2}c_{1}+\frac{1}{2}c_{2}\), and observe that \((c_{3})^{\neq 0}=(c_{1})^{\neq 0}\cup(c_{2})^{\neq 0}\). It remains to show the desired traces between \(c\) and \(c_{3}\). For the first item, let \(\xi_{1}:=c\to^{\star}c_{1}\), and \(\xi_{2}:=c\to^{\star}c_{2}\). A trace from \(c\) to \(c_{3}\) is obtained by concatenating \(\frac{1}{2}c\oplus(\frac{1}{2}\otimes\xi_{1})\) and \(\frac{1}{2}c_{1}\oplus(\frac{1}{2}\otimes\xi_{2})\). For the second item, let \(\xi_{1}:=c_{1}\to^{\star}c\), and \(\xi_{2}:=c_{2}\to^{\star}c\). A trace from \(c_{3}\) to \(c\) is obtained by concatenating \(\frac{1}{2}c_{1}\oplus(\frac{1}{2}\otimes\xi_{2})\) and \(\frac{1}{2}c\oplus(\frac{1}{2}\otimes\xi_{1})\). Let \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) be a CVRS, let \(R^{\prime}\subseteq R\), and let \(c\in\mathbb{Q}_{\geq 0}^{S}\) be a configuration. We say that a set \(H\subseteq S\) is _forward (resp. backward) \(R^{\prime}\)-accessible from \(c\)_ if there is a trace \(c\to^{\star}c^{\prime}\) (resp. \(c^{\prime}\to^{\star}c\)), with all of its steps using only transitions from \(R^{\prime}\), such that \((c^{\prime})^{\neq 0}=H\). We define \(\mathsf{forw}(c,R^{\prime})\) (resp. \(\mathsf{back}(c,R^{\prime})\)) to be the union of all sets \(H\subseteq S\) that are forward (resp. backward) \(R^{\prime}\)-accessible from \(c\). **Remark 8**.: _Observe that Lemma 7.15 implies that \(\mathsf{forw}(c,R^{\prime})\) is forward \(R^{\prime}\)-accessible from \(c\), and that \(\mathsf{back}(c,R^{\prime})\) is backward \(R^{\prime}\)-accessible from \(c\). It follows that \(\mathsf{forw}(c,R^{\prime})\) (resp. \(\mathsf{back}(c,R^{\prime})\)) is the maximal subset of \(S\) that is forward (resp. backward) \(R^{\prime}\)-accessible from \(c\)._ **Proposition 7.16**.: _Given a CVRS \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\), a subset \(R^{\prime}\subseteq R\), and a configuration \(c\), the sets \(\mathsf{forw}(c,R^{\prime})\) and \(\mathsf{back}(c,R^{\prime})\) can be computed in ptime._ Proof.: We present a fixed-point algorithm for computing \(\mathsf{forw}(c,R^{\prime})\); computing \(\mathsf{back}(c,R^{\prime})\) is done in a symmetric fashion. Construct an increasing chain \(H_{0}\subsetneq H_{1}\subsetneq\cdots\) of sets \(H_{i}\subseteq S\), until a larger set cannot be found, as follows. Let \(H_{0}:=(c)^{\neq 0}\). For each \(i\), we check if there is an action \(a\in\Sigma_{\mathsf{actn}}\), and there are transitions \(t_{1},\ldots,t_{k}\in R^{\prime}\) with \(\mathsf{rdz}(t_{j})=a_{j}\) for all \(j\in[k]\), such that \(\mathsf{src}(\{t_{1},\ldots,t_{k}\})\subseteq H_{i}\) and \(\mathsf{dst}(\{t_{1},\ldots,t_{k}\})\not\subseteq H_{i}\). In case there are such transitions, we set \(H_{i+1}:=H_{i}\cup\mathsf{dst}(\{t_{1},\ldots,t_{k}\})\); otherwise, we are done and have \(H_{i}=\mathsf{forw}(c,R^{\prime})\). The correctness of the algorithm is demonstrated as follows. First, to see that for every round \(i\), the set \(H_{i}\) is contained in \(\mathsf{forw}(c,R^{\prime})\), proceed by induction on \(i\). Note that, by the induction hypothesis, there is a trace \(\xi:=c\to^{\star}c_{i}\) with \((c_{i})^{\neq 0}=H_{i}\), and that this trace can be extended to a trace \(\xi^{\prime}:=c\to^{\star}c_{i}\xrightarrow{t_{1},\ldots,t_{k}:\alpha}c_{i+1}\), with \((c_{i+1})^{\neq 0}=\mathsf{dst}(\{t_{1},\ldots,t_{k}\})\cup H_{i}=H_{i+1}\), by choosing \(0<\alpha<\frac{1}{k}\cdot\min_{q\in(c_{i})^{\neq 0}}\{c_{i}(q)\}\). To see that the algorithm outputs \(\mathsf{forw}(c,R^{\prime})\), and not a proper subset of it, let \(\xi:=c\to^{\star}c^{\prime}\) be a trace such that \((c^{\prime})^{\neq 0}=\mathsf{forw}(c,R^{\prime})\). Assume by way of contradiction that the support of some configuration along \(\xi\) is not contained in the output of the algorithm, and let \(c_{i+1}\) be the first such configuration on \(\xi\). Consider the step \(c_{i}\xrightarrow{t_{1},\cdots,t_{k}:\alpha_{i}}c_{i+1}\) and note that \(\mathsf{src}(\{t_{1},\ldots,t_{k}\})\subseteq(c_{i})^{\neq 0}\). Let \(H_{0},H_{1},H_{2},\cdots,H_{n}\) be the sequence of sets computed by the algorithm. By minimality of \(i\), there is a \(j\) such that \((c_{i})^{\neq 0}\subseteq H_{j}\subseteq H_{n}\). Thus, \(\mathsf{src}(\{t_{1},\ldots,t_{k}\})\subseteq H_{n}\) and \(\mathsf{dst}(\{t_{1},\ldots,t_{k}\})\not\subseteq H_{n}\). But this contradicts the termination condition of the algorithm. 
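A direct implementation of this saturation procedure, in Python and under our own hypothetical encoding of the transition relation, looks as follows: it repeatedly fires a rendezvous action all of whose \(k\) roles are enabled inside the current set \(H\), as long as doing so adds a new state.

```python
def forw(support, actions):
    """Fixed-point computation of forw(c, R') from Proposition 7.16.

    support: the set (c)^{!=0} of states with positive mass in c.
    actions: dict mapping each rendezvous action to a list of k lists
             (one per role) of transitions (src, dst), restricted to R'.
    """
    H = set(support)
    changed = True
    while changed:
        changed = False
        for roles in actions.values():
            choice = []
            for role_transitions in roles:
                enabled = [t for t in role_transitions if t[0] in H]
                if not enabled:
                    break  # this action cannot fire within H
                # prefer a transition leading outside H, if there is one
                fresh = [t for t in enabled if t[1] not in H]
                choice.append(fresh[0] if fresh else enabled[0])
            else:
                new_states = {dst for (_, dst) in choice} - H
                if new_states:
                    H |= new_states
                    changed = True
    return H
```

Each round either adds a state to \(H\) or terminates, so there are at most \(|S|\) rounds, each taking time polynomial in the number of transitions; `back` is computed symmetrically by swapping sources and destinations.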
The following simple property of the operators \(\mathsf{forw}\) and \(\mathsf{back}\) will be useful: **Proposition 7.17**.: _Let \(\mathcal{V}=\langle\Sigma_{\mathsf{rdz}},S,R\rangle\) be a CVRS and let \(\xi:=c\to^{\star}c^{\prime}\) be a trace of \(\mathcal{V}\). Let \(R^{\prime}\) be the set of transitions that participate in steps of \(\xi\), and let Configs be the set of configurations that appear in \(\xi\). Then, \(\mathsf{forw}(c,R^{\prime})=\mathsf{back}(c^{\prime},R^{\prime})=\bigcup_{c^{ \circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}=(c)^{\neq 0}\cup\mathsf{dst}(R^{\prime})=(c^{ \prime})^{\neq 0}\cup\mathsf{src}(R^{\prime})\)._ Proof.: For every configuration \(c^{\circ}\) that appears on \(\xi\), the prefix \(c\to^{\star}c^{\circ}\) of \(\xi\), and the suffix \(c^{\circ}\to^{\star}c^{\prime}\) of \(\xi\), obviously only use transitions from \(R^{\prime}\). Hence, \(\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}\subseteq\mathsf{ forw}(c,R^{\prime})\) and \(\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}\subseteq\mathsf{ back}(c^{\prime},R^{\prime})\); as well as \(\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}\subseteq(c)^{\neq 0}\cup\mathsf{dst}(R^{\prime})\) and \(\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}\subseteq(c^{ \prime})^{\neq 0}\cup\mathsf{src}(R^{\prime})\). For the other direction, observe that \(\mathsf{forw}(c,R^{\prime})\subseteq(c)^{\neq 0}\cup\mathsf{dst}(R^{\prime}) \subseteq\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{\circ})^{\neq 0}\) and, similarly, \(\mathsf{back}(c^{\prime},R^{\prime})\subseteq(c^{\prime})^{\neq 0}\cup\mathsf{ src}(R^{\prime})\subseteq\bigcup_{c^{\circ}\in\mathit{Configs}}(c^{ \circ})^{\neq 0}\). **Remark 9**.: _It is worth noting that for every configuration \(c\) and set of transitions \(R^{\prime}\), we have that \(\mathsf{forw}(c,R^{\prime})\subseteq(c)^{\neq 0}\cup\mathsf{dst}(R^{\prime})\) (resp. \(\mathsf{back}(c,R^{\prime})\subseteq(c)^{\neq 0}\cup\mathsf{src}(R^{\prime})\)); however, only in case there is a trace from \(c\) (resp. to \(c\)) that uses exactly the transitions in \(R^{\prime}\) do the reverse inclusions also hold._ We are now ready to state a full characterization of reachability in CVRSs: **Theorem 7.18**.: _Let \(\mathcal{V}=\langle\Sigma_{\textsf{rdz}},S,R\rangle\) be a CVRS. A configuration \(c^{\prime}\in\mathbb{Q}^{S}_{\geq 0}\) is reachable from a configuration \(c\in\mathbb{Q}^{S}_{\geq 0}\) iff there are coefficients \(\alpha_{t}\in\mathbb{Q}_{\geq 0}\) for every \(t\in R\) such that:_ 1. \(c^{\prime}=c+\sum_{t\in R}\alpha_{t}\left(\textsf{in}(t)-\textsf{out}(t)\right)\)_;_ 2. _For every rendezvous action_ \(a\in\Sigma_{\textsf{actn}}\)_, and every_ \(i,j\in[k]\)_, we have that_ \(\sum_{t\in R,\textsf{rdz}(t)=a_{i}}\alpha_{t}=\sum_{t\in R,\textsf{rdz}(t)=a_{ j}}\alpha_{t}\)_;_ 3. _For_ \(R^{\prime}:=\{t\in R\mid\alpha_{t}>0\}\) _we have that_ \(\textsf{src}(R^{\prime})\subseteq\textsf{back}(c^{\prime},R^{\prime})\)_,_ \(\textsf{dst}(R^{\prime})\subseteq\textsf{forw}(c,R^{\prime})\)_, and_ \(\textsf{forw}(c,R^{\prime})=\textsf{back}(c^{\prime},R^{\prime})\)_._ Proof.: For the forward direction, take a trace \(\xi:=c\to^{\star}c^{\prime}\). For every \(t\in R\), let \(\alpha_{t}\) be the sum of the multiplicities of the steps of \(\xi\) in which \(t\) participates. By Proposition 7.11, we have that conditions (1) and (2) of the theorem are satisfied. Condition (3) holds by applying Proposition 7.17 to \(\xi\). For the reverse direction, assume that there are coefficients \(\alpha_{t}\) such that conditions (1)-(3) are satisfied. Let \(H:=\textsf{forw}(c,R^{\prime})=\textsf{back}(c^{\prime},R^{\prime})\). By Remark 8, we can obtain traces \(\xi_{1}:=c\to^{\star}c_{1}\) and \(\xi_{2}:=c_{2}\to^{\star}c^{\prime}\), using only transitions from \(R^{\prime}\), such that \((c_{1})^{\neq 0}=H=(c_{2})^{\neq 0}\). 
It remains to show that \(c_{2}\) is reachable from \(c_{1}\). For every transition \(t\in R\), let \(\alpha_{t}^{1}\) (resp. \(\alpha_{t}^{2}\)) be the sum of the multiplicities of the steps of \(\xi_{1}\) (resp. \(\xi_{2}\)) in which \(t\) participates, and let \(\gamma_{t}=\alpha_{t}-\alpha_{t}^{1}-\alpha_{t}^{2}\). By the convexity property (Proposition 7.8) we can assume w.l.o.g. that: (§) \(\alpha_{t}^{1}\) and \(\alpha_{t}^{2}\) are small enough such that \(\gamma_{t}>0\) for all \(t\in R^{\prime}\) (simply apply the convexity property to \(\xi_{1}\) and \(\xi_{2}\) with a small enough \(\gamma\) to obtain, if needed, replacement \(\xi_{1},\xi_{2},c_{1},c_{2}\)). By Proposition 7.11, we get that \(c_{1}=c+\sum_{t\in R^{\prime}}\alpha_{t}^{1}\left(\textsf{in}(t)-\textsf{out}( t)\right)\), and \(c^{\prime}=c_{2}+\sum_{t\in R^{\prime}}\alpha_{t}^{2}\left(\textsf{in}(t)- \textsf{out}(t)\right)\). Combining the last two equalities with condition (1) of the theorem, and rearranging terms, we get: \[c_{2}=c_{1}+\sum_{t\in R^{\prime}}\gamma_{t}\left(\textsf{in}(t)-\textsf{out}( t)\right)\qquad(\dagger)\] Again, by Proposition 7.11, we have for every \(l\in\{1,2\}\): \[\forall a\in\Sigma_{\textsf{actn}},i,j\in[k]:\sum_{t\in R^{\prime},\textsf{rdz }(t)=a_{i}}\alpha_{t}^{l}=\sum_{t\in R^{\prime},\textsf{rdz}(t)=a_{j}}\alpha_{ t}^{l},\] Together with condition (2) we thus get that: \[\forall a\in\Sigma_{\textsf{actn}},i,j\in[k]:\sum_{t\in R^{\prime},\textsf{rdz }(t)=a_{i}}\gamma_{t}=\sum_{t\in R^{\prime},\textsf{rdz}(t)=a_{j}}\gamma_{t} \qquad(\ddagger)\] By (\(\dagger\)) and (\(\ddagger\)), and our choice of \(c_{1},c_{2}\), the requirements of Lemma 7.12 are satisfied for \(c_{1},c_{2}\) and coefficients \(\gamma_{t}\). Hence, \(c_{2}\) is reachable from \(c_{1}\). **Remark 10**.: _The characterization in Theorem 7.18 is an adaptation of the characterization of reachability in continuous vector addition systems from [11]; for our characterization result we had to redevelop the theory of [11] for CVRSs._ ### An Algorithm for Finding the Green Edges In this section we present a ptime algorithm for deciding the existence of pseudo-cycles with broadcasts in \((P^{\rightharpoonup})^{\infty}\). The algorithm will be developed based on the following characterization, which is an immediate consequence of Lemma 7.10, using Theorem 7.18 to characterize the traces \(\xi_{i}\) of this lemma. Recall that \(\mathcal{I}\) is the set of indices of the components on the noose of \(P^{\rightharpoonup}\). **Corollary 7.19**.: _An edge \(e\) of \(P^{\rightharpoonup}\) is green iff, for every \(i\in\mathcal{I}\): (i) there is a subset \(\mathcal{T}_{i}\) of the transitions of \(P_{i}^{\rightharpoonup}\), and a subset \(B_{i}\) of the broadcast transitions from \(P_{i}^{\rightharpoonup}\) to \(P_{next(i)}^{\rightharpoonup}\), with \(e\in\cup_{i\in\mathcal{I}}(B_{i}\cup\mathcal{T}_{i})\); (ii) there are coefficients \(\alpha_{t}\in\mathbb{Q}_{>0}\) for every \(t\in\mathcal{T}_{i}\cup B_{i}\); such that:_ 1. \(c_{i}^{\prime}=c_{i}+\sum_{t\in\mathcal{T}_{i}}\alpha_{t}\left(\textsf{in}(t)- \textsf{out}(t)\right)\)_, where_ \(c_{i},c_{i}^{\prime}\) _are defined, for every_ \(q\in S_{i}^{\rightharpoonup}\)_, by:_ \(c_{i}(q):=\sum_{t\in B_{prev(i)},\textsf{dst}(t)=q}\alpha_{t}\)_, and_ \(c_{i}^{\prime}(q):=\sum_{t\in B_{i},\textsf{src}(t)=q}\alpha_{t}\)_;_ 2. 
_for every rendezvous action_ \(a\in\Sigma_{\textsf{actn}}\)_, and every_ \(j,h\in[k]\)_, we have that_ \(\sum_{t\in\mathcal{T}_{i},\textsf{rdz}(t)=a_{j}}\alpha_{t}=\sum_{t\in\mathcal{ T}_{i},\textsf{rdz}(t)=a_{h}}\alpha_{t}\)_;_ 3. \(\textsf{dst}(\mathcal{T}_{i})\subseteq\textsf{forw}(c_{i},\mathcal{T}_{i})\)_,_ \(\textsf{src}(\mathcal{T}_{i})\subseteq\textsf{back}(c_{i}^{\prime},\mathcal{ T}_{i})\)_, and_ \(\textsf{forw}(c_{i},\mathcal{T}_{i})=\textsf{back}(c_{i}^{\prime},\mathcal{T}_{i})\)_._ Corollary 7.19 gives rise to Algorithm 1 for computing the set of green edges of \(P^{\rightharpoonup}\). We remark that the algorithm presented here for computing green edges differs from our original algorithm in [1]. Here, we use an algorithm that is inspired by, and extends, Algorithm 2 in [11] for deciding the reachability of a target configuration from an initial configuration in continuous vector addition systems. The extension is in two ways: first from vector addition systems to CVRSs, and second by adding machinery for handling broadcasts.

```
Input: \(P^{\rightharpoonup}\).
Initialize -- for every \(i\in\mathcal{I}\) do:
  \(\mathcal{T}_{i}:=\) all rendezvous transitions of \(P_{i}^{\rightharpoonup}\);
  \(B_{i}:=\) all broadcast transitions from \(P_{i}^{\rightharpoonup}\) to \(P_{next(i)}^{\rightharpoonup}\);
Repeat:
  For every \(i\in\mathcal{I}\): take variables \(c_{i}(q),c_{i}^{\prime}(q)\) for every \(q\in S_{i}^{\rightharpoonup}\),
  and \(\alpha_{t}\) for every \(t\in\mathcal{T}_{i}\cup B_{i}\).
  Find a solution to the following constraint system, such that the
  number of non-zero variables \(\alpha_{t}\) is maximal:
    \(\bullet\) \(\alpha_{t}\geq 0\) for every \(t\in\mathcal{T}_{i}\cup B_{i}\);
    \(\bullet\) \(c_{i}(q)=\sum_{t\in B_{prev(i)},\textsf{dst}(t)=q}\alpha_{t}\), for every \(q\in S_{i}^{\rightharpoonup}\);
    \(\bullet\) \(c_{i}^{\prime}(q)=\sum_{t\in B_{i},\textsf{src}(t)=q}\alpha_{t}\), for every \(q\in S_{i}^{\rightharpoonup}\);
    \(\bullet\) \(c_{i}^{\prime}=c_{i}+\sum_{t\in\mathcal{T}_{i}}\alpha_{t}\left(\textsf{in}(t)-\textsf{out}(t)\right)\);
    \(\bullet\) \(\sum_{t\in\mathcal{T}_{i},\textsf{rdz}(t)=a_{j}}\alpha_{t}=\sum_{t\in\mathcal{T}_{i},\textsf{rdz}(t)=a_{h}}\alpha_{t}\) for every \(a\in\Sigma_{\textsf{actn}}\) and \(j,h\in[k]\).
  Let \(H_{i}:=\textsf{forw}(c_{i},\mathcal{T}_{i})\cap\textsf{back}(c_{i}^{\prime},\mathcal{T}_{i})\);
  Let \(\mathcal{T}_{i}:=\{t\in\mathcal{T}_{i}\mid\alpha_{t}>0\}\cap\{t\in\mathcal{T}_{i}\mid\textsf{src}(t)\in H_{i}\wedge\textsf{dst}(t)\in H_{i}\}\);
  Let \(B_{i}:=\{t\in B_{i}\mid\alpha_{t}>0\}\);
Until neither \(\mathcal{T}_{i}\) nor \(B_{i}\) change, for any \(i\in\mathcal{I}\).
Output: \(\bigcup_{i\in\mathcal{I}}B_{i}\cup\mathcal{T}_{i}\)
```
**Algorithm 1.** Algorithm for computing all edges of \(P^{\rightharpoonup}\) that can appear in a pseudo-cycle with broadcasts of \((P^{\rightharpoonup})^{\infty}\). **Theorem 7.20**.: _Deciding if an edge of \(P^{\rightharpoonup}\) is green can be done in ptime._
Given that linear programs of this canonical form have the property that the sum of any two solutions is itself a solution, we have that: \((\dagger)\) one can find a maximal solution \(x\) by looking, for every \(t\in\mathcal{T}_{i}\cup B_{i}\), for a solution to the system \(Ax=0,x\geq 0,\alpha_{t}>0\), and adding together all the solutions that were found; \((\ddagger)\) for every variable \(\alpha_{t}\) of the system \(Ax=0,x\geq 0\), if there is a solution in which \(\alpha_{t}>0\), then \(\alpha_{t}>0\) in every maximal solution. We now show that Algorithm 1 outputs exactly the set of green edges of \(P^{\rightsquigarrow}\). To see that every edge output by the algorithm is green, we will apply Corollary 7.19 (direction 'if') to the sets \(\mathcal{T}_{i}\) and \(B_{i}\) and the coefficients \(\alpha_{t}\) from the solution obtained in the last iteration of the algorithm. Observe that at this last iteration all the sets have reached a fixed point. Hence, in particular, for every \(t\in\mathcal{T}_{i}\cup B_{i}\) we have that \(\alpha_{t}>0\), and \((\lx@sectionsign)\)\(\mathsf{dst}(\mathcal{T}_{i})\subseteq H_{i}\), \(\mathsf{src}(\mathcal{T}_{i})\subseteq H_{i}\). Since the constraint system in the algorithm exactly matches requirements (1) and (2) in Corollary 7.19, the only thing we have to show before we can apply this corollary is that also condition (3) holds. Observe that, by \(\lx@sectionsign\), it is enough to show that at the last iteration \(\mathsf{forw}(c_{i},\mathcal{T}_{i})=H_{i}\) and \(\mathsf{back}(c^{\prime}_{i},\mathcal{T}_{i})=H_{i}\). We will show the first equality, the second is shown in a symmetric way. Consider the following chain of inequalities: \(H_{i}\subseteq\mathsf{forw}(c_{i},\mathcal{T}_{i})\subseteq(c_{i})^{\neq 0} \cup\mathsf{dst}(\mathcal{T}_{i})\subseteq(c_{i})^{\neq 0}\cup H_{i}\subseteq H_{i}\). The first containment is by the definition of \(H_{i}\), the second by Remark 9, the third by \(\lx@sectionsign\), and the last by the following argument: for \(q\in(c_{i})^{\neq 0}\), if \(q\in\mathsf{src}(\mathcal{T}_{i})\cup\mathsf{dst}(\mathcal{T}_{i})\) then \(q\in H_{i}\) by \(\lx@sectionsign\); otherwise, \(q\in(c^{\prime}_{i})^{\neq 0}\) (by the constraint \(c^{\prime}_{i}=c_{i}+\sum_{t\in\mathcal{T}_{i}}\alpha_{t}\left(\mathsf{in}(t)- \mathsf{out}(t)\right)\)), and since by definition \((c_{i})^{\neq 0}\subseteq\mathsf{forw}(c_{i},\mathcal{T}_{i})\) and \((c^{\prime}_{i})^{\neq 0}\subseteq\mathsf{back}(c^{\prime}_{i},\mathcal{T}_{i})\), we have that \(q\in\mathsf{forw}(c_{i},\mathcal{T}_{i})\cap\mathsf{back}(c^{\prime}_{i}, \mathcal{T}_{i})=H_{i}\). To see that every green edge \(e\) of \(P^{\rightsquigarrow}\) is output by the algorithm, apply Corollary 7.19 (direction 'only if') to \(e\) to obtain sets \(\tilde{\mathcal{T}}_{i}\) and \(\tilde{B}_{i}\), and coefficients \(\tilde{\alpha}_{t}>0\), satisfying the conditions of the corollary. We claim that if \(\tilde{\mathcal{T}}_{i}\cup\tilde{B}_{i}\subseteq\mathcal{T}_{i}\cup B_{i}\) holds, for every \(i\in\mathcal{I}\), at the beginning of an iteration (which is the case at initialization), then it also does at its end. Note that this would conclude the proof since \(e\in\cup_{i\in\mathcal{I}}(\tilde{B}_{i}\cup\tilde{\mathcal{T}}_{i})\) by condition (i) of the corollary. 
To prove the claim, note that if \(\tilde{\mathcal{T}}_{i}\cup\tilde{B}_{i}\subseteq\mathcal{T}_{i}\cup B_{i}\), for every \(i\in\mathcal{I}\), then the coefficients \(\tilde{\alpha}_{t}\) induce a (not necessarily maximal) solution to the constraint system. Hence, by observation \((\ddagger)\) at the beginning of the proof, every maximal solution \(\alpha_{t},c,c^{\prime}\) of this system will have \(\alpha_{t}>0\) for every \(t\in\tilde{\mathcal{T}}_{i}\cup\tilde{B}_{i}\). The claim follows by showing that \(\mathsf{src}(\tilde{\mathcal{T}}_{i})\cup\mathsf{dst}(\tilde{\mathcal{T}}_{i}) \subseteq H_{i}\). To see this, observe that \(\alpha_{t}>0\) for every \(t\in\tilde{B}_{i}\) implies (by the definition of \(c_{i}\) and \(c^{\prime}_{i}\) in condition (1) of the corollary, and the corresponding constraint in the algorithm) that \((\tilde{c}_{i})^{\neq 0}\subseteq(c_{i})^{\neq 0}\) and \((\tilde{c}^{\prime}_{i})^{\neq 0}\subseteq(c^{\prime}_{i})^{\neq 0}\). Hence, \(\mathsf{forw}(\tilde{c}_{i},\tilde{\mathcal{T}}_{i})\subseteq\mathsf{forw}(c_{i},\mathcal{T}_{i})\) and \(\mathsf{back}(\tilde{c}^{\prime}_{i},\tilde{\mathcal{T}}_{i})\subseteq\mathsf{ back}(c^{\prime}_{i},\mathcal{T}_{i})\). Since, by condition (3) of the corollary, \(\mathsf{src}(\tilde{\mathcal{T}}_{i})\cup\mathsf{dst}(\tilde{\mathcal{T}}_{i}) \subseteq\mathsf{forw}(\tilde{c}_{i},\tilde{\mathcal{T}}_{i})=\mathsf{back}( \tilde{c}^{\prime}_{i},\tilde{\mathcal{T}}_{i})\) we are done. It remains to show that the algorithm runs in polynomial time. Since in every round at least one transition is removed from either \(\mathcal{T}_{i}\) or \(B_{i}\), for some \(i\), the main loop repeats linearly many times in the size of \(P^{\rightharpoonup}\). By observation \((\dagger)\) at the beginning of the proof, finding a maximal solution to the constraint system of each iteration can be done by solving a linear number of linear programming problems (of polynomial size) over the rationals, which is in ptime. Since, by Proposition 7.16, calculating \(\mathsf{forw}(c_{i},\mathcal{T}_{i})\) and \(\mathsf{back}(c^{\prime}_{i},\mathcal{T}_{i})\) is also in ptime, we conclude that the whole algorithm runs in polynomial time. Theorem 7.20, together with Corollary 7.14, yields the promised proof of the main theorem (Theorem 7.1) of this section. ## 8. Conclusion We have established the decidability and complexity of the PMCP for safety and liveness properties of RB-systems, which are polynomially inter-reducible with discrete-timed networks. The lower and upper complexity bounds for safety properties are tight. We leave open the problem of whether our exptime upper bound for liveness properties is tight. We note that the pspace lower bound for safety properties also implies a pspace lower bound for liveness properties. The exptime upper bound is established by (repeatedly) solving an exponentially sized linear program. As linear programming is known to be ptime-complete, it seems unlikely that our techniques can be improved to show a pspace upper bound. A further direction for future research concerns whether our results for the discrete-time model can be lifted to the continuous-time model without a distinguished controller (note that PMCP for continuous-time networks with a distinguished controller is undecidable [1]). 
For example, time-bounded invariance and time-bounded response properties (expressed as MTL formulae) hold on the discrete-time model iff they hold on the continuous-time model [10], whereas [11] establish several results on the use of digitization techniques for timed automata. ## 9. Acknowledgments This work is partially supported by the Austrian Science Fund (FWF): P 32021, the Austrian National Research Network S11403-N23 (RiSE) of the Austrian Science Fund (FWF), and by the Vienna Science and Technology Fund (WWTF) through grant ICT12-059.
2310.05309
Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods
Deep Neural Networks and Reinforcement Learning methods have empirically shown great promise in tackling challenging combinatorial problems. In those methods a deep neural network is used as a solution generator which is then trained by gradient-based methods (e.g., policy gradient) to successively obtain better solution distributions. In this work we introduce a novel theoretical framework for analyzing the effectiveness of such methods. We ask whether there exist generative models that (i) are expressive enough to generate approximately optimal solutions; (ii) have a tractable, i.e, polynomial in the size of the input, number of parameters; (iii) their optimization landscape is benign in the sense that it does not contain sub-optimal stationary points. Our main contribution is a positive answer to this question. Our result holds for a broad class of combinatorial problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem. As a byproduct of our analysis we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
Constantine Caramanis, Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos
2023-10-08T23:39:38Z
http://arxiv.org/abs/2310.05309v2
# Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods ###### Abstract Deep Neural Networks and Reinforcement Learning methods have empirically shown great promise in tackling challenging combinatorial problems. In those methods a deep neural network is used as a solution generator which is then trained by gradient-based methods (e.g., policy gradient) to successively obtain better solution distributions. In this work we introduce a novel theoretical framework for analyzing the effectiveness of such methods. We ask whether there exist generative models that (i) are expressive enough to generate approximately optimal solutions; (ii) have a tractable, i.e., polynomial in the size of the input, number of parameters; (iii) their optimization landscape is benign in the sense that it does not contain sub-optimal stationary points. Our main contribution is a positive answer to this question. Our result holds for a broad class of combinatorial problems including Max- and Min-Cut, Max-\(k\)-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem. As a byproduct of our analysis we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points. ## 1 Introduction Gradient descent has proven remarkably effective for diverse optimization problems in neural networks. From the early days of neural networks, this has motivated their use for combinatorial optimization [11, 12, 13, 14]. More recently, [15] proposed an approach where a neural network is used to generate (sample) solutions for the combinatorial problem. The parameters of the neural network thus parameterize the space of distributions. This allows one to perform gradient steps in this distribution space. In several interesting settings, including the Traveling Salesman Problem, they have shown that this approach works remarkably well. Given the widespread application but also the notorious difficulty of combinatorial optimization [12, 13, 14, 15, 16], approaches that provide a more general solution framework are appealing. This is the point of departure of this paper. We investigate whether gradient descent can succeed in a general setting that encompasses the problems studied in [17]. This requires a parameterization of distributions over solutions with a "nice" optimization landscape (intuitively, one on which gradient descent does not get stuck in local minima or points of vanishing gradient) and that has a polynomial number of parameters. Satisfying both requirements simultaneously is non-trivial. As we show precisely below, a simple lifting to the exponential-size probability simplex on all solutions guarantees convexity; and, on the other hand, _compressed_ parameterizations with "bad" optimization landscapes are also easy to come by (we give a natural example for Max-Cut in Remark 1). Hence, we seek to understand the parametric complexity of gradient-based methods, i.e., how many parameters suffice for a benign optimization landscape, in the sense that it does not contain "bad" stationary points. We thus theoretically investigate whether there exist solution generators with a tractable number of parameters that are also efficiently optimizable, i.e., such that gradient descent requires a small number of steps to reach a near-optimal solution. 
We provide a positive answer under general assumptions and specialize our results for several classes of _hard and easy_ combinatorial optimization problems, including Max-Cut and Min-Cut, Max-\(k\)-CSP, Maximum-Weight-Bipartite-Matching and Traveling Salesman. We remark that a key difference between (computationally) easy and hard problems is not the ability to find a compressed and efficiently optimizable generative model but rather the ability to efficiently draw samples from the parameterized distributions. ### Our Framework We introduce a theoretical framework for analyzing the effectiveness of gradient-based methods on the optimization of solution generators in combinatorial optimization, inspired by [17]. Let \(\mathcal{I}\) be a collection of instances of a combinatorial problem with common solution space \(S\) and let \(L(\cdot;I):S\rightarrow\mathbb{R}\) be the cost function associated with an instance \(I\in\mathcal{I}\), i.e., \(L(s;I)\) is the cost of solution \(s\) given the instance \(I\). For example, for the Max-Cut problem the collection of instances \(\mathcal{I}\) corresponds to all graphs with \(n\) nodes, the solution space \(S\) consists of all subsets of nodes, and the loss \(L(s;I)\) is equal to (minus) the weight of the cut \((s,[n]\setminus s)\) corresponding to the subset of nodes \(s\in S\) (our goal is to minimize \(L\)). **Definition 1** (Solution Cost Oracle).: _For a given instance \(I\) we assume that we have access to an oracle \(\mathcal{O}(\cdot;I)\) to the cost of any given solution \(s\in S\), i.e., \(\mathcal{O}(s;I)=L(s;I)\)._ The above oracle is standard in combinatorial optimization and query-efficient algorithms are provided for various problems [13, 14, 15, 16, 17, 18]. We remark that the goal of this work is not to design algorithms that solve combinatorial problems using access to the solution cost oracle (as the aforementioned works do). This paper focuses on landscape design: the algorithm is **fixed**, namely (stochastic) gradient descent; the question is how to design a generative model that has a small number of parameters and whose induced optimization landscape allows gradient-based methods to converge to the optimal solution without getting trapped at local minima or vanishing gradient points. Let \(\mathcal{R}\) be some prior distribution over the instance space \(\mathcal{I}\) and \(\mathcal{W}\) be the space of parameters of the model. We now define the class of solution generators. The solution generator \(p(w)\) with **parameter** \(w\in\mathcal{W}\) takes as **input** an instance \(I\) and **generates a random solution** \(s\) in \(S\). To distinguish between the output, the input, and the parameter of the solution generator, we use the notation \(p(\cdot;I;w)\) to denote the distribution over solutions and \(p(s;I;w)\) to denote the probability of an individual solution \(s\in S\). We denote by \(\mathcal{P}=\{p(w):w\in\mathcal{W}\}\) the above parametric class of solution generators. For some parameter \(w\), the loss corresponding to the solutions sampled by \(p(\cdot;I;w)\) is equal to \[\mathcal{L}(w)=\mathop{\mathbb{E}}_{I\sim\mathcal{R}}[\mathcal{L}_{I}(w)]\,, \,\,\,\mathcal{L}_{I}(w)=\mathop{\mathbb{E}}_{s\sim p(\cdot;I;w)}[L(s;I)]\,. \tag{1}\] 
Our goal (which was the empirical focus of [1]) is to optimize the parameter \(w\in\mathcal{W}\) in order to find a sampler \(p(\cdot;I;w)\) whose loss \(\mathcal{L}(w)\) is close to the expected optimal value opt: \[\text{opt}=\mathop{\mathbb{E}}_{I\sim\mathcal{R}}\left[\min_{s\in S}L(s;I) \right]\,. \tag{2}\] As we have already mentioned, we focus on gradient descent dynamics: the policy gradient method [10] expresses the gradient of \(\mathcal{L}\) as follows \[\nabla_{w}\mathcal{L}(w)=\mathop{\mathbb{E}}_{I\sim\mathcal{R}}\mathop{ \mathbb{E}}_{s\sim p(\cdot;I;w)}[L(s;I)\,\,\nabla_{w}\log p(s;I;w)]\,,\] and updates the parameter \(w\) using the gradient descent update. Observe that a (stochastic) policy gradient update can be implemented using only access to the solution cost oracle of Definition 1. Solution Generators. In [1] the authors used neural networks as parametric solution generators for the TSP problem. They provided empirical evidence that optimizing the parameters of the neural network using the policy gradient method results in samplers that generate very good solutions for (Euclidean) TSP instances. Parameterizing the solution generators using neural networks essentially _compresses_ the description of distributions over solutions (the full parameterization would require assigning a parameter to every solution-instance pair \((s,I)\)). Since for most combinatorial problems the size of the solution space is exponentially large (compared to the description of the instance), it is crucial for such methods to succeed that the parameterization be _compressed_, in the sense that the description of the parameter space \(\mathcal{W}\) is polynomial in the size of the description of the instance family \(\mathcal{I}\). Apart from having a tractable number of parameters, it is important that the _optimization objective_ corresponding to the parametric class \(\mathcal{P}\) can provably be optimized using some first-order method in polynomial (in the size of the input) iterations. We collect these desiderata in the following definition. We denote by \([\mathcal{I}]\) the description size of \(\mathcal{I}\), i.e., the number of bits required to identify any element of \(\mathcal{I}\). For instance, if \(\mathcal{I}\) is the space of unweighted graphs with at most \(n\) nodes, \([\mathcal{I}]=O(n^{2})\). **Definition 2** (Complete, Compressed and Efficiently Optimizable Solution Generator).: _Fix a prior \(\mathcal{R}\) over \(\mathcal{I}\), a family of solution generators \(\mathcal{P}=\{p(w):w\in\mathcal{W}\}\), a loss function \(\mathcal{L}\) as in Equation (1) and some \(\epsilon>0\)._ 1. _We say that_ \(\mathcal{P}\) _is_ _complete_ _if there exists some_ \(\overline{w}\in\mathcal{W}\) _such that_ \(\mathcal{L}(\overline{w})\leq\operatorname{opt}+\epsilon\)_, where_ \(\operatorname{opt}\) _is defined in (2)._ 2. _We say that_ \(\mathcal{P}\) _is_ _compressed_ _if the description size of the parameter space_ \(\mathcal{W}\) _is polynomial in_ \([\mathcal{I}]\) _and in_ \(\log(1/\epsilon)\)_._ 3. 
_We say that_ \(\mathcal{P}\) _is_ _efficiently optimizable_ _if there exists a first-order method applied on the objective_ \(\mathcal{L}\) _such that, after_ \(T=\operatorname{poly}([\mathcal{W}],1/\epsilon)\) _many updates of the parameter vector, it finds an (at most)_ \(\epsilon\)_-sub-optimal vector_ \(\widehat{w}\)_, i.e.,_ \(\mathcal{L}(\widehat{w})\leq\mathcal{L}(\overline{w})+\epsilon\)_._ **Remark 1**.: _We remark that constructing parametric families that are complete and compressed, complete and efficiently optimizable, or compressed and efficiently optimizable (i.e., satisfying any pair of assumptions of Question 1 but not all 3) is usually a much easier task. Consider, for example, the Max-Cut problem on a fixed (unweighted) graph with \(n\) nodes. Note that \(\mathcal{I}\) has description size \(O(n^{2})\). Solutions of the Max-Cut for a graph with \(n\) nodes are represented by vertices on the binary hypercube \(\{\pm 1\}^{n}\) (coordinate \(i\) dictates the side of the cut that we put node \(i\)). One may consider the full parameterization of all distributions over the hypercube. It is not hard to see that this is a _complete and efficiently optimizable_ family (the optimization landscape corresponds to optimizing a linear objective). However, it is _not compressed_, since it requires \(2^{n}\) parameters. On the other extreme, considering a product distribution over coordinates, i.e., we set the value of node \(i\) to be \(+1\) with probability \(p_{i}\) and \(-1\) with \(1-p_{i}\), gives a _complete and compressed_ family. However, as we show in Appendix B, the landscape of this compressed parameterization suffers from highly sub-optimal local minima and therefore, it is _not efficiently optimizable_._ Therefore, in this work we investigate whether it is possible to have all 3 desiderata of Definition 2 _at the same time._ **Question 1**.: _Are there complete, compressed, and efficiently optimizable solution generators (i.e., satisfying Definition 2) for challenging combinatorial tasks?_ ### Our Results Our Contributions. Before we present our results formally, we summarize the contributions of this work. * Our main contribution is a positive answer (Theorem 1) to Question 1 under general assumptions that capture many combinatorial tasks. We identify a set of conditions (see Assumption 1) that allow us to design a family of solution generators that are complete, compressed and efficiently optimizable. * The conditions are motivated by obstacles that are important for any approach of this nature. This includes solutions that escape to infinity, and also parts of the landscape with vanishing gradient. See the discussion in Section 3 and Figure 1. * We specialize our framework to several important combinatorial problems, some of which are NP-hard, and others tractable: Max-Cut, Min-Cut, Max-\(k\)-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem. * Finally, we investigate experimentally the effect of the entropy regularizer and the fast/slow mixture scheme that we introduced (see Section 3) and provide evidence that it leads to better solution generators. We begin with the formal presentation of our assumptions on the feature mappings of the instances and solutions and on the structure of the cost function of the combinatorial problem. **Assumption 1** (Structured Feature Mappings).: _Let \(S\) be the solution space and \(\mathcal{I}\) be the instance space. 
There exist feature mappings \(\psi_{S}:S\to X\), for the solutions, and \(\psi_{\mathcal{I}}:\mathcal{I}\to Z\), for the instances, where \(X,Z\) are Euclidean vector spaces of dimension \(n_{X}\) and \(n_{Z}\), such that:_ 1. (Bounded Feature Spaces) _The feature and instance mappings are bounded, i.e., there exist_ \(D_{S},D_{\mathcal{I}}>0\) _such that_ \(\|\psi_{S}(s)\|_{2}\leq D_{S}\)_, for all_ \(s\in S\)_, and_ \(\|\psi_{\mathcal{I}}(I)\|_{2}\leq D_{\mathcal{I}}\)_, for all_ \(I\in\mathcal{I}\)_._ 2. (Bilinear Cost Oracle) _The cost of a solution_ \(s\) _under instance_ \(I\) _can be expressed as a bilinear function of the corresponding feature vector_ \(\psi_{S}(s)\) _and instance vector_ \(\psi_{\mathcal{I}}(I)\)_, i.e., the solution oracle can be expressed as_ \(\mathcal{O}(s,I)=\psi_{\mathcal{I}}(I)^{\top}M\psi_{S}(s)\) _for any_ \(s\in S,I\in\mathcal{I}\)_, for some matrix_ \(M\) _with_ \(\|M\|_{\mathrm{F}}\leq C\)_._ 3. (Variance Preserving Features) _There exists_ \(\alpha>0\) _such that_ \(\mathbf{Var}_{s\sim U(S)}[v\cdot\psi_{S}(s)]\geq\alpha\|v\|_{2}^{2}\) _for any_ \(v\in X\)_, where_ \(U(S)\) _is the uniform distribution over the solution space_ \(S\)_._ 4. (Bounded Dimensions/Diameters) _The feature dimensions_ \(n_{X},n_{Z}\)_, and the diameter bounds_ \(D_{S},D_{\mathcal{I}},C\) _are bounded above by a polynomial of the description size of the instance space_ \(\mathcal{I}\)_. The variance lower bound_ \(\alpha\) _is bounded below by_ \(1/\mathrm{poly}([\mathcal{I}])\)_._ **Remark 2** (Boundedness and Bilinear Cost Assumptions).: _We remark that Items 1 and 4 are simply boundedness assumptions for the corresponding feature mappings and usually follow easily assuming that we consider reasonable feature mappings. At a high level, the assumption that the cost is a bilinear function of the solution and instance features (Item 2) prescribes that "good" feature mappings should enable a simple (i.e., bilinear) expression for the cost function. In the sequel we see that this is satisfied by natural feature mappings for important classes of combinatorial problems._ **Remark 3** (Variance Preservation Assumption).: _In Item 3 (variance preservation) we require that the solution feature mapping has variance along every direction, i.e., the feature vectors corresponding to the solutions must be "spread out" when the underlying solution generator is the uniform distribution. As we show, this assumption is crucial so that the gradients of the resulting optimization objective are non-vanishing, allowing for its efficient optimization._ We mention that various important combinatorial problems satisfy Assumption 1. For instance, Assumption 1 is satisfied by Max-Cut, Min-Cut, Max-\(k\)-CSP, Maximum-Weight-Bipartite-Matching and Traveling Salesman. We refer the reader to the upcoming Section 2 for an explicit description of the structured feature mappings for these problems. Having discussed Assumption 1, we are ready to state our main abstract result, which resolves Question 1. **Theorem 1**.: _Consider a combinatorial problem with instance space \(\mathcal{I}\) that satisfies Assumption 1. For any prior \(\mathcal{R}\) over \(\mathcal{I}\) and \(\epsilon>0\), there exists a family of solution generators \(\mathcal{P}=\{p(w):w\in\mathcal{W}\}\) with parameter space \(\mathcal{W}\) that is complete, compressed, and efficiently optimizable._ A sketch behind the design of the family \(\mathcal{P}\) can be found in Section 3 and Section 4. 
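To make concrete the kind of first-order method that Theorem 1 refers to, the following Python sketch spells out one (stochastic) policy-gradient step from Section 1.1, which uses only the solution cost oracle of Definition 1. The interface names are ours, and the average-cost baseline is a standard variance-reduction device rather than part of the formal statement.

```python
import numpy as np

def policy_gradient_step(w, instances, sampler, cost_oracle, lr=0.1):
    """One REINFORCE update of the parameters w of a solution generator.

    sampler.sample(I, w)        -> random solution s drawn from p(.; I; w)
    sampler.grad_log_p(s, I, w) -> gradient of log p(s; I; w) w.r.t. w
    cost_oracle(s, I)           -> L(s; I), the cost of s on instance I
    """
    samples = []
    for I in instances:  # instances drawn from the prior R
        s = sampler.sample(I, w)
        samples.append((s, I, cost_oracle(s, I)))
    baseline = sum(cost for _, _, cost in samples) / len(samples)
    grad = np.zeros_like(w)
    for s, I, cost in samples:
        grad += (cost - baseline) * sampler.grad_log_p(s, I, w)
    return w - lr * grad / len(samples)
```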
**Remark 4** (Computational Barriers in Sampling).: _We note that the families of generative models (a.k.a., solution generators) that we provide have polynomial parameter complexity and are optimizable in a small number of steps using gradient-based methods. Hence, in a small number of iterations, gradient-based methods converge to distributions whose mass is concentrated on nearly optimal solutions. This holds, as we show, even for challenging (NP-hard) combinatorial problems. Our results do not, however, prove \(\mathrm{P}=\mathrm{NP}\), as it may be computationally hard to sample from our generative models. We remark that, while such sampling tasks are hard in theory, such models seem to perform remarkably well experimentally, where sampling is based on Langevin dynamics techniques [12, 13]. Moreover, as our theory predicts and our simulations support, landscape problems seem to be a direct impediment even to obtaining good approximate solutions._ **Remark 5** (Neural Networks as Solution Samplers).: _A natural question would be whether our results can be extended to the case where neural networks are (efficient) solution samplers, as in [11]. Unfortunately, a benign landscape result for neural network solution generators most likely cannot exist. It is well-known that end-to-end theoretical guarantees for training neural networks are out of reach since the corresponding optimization tasks are provably computationally intractable, see, e.g., [10] and the references therein._ Finally, we would like to mention an interesting aspect of Assumption 1. Given a combinatorial problem, Assumption 1 essentially asks for the _design_ of feature mappings for the solutions and the instances that satisfy desiderata such as boundedness and variance preservation. Max-Cut, Min-Cut, TSP, Max-\(k\)-CSP, and other problems satisfy Assumption 1 because we managed to design appropriate (problem-specific) feature mappings that satisfy its requirements. There are interesting combinatorial problems for which we do not know how to design such good feature mappings. For instance, the "natural" feature mapping for the Satisfiability problem (SAT) (similar to the one we used for Max-\(k\)-CSPs) would require feature dimension exponential in the size of the instance (we need all possible monomials of \(n\) variables and degree at most \(n\)) and would therefore violate Item 4 of Assumption 1. ### Related Work Neural Combinatorial Optimization. Tackling combinatorial optimization problems constitutes one of the most fundamental tasks of theoretical computer science [11, 12, 13, 14, 15], and various approaches have been studied for these problems, such as local search methods, branch-and-bound algorithms, and meta-heuristics such as genetic algorithms and simulated annealing. Starting from the seminal work of [10], researchers apply neural networks [16, 17, 18] to solve combinatorial optimization tasks. 
In particular, researchers have explored the power of machine learning, reinforcement learning and deep learning methods for solving combinatorial optimization problems (see, e.g., [TRWG21, AMW18, KL20, Jeg22, SBK22, ART23], among many others), and various papers aim to understand the theoretical ability of neural networks to solve such problems [11, 12, 13, 14]. Our paper builds on the framework of the influential experimental work of [10] to tackle combinatorial optimization problems such as TSP using neural networks and reinforcement learning. [13] uses an entropy maximization scheme in order to generate diversified candidate solutions. 
This experimental heuristic is quite close to our theoretical idea for entropy regularization. In our work, entropy regularization allows us to design quasar-convex landscapes, and the fast/slow mixing scheme yields diversification of solutions. Among other related applied works, [14, 15] study the use of Transformer architectures combined with the Reinforce algorithm, employing symmetries (i.e., the existence of multiple optimal solutions of a CO problem) to improve the generalization capability of deep RL NCO, and [16] studies Transformer architectures and aims to learn improvement heuristics for routing problems using RL. Gradient Descent Dynamics. Our work provides theoretical understanding of the gradient-descent landscape arising in NCO problems. Similar questions regarding the dynamics of gradient descent have been studied in prior work concerning neural networks; for instance, [1] fix the algorithm (SGD on neural networks) and aim to understand the power of this approach (which function classes can be learned). Various other works study gradient descent dynamics in neural networks; we refer the reader to the works cited above and the references therein for a broader overview. ## 2 Structured Feature Mappings for Concrete Problems Min-Cut and Max-Cut. Consider the Min-Cut problem on graphs \(G\) with \(n\) nodes, where a cut is represented by a vector \(s\in\{\pm 1\}^{n}\) and its cost is (up to scaling) the quadratic form \(L(s;G)=s^{\top}L_{G}s\), with \(L_{G}\) the Laplacian matrix of \(G\). Taking the solution features to be the flattened matrix \(\psi_{S}(s)=(ss^{\top})^{\flat}\) and the instance features to be the flattened Laplacian \(\psi_{\mathcal{I}}(G)=(L_{G})^{\flat}\), the cost becomes a bilinear function of the two feature vectors (see also Remark 6 below). It is easy to see that it holds that \(\|\psi_{\mathcal{I}}(G)\|_{2},\|\psi_{S}(s)\|_{2},\|M\|_{\mathrm{F}}\leq\mathrm{ poly}(n)\). Therefore, the constants \(D_{S},D_{\mathcal{I}},C\) are polynomial in the description size of the instance family. It remains to show that our solution feature mapping satisfies the variance preservation assumption, i.e., Item 3 in Assumption 1. A uniformly random solution vector \(s\in\{\pm 1\}^{n}\) is sampled by setting each \(s_{i}=1\) with probability \(1/2\) independently. In that case, we have \(\mathbf{E}[v\cdot x]=0\) and therefore \(\mathbf{Var}(v\cdot x)=\mathbf{E}[(v\cdot x)^{2}]=\sum_{i,j}v_{i}v_{j}\, \mathbf{E}[x_{i}x_{j}]=\sum_{i}v_{i}^{2}=\|v\|_{2}^{2}\), since, by the independence of \(x_{i},x_{j}\) for \(i\neq j\), the cross-terms of the sum vanish. We observe that the same holds true for the Max-Cut problem and therefore, structured feature mappings exist for Max-Cut as well (where \(L(s;G)=-s^{\top}L_{G}s\)). We shortly mention that there also exist structured feature mappings for Max-\(k\)-CSP. We refer to Theorem 4 for further details. **Remark 6** (Partial Instance Information/Instance Context).: _We remark that Assumption 1 allows the "instance" \(I\) to contain only partial information about the actual cost function. For example, consider the setting where each sampled instance is an unweighted graph \(G\) but the cost oracle takes the form \(\mathcal{O}(G,s)=(L_{G})^{\flat}M(s\!s^{\top})^{\flat}\) for a matrix \(M\) with \(M_{ij}=a_{i}\) when \(i=j\) and \(M_{ij}=0\) otherwise. This cost function models an **unknown weight function** on the edges of the observed unweighted graph \(G\), i.e., the weight of edge \(i\) of \(G\) is \(a_{i}\) if edge \(i\) exists in the observed instance \(G\), which the algorithm has to learn in order to be able to find the minimum or maximum cut. For simplicity, in what follows, we will continue referring to \(I\) as the instance even though it may only contain partial information about the cost function of the underlying combinatorial problem._ Maximum-Weight-Bipartite-Matching and TSP. Maximum-Weight-Bipartite-Matching (MWBP) is another graph problem that, given a bipartite graph \(G\) with \(n\) nodes and \(m\) edges, asks for the maximum-weight matching. The feature vector corresponding to a matching can be represented as a binary matrix \(R\in\{0,1\}^{n\times n}\) with \(\sum_{j}R_{ij}=1\) for all \(i\) and \(\sum_{i}R_{ij}=1\) for all \(j\), i.e., \(R\) is a permutation matrix. Therefore, for a candidate matching \(s\), we set \(\psi_{S}(s)\) to be the matrix \(R\) defined above. Moreover, the feature vector of the graph is the (negative flattened) adjacency matrix \(E^{\flat}\). 
The cost oracle is then \(L(R;E)=\sum_{ij}E_{ij}M_{ij}R_{ij}\), possibly for an unknown weight matrix \(M\) (see Remark 6). For the Traveling Salesman Problem (TSP) the feature vector is again a matrix \(R\), with the additional constraint that \(R\) has to represent a single cycle (a tour over all cities). The cost function for TSP is again \(L(R;E)=\sum_{ij}E_{ij}M_{ij}R_{ij}\). One can check that those representations of the instance and solution satisfy the assumptions of Items 1 and 4. Showing that the variance of those representations has a polynomial lower bound is more subtle and we refer the reader to the Supplementary Material. We shortly mention that the solution generators for Min-Cut and Maximum-Weight-Bipartite-Matching are also efficiently samplable.

## 3 Optimization Landscape

Exponential Families as Solution Generators. A natural candidate for constructing our family of solution generators is the distribution that assigns to each solution \(s\in S\) and instance \(I\in\mathcal{I}\) mass proportional to its score \(\exp(-\tau L(s;I))=\exp(-\tau\psi_{\mathcal{I}}(I)^{\top}M\psi_{S}(s))=\exp(-\tau z^{\top}Mx)\) for some "temperature" parameter \(\tau\), where \(\psi_{\mathcal{I}}\) and \(\psi_{S}\) are the feature mappings promised to exist by Assumption 1, \(z=\psi_{\mathcal{I}}(I)\), and \(x=\psi_{S}(s)\). Note that as \(\tau\to+\infty\), this distribution concentrates on solutions that achieve small loss.

**Remark 7**.: _To construct the above solution sampler one could artificially query specific solutions to the cost oracle of Definition 1 and try to learn the cost matrix \(M\). However, we remark that our goal (see Definition 2) is to show that we can train a parametric family via gradient-based methods so that it generates (approximately) optimal solutions, and not to simply learn the cost matrix \(M\) via some other method and then use it to generate good solutions._

Obstacle I: Minimizers at Infinity. One could naturally consider the parametric family \(\phi(x;z;W)\propto\exp(z^{\top}Wx)\) (note that with \(W=-\tau M\), we recover the distribution of the previous paragraph) and try to perform gradient-based methods on the loss (recall that \(L(x;z)=z^{\top}Mx\))

\[\mathcal{L}(W)=\mathop{\mathbf{E}}_{z\sim\mathcal{R}}\mathop{\mathbf{E}}_{x\sim\phi(\cdot;z;W)}[z^{\top}Mx]\,. \tag{1}\]

Footnote 1: We note that we overload the notation and assume that our distributions directly generate the featurizations \(z\) (resp. \(x\)) of \(I\) (resp. \(s\)).

The question is whether gradient updates on the parameter \(W\) eventually converge to a matrix \(\overline{W}\) whose associated distribution \(\phi(\overline{W})\) generates near-optimal solutions (note that the matrix \(-\tau M\) with \(\tau\to+\infty\) is such a solution). After computing the gradient of \(\mathcal{L}\), we observe that

\[\nabla_{W}\mathcal{L}(W)\cdot M=\mathbf{Var}_{z\sim\mathcal{R},x\sim\phi(\cdot;z;W)}[z^{\top}Mx]\geq 0\,,\]

Figure 1: In the **left** plot, we show the landscape of the “vanilla” objective of Equation (1) for the feature domain \(X=\{(1,0),(2,2),(0,2)\}\) and linear cost oracle \(c\cdot x\) for \(c=(-3,-3)\). We see that the “vanilla” objective is minimized at the direction of \(-c\), i.e., along the direction \(\tau(1,1)\) for \(\tau\to+\infty\). We observe the two issues described in Section 3: the true minimizer is a point at infinity, and gradients vanish, so gradient descent may get trapped in sub-optimal solutions (e.g., in the upper-right corner if initialized in the top corner). In the **middle** plot, we show the landscape of the entropy-regularized objective of Equation (3), which makes the minimizer finite and brings it closer to the origin. Observe that even if a gradient iteration is initialized in the top corner it will eventually converge to the minimizer; however, the rate of convergence may be very slow. The **right** plot corresponds to the loss objective where we combine a mixture of exponential families as solution generator, as in Equation (5), with the entropy regularization approach. We observe that we are able to obtain a benign (quasar-convex) landscape via the entropy regularization while the mixture-generator guarantees non-vanishing gradients.
where the inner product between two matrices \(A\cdot B\) is the trace \(\operatorname{Tr}(A^{\top}B)=\sum_{i,j}A_{ij}B_{ij}\). This means that the gradient field of \(\mathcal{L}\) always has a contribution in the direction of \(M\). Nevertheless, the actual minimizer is at infinity, i.e., it corresponds to the point \(\overline{W}=-\tau M\) with \(\tau\to+\infty\). While the correlation with the optimal point is positive (which is encouraging), having such a contribution in this direction is not a sufficient condition for actually reaching \(\overline{W}\). The objective has vanishing gradients at infinity and gradient descent may get trapped in sub-optimal stationary points; see the left plot in Figure 1.

Solution I: Quasar Convexity via Entropy Regularization. Our plan is to make the objective landscape more benign by adding an entropy regularizer. Instead of trying to make the objective convex (which may be too much to ask in the first place), _we are able to obtain a much better landscape with a finite global minimizer and a gradient field that guides gradient descent to the minimizer._ These properties are described by the so-called class of "quasar-convex" functions. Quasar convexity (or weak quasi-convexity [11]) is a well-studied notion in optimization [11, 12, 13, 14, 15] and can be considered a high-dimensional generalization of unimodality.

**Definition 3** (Quasar Convexity [11, 13]).: _Let \(\gamma\in(0,1]\) and let \(\overline{x}\) be a minimizer of the differentiable function \(f:\mathbb{R}^{n}\to\mathbb{R}\). The function \(f\) is \(\gamma\)**-quasar-convex** with respect to \(\overline{x}\) on a domain \(D\subseteq\mathbb{R}^{n}\) if for all \(x\in D\), \(\nabla f(x)\cdot(x-\overline{x})\geq\gamma(f(x)-f(\overline{x}))\)._

In the above definition, notice that the main property that we need to establish is that the gradient field of our objective correlates positively with the direction \(W-\overline{W}\), where \(\overline{W}\) is its minimizer. We denote by \(H:\mathcal{W}\to\mathbb{R}\) the negative entropy of \(\phi(W)\), i.e.,

\[H(W)=\mathop{\mathbf{E}}_{z\sim\mathcal{R}}\mathop{\mathbf{E}}_{x\sim\phi(\cdot;z;W)}[\log\phi(x;z;W)]\,, \tag{2}\]

and consider the _regularized_ objective

\[\mathcal{L}_{\lambda}(W)=\mathcal{L}(W)+\lambda H(W)\,, \tag{3}\]

for some \(\lambda>0\).
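To make the effect of the regularizer concrete, here is a minimal numerical sketch (our own illustration, assuming numpy; not code from the paper) of the vanilla objective of Equation (1) and the regularized objective of Equation (3) on the toy domain used in Figure 1:

```python
import numpy as np

X = np.array([[1.0, 0.0], [2.0, 2.0], [0.0, 2.0]])  # feature domain from Figure 1
c = np.array([-3.0, -3.0])                           # linear cost oracle c . x

def regularized_loss(w, lam=0.0):
    # phi(x; w) proportional to exp(w . x) over the finite domain X
    logits = X @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = p @ (X @ c)               # E_{x ~ phi(w)}[c . x], as in Equation (1)
    neg_entropy = p @ np.log(p)      # negative entropy H(w), as in Equation (2)
    return loss + lam * neg_entropy  # Equation (3); lam = 0 gives the vanilla loss

w = np.array([0.5, -1.0])
print(regularized_loss(w))           # vanilla objective
print(regularized_loss(w, lam=0.5))  # entropy-regularized objective
```

Scanning such \(w\) over a grid reproduces the qualitative picture of Figure 1: without the regularizer the minimum runs off to infinity along \(-c\), while for \(\lambda>0\) the minimizer becomes finite.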
We show (this follows from Lemma 4) that the gradient field of the regularized objective indeed "points" towards a finite minimizer (the matrix \(\overline{W}=-M/\lambda\)):

\[\nabla_{W}\mathcal{L}_{\lambda}(W)\cdot(W+M/\lambda)=\mathbf{Var}[z^{\top}(W+M/\lambda)x]\geq 0\,, \tag{4}\]

where the randomness is over \(z\sim\mathcal{R},x\sim\phi(\cdot;z;W)\). Observe that now the minimizer of \(\mathcal{L}_{\lambda}\) is the point \(-M/\lambda\), which for \(\lambda=\operatorname{poly}(\epsilon,\alpha,1/C,1/D_{S},1/D_{\mathcal{I}})\) (these are the parameters of Assumption 1) is promised to yield a solution sampler that generates \(\epsilon\)-sub-optimal solutions (see also Proposition 2 and Appendix C). The property of Equation (4) suffices to show that a gradient descent iteration (with an appropriately small step size) will _eventually_ converge to the minimizer.

Obstacle II: Vanishing Gradients. While we have established that the gradient field of the regularized objective "points" in the right direction, the regularized objective still suffers from vanishing gradients; see the middle plot in Figure 1. In other words, \(\gamma\) in the definition of quasar convexity (Definition 3) may be exponentially small, as it is proportional to the variance of the random variable \(z^{\top}(W+M/\lambda)x\); see Equation (4). As we see in the middle plot of Figure 1, the main issue is the vanishing gradient when \(W\) gets close to the minimizer \(-M/\lambda\) (towards the front corner). For simplicity, consider the variance along the direction of \(W\), i.e., \(\mathbf{Var}[z^{\top}Wx]\), and recall that \(x\) is generated by the density \(\exp(z^{\top}Wx)/\sum_{x'\in X}\exp(z^{\top}Wx')\). When \(\|W\|_{2}\to+\infty\) we observe that the value \(z^{\top}Wx\) concentrates exponentially fast around \(\max_{x\in X}z^{\top}Wx\) (think of the convergence of the soft-max to the max function). Therefore, the variance \(\mathbf{Var}[z^{\top}Wx]\) may vanish exponentially fast, making the convergence of gradient descent slow.

Solution II: Non-Vanishing Gradients via Fast/Slow Mixture Generators. We propose a fix to the vanishing-gradients issue by using a mixture of exponential families as a solution generator. We define the family of solution generators \(\mathcal{P}=\{p(W):W\in\mathcal{W}\}\) to be

\[\mathcal{P}=\{(1-\beta^{\star})\phi(W)+\beta^{\star}\phi(\rho^{\star}W):W\in\mathcal{W}\}\,, \tag{5}\]

for a (fixed) mixing parameter \(\beta^{\star}\) and a (fixed) temperature parameter \(\rho^{\star}\). The main idea is to have the first component of the mixture converge fast to the optimal solution (to \(-M/\lambda\)), while the second, "slow" component, with parameter \(\rho^{\star}W\), stays closer to the uniform distribution over solutions, which guarantees non-trivial variance (and therefore non-vanishing gradients). More precisely, taking \(\rho^{\star}\) to be sufficiently small, the distribution \(\phi(\rho^{\star}W)\) is _almost uniform_ over the solution space \(\psi_{S}(S)\). Therefore, in Equation (4), the almost-uniform component of the mixture adds to the variance and allows us to show a lower bound. This is where Item 3 of Assumption 1 comes into play and gives us the desired non-trivial variance lower bound under the uniform distribution.
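For concreteness, a short sketch (our own, assuming numpy; the helper names and the values of \(\beta^{\star},\rho^{\star}\) are illustrative) of exact sampling from the fast/slow mixture of Equation (5) over a finite solution set:

```python
import numpy as np

def phi_sample(W, z, X, rng):
    # Exact sample from the exponential family phi(. ; z; W):
    # mass of x proportional to exp(z^T W x) over the finite set X.
    logits = X @ (W.T @ z)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return X[rng.choice(len(X), p=p)]

def mixture_sample(W, z, X, rng, beta=0.05, rho=1e-2):
    # Fast/slow mixture of Equation (5): with probability 1 - beta sample
    # from the "fast" component phi(W); with probability beta sample from
    # the near-uniform "slow" component phi(rho * W), which keeps the
    # variance (and hence the gradients) from vanishing.
    scale = 1.0 if rng.random() > beta else rho
    return phi_sample(scale * W, z, X, rng)

rng = np.random.default_rng(0)
X = np.array([[1.0, 0.0], [2.0, 2.0], [0.0, 2.0]])  # candidate solution features
z = np.array([1.0])                                  # instance features
W = np.array([[5.0, -5.0]])                          # W has shape (n_Z, n_X)
print(mixture_sample(W, z, X, rng))
```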
We view this fast/slow mixture technique as an interesting insight of our work: we use the "fast" component (the one with parameter \(W\)) to actually reach the optimal solution \(-M/\lambda\), and we use the "slow" component (the one with parameter \(\rho^{\star}W\), which essentially generates random solutions) to preserve a non-trivial variance lower bound during optimization.

## 4 A Complete, Compressed and Efficiently Optimizable Sampler

In this section, we discuss the main results that imply Theorem 1: the family \(\mathcal{P}\) of Equation (5) is complete, compressed and efficiently optimizable (for some choice of \(\beta^{\star},\rho^{\star}\) and \(\mathcal{W}\)).

Completeness. First, we show that the family of solution generators of Equation (5) is complete. For the proof, we refer to Proposition 2 in Appendix C. At a high level, we pick \(\beta^{\star},\rho^{\star}\) to be of order \(\operatorname{poly}(\epsilon,\alpha,1/C,1/D_{S},1/D_{\mathcal{I}})\). This yields that the matrix \(\overline{W}=-M/\lambda\) is such that \(\mathcal{L}(\overline{W})\leq\operatorname{opt}+\epsilon\), where \(M\) is the matrix of Item 2 in Assumption 1 and \(\lambda\) is \(\operatorname{poly}(\epsilon,\alpha,1/C,1/D_{S},1/D_{\mathcal{I}})\). To give some intuition about this choice of matrix, let us see how \(\mathcal{L}(\overline{W})\) behaves. By definition, we have that

\[\mathcal{L}(\overline{W})=\operatorname*{\mathbf{E}}_{z\sim\mathcal{R}}\operatorname*{\mathbf{E}}_{x\sim p(\cdot;\overline{W})}\left[z^{\top}Mx\right]\,,\]

where the distribution \(p\) belongs to the family of Equation (5), i.e., \(p(\overline{W})=(1-\beta^{\star})\phi(\overline{W})+\beta^{\star}\phi(\rho^{\star}\overline{W})\). Since the mixing weight \(\beta^{\star}\) is small, we have that \(p(\overline{W})\) is approximately equal to \(\phi(\overline{W})\). This means that our solution generator draws samples from the distribution whose mass at \(x\) given instance \(z\) is proportional to \(\exp(-z^{\top}Mx/\lambda)\) and, since \(\lambda>0\) is very small, the distribution concentrates on solutions \(x\) that tend to minimize the objective \(z^{\top}Mx\). This is the reason why \(\overline{W}=-M/\lambda\) is close to opt in the sense that \(\mathcal{L}(\overline{W})\leq\operatorname{opt}+\epsilon\).

Compression. As a second step, we show (in Proposition 3, see Appendix D) that \(\mathcal{P}\) is a compressed family of solution generators. This result follows immediately from the structure of Equation (5) (observe that \(W\) has \(n_{X}n_{Z}\) parameters) and the boundedness of \(\overline{W}=-M/\lambda\).

Efficiently Optimizable. The proof of this result essentially corresponds to the discussion provided in Section 3. Our main structural result shows that the landscape of the regularized objective with the fast/slow mixture solution generator is quasar convex. More precisely, we consider the following objective:

\[\mathcal{L}_{\lambda}(W)=\operatorname*{\mathbb{E}}_{z\sim\mathcal{R}}\operatorname*{\mathbb{E}}_{x\sim p(\cdot;z;W)}[z^{\top}Mx]+\lambda R(W)\,, \tag{6}\]

where \(p(W)\) belongs to the family \(\mathcal{P}\) of Equation (5) and \(R\) is a weighted sum of two negative entropy regularizers (to be in accordance with the mixture structure of \(\mathcal{P}\)), i.e., \(R(W)=(1-\beta^{\star})H(W)+\beta^{\star}/\rho^{\star}H(\rho^{\star}W)\). Our main structural result follows (for the proof, see Appendix E.1).

**Proposition 1** (Quasar Convexity).: _Consider \(\epsilon>0\) and a prior \(\mathcal{R}\) over \(\mathcal{I}\). Assume that Assumption 1 holds.
The function \(\mathcal{L}_{\lambda}\) of Equation (6) is \(\operatorname{poly}(\epsilon,\alpha,1/C,1/D_{S},1/D_{\mathcal{I}})\)-quasar convex with respect to \(-M/\lambda\) on the domain \(\mathcal{W}\)._

Since \(\rho^{\star}\) is small (by Proposition 2), \(H(\rho^{\star}W)\) is essentially constant and close in value to the negative entropy of the uniform distribution. Hence, the effect of \(R(W)\) during optimization is essentially the same as that of \(H(W)\) (since \(\beta^{\star}\) is close to \(0\)). We show that \(\mathcal{L}_{\lambda}\) is quasar convex with a non-trivial parameter \(\gamma\) (see Proposition 1). We can then apply (in a black-box manner) the convergence results from [11] to optimize it using projected SGD. We show that SGD finds a weight matrix \(\widehat{W}\) such that the solution generator \(p(\widehat{W})\) generates solutions achieving actual loss \(\mathcal{L}\) close to that of the near-optimal matrix \(\overline{W}=-M/\lambda\), i.e., \(\mathcal{L}(\widehat{W})\leq\mathcal{L}(\overline{W})+\epsilon\). For further details, see Appendix E.3.

## 5 Experimental Evaluation

In this section, we investigate experimentally the effect of our main theoretical contributions, the entropy regularizer (see Equation (2)) and the fast/slow mixture scheme (see Equation (5)). We try to find the Max-Cut of a fixed graph \(G\), i.e., the support of the prior \(\mathcal{R}\) is a single graph. Similarly to our theoretical results, our sampler is of the form \(e^{\operatorname{score}(s;w)}\), where \(s\in\{-1,1\}^{n}\) (here \(n\) is the number of nodes in the graph) is a candidate solution of the Max-Cut problem. For the score function we use a simple linear layer (left plot of Figure 2) and a 3-layer ReLU network (right plot of Figure 2). In this work, we focus on instances where the number of nodes \(n\) is small (say \(n=15\)). In such instances, we can explicitly compute the density function and work with an _exact_ sampler. We generate 100 random \(G(n,p)\) (Erdős–Rényi) graphs with \(n=15\) nodes and \(p=0.5\) and train solution generators using both the "vanilla" loss \(\mathcal{L}\) and the entropy-regularized loss \(\mathcal{L}_{\lambda}\) with the fast/slow mixture scheme. We perform 600 iterations and, for the entropy regularization, we progressively decrease the regularization weight, starting from 10 and dividing it by 2 every 60 iterations. Out of the 100 trials, we found that our proposed objective was always able to find the optimal cut, while the model trained with the vanilla loss was able to find it for approximately 65% of the graphs (for 65 out of 100 using the linear network and for 66 using the ReLU network). Hence, our experiments demonstrate that while the unregularized objective is often "stuck" at sub-optimal solutions of the Max-Cut problem - and this happens even for very small instances (\(n=15\) nodes) - the objective motivated by our theoretical results is able to find the optimal solutions. For further details, see Appendix I. We leave further experimental evaluation of our approach as future work.

## 6 Conclusion

Neural networks have proven to be extraordinarily flexible, and their promise for combinatorial optimization appears to be significant. Yet as our work demonstrates, while gradient methods are powerful, without a favorable landscape they are destined to fail. We show what it takes to design such a favorable optimization landscape.
At the same time, our work raises various interesting research questions regarding algorithmic implications. An intriguing direction has to do with efficiently samplable generative models. In this paper we have focused on the number of parameters that we have to optimize and on the optimization landscape of the corresponding objective, i.e., whether "following" its gradient field leads to optimal solutions. Apart from these properties, it is important and interesting to have parametric classes that can efficiently generate samples (in terms of computation).

Figure 2: Plot of the Max-Cut value trajectory of the “vanilla” objective and the entropy-regularized objective with the slow/fast mixture scheme. We remark that we plot the value of the cut at each iteration (and not the value of the regularized loss). On the horizontal axis we plot the number of iterations and on the vertical axis we plot the achieved value of the cut. Both graphs used were random \(G(n,p)\) graphs generated with \(n=15\) nodes and edge probability \(p=0.5\). For the left plot we used a linear network (the same exponential family as the one used in our theoretical results). For the right plot we used a simple 3-layer ReLU network to generate the scores. We observe that the “vanilla” loss gets stuck on sub-optimal solutions.

Under standard computational complexity assumptions, it is not possible to design solution generators which are both efficiently optimizable and samplable for challenging combinatorial problems. An interesting direction is to relax our goal to generating approximately optimal solutions:

**Open Question 1**.: _Are there complete, compressed, efficiently optimizable and samplable solution generators that obtain **non-trivial approximation guarantees** for challenging combinatorial tasks?_
2306.00597
Analysis of ChatGPT on Source Code
This paper explores the use of Large Language Models (LLMs) and in particular ChatGPT in programming, source code analysis, and code generation. LLMs and ChatGPT are built using machine learning and artificial intelligence techniques, and they offer several benefits to developers and programmers. While these models can save time and provide highly accurate results, they are not yet advanced enough to replace human programmers entirely. The paper investigates the potential applications of LLMs and ChatGPT in various areas, such as code creation, code documentation, bug detection, refactoring, and more. The paper also suggests that the usage of LLMs and ChatGPT is expected to increase in the future as they offer unparalleled benefits to the programming community.
Ahmed R. Sadik, Antonello Ceravola, Frank Joublin, Jibesh Patra
2023-06-01T12:12:59Z
http://arxiv.org/abs/2306.00597v2
# Analysis of ChatGPT on Source Code

###### Abstract

This paper explores the use of Large Language Models (LLMs) and in particular ChatGPT in programming, source code analysis, and code generation. LLMs and ChatGPT are built using machine learning and artificial intelligence techniques, and they offer several benefits to developers and programmers. While these models can save time and provide highly accurate results, they are not yet advanced enough to replace human programmers entirely. The paper investigates the potential applications of LLMs and ChatGPT in various areas, such as code creation, code documentation, bug detection, refactoring, and more. The paper also suggests that the usage of LLMs and ChatGPT is expected to increase in the future as they offer unparalleled benefits to the programming community.

Software Development Research, Large Language Models, GPT, Codex, ChatGPT, Code Generation

## 1 Introduction

The role of the human programmer has been vital throughout the development of machine computing. Initially, programmers had to manually code instructions for computers to follow. This was a time-consuming and error-prone process. As computers became more powerful, high-level programming languages were developed that made it easier to write code. This resulted in the creation of the first code generators. Over the years, additional tools and subsystems have been created to assist human programmers, leading to design patterns, several development methodologies, high-level debuggers, intelligent editors, and code generators. However, the human programmer remained responsible for creating and maintaining the software. In recent years, there has been a growing interest in the democratization of software development. Two main streams are the no-code paradigms (based on model-driven-development methods) and AI-generated code. In this paper we investigate the applications of AI in assisting various software engineering tasks. Machine learning and artificial intelligence algorithms have been developed in a way that they can analyze large datasets of source code and generate source code that resembles human-written code. While such approaches have the potential to make programming more accessible and efficient, they are not yet advanced enough to replace the human programmer entirely. One example of a widely used AI system for software development, available since 2021, is Copilot, an AI-powered pair-programming tool integrated in GitHub [Bird et Al. 2022][Zaremba et Al. 2021]. Copilot quickly got attention from a large number of developers, and it has been integrated in several software development tools. In this paper we investigate the recent research on LLMs and their capabilities in supporting the software development process in different ways [Gozalo-Brizuela et Al. 2023]. We show here several tests done with ChatGPT [OpenAI 2022] and other related research results available in existing literature, with the purpose of inspiring and giving ideas for further work with AI in the scope of software development in the scientific community [van Dis 2023]. Although most of the examples here are done with ChatGPT, they are not limited to this LLM.
Several other language models are available, sharing some characteristics and/or providing some original ones that could be exploited for achieving specific software development tasks or as support for other domains. The use of LLMs for programming, source code analysis, and code generation is a revolutionary development in the field of computer science. Progress in this area has shown numerous innovations that were unthinkable a few years ago by a community of researchers looking at the limitations of deep neural networks. For instance, one example of that can be found in the opinions of AI practitioners such as Chollet, the creator of the popular deep learning library Keras. As a limitation of deep learning, Chollet writes, _"...many more applications are completely out of reach for current deep learning techniques -- even given vast amounts of human annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands -- even millions -- of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many..."_[Chollet 2017]. The current developments and the results shown in this white paper demonstrate that such claims were too bold, and that new deep learning architectures like the Transformer open new territories for deep learning also in discrete spaces like language or code models.

## 2 Model Training

### Background

On March 15, 2022, OpenAI unveiled the latest iterations of their groundbreaking GPT-3 and Codex models in their API. Dubbed "text-davinci-002" and "code-davinci-002", these models revealed impressive new capabilities for editing and insertion, surpassing the previous versions in terms of power and versatility. The models had been trained on an extensive dataset up until June 2021, ensuring that they were at the cutting edge of language and coding AI. OpenAI continued to push the boundaries of AI language models with the introduction of a new series on November 30, 2022: "GPT-3.5". This series included the aforementioned text and code models, which were further fine-tuned for even greater performance. In addition, OpenAI introduced ChatGPT, a model that was specifically fine-tuned from a GPT-3.5 variant. With this new addition to their product line, OpenAI cemented their position as leaders in the field of natural language processing and artificial intelligence.

### Codex

The Codex models are descendants of GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. Codex is most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. Codex training, fine-tuning, and accuracy were described in detail in the article "Evaluating Large Language Models Trained on Code" [Chen et Al. 2022]. At the time of writing this paper [March 2023], the [Chen et Al. 2022] article is the only available source that traces the methods with which ChatGPT has been developed. GPT-Neo, GPT-J, and GPT-3 are among the most advanced autoregressive language models available today, utilizing deep learning algorithms to generate text that closely resembles human writing.
These models work by taking an initial text prompt and then continuing the text in a manner that is coherent and contextually relevant. However, the capabilities of these models are not limited to generating natural language text. Researchers have found that GPT-3 has the ability to generate simple programs from text strings, even though it was not specifically trained to do so. This observation has led to the hypothesis that a specialized variant of GPT-3 called Codex would excel at a variety of coding tasks, given its enhanced capabilities for generating code. As the field of artificial intelligence continues to evolve, it is clear that models like Codex will play an increasingly important role in shaping the future of natural language processing and machine learning.

### Method

Codex is a powerful AI language model that was specifically designed for coding tasks. It is pre-trained on an enormous amount of raw data, including 54 million GitHub repositories and 179 GB of Python files, each under 1 MB in size. To ensure the quality of the training data, OpenAI filtered out auto-generated code based on criteria such as average and maximum line length and a small share of alphabetic characters. After filtering, the clean data set consisted of 159 GB of Python files. Codex is trained with a variable number of parameters, ranging from 300 million to 12 billion, depending on the specific use case. The vast amount of raw data and the careful filtering process have enabled Codex to develop an advanced understanding of the structure and syntax of code in various programming languages, making it an incredibly powerful tool for a wide range of coding tasks. Codex has taken a different approach to evaluating the quality of its output: unlike other language models that use match-based metrics such as the BLEU score to measure natural language quality, functional correctness is used to measure the quality of Codex's output, as it is a more effective measure in the context of code. Unlike natural language, code has a different structure with a limited, precise vocabulary. The BLEU score has problems capturing the semantics of code, as it is not well-suited to code's diverse and ambiguous vocabulary. Codex, instead, uses functional correctness, which is what human developers use to judge the quality of code. The idea behind this approach is that software requirements should be converted into test cases before implementation begins, and success is determined by a program that passes these tests. While not all organizations employ test-driven development, integration of new code typically depends on creating and passing unit tests, making functional correctness a valuable metric for evaluating the quality of Codex's output.

### Training

Codex was trained using a similar approach to the GPT model. The model was trained with a linear warmup of 175 steps and cosine learning rate decay, using the Adam optimizer, on a total of 100 billion tokens. This training process allowed Codex to learn the complex patterns and structures of code by adjusting its parameters to optimize performance on a wide range of coding tasks. The approach used in training Codex reflects the significant computational resources and careful consideration required to build such a powerful AI language model, capable of producing high-quality code for a variety of applications.
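To make the functional-correctness metric concrete, the following sketch implements the unbiased pass@\(k\) estimator described in [Chen et Al. 2022] (the numerically stable product form; the code itself is our own illustration):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k: the probability that at least one of k samples,
    # drawn without replacement from n generations of which c are correct,
    # passes the unit tests. Equals 1 - C(n - c, k) / C(n, k).
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: a correct one is always drawn
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 100 samples per problem, 29 of them pass the unit tests.
print(pass_at_k(n=100, c=29, k=1))    # ~0.29
print(pass_at_k(n=100, c=29, k=100))  # 1.0: some sampled program passes
```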
Figure 1: The evaluation method of Codex is most probably the same method used to evaluate ChatGPT.

### _Evaluation_

To evaluate the functional correctness of Codex, a set of 164 programming problems was used, called the HumanEval dataset. Each problem included a function signature, docstring, body, and multiple unit tests, with an average of 7.7 tests per problem. The tasks were carefully hand-written to assess language comprehension, reasoning, algorithms, and basic mathematics. It was essential for these tasks to be manually created because Codex was trained on a large portion of GitHub, which already contained solutions to problems from various sources. For example, there were more than ten public repositories containing solutions to Codeforces problems, which were part of the recently proposed APPS dataset. Therefore, it was crucial to use a unique and carefully constructed dataset to evaluate the functional correctness of Codex accurately. The HumanEval dataset served as an effective tool to evaluate the model's ability to perform a diverse range of coding tasks, providing insights into its strengths and weaknesses. To evaluate the performance of Codex on programming problems, multiple solution samples were generated from the models and tested to see if they passed the unit tests. With just a single sample, the 12B parameter Codex solves 28.8% of these problems, and the 300M parameter Codex solves 13.2% of these problems. In comparison, the 6B parameter GPT-J only achieves 11.4% on the same dataset, and all GPT models achieve near 0%. To further improve Codex's ability to synthesize functions from docstrings, the model was fine-tuned on standalone, correctly implemented functions, resulting in Codex-S, which can solve 37.7% of problems with a single sample. In real-world programming tasks, multiple approaches and bug fixes are often required; this is approximated by generating many samples from the models and selecting one that passes all unit tests. Codex-S was able to generate at least one correct function within 100 samples for 77.5% of the problems, suggesting that accurate code samples can be selected via heuristic ranking. In fact, the sample with the highest mean log-probability passes unit tests for 44.5% of the problems.

## 3 Applications

Research on the applications of LLMs for software development has shown a myriad of possibilities. These include code generation, documentation generation, bug detection, code smell detection, refactoring of existing code, translating one programming language to another, test case generation, debugging, code clone detection, program synthesis, and code summarization, to name a few. In this paper, we focus on the following applications of LLMs on source code. We use ChatGPT for the purpose of our investigation and provide supporting examples for each such application.
* Code smells detection and refactoring
* Code generation
* Step-by-step execution of source code
* Bug fixing
* Code comprehension improving
* Bad coding practices detection and refactoring
* Code analysis and summarization

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{pass@\(k\)} \\ & \(k=1\) & \(k=10\) & \(k=100\) \\ \hline GPT-Neo 125M & 0.75\% & 1.88\% & 2.97\% \\ GPT-Neo 1.3B & 4.79\% & 7.47\% & 16.30\% \\ GPT-Neo 2.7B & 6.41\% & 11.27\% & 21.37\% \\ GPT-J 6B & 11.62\% & 15.74\% & 27.74\% \\ \hline TabNine & 2.58\% & 4.35\% & 7.59\% \\ \hline Codex-12M & 2.00\% & 3.62\% & 8.58\% \\ Codex-25M & 3.21\% & 7.1\% & 12.89\% \\ Codex-42M & 5.06\% & 8.8\% & 15.55\% \\ Codex-85M & 8.22\% & 12.81\% & 22.4\% \\ Codex-300M & 13.17\% & 20.37\% & 36.27\% \\ Codex-679M & 16.22\% & 25.7\% & 40.95\% \\ Codex-2.5B & 21.36\% & 35.42\% & 59.5\% \\ Codex-12B & 28.81\% & 46.81\% & 72.31\% \\ \hline \hline \end{tabular} \end{table} Table 1: Codex, GPT-Neo, & TabNine evaluations for HumanEval.

For a more comprehensive introduction on LLMs in the scope of this paper, refer to [Joublin et Al. 2023].

### _Refactoring_

Code smells are specific patterns or characteristics in code that indicate the presence of a deeper problem. These problems may include design flaws, inefficiencies, or issues with maintainability or readability. Code smells do not necessarily indicate a bug or error in the code, but they can make it harder to understand, modify, and maintain. Common examples of code smells include duplicate code, long methods, unused variables, and overly complex class hierarchies. Identifying and addressing code smells is an important part of the software development process, as it can improve code quality, reduce bugs, and make it easier to maintain and extend the codebase over time. Code smells are indicators that the code may benefit from refactoring to improve its quality.

#### 3.1.1 Bloaters

These smells refer to code that has grown too large and complex, making it difficult to maintain, understand, and modify. There are five types of Bloaters code smells:

3.1.1.1 Long parameter list. More than three or four parameters in a method call.

```
PROMPT: this function suffers from long parameters list, can you fix it:

def calculate_pizza_price(size, dough_type, toppings, delivery):
    base_price = get_base_price(size)
    dough_price = get_dough_price(dough_type)
    toppings_price = get_toppings_price(toppings)
    delivery_price = get_delivery_price(delivery)
    return base_price + dough_price + toppings_price + delivery_price

pizza_price = calculate_pizza_price("large", "thick", ["mushrooms", "pepperoni"], True)
print(pizza_price)
```

ChatGPT: Yes, we can fix the issue of a long parameter list in the calculate_pizza_price function by using a dictionary to pass in the arguments instead:

```
def calculate_pizza_price(pizza):
    base_price = get_base_price(pizza["size"])
    dough_price = get_dough_price(pizza["dough_type"])
    toppings_price = get_toppings_price(pizza["toppings"])
    delivery_price = get_delivery_price(pizza["delivery"])
    return base_price + dough_price + toppings_price + delivery_price
```

#### 3.1.1.2 Long method

A long method is hard to understand. The key to making it easy to understand is not the method length but naming. You should not have to read the actual implementation to understand what a method does.

```
PROMPT: this function is long, can you fix it:

def bake_and_deliver_pizza(size, toppings, dough_type, name, address):
    pizza = make_pizza_with_ingredients(size, toppings, dough_type)
    if not pizza:
        return "Error: Pizza could not be made"
    else:
        bake_pizza(pizza)
        if not check_doneness(pizza):
            return "Error: Pizza not done"
        else:
            package_pizza(pizza)
            delivery_status = deliver_pizza(name, address, pizza)
            if delivery_status:
                return f"Pizza delivered to {name} at {address}."
else: return"Error:Pizzacouldnotbedelivered." ChatGPT: Yes,thebake_and_deliver_pizzafunctionviolatestheSingleResponsibilityPrinciplebecauseitisresponsibleformultiple tasks: Makingthepizza Rakingthemizs Checkingifthepizsdon Packagingthepizza Deliveringthepizs Included,theseresponsibilitiesshouldbeseparatedintodifferentfunctionsorclasseswitheachhavingasing #### 3.1.1.3 Large class Large classes are hard to read, understand, and maintain. ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` ``` #### 3.1.1.4 Data Clumps Two or more data fields are frequently used together in a class or structure and should be grouped together. ``` PROWT: this code suffers from data clumps, can you fix it! class PizzoxOrder: def calculate_pizza_price(size, down_type, topping, delivery_location, delivery_time); base_price = get_base_price(size) down_price = get_dough_price(down_type) topping_price = get_dough_price(topping) defivery_price = get_delivery_price(delivery_location, delivery_time) return base_price = dot_dobj_price + topping_price + delivery_price size = "large" dough_type = "thick" toppings = "["unknown", "popperoni"] delivery_location = "New York" delivery_time = "evening" primez_price = calculate_pizza_price(size, down_type, toppings, delivery_location, delivery_time) print(gizza_price) ``` **ChatGPT:** Yes, in this code, the calculate_pizza_price function takes 5 parameters size, down_type, toppings, delivery_location, and delivery_time. These parameters are all related to the pizza order and they are often passed together in the same function calls. This is an example of data clumps. One possible solution is to group related data into a single data structure such as a dictionary or a class. Here is an example of how to refracture the code using a dictionary. in this example, we create a dictionary pizza that contains all the relevant data for calculating the pizza price. The calculate_pizza_price function takes in the pizza dictionary as argument and uses the dictionary keys to retrieve the relevant data. This way, we avoid passing multiple arguments to the function, which can be cumbersome and prone to errors. By using a dictionary, we have grouped related data together and made the code more organized and easier to read. 
```
class PizzaOrder:
    def calculate_pizza_price(pizza):
        base_price = get_base_price(pizza["size"])
        dough_price = get_dough_price(pizza["dough_type"])
        toppings_price = get_toppings_price(pizza["toppings"])
        delivery_price = get_delivery_price(pizza["delivery_location"], pizza["delivery_time"])
        return base_price + dough_price + toppings_price + delivery_price

pizza = {
    "size": "large",
    "dough_type": "thick",
    "toppings": ["mushrooms", "pepperoni"],
    "delivery_location": "New York",
    "delivery_time": "evening",
}
pizza_price = calculate_pizza_price(pizza)
print(pizza_price)
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. ChatGPT provides one good possible solution by encapsulating the pizza parameters into a dictionary structure.

#### 3.1.1.5 Primitive Obsession

Primitive refers to primitive data types. Obsession is using these data types instead of objects/classes to represent domain ideas.

```
PROMPT: this code suffers from primitive obsession, can you fix it?

def calculate_pizza_price(size, dough_type, toppings):
    base_price = get_base_price(size)
    dough_price = get_dough_price(dough_type)
    toppings_price = get_toppings_price(toppings)
    return base_price + dough_price + toppings_price

size = "large"
dough_type = "thick"
toppings = "mushrooms, pepperoni"
pizza_price = calculate_pizza_price(size, dough_type, toppings)
print(pizza_price)
```

ChatGPT: Yes, in this code, the calculate_pizza_price function takes 3 parameters of string type that are all related to the pizza order. The problem with this approach is that the function takes strings as input, which can be misused; for example, the size can be written in lowercase letters, the dough type can be written in plural form, and the toppings are written as a single string that can come in different formats. The suggested fix begins by introducing enumerations and typed lists:

```
from enum import Enum
from typing import List
```

#### 3.1.2 Couplers

These smells refer to a set of bad coding practices that involve classes that are highly dependent on each other. Coupling refers to the degree of interdependence between two classes, and it is desirable to keep coupling low, as high coupling can make the system harder to understand, maintain, and modify. Couplers code smells are a type of design problem that indicates a high level of coupling between classes. To avoid couplers code smells, it is important to keep classes loosely coupled and to use design patterns that promote low coupling, such as the Dependency Injection pattern. These smells include:

3.1.2.1 Inappropriate Intimacy. Classes have too much knowledge of each other's internal workings, and changes in one class require changes in another class. For example, a method in one class frequently accesses the methods of another class.
```
PROMPT: this code suffers from inappropriate intimacy, can you fix it?

class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        return self.pizza.calculate_price() * (self.pizza.dough_type == "thick")

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT: Yes, the code suffers from inappropriate intimacy because the Order class is directly accessing the Pizza class's dough_type attribute, which breaks the principle of encapsulation. To fix this, we can add a method to Pizza to return the dough_type value and remove the direct access to the attribute in Order. Now, the Order class is using the get_dough_type method of the Pizza class to retrieve the dough_type value instead of directly accessing the attribute. This way, we maintain encapsulation and avoid the inappropriate intimacy code smell.

```
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

    def get_dough_type(self):
        return self.dough_type

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        return self.pizza.calculate_price() * (self.pizza.get_dough_type() == "thick")

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. ChatGPT provides one good possible solution by adding a get_dough_type method that decouples the pizza and the order classes.

#### 3.1.2.2 Feature Envy

A method in one class frequently accesses the data of another class.

```
PROMPT: this code suffers from feature envy, can you fix it?

class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        return self.pizza.calculate_price() + self.pizza.dough_type == "thick"

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT: Yes, in this code, the Order class has a method calculate_price that calculates the total price of the order, which involves calling the calculate_price method of the Pizza class and then adding the delivery price. The most straightforward solution is to add an is_thick method to the Pizza class that returns a boolean indicating whether the pizza's dough type is thick or not, and then modify the Order class to call this method instead of directly accessing the dough_type member variable.
Here's an example:

```
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

    def is_thick(self):
        return self.dough_type == "thick"

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        if self.pizza.is_thick():
            return self.pizza.calculate_price() + 2.0
        else:
            return self.pizza.calculate_price()

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. ChatGPT provides one good possible solution by adding an is_thick method that decouples the pizza and the order classes.

#### 3.1.2.3 Message Chains

A series of method calls are chained together in a single statement.

```
PROMPT: this code suffers from message chains, can you fix it?

class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        return self.pizza.calculate_price() + self.pizza.dough_type == "thick"

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT: Yes, there is a message chain issue in this code. A message chain occurs when an object sends a message to another object, which then sends a message to yet another object, and so on, resulting in a long chain of method calls. In this case, the Order class is calling the calculate_price method of the Pizza class twice in the calculate_price method of the Order class, which creates a message chain. This could make the code harder to read and understand, and could also lead to performance issues if the chain involves many objects or method calls. To fix this issue, you could simply call the calculate_price method of the Pizza class once and store the result in a variable, and then use that variable to calculate the final order price. Here's an example:

```
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def calculate_price(self):
        pizza_price = self.pizza.calculate_price()
        return pizza_price + pizza_price * 0.1

pizza = Pizza("large", "thick")
order = Order(pizza)
order_price = order.calculate_price()
print(order_price)
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. ChatGPT provides one good possible solution by calling the calculate_price method of the Pizza class once and storing the result in a pizza_price variable.

#### 3.1.2.4 Middle-Man

A class or method acts as an intermediary between two other classes but adds little or no value to the overall design.

```
PROMPT: this code suffers from Middle-Man, can you fix it?
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def get_price(self):
        return self.pizza.calculate_price()

class PizzaShop:
    def __init__(self, order):
        self.order = order

    def get_order_price(self):
        return self.order.get_price()

pizza = Pizza("large", "thick")
order = Order(pizza)
pizza_shop = PizzaShop(order)
```

#### 3.1.3 Change preventers

These smells refer to code structures that can prevent or impede future changes to the software. Change preventers make it difficult to modify or extend the software, leading to decreased maintainability, increased technical debt, and higher costs. Examples of change preventers include hardcoding values, using global variables, relying on specific implementations rather than abstractions, and implementing complex conditional statements that are difficult to modify. These code smells are important to identify and address during software development to ensure the software can adapt to changing requirements and remain maintainable over time.

3.1.3.1 Shotgun Surgery. A single change is made to multiple classes simultaneously.

```
PROMPT: this code suffers from Shotgun Surgery, can you fix it?

class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        return base_price + dough_price

class Order:
    def __init__(self, pizza):
        self.pizza = pizza

    def change_pizza(self, size=None, dough_type=None):
        if size:
            self.pizza.size = size
        if dough_type:
            self.pizza.dough_type = dough_type
        self.pizza.calculate_price()

pizza = Pizza("large", "thick")
order = Order(pizza)
order.change_pizza(size="medium")
order_price = order.pizza.calculate_price()
print(order_price)
```

ChatGPT: There is a Shotgun Surgery issue in this code. Shotgun Surgery is a code smell where making a change requires modifying multiple classes because the responsibility for a particular behavior is spread across multiple classes. In this case, if we want to make a change to the way pizza sizes are handled (e.g., if we want to add additional size options), we would need to modify both the Pizza and Order classes. Specifically, we would need to add new size options to the Pizza class, and then update the change_pizza method of the Order class to handle the new size options. To fix this issue, we could move the responsibility for handling pizza sizes entirely into the Pizza class, and eliminate the change_pizza method from the Order class. Here's an example:

```
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type
        self.base_price = get_base_price(self.size)

    def calculate_price(self):
        dough_price = get_dough_price(self.dough_type)
        return self.base_price + dough_price

    def set_size(self, size):
        self.size = size
        self.base_price = get_base_price(self.size)

pizza = Pizza("large", "thick")
order = Order(pizza)
pizza.set_size("medium")
order_price = order.pizza.calculate_price()
print(order_price)
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. ChatGPT provides one good possible solution by moving the responsibility for handling pizza sizes entirely into the Pizza class.

#### 3.1.3.2 Divergent Change

Many changes are made to a single class.
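As an illustration of this smell (our own sketch in the style of the section's pizza examples, not ChatGPT output): a class that must change for several unrelated reasons can be split so that each resulting class changes for one reason only:

```
# A single class that changes for unrelated reasons (pricing rules,
# storage format, delivery logic) exhibits divergent change.
class PizzaBefore:
    def calculate_price(self): ...
    def save_to_database(self): ...
    def schedule_delivery(self): ...

# Splitting the responsibilities localizes each future change:
class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

class PizzaPricer:
    def calculate_price(self, pizza): ...

class PizzaRepository:
    def save(self, pizza): ...

class DeliveryScheduler:
    def schedule(self, pizza): ...
```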
#### 3.1.3.3 Parallel Inheritance Hierarchies

Parallel Inheritance Hierarchies is a code smell in object-oriented programming where two or more classes or interfaces share common behavior or properties but are implemented in separate inheritance hierarchies.

```
PROMPT: This code suffers from Parallel Inheritance Hierarchies, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size

class ToppingDecorator(Pizza):
    def __init__(self, pizza, topping):
        super().__init__(pizza.size)
        self.pizza = pizza
        self.topping = topping

    def get_price(self):
        return self.pizza.get_price() + get_topping_price(self.topping)

class CrustDecorator(Pizza):
    def __init__(self, pizza, crust_type):
        super().__init__(pizza.size)
        self.pizza = pizza
        self.crust_type = crust_type

    def get_price(self):
        return self.pizza.get_price() + get_crust_price(self.crust_type)

pizza = Pizza("large")
pizza = ToppingDecorator(pizza, "mushrooms")
pizza = CrustDecorator(pizza, "stuffed")
```
Here is an example of how the Pizza class can be refactored to add more functionality: class %pizza def _init_(self, size, topopings): self.size = size self.topping = toppings %pef add,topping(self, topping): self.topping.append(topping) def remove_topping(self, topping): self.topping.remove(topping) def get_price(self) base_price = get_base_price(self.size) topping_price = sum(get_topping,price(topping) for topping in self.topings) return base_price = topping_price ``` ``` def calculate_discounted_price(self, discount): base_price = get_base_price(self.size) topping_price = sum(get_topping_price(topping) for topping in self.topings) return base_price = topping_price (1- discount) pizza = Pizza("large", ["mushrooms"]) pizza.add_topping("clives) pizza.get_price() print(pizza.calculate_discounted_price(0.2)) ``` In this refactored version of the Pizza class, two methods have been added: add_topping and remove_topping, which allow toppings to be added or removed from the pizza. Additionally, a calculate_discounted_price method has been added to calculate the price of the pizza after applying a discount. These changes eliminate the "Lazy" aspect of the original class, making it more useful and functional ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. The pizza class doesn't have any meaningful behavior. Thus, ChatGPT provides one good possible solution by creating four methods. Another solution was to eleminate the class if it is not need and introduce an object. #### 3.1.4.2 Data class A data class refers to a class that contains only fields for accessing them and does not have any behavior. ``` PROMPT: This code suffers from Data Class, can you fix it! class pizza: def__init_(self, size, toppings, price); self.size= size self.toppings=topings self.price=price def__str_(self): returnPsize:(self.size) toppings:(self.toppings) price:(self.price)" pizza = Pizza("large", ["unknowns"], 15.99) print(pizza) ``` ``` CARTGPT: Yes, in this code, the Pizza class only has attributes that represent the size, toppings, and price of a pizza, and a method that returns a string representation of the pizza. while there is nothing inherently wrong with creating a data class, it may be more appropriate to create a single data structure, like a named tuple or a dictionary, to represent the pizza. Alternatively, if the class needs to have behavior or methods, then it should be designed accordingly. To fix this, we removed the price parameter from the constructor, and added a calculate_price method that calculates the price of the pizza based on its size and toppings. We also added an apply_discount method that takes a discount as input and calculates the discounted price of the pizza. The __str_ method still exists, but now it calls calculate_price to display the price, ensuring that the displayed price is always up-to-date. By adding behavior to the class, we have eliminated the data class code smell. 
#### 3.1.4.2 Data class

A data class refers to a class that contains only fields and accessors for them and does not have any behavior.

```
PROMPT: This code suffers from Data Class, can you fix it?

class Pizza:
    def __init__(self, size, toppings, price):
        self.size = size
        self.toppings = toppings
        self.price = price

    def __str__(self):
        return f"size: {self.size} toppings: {self.toppings} price: {self.price}"

pizza = Pizza("large", ["mushrooms"], 15.99)
print(pizza)
```

ChatGPT: Yes, in this code, the Pizza class only has attributes that represent the size, toppings, and price of a pizza, and a method that returns a string representation of the pizza. While there is nothing inherently wrong with creating a data class, it may be more appropriate to create a simple data structure, like a named tuple or a dictionary, to represent the pizza. Alternatively, if the class needs to have behavior or methods, then it should be designed accordingly. To fix this, we removed the price parameter from the constructor, and added a calculate_price method that calculates the price of the pizza based on its size and toppings. We also added an apply_discount method that takes a discount as input and calculates the discounted price of the pizza. The __str__ method still exists, but now it calls calculate_price to display the price, ensuring that the displayed price is always up-to-date. By adding behavior to the class, we have eliminated the data class code smell.

```
class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

    def calculate_price(self):
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return base_price + toppings_price

    def apply_discount(self, discount):
        self.price = self.calculate_price() * (1 - discount)

    def __str__(self):
        return f"size: {self.size} toppings: {self.toppings} price: {self.calculate_price()}"
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. The Pizza class doesn't have any meaningful behavior. ChatGPT provides one possible fix: it removed the price parameter from the constructor and added a calculate_price method that calculates the price of the pizza.
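The alternative ChatGPT mentions, representing a behavior-less pizza as a plain data structure, can be sketched with a named tuple (an illustration of that suggestion, not part of the transcript):

```
from collections import namedtuple

# A plain data structure is enough when no behavior is needed.
PizzaData = namedtuple("PizzaData", ["size", "toppings", "price"])

pizza = PizzaData("large", ["mushrooms"], 15.99)
print(pizza.size, pizza.price)
```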
#### 3.1.4.3 Duplicate code

Code that is repeated multiple times within a program or across different programs.

```
PROMPT: This code suffers from Duplicate code, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size
        self.toppings = []

    def add_topping(self, topping):
        self.toppings.append(topping)

    def remove_topping(self, topping):
        self.toppings.remove(topping)

    def get_price(self):
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return base_price + toppings_price

    def calculate_discounted_price(self, discount):
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return (base_price + toppings_price) * (1 - discount)

pizza = Pizza("large")
pizza.add_topping("mushrooms")
```
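The canonical fix for this smell is to extract the duplicated price computation into a single helper method. A minimal sketch (not ChatGPT's answer), with stubbed pricing helpers so it runs standalone and a hypothetical helper name _full_price:

```
def get_base_price(size):
    # Stubbed prices, assumed for illustration only.
    return {"small": 5, "medium": 7, "large": 10}[size]

def get_topping_price(topping):
    return 1.5

class Pizza:
    def __init__(self, size):
        self.size = size
        self.toppings = []

    def add_topping(self, topping):
        self.toppings.append(topping)

    def _full_price(self):
        # The previously duplicated computation now lives in one place.
        return get_base_price(self.size) + sum(get_topping_price(t) for t in self.toppings)

    def get_price(self):
        return self._full_price()

    def calculate_discounted_price(self, discount):
        return self._full_price() * (1 - discount)

pizza = Pizza("large")
pizza.add_topping("mushrooms")
print(pizza.get_price())                      # 11.5
print(pizza.calculate_discounted_price(0.2))  # 9.2
```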
#### 3.1.4.4 Dead code

Code that is never executed during the runtime of a program.

```
PROMPT: This code suffers from Dead code, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size
        self.toppings = []

    def add_topping(self, topping):
        self.toppings.append(topping)

    def remove_topping(self, topping):
        self.toppings.remove(topping)

    def get_price(self):
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return base_price + toppings_price

    def calculate_discounted_price(self, discount):
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return (base_price + toppings_price) * (1 - discount)

pizza = Pizza("large")
pizza.add_topping("mushrooms")
print(pizza.get_price())
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. The calculate_discounted_price method is never called anywhere in the program. Thus, ChatGPT provides one good possible solution by removing the unnecessary method.

#### 3.1.4.5 Speculative generality

Coding a feature for a future intended use in a way that is not yet known.

```
PROMPT: This code suffers from Speculative Generality, can you fix it?

class Pizza:
    def __init__(self, size, dough_type, toppings):
        self.size = size
        self.dough_type = dough_type
        self.toppings = toppings

    def calculate_price(self):
        base_price = get_base_price(self.size)
        dough_price = get_dough_price(self.dough_type)
        toppings_price = get_toppings_price(self.toppings)
        return base_price + dough_price + toppings_price

    def prepare_for_future_delivery(self, delivery_type):
        raise NotImplementedError("Delivery type not implemented yet")
```

ChatGPT: Yes, in this code, the prepare_for_future_delivery method in the Pizza class is an example of the speculative generality code smell. This method is not currently being used and is not needed in the Pizza class. It is a speculative feature that has been added to the class in case it is needed in the future. This can add unnecessary complexity and clutter to the code. It's better to only add features that are currently needed and extend the code later if new features are required.
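The fix ChatGPT describes amounts to deleting the speculative method. A sketch of the trimmed class, with the pricing helpers stubbed so the snippet is self-contained (the stub prices are assumptions):

```
def get_base_price(size):
    return {"small": 5, "medium": 7, "large": 10}[size]

def get_dough_price(dough_type):
    return {"thin": 0, "thick": 1}[dough_type]

def get_toppings_price(toppings):
    return 1.5 * len(toppings)

class Pizza:
    # prepare_for_future_delivery is gone; it can be added once delivery exists.
    def __init__(self, size, dough_type, toppings):
        self.size = size
        self.dough_type = dough_type
        self.toppings = toppings

    def calculate_price(self):
        return (get_base_price(self.size)
                + get_dough_price(self.dough_type)
                + get_toppings_price(self.toppings))

print(Pizza("large", "thin", ["mushrooms"]).calculate_price())  # 11.5
```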
#### 3.1.4.6 Unnecessary comment

A method is filled with explanatory comments.

```
PROMPT: This code suffers from Unnecessary Comment, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size
        self.toppings = []

    def add_topping(self, topping):
        """Add topping to the pizza."""
        self.toppings.append(topping)

    def remove_topping(self, topping):
        """Remove topping from the pizza."""
        self.toppings.remove(topping)

    def get_price(self):
        """Calculate the price of the pizza based on its size and toppings."""
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return base_price + toppings_price

pizza = Pizza("large")
pizza.add_topping("mushrooms")
print(pizza.get_price())
```

ChatGPT: Yes, in this code, the comments provided for the add_topping and remove_topping methods are redundant, as they simply repeat what the code already does. The method names "add_topping" and "remove_topping" are clear and self-explanatory, making the comments redundant.

```
class Pizza:
    def __init__(self, size):
        self.size = size
        self.toppings = []

    def add_topping(self, topping):
        self.toppings.append(topping)

    def remove_topping(self, topping):
        self.toppings.remove(topping)

    def get_price(self):
        """Calculate the price of the pizza based on its size and toppings."""
        base_price = get_base_price(self.size)
        toppings_price = sum(get_topping_price(topping) for topping in self.toppings)
        return base_price + toppings_price

pizza = Pizza("large")
pizza.add_topping("mushrooms")
print(pizza.get_price())
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. Thus, ChatGPT provides one good possible solution by deleting the unnecessary comments.

#### 3.1.5 Object Oriented Abusers

These smells refer to a tendency to overuse object-oriented features, such as inheritance, polymorphism, and encapsulation, without fully understanding their implications or benefits. Object-Oriented Abusers can make code harder to understand, maintain, and evolve. They can also lead to performance issues and create unnecessary dependencies between components. To address this smell, developers can use refactoring techniques such as simplifying class hierarchies, using composition instead of inheritance, removing unused abstractions, and simplifying complex code.

#### 3.1.5.1 Alternative classes with different interfaces

Providing multiple classes that serve the same purpose but have different ways of interacting with them.

```
PROMPT: This code suffers from alternative classes with different interfaces, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size

    def get_price(self):
        return get_price(self.size)

class PizzaDelivery(Pizza):
    def deliver(self):
        delivery_fees = calculate_delivery_fees(self.size)
        send_to_delivery(delivery_fees)

class PizzaTakeaway(Pizza):
    def pick_up(self):
        pick_up_location = get_pickup_location(self.size)
        go_to_pickup_location(pick_up_location)

pizza_delivery = PizzaDelivery("large")
pizza_delivery.get_price()
pizza_delivery.deliver()
pizza_takeaway = PizzaTakeaway("large")
pizza_takeaway.get_price()
pizza_takeaway.pick_up()
```

ChatGPT: Yes, in this code, the PizzaDelivery and PizzaTakeaway classes both serve the same purpose, getting a pizza to the customer, but expose it through different interfaces.
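A common remedy for this smell is to give both classes one shared method name so they can be used interchangeably. A minimal sketch of that idea (the unified name hand_over is hypothetical, and the delivery and pick-up helpers are replaced with prints so the snippet runs):

```
class Pizza:
    def __init__(self, size):
        self.size = size

class DeliveryPizza(Pizza):
    def hand_over(self):
        print(f"Delivering a {self.size} pizza")

class TakeawayPizza(Pizza):
    def hand_over(self):
        print(f"Ready for pick-up: a {self.size} pizza")

for pizza in (DeliveryPizza("large"), TakeawayPizza("small")):
    pizza.hand_over()  # callers no longer care which subclass they hold
```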
#### 3.1.5.2 Switch statement

When a switch statement becomes too large and complex, with many branches and cases, making it difficult to understand and maintain.

```
PROMPT: This code suffers from Switch Statement, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size

    def calculate_price(self):
        if self.size == "small":
            return 5
        elif self.size == "medium":
            return 7
        elif self.size == "large":
            return 10
        else:
            return "Invalid size"

pizza = Pizza("large")
print(pizza.calculate_price())
```

ChatGPT: Yes, in this code, the calculate_price method uses a series of if and elif statements to determine the price of the pizza based on its size. This creates a hard-coded dependency between the size and the price, and it can make it difficult to add new sizes or change the pricing in the future. Here's one way to fix it, using a dictionary to map pizza sizes to their prices. This way, instead of using a series of if/elif statements, the code uses a dictionary to look up the price based on the pizza size.

```
class Pizza:
    def __init__(self, size):
        self.size = size
        self.prices = {"small": 5, "medium": 7, "large": 10}

    def calculate_price(self):
        if self.size in self.prices:
            return self.prices[self.size]
        else:
            return "Invalid size"

pizza = Pizza("large")
print(pizza.calculate_price())
```

#### 3.1.5.3 Temporary field

Temporary fields get their values (and thus are needed by objects) only under certain circumstances. Outside of these circumstances, they're empty.

```
PROMPT: This code suffers from Temporary field, can you fix it?

class Pizza:
    def __init__(self, size):
        self.size = size
        self.temp = None

    def prepare(self):
        self.temp = get_oven_temperature(self.size)
        prepare_dough(self.size)
        add_toppings(self.size, self.temp)
        self.temp = None

pizza = Pizza("large")
pizza.prepare()
```

ChatGPT: Yes, this code suffers from the Temporary Field code smell. The temp variable is only used temporarily during the prepare method and serves no other purpose outside of it. It is set to None after its usage, indicating that it is not needed beyond the method. A better approach would be to eliminate the temporary variable by passing the required data to the functions that need it directly. This would simplify the code and remove the need for a temporary field. Here's an example of how the code could be refactored to eliminate the temporary field:

```
class Pizza:
    def __init__(self, size):
        self.size = size

    def prepare(self):
        oven_temp = get_oven_temperature(self.size)
        prepare_dough(self.size)
        add_toppings(self.size, oven_temp)

pizza = Pizza("large")
pizza.prepare()
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. The temp variable is only used temporarily during the prepare method and serves no other purpose outside of it. Thus, ChatGPT provides one good possible solution by eliminating the temporary variable and passing the required data to the functions directly.

#### 3.1.5.4 Refused Bequest

Refused Bequest occurs when a subclass does not need all the methods or fields inherited from its superclass, but does not override or remove them.

```
PROMPT: This code suffers from Refused Bequest, can you fix it?

class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def get_price(self):
        return get_price(self.size)

class TunaPizza:
    def __init__(self, size, dough_type, toppings):
        self.size = size
        self.dough_type = dough_type
        self.toppings = toppings

    def get_price(self):
        base_price = get_price(self.size)
        return base_price + len(self.toppings) * 2

    def get_toppings(self):
        return self.toppings

tuna_pizza = TunaPizza("large", "thick", ["tuna", "olives", "onions"])
print(tuna_pizza.get_price())
print(tuna_pizza.get_toppings())
```

ChatGPT: Yes, in this code, the TunaPizza class does not inherit from the Pizza class, but instead replicates some of its functionality. This leads to code duplication and the need to maintain multiple versions of similar code, which is not optimal. It also violates the principle of inheritance and the DRY (Don't Repeat Yourself) principle. A better approach would be to create a Pizza class with a get_price method and then create a separate Toppings class that adds the toppings to the pizza and calculates the additional cost. This separates the concerns of calculating the base price and calculating the price of toppings, and also makes the code more extensible if additional toppings or types of pizza need to be added in the future. Here's an example.
```
class Pizza:
    def __init__(self, size, dough_type):
        self.size = size
        self.dough_type = dough_type

    def get_price(self):
        return get_price(self.size)

class Toppings:
    def __init__(self, pizza, toppings):
        self.pizza = pizza
        self.toppings = toppings

    def get_price(self):
        return self.pizza.get_price() + len(self.toppings) * 2

    def get_toppings(self):
        return self.toppings

pizza = Pizza("large", "thick")
toppings = Toppings(pizza, ["mushrooms", "peppers"])
print(toppings.get_price())
print(toppings.get_toppings())
```

ChatGPT can detect the code smell within the given code, as the question was clearly asked about this specific code smell. The TunaPizza class does not inherit from the Pizza class, but instead replicates some of its functionality. Thus, ChatGPT provides one good possible solution by creating a Pizza class with a get_price method and then creating a separate Toppings class that adds the toppings to the pizza and calculates the additional cost.

### Code generation

Code generation, in the context of this paper, refers to the automatic generation of source code from a given natural language instruction. To this end, we provide the ChatGPT prompt with an English instruction for a code we want it to generate and incrementally modify the instruction. The purpose of this experiment is to generate code in an incremental way, starting from a simple example function and then asking to change it, making it more complex by introducing new features. We want to experience a guided code generation with a gradual increase in complexity.

First Prompt: Here, in the main prompt we ask for the creation of a JavaScript function. We ask for new features in the subsequent interactions.

```
PROMPT: Write a javascript program that runs in a browser and allows to load a script file dynamically

ChatGPT: Sure, here's an example of how you can load a script file dynamically in JavaScript:

function loadScript(url, callback) {
  var script = document.createElement("script");
  script.type = "text/javascript";
  script.src = url;
  script.onload = callback;
  document.head.appendChild(script);
}

// Usage example
loadScript("https://example.com/myscript.js", function() {
  console.log("Script loaded successfully!");
});
```

In this code, the loadScript function takes two arguments: the URL of the script file to load, and a callback function that will be executed once the script has finished loading. Inside the function, we create a new script element and set its src attribute to the URL of the script file. We also set the onload property of the element to the callback function. Finally, we append the script element to the head of the document using the appendChild method. When the script file has finished loading, the onload event will be triggered and the callback function will be executed. In the example above, we simply log a message to the console, but you could do anything you like inside the callback function.

Summary of Execution: ChatGPT starts with the generation of a simple function, including an example usage and a final explanation of the produced code. The first answer is shown above.

Then we ask ChatGPT, in a step-by-step process, to add new features to the loadScript function. We first ask to add a parameter to avoid browser cache loading of the loaded scripts, then we ask to add the possibility to communicate loading errors, then to add a helper function to load a list of scripts, then to allow the URL list to have elements with a relative path, and to add checking whether the scripts are well specified (as URLs). After that we ask for alternative implementations of that last request.
The following shows one of the final versions of the generated code:

```
function loadScript(url, callback, preventCache) {
  var script = document.createElement("script");
  script.type = "text/javascript";

  if (!isValidUrl(url)) {
    console.error("Invalid script URL:", url);
    return;
  }

  if (preventCache) {
    url += (/\?/.test(url) ? "&" : "?") + new Date().getTime();
  }

  function onScriptLoad() {
    script.onload = script.onerror = null;
    callback(null, script);
  }

  function onScriptError() {
    script.onload = script.onerror = null;
    callback(new Error("Failed to load script: " + url), null);
  }

  script.onload = onScriptLoad;
  script.onerror = onScriptError;
  script.src = url;
  document.head.appendChild(script);
}

function isValidUrl(url) {
  // Checks for a well-formed http(s) URL (the generated pattern was more elaborate)
  var urlPattern = /^https?:\/\/[\w.-]+(?::\d+)?(?:\/\S*)?$/;
  return urlPattern.test(url);
}

function loadScripts(urls, callback, preventCache) {
  var loaded = 0;
  var errored = false;

  function onScriptLoad() {
    loaded++;
    if (loaded === urls.length) {
      callback(errored);
    }
  }

  function onScriptError() {
    errored = true;
    loaded++;
    if (loaded === urls.length) {
      callback(errored);
    }
  }

  urls.forEach(function (url) {
    loadScript(url, function (err) {
      if (err) {
        onScriptError();
      } else {
        onScriptLoad();
      }
    }, preventCache);
  });
}
```

The full experiment can be found in the attached document "GenerateLoadScriptFunction.doc".

Discussion: In this experiment there are some elements we consider relevant to be mentioned:

* We have done the code generation incrementally. This revealed a better process than having to specify all the functionalities at once. Moreover, such a gradual process gave us more confidence in the resulting code;
* After each code generation phase, ChatGPT gives a useful explanation of the resulting code, helpful for understanding the changes;
* ChatGPT is able to remember the changes done along the interaction, and correctly update previously defined functions with new requested features;
* Code may contain bugs due to imprecise specifications (prompt) or because of the code generation process. In the example above, when asking to "allow the url list to have elements with a relative path", the exact prompt has been: "_In the loadScripts could you also allow to specify the urls parameter with elements that contains only just the file name (without protocol and path), in that case you use the path of the previous element of the array. Of course the first element of the array should be a complete url_". This prompt led to an implementation where the previous URL is taken from _urls[index-1]_, which fails if two consecutive urls are relative. In general, testing of the produced code is still necessary to make sure it is bug free;

### Step-by-Step execution of source code

Developers often use step-by-step execution of source code to understand the semantics of code and also to debug it. In this section, we investigate the capability of ChatGPT in executing a source code in a step-by-step fashion.

Setup: The purpose of this experiment is to test the capabilities of ChatGPT in debugging. We prompt ChatGPT with the source code of a function and ask ChatGPT to perform a step-by-step execution of it. We want to see how far ChatGPT can interpret each line, following the algorithm, the control-flow statements (conditionals, loops) and the status of each variable along the execution. The intention here is to have ChatGPT perform tasks that a debugger would do; however, while a debugger typically gives the status of a program at a given time, we want to have a fully visible history of the execution of the program.
First Prompt: The main prompt gives the source code of a function, asking ChatGPT to do a step-by-step execution of it, starting from a statement with an indirect call of the given function:

```
PROMPT:
function digitSum(num1, num2) {
  // Convert numbers to strings to access individual digits
  const strNum1 = String(num1);
  const strNum2 = String(num2);

  // Reverse the strings to start with the ones digit
  const reversedStrNum1 = strNum1.split("").reverse().join("");
  const reversedStrNum2 = strNum2.split("").reverse().join("");

  // Pad the shorter number with zeros to match the length of the longer number
  const maxLength = Math.max(strNum1.length, strNum2.length);
  const paddedStrNum1 = reversedStrNum1.padEnd(maxLength, "0");
  const paddedStrNum2 = reversedStrNum2.padEnd(maxLength, "0");

  // Iterate over the digits and add them up
  const sumDigits = [];
  let carry = 0;
  for (let i = 0; i < maxLength; i++) {
    const digitTotal = Number(paddedStrNum1[i]) + Number(paddedStrNum2[i]) + carry;
    sumDigits.push(digitTotal % 10);
    carry = Math.floor(digitTotal / 10);
  }
  if (carry > 0) {
    sumDigits.push(carry);
  }

  return Number(sumDigits.reverse().join(""));
}

console.log(digitSum(438, 30));
```

ChatGPT: Sure! Here are the step-by-step executions for console.log(digitSum(438, 30)), with the values of each variable displayed for each step:

1. const strNum1 = String(num1); num1 is 438, so strNum1 is "438"
2. const strNum2 = String(num2); num2 is 30, so strNum2 is "30"
3. const reversedStrNum1 = strNum1.split("").reverse().join(""); strNum1.split("") is ["4", "3", "8"], .reverse() is ["8", "3", "4"], so reversedStrNum1 is "834"
4. const reversedStrNum2 = strNum2.split("").reverse().join(""); strNum2.split("") is ["3", "0"], .reverse() is ["0", "3"], so reversedStrNum2 is "03"

Discussion: In this example we have noticed several aspects:

* ChatGPT is capable of interpreting a function call, following its flow, taking into consideration the roles of variables, control statements and also the computation of each sub-statement of more complex statements;
* ChatGPT shows knowledge of the semantics of primitive language statements as well as library functions (e.g. functions for array manipulation, string manipulation, ...);
* ChatGPT could provide the functionality of a debugger, showing the correct flow and variable values (we verified each variable with an actual debugger);
* Depending on the length of the step-by-step execution, it is necessary to prompt ChatGPT to continue in order to get to the end of the algorithm;
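The traced values are easy to cross-check by mirroring the algorithm; a minimal Python sketch of the same digit-by-digit addition (an illustration for verification, not part of the experiment):

```
def digit_sum(num1, num2):
    # Reverse the decimal strings to start from the ones digit.
    s1, s2 = str(num1)[::-1], str(num2)[::-1]
    n = max(len(s1), len(s2))
    s1, s2 = s1.ljust(n, "0"), s2.ljust(n, "0")
    digits, carry = [], 0
    for i in range(n):
        d = int(s1[i]) + int(s2[i]) + carry
        digits.append(d % 10)
        carry = d // 10
    if carry:
        digits.append(carry)
    return int("".join(map(str, reversed(digits))))

assert digit_sum(438, 30) == 468  # matches the traced example
```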
("","","")-1.reverse()is("","","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is
("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","")-1.reverse()is("","","")-1.reverse()is("","")-1. **Property:** Does this program have a bug? Now to fix it) def gcd(a,b): if b=a: return a else: return gcd(axb,b) **Summary of Execution:** ChatGPT at first did not find any bug in the function. After the researcher explained that the program was generating an error, then ChatGPT could spot the but, give an explanation for it and propose a possible solution: **ChatGPT:** This is the faultoman algorithm for finding the greatest common divisor (GCD) of two integers, a and b. It appears to be implemented correctly. If you are experiencing issues with this program, it is likely that the problem lies else where in your code. without more information about the bug you are experiencing, it is difficult to say how to fix it. **PROMPT:** No, the function is not correct. E.R., for the input a = 4 and b = 5 I get the following error message: "Recursion freer: maximum recursion depth exceeded in comparison". **ChatGPT:** The issue is that the function calls itself recursively, and in some cases, it never Reaches the base case (b = 0), causing it to recurse indefinitely. The base case of The gcd algorithm is when the remainder is zero, but in this simple implementation, it's when B is zero. For example, when a = 4 and b = 5, gcd(a,5) is called, which then calls gcd(a % 5,5) which is gcd(a. 5 ). This causes an infinite loop. You can fix this by swapping the parameters in the recursive call. def gcd(a,b): if a = 0: return b else: return gcd(bXa,a) This way, if a is zero, the function returns b (which is the gcd of a and b), otherwise it calls itself with the new values of b X a and a. The paper shows few other examples and gives a description of a comparison study where ChatGPT has been compared with other methods on the base of the QuixBug dataset. The experiments and evaluations can be found in [Sobania et al. 2023]. **Discussion:** This example shows that ChatGPT can be used in the scope of finding and fixing bugs on source programs. It may be necessary to guide ChatGPT in the search process, however, it shows abilities of localising errors and suggest solutions for them. 
The paper shows a few other examples and gives a description of a comparison study where ChatGPT has been compared with other methods on the basis of the QuixBugs dataset. The experiments and evaluations can be found in [Sobania et al. 2023].

Discussion: This example shows that ChatGPT can be used in the scope of finding and fixing bugs in source programs. It may be necessary to guide ChatGPT in the search process; however, it shows abilities of localising errors and suggesting solutions for them. At the current state it may be questionable whether the performance in bug fixing is in general sufficient to justify the necessary interaction with the users.

### _Improving code comprehension_

Code comprehension is an important software engineering problem that aims at improving the understandability of source code, thereby improving maintainability. The goal of this experiment is to check the capability of ChatGPT in the context of code comprehension.

Setup: Developers routinely go through hundreds of lines of code to understand the meaning of code. A manual effort in this regard is to focus only on the relevant parts of the code and ignore the remaining. For our experiment, we provide a piece of Python code and ask ChatGPT to refactor it with the goal of improving code comprehension.

```
PROMPT: Improve code comprehension of the following code:

def foo(n, d):
    v = v + 1
    c = c + 1
    return n + d

ChatGPT: The given code has some issues with variable scope and naming, so here is the refactored code:

def foo(num1, num2):
    # Increment variables
```

ChatGPT: The code smell in the given JavaScript code is the use of the equality operator (==) for string comparison. The equality operator performs type coercion, which means that if the types of the operands are different, JavaScript will try to convert one of the operands to the type of the other operand. This can lead to unexpected results and bugs in the code.

For string comparison, it is better to use the strict equality operator (===), which compares the values as well as the types of the operands. So, the code can be rewritten as:

```
if (str1 === str2) {
  console.log("Hi");
}
```

### Code analysis and summarization

Code summarization refers to the process of generating a natural language explanation of source code. Such a summary particularly aids code understandability and maintainability. ChatGPT has some capability in this regard, as evident from the following experiment.

We ask the ChatGPT system to summarize a given Python code snippet. The system responds by generating an English language summary of the provided code.

ChatGPT: This code defines a function called "add_num" that takes in two parameters called "num1" and "num2". The function computes the sum of the two input numbers and assigns it to a variable "v". Then, the function returns the value of "v". In other words, this function calculates the sum of two numbers and returns the result.

Discussion: To our query about code summarization, the response provided by ChatGPT adequately explains the code. Based on the names of the variables, 'num1' and 'num2', the ChatGPT system expects them to be numbers at runtime and explains that the function calculates the sum of two numbers.

As a follow-up exercise, we provide ChatGPT with another code summarization task with minor changes in the code. Instead of 'add_num', the function is called 'add' and we also change the names of the variables from 'num1', 'num2' to 'val1' and 'val2' respectively. The primary template of the produced summary is the same for both code examples. With minor changes in the names of the identifiers, the ChatGPT system rightly points out that "the input values can be of any type that supports the '+' operator".

ChatGPT: This code defines a function called "add" that takes in two parameters called "val1" and "val2". The function computes the sum of the two input values and assigns it to a variable "v". Then, the function returns the value of "v". In other words, this function calculates the sum of the values and returns the result. The input values can be of any type that supports the "+" operator.
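From the two summaries, the snippets under test can be reconstructed as simple sketches (an inference from the descriptions; the exact originals are not shown):

```
def add_num(num1, num2):
    v = num1 + num2
    return v

def add(val1, val2):
    v = val1 + val2
    return v
```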
## 4 Limitations

In this section, we analyze the current limitations of LLM systems in order to give a more comprehensive view of the boundaries of such AI infrastructures [1].

### Coding Limitations

#### 4.1.1 Limited support for complex logic

As an AI language model, ChatGPT is designed to generate text based on patterns and relationships it has learned from large amounts of data. While ChatGPT can generate simple lines of code based on specific templates or examples it has seen, it may have limitations when it comes to creating complex logic in software development. From a knowledge perspective, ChatGPT can only consider the information it has seen in its training data and may not have the context or background knowledge to understand the nuances of complex logic. Moreover, it does not have a deep understanding of software engineering principles, such as design patterns or object-oriented programming, which are often used to develop complex software systems. It also cannot test or debug the code it generates with a programming interpreter or engine, which means it may produce code with errors or bugs that are difficult to identify or fix. Overall, while ChatGPT may be able to generate simple code snippets, it may not be suitable for generating complex software logic that requires a deep understanding of software engineering principles and creative problem-solving.

#### 4.1.2 Limited knowledge of programming languages

ChatGPT may not have knowledge of all programming languages or may have limited knowledge of certain languages, which can limit its ability to generate code for those languages. From a training perspective, its training data may not include a wide range of programming languages and their syntax. This means that ChatGPT's understanding of programming languages may be limited to the patterns and relationships it has learned from the programming language data it was trained on. Moreover, programming languages are constantly evolving, with new versions and updates being released frequently. It can be challenging to keep up with these changes, and it may not be feasible to retrain ChatGPT with updated language syntax and features.

#### 4.1.3 Difficulty with Math and Science Questions

ChatGPT has difficulty with math and science questions for several reasons. It has not been specifically designed or trained to understand and answer math and science questions. Math, logic and science can be highly complex subjects that involve a deep understanding of abstract concepts, equations, and formulas. ChatGPT struggles to understand and apply these concepts accurately in its responses [1]. Such topics often involve visual information, such as diagrams and equations, which can be difficult for a text-based model like ChatGPT to interpret and understand. Finally, as mentioned before, answering mathematical questions requires a deep understanding of the physical world and common-sense knowledge, which ChatGPT may not possess to a sufficient extent.

#### 4.1.4 Difficulty to manipulate quantitative information (measures, probabilities, ...)

There are several reasons why LLMs have difficulty in representing quantitative information. They operate by representing words and phrases as discrete tokens in a vocabulary. Representing continuous numbers as discrete tokens can result in a loss of precision and accuracy.
Manipulating and representing quantitative information often involves mathematical operations that cannot be performed accurately in a discrete language model. Representing quantitative information often requires the ability to perform unit conversions, such as converting between different units of length, mass, or time. This can also be difficult in a discrete language model, as it may not have the ability to understand the relationships between different units of measurement.

### _Performance Limitations_

#### 4.2.1 Limited Short-Term Memory

The ChatGPT architecture lacks explicit memory. Unlike human brains, which have an explicit memory system that allows us to store and recall information over time, ChatGPT does not have such a memory system. The working memory that LLMs can use is limited to the number of past tokens that its sliding window can handle, which is in the order of 2000 tokens in ChatGPT. This means that it cannot store information from one conversation turn to the next, and must rely solely on the information that is provided to it in each turn. Moreover, fine-tuning ChatGPT on specific conversational data can help to improve its ability to generate responses in the context of a specific task or domain. However, fine-tuning can also introduce limitations, such as a lack of generalization to new conversational flows and a reduced ability to handle longer-term memories. The data used to train ChatGPT is largely unstructured and often lacks clear conversational flow and context, which also contributes to its difficulty in generating coherent and consistent responses in the context of a conversation.

#### 4.2.2 Vulnerability to Adversarial Input

ChatGPT can be easily manipulated or misled by malicious or misleading inputs. This is a common issue in many machine learning systems and is particularly relevant in the context of natural language processing and chatbots, where the input text can be easily crafted to trick the system into generating incorrect or inappropriate responses [14]. Adversarial inputs can take many forms, including intentionally misleading or confusing statements, statements that are factually incorrect or inconsistent with the context, and statements that are designed to prompt the system to generate specific, undesired responses.

#### 4.2.3 Dependence on Large Amounts of Computing Power

ChatGPT requires large amounts of computing power to run its inference network, making it challenging to deploy in resource-constrained environments. Currently it is provided as a free cloud service. Recently [10, 2023], it has become available to professionals as a low-latency paid cloud service.

#### 4.2.4 Incapacity to Learn New Data On-line

LLMs up to now are trained offline and used on-line for multiple tasks, but they are incapable of learning new content on the fly. It is important to note that although the scientific literature uses the terms "zero-shot learning" or "few-shot learning" to describe the number of examples provided in the prompt to perform a task, there is no learning at all involved in such experiments: no parameters (weights) of the model are modified during such tests.

## 5 Broader Impacts Hazard Analysis

### _Overreliance_

Using code generation models can be risky, as there is a tendency to over-rely on the generated outputs. This is especially problematic when ChatGPT provides solutions that only appear correct, but do not perform the task as intended.
This risk is particularly high for novice programmers, and in certain contexts, could have significant safety implications. Therefore, it's critical to exercise human oversight and vigilance when using code generation systems. To promote safe use of these systems, it's essential to provide clear documentation that highlights the limitations of the models. Additionally, empirical investigations should be conducted to determine how to ensure vigilance in practice, considering different levels of user experience, UI designs, and tasks. One challenge that researchers need to consider is the possibility of "automation bias" as the capabilities of these systems improve. This could make it increasingly difficult to guard against the tendency to blindly rely on the generated outputs without verifying their accuracy. ### _Misalignment_ As an AI language model, ChatGPT is trained to generate code that is like its training distribution, using a next token prediction objective. However, this can result in unhelpful suggestions for the user, even though the model has the capability to be more helpful. For example, if the user has subtle mistakes in their code, ChatGPT may suggest code that appears good but is ultimately incorrect. This is known as an alignment failure, where the model is not aligned with the user's intentions. When a system is able to perform a task, but chooses not to, it is considered misaligned. In contrast, a system that fails to perform a task due to incompetence is not misaligned. Misalignment is an important problem to study because it is likely to become worse as the capabilities of AI systems increase. For instance, the trend of scaling model size indicates that misalignment would persist and even worsen if data, parameters, and training time were increased. Although the misaligned behavior of current models may not cause significant harm, it could become more dangerous and harder to eliminate as model capabilities increase. A highly capable but misaligned model trained on user approval might produce obfuscated code that appears good to the user but is undesirable or harmful. Therefore, it is important to address misalignment in AI systems to ensure that they are safe, effective, and aligned with human values and intentions. This requires ongoing research and development of alignment techniques, as well as regular evaluation of model behavior to detect and correct misaligned behavior. ### _Bias_ Like other language models trained on Internet data, ChatGPT is susceptible to manipulation, which can result in racist, denigratory, and otherwise harmful outputs, such as code comments. This highlights the need for interventions to address these issues. Code generation models, in general, pose additional bias and representation concerns beyond natural language. ChatGPT can generate code with structures that reflect stereotypes about gender, race, emotion, class, the structure of names, and other characteristics. This issue is especially problematic in the context of users who may rely too heavily on ChatGPT or use it without thoroughly thinking through project design, as it could have significant safety implications. To address these concerns, it is important to discourage over-reliance on the model and implement mitigation strategies such as filtration or modulation of generated outputs, as well as providing appropriate documentation to users. 
Overall, it is essential to address the bias and representation issues that arise from the use of code generation models like ChatGPT, to create more inclusive and equitable technologies. This requires ongoing research and development of techniques to identify and mitigate these issues, as well as education and awareness-raising efforts to promote responsible use of these technologies.

### _Cybersecurity_

ChatGPT's ability to generate vulnerable or misaligned code could have various impacts on the security landscape. It is crucial for qualified operators to review and validate the generated code before executing or trusting it, especially without appropriate precautions. While future code generation models could potentially produce more secure code than the average developer, it is uncertain at this time. There is a risk that ChatGPT could be misused to aid cybercrime. As more powerful code generation models are developed, further research into mitigations and continued study of model capabilities will be necessary. The non-deterministic nature of ChatGPT and similar systems could enable more advanced malware, particularly through techniques like generating polymorphic malware. While application security and model deployment strategies, such as rate-limiting access and abuse monitoring, can manage this threat in the near term, the effectiveness of these mitigations may not scale linearly as more capable models are developed. Additionally, ChatGPT models can learn patterns present in their training data, including sensitive data present in source code. As ChatGPT is trained on public repositories, any sensitive data present in the training data should already be considered compromised. Therefore, the public data should generally be treated as untrusted, as attackers could potentially corrupt training data to trigger specific model behaviors at runtime. Overall, it is crucial to consider the potential security implications of code generation models like ChatGPT and take appropriate measures to mitigate any risks, including human review, filtration, monitoring and usage of threat modeling tools [21].

### Environment

ChatGPT, like other large generative models, has a significant energy footprint from both training and inference. The training and fine-tuning process consumes hundreds of petaflop/s-days of compute. These high energy demands have important environmental implications, and it is crucial to consider them when deploying these models. It is worth noting that the training was performed on a platform (Azure) that purchases carbon credits and sources a significant amount of renewable energy, which helps to reduce its carbon footprint. However, the compute consumption of large models like ChatGPT also has costs in the wider supply chain that can be concentrated in certain regions. In the long term, as code generation becomes more prevalent and challenging problems require significant inference, the compute demands of these models could grow even larger than ChatGPT's training. Therefore, it is essential to consider the energy implications of these models and explore ways to reduce their environmental impact, such as developing more efficient algorithms, using renewable energy sources, and incorporating energy efficiency metrics into the model design process.

### Economy

The impact of code generation and associated capabilities on the economy and labor market is complex and multifaceted.
While ChatGPT, at its current capability level, may lead to some cost savings by increasing programmer productivity, it is important to note that engineers engage in various tasks other than coding, such as collaborating with colleagues, writing design specifications, and upgrading existing software stacks. Therefore, the overall impact on productivity may be limited. Furthermore, ChatGPT's tendency to import packages at different rates could result in an uneven distribution of benefits, favoring some package authors over others, particularly if programmers and engineers rely heavily on ChatGPT's suggestions. In the long term, the effects of code generation technologies on software-related labor markets and the broader economy could be more significant as capabilities improve. Therefore, further research is needed to fully understand the economic and labor market impacts of these technologies and to develop appropriate responses to mitigate any negative consequences.

### Legislation

Generated code raises various legal considerations, including the fair use of training data. While training AI systems on public GitHub repositories has been previously deemed as fair use, the study found that ChatGPT models rarely generate code that is identical to the contents of training data. When the generated code appears identical to the training data, it is due to the model's predictive weightings rather than the retention and copying of specific code. In addition, generated code is responsive and customizable to the user's input, and the user maintains complete control over editing and accepting the generated code. This makes code generation similar to auto-suggest or auto-completion features found in other authorship tools such as document editors, where the finished work is still seen as the author's. Nonetheless, further legal considerations may arise as code generation technology continues to evolve, and it will be important to continue monitoring these developments.

## 6 Summary, Conclusions and Recommendations

This paper has screened the capabilities and limitations of ChatGPT, a large language model designed for use in software development. ChatGPT's power lies in its ability to act as an intermediary between the programmer's natural language and the machine code. As a result, it offers a solution that streamlines the development process for non-professional programmers and simplifies the task of coding. OpenAI released upgraded GPT-3 and Codex models in March 2022. These models had impressive new capabilities and were trained on a large dataset. In November 2022, OpenAI introduced the "GPT-3.5" series, including the text and code models, with further fine-tuning. OpenAI also released ChatGPT based on GPT-3.5 models. The tested Codex models' ability to solve programming problems was found to outperform that of the GPT models, with up to 28.8% accuracy. Fine-tuning on correctly implemented functions resulted in Codex-S, which could solve 37.7% of problems with a single sample. Codex-S generated at least one correct function within 100 samples for 77.5% of the problems. However, it is important to consider that the current version of ChatGPT is better tuned; therefore, its accuracy is higher than the figures reported in the studied paper. Furthermore, this paper provides a detailed examination of ChatGPT's potential applications in software system engineering.
The analysis highlights several areas where ChatGPT can be effectively applied, including code smell detection, refactoring, code generation, step-by-step execution of source code, bug fixing, code comprehension improvement, and code analysis and summarization. These applications offer significant benefits to software developers, helping them to write more efficient, maintainable, and comprehensible code. The findings of this paper suggest that ChatGPT has the potential to transform the field of software engineering and streamline the development process for non-professional programmers. ChatGPT also has several limitations in coding, performance, and text generation. In terms of coding, ChatGPT struggles with complex logic due to a lack of context and background knowledge. The constantly evolving nature of programming languages limits ChatGPT's long-term coding capabilities. It also has difficulty answering math and science questions, manipulating quantitative information, and representing continuous numbers as discrete tokens. In terms of performance, ChatGPT has limited short-term memory and is vulnerable to adversarial input, and its heavy compute requirements make it challenging to deploy in resource-constrained environments. ChatGPT is also incapable of learning new content on the fly, and although it is available as a free cloud service and as a paid low-latency cloud service for professionals, it requires large amounts of computing power to run its inference network. Text generation limitations include bias in the training data and instability of responses when doubts are raised, which may lead to a reformulation of the response with opposite affirmations, or overconfidence in sticking to its responses despite user corrections. The Broader Impacts Hazard Analysis of ChatGPT highlights several concerns with using code generation models. The overreliance on the generated outputs of ChatGPT can be risky, especially when the model provides solutions that only appear correct but do not perform the intended task. Misalignment is another concern, where ChatGPT may suggest unhelpful code to the user due to its training distribution. The susceptibility to bias and representation issues is also a significant problem, and mitigation strategies such as filtration and providing appropriate documentation are essential. The security implications of ChatGPT generating vulnerable or misaligned code need to be considered, and appropriate measures must be taken to mitigate risks. The high energy demands of ChatGPT during training and inference have important environmental implications, and it is crucial to explore ways to reduce their environmental impact. Additionally, the impact of ChatGPT on the economy and labor market is complex and multifaceted, and further research is needed to fully understand the economic and labor market impacts of these technologies. Lastly, legal considerations such as fair use of training data need to be taken into account as code generation technology continues to evolve. ChatGPT has the potential to be a valuable tool in software development in any organization, provided that its limitations are understood and its output is supervised and improved. It can be used in a variety of applications such as writing code for PhD projects, refactoring the code of old projects, translating programming languages after implementation, and acting as a coding buddy.
In particular, by incorporating ChatGPT as a tool for buddy programming, non-professional programmers could receive automated suggestions for code implementation, refactoring, and language translation, improving their code quality and development skills. This could ultimately lead to reduced overall software development costs, improved software quality, and scalability in software development. Early tests of model-driven development (MDD) show that ChatGPT could also be used in this direction, and based on the experience of our institute in this domain [Ceravola et al. 2006][Sadik et al. 2019], MDD with ChatGPT will certainly be investigated more systematically in the near future. The limitations and impact of ChatGPT are critical research areas. One such area is security, as ChatGPT does not currently consider security issues when generating code. However, we can anticipate that future models of ChatGPT will be trained on code that emphasizes security, which will enable them to write more secure code than the average programmer. Another important research direction is designing language models that can be trained with minimal energy consumption, as training and fine-tuning large language models require a significant amount of energy. In addition, ChatGPT's inability to evolve in tandem with evolving programming languages can be addressed through continuous learning or learning on the fly. By doing so, ChatGPT's scalability can be improved, and it can keep up with the fast pace of programming language development. In conclusion, ChatGPT is a powerful tool that can be utilized for software architecting, whether in software design using forward engineering methods or in refactoring existing software using reverse engineering techniques. In this sense it is a perfect tool for collaborative intelligence [Sendhoff and Wersing 2020] and can and should be used with human developers in the loop. By incorporating ChatGPT into a research and innovation strategy, innovative approaches can be explored and advancements in software development achieved, leading to an increase in both work scale and work quality. The potential of ChatGPT in this field is vast, and exciting new developments can be expected as we continue to integrate it into software practices. ## 7 Acknowledgment The examples cited in this paper have been constructed using ChatGPT and extracted from the cited papers. ChatGPT has also been used to improve some of this paper's language.
2308.07754
Nonleptonic two-body weak decays of charmed baryons
We analyze the two-body nonleptonic weak decays of charmed baryons, employing the pole approximation in tandem with the $SU(3)_F$ symmetry. We are able to make novel predictions for decay channels of $\Omega_c^0 \to {\bf B}_n P$ and ${\bf B}_{cc}\to {\bf B}_c^{A,S} P$ based on the experimental data of ${\bf B}_c^A \to {\bf B}_n P$. Here, ${\bf B}_n$, ${\bf B}_{c}^A$, ${\bf B}_c^S$ and ${\bf B}_{cc}$ are the low-lying octet, antitriplet charmed, sextet charmed and doubly charmed baryons, respectively, and $P$ is the pseudoscalar meson. Our findings reveal that the fitted effective Wilson coefficient ${\cal C}_+=0.469$ is notably smaller than the naive expectation, and the low-lying pole approximation fails to account for ${\cal B}(\Lambda_c^+ \to n \pi^+ , \Xi^0 K^+)$, despite consistencies with the soft-meson limit. We further recommend the decay channel $\Xi_{cc}^+ \to \Xi_c^0 \pi^+ \to \Xi^- \pi^+\pi^+\pi^+\pi^-$ for exploring evidence of $\Xi_{cc}^+$, estimating the branching fraction at $(1.1\pm 0.6)\times 10^{-3}$.
Chia-Wei Liu
2023-08-15T13:14:55Z
http://arxiv.org/abs/2308.07754v2
# Nonleptonic two-body weak decays of charmed baryons ###### Abstract We systematically analyze the two-body nonleptonic weak decays of charmed baryons, employing the pole approximation in tandem with the \(SU(3)_{F}\) symmetry. Leveraging the Korner-Pati-Woo theorem, we demonstrate a significant reduction in the number of free parameters. Within the general pole scenario, we successfully explain most of the experimental data of \({\bf B}_{c}^{A}\rightarrow{\bf B}_{n}P\), though some inconsistencies are detected and recommended for reexamination in future research. Assuming the dominance of low-lying intermediate baryons, we are able to make novel predictions for decay channels such as \(\Omega_{c}^{0}\rightarrow{\bf B}_{n}P\) and \({\bf B}_{cc}\rightarrow{\bf B}_{c}^{A,S}P\). Here, \({\bf B}_{n}\), \({\bf B}_{c}^{A}\), \({\bf B}_{c}^{S}\) and \({\bf B}_{cc}\) are the low-lying octet, antitriplet charmed, sextet charmed and doubly charmed baryons, respectively, and \(P\) is the pseudoscalar meson. Our findings also reveal that the fitted effective Wilson coefficient \({\cal C}_{+}=0.469\) is notably smaller than the naive expectation, and the low-lying pole scenario fails to account for \({\cal B}(\Lambda_{c}^{+}\to n\pi^{+},\Xi^{0}K^{+})\), despite consistencies with the soft meson limit. We further recommend the decay channel \(\Xi_{cc}^{+}\rightarrow\Xi_{c}^{0}\pi^{+}\rightarrow\Xi^{-}\pi^{+}\pi^{+}\pi^{+}\pi^{-}\) for exploring evidence of \(\Xi_{cc}^{+}\), estimating the branching fraction at \((1.1\pm 0.6)\times 10^{-3}\). The predictions for nonleptonic weak decay channels are compiled in the appendices, providing valuable references for future experimental validation. ## I Introduction The investigation into charmed baryon decays has attracted significant theoretical interest, driven further by the progress in experiments [1]. For a review, readers are referred to Refs. [2; 3]. At the BESIII facility, the lightest charmed baryon, \(\Lambda_{c}^{+}\), has been rigorously examined through \(e^{+}e^{-}\) interactions at a central energy of \(\sqrt{s}=4.6\) GeV [4]. These investigations have yielded remarkably precise measurements of branching fractions and decay asymmetries [5; 6; 7]. The resonance structure of \(e^{+}e^{-}\to\Lambda_{c}^{+}\Lambda_{c}^{-}\), providing a clean background, has facilitated the BESIII collaboration's ability to measure \(\Lambda_{c}\to n\pi^{+}\) in spite of the challenges posed by neutrons [8]. However, the comprehensive study of the entire charmed baryon family necessitates the synthesis of results from multiple experimental facilities, as only \(\Lambda_{c}^{+}\) is currently accessible at BESIII. Through the \(B\) meson decay chain, the Belle collaboration has access to all the low-lying antitriplet charmed baryons (\({\bf B}_{c}^{A}=\Lambda_{c}^{+},\Xi_{c}^{+},\Xi_{c}^{0}\)) [9; 10; 11; 12]. A significant recent breakthrough includes the measurement of absolute branching fractions for \(\Xi_{c}^{0}\to\Xi^{-}\pi^{+}\)[13] and \(\Xi_{c}^{0}\to\Xi^{-}e^{+}\nu_{e}\)[14], revealing substantial \(SU(3)_{F}\) symmetry breaking [15]. Conversely, the LHCb collaboration has obtained the largest charmed hadron samples from \(pp\) collisions at \(\sqrt{s}=7,8,13\) TeV. Despite more complex backgrounds compared to those at BESIII and Belle, the majority of new charmed baryon discoveries [16], including the famed doubly charmed baryon [17], have been made at LHCb. 
Additionally, Belle and LHCb have revisited the lifetimes of certain baryons [18; 19], with notable deviations found in the measured lifetimes of \(\Xi_{c}^{0}\) and \(\Omega_{c}^{0}\) compared to previous experiments [20]. These measurements, however, are consistent with the heavy quark expansion (HQE) [21]. Thanks to the optical theorem, the inclusive decay widths of charmed hadrons can be at least qualitatively studied [22]. It is understood that the contributions of the dimension-6 operators in the HQE, suppressed by \((\Lambda_{\rm QCD}/M_{c})^{3}\), may exceed those of the dimension-3 operators due to phase space enhancement [23]. This emphasizes the leading role of the \(W\)-exchange diagrams in decays. However, as of now, there is no reliable method derived from first principles to address the \(W\)-exchange diagrams in exclusive decays, leading to the need for several approximations [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. One less model-dependent approach is to perform a global fit using the \(SU(3)\) flavor (\(SU(3)_{F}\)) symmetry, which has become popular [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58]. Nevertheless, even in the simplest case of \({\bf B}_{c}^{A}\rightarrow{\bf B}_{n}P\), where \({\bf B}_{n}\) and \(P\) represent the octet baryon and pseudoscalar meson respectively, this method requires more than a dozen free parameters. While the results of the global fit often align with the experimental data used for fitting, the predictive accuracy is disputable. The predicted branching fractions significantly diverge across various theoretical studies relying on the \(SU(3)_{F}\) symmetry, illustrating that the free parameters are not tightly constrained by the existing experimental data. In an effort to reduce the number of free parameters in the \(SU(3)_{F}\) global fit, Geng, Tsai, and the author of this work considered the pole approximation in 2019 [46; 47]. This approach, grounded in the Korner-Pati-Woo (KPW) theorem [59], enables the exclusion of six parameters from \(O_{+}^{qq^{\prime}}\). Here \(O_{+}^{qq^{\prime}}\) is the four-quark operator in the effective Hamiltonian [60] \[{\cal H}_{eff}=\sum_{q,q^{\prime}=d,s}\frac{G_{F}}{\sqrt{2}}V_{cq}^{*}V_{uq^{\prime}}\left(c_{+}O_{+}^{qq^{\prime}}+c_{-}O_{-}^{qq^{\prime}}\right)\,, \tag{1}\] with \[O_{\pm}^{qq^{\prime}}=\frac{1}{2}\left[\left(\bar{u}q^{\prime}\right)_{V-A}\left(\bar{q}c\right)_{V-A}\pm\left(\bar{q}q^{\prime}\right)_{V-A}\left(\bar{u}c\right)_{V-A}\right]\,, \tag{2}\] where \(G_{F}\) is the Fermi constant and \(V_{qq^{\prime}}\) is the Cabibbo-Kobayashi-Maskawa matrix element. After considering the factorizable contributions of \(O_{+}^{qq^{\prime}}\), the smallness of \({\cal B}(\Lambda_{c}^{+}\to p\pi^{0})\) is explained [46]. More importantly, Ref. [47] predicted that \[{\cal B}(\Lambda_{c}^{+}\rightarrow\Sigma^{+}K_{S}^{0})={\cal B}(\Lambda_{c}^{+}\rightarrow\Sigma^{0}K^{+}) \tag{3}\] \[\frac{{\cal B}(\Xi_{c}^{0}\rightarrow\Sigma^{0}K_{S}^{0})}{{\cal B}(\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+})}=(2.3\pm 1.8)\% \tag{4}\] which were not measured at that time. In particular, Eq. (3) is a critical prediction stemming from the KPW theorem, and the modest ratio in Eq. (4) is quite surprising as both channels are Cabibbo favored (CF). These theoretical benchmarks have since been found consistent with recent experimental results [7; 11]. In the present study, we build upon the framework established in Ref. 
[47], extending it to include the decays of \(\Omega_{c}^{0}\) and doubly charmed baryons. To accomplish this, we make two critical approximations: 1. We assume that the intermediate states are principally dominated by the low-lying baryons with spin-parity \(\frac{1}{2}^{+}\) and \(\frac{1}{2}^{-}\). 2. We posit that the flavor of the spectator quark exerts only a minimal influence on the baryon matrix elements. It is worth noting that these approximations have been found to hold in the majority of model-dependent studies, and a detailed discussion on them will be provided later in this work. This paper is structured in the following manner. In Section II, we delineate the \(SU(3)_{F}\) representations of the charmed baryons. Section III is devoted to the evaluation of the factorizable contributions, utilizing the form factors derived from Lattice QCD (LQCD). In Section IV, we thoroughly analyze the pole amplitudes and explore the dependencies on mass. Section V presents the numerical results, and finally, we conclude our findings in Section VI. ## II \(SU(3)_{F}\) representation and Korner-Pati-Woo theorem In general, the amplitudes of \({\bf B}_{i}\rightarrow{\bf B}_{f}P\) read \[{\cal M}=\langle{\bf B}_{f}P;t\rightarrow\infty|{\cal H}_{eff}|{\bf B}_{i}\rangle=i\overline{u}_{f}\left(A+B\gamma_{5}\right)u_{i}\,, \tag{5}\] where \(u_{i(f)}\) is the Dirac spinor of the initial (final) baryon and \(A(B)\) is the parity-violating (conserving) amplitude, corresponding to the \(S(P)\)-partial wave. If the final-state interaction is absent, one can freely interchange \(t\rightarrow\pm\infty\), and \(A\) and \(B\) must be real. The decay width \(\Gamma\) and up-down asymmetry \(\alpha\) are calculated by \[\Gamma = \frac{p_{f}}{8\pi}\left(\frac{(M_{i}+M_{f})^{2}-M_{P}^{2}}{M_{i}^{2}}|A|^{2}+\frac{(M_{i}-M_{f})^{2}-M_{P}^{2}}{M_{i}^{2}}|B|^{2}\right)\,, \tag{6}\] \[\alpha=-\frac{2\kappa\ {\rm Re}(A^{*}B)}{|A|^{2}+\kappa^{2}|B|^{2}}\,,\qquad\kappa=\frac{p_{f}}{E_{f}+M_{f}}\,,\] where \(M_{i,f}\) and \(M_{P}\) are the masses of \({\bf B}_{i,f}\) and \(P\), respectively, and \(p_{f}\) and \(E_{f}\) are the magnitudes of the 3-momentum and energy of \({\bf B}_{f}\) at the rest frame of \({\bf B}_{i}\). To relate the decays with the \(SU(3)_{F}\) symmetry, one has to write down the hadron representations in the \(SU(3)_{F}\) group. We start with the low-lying pseudoscalar mesons. The responsible \(SU(3)_{F}\) tensor is given by \[P_{j}^{i}=\left(\begin{array}{ccc}\frac{1}{\sqrt{6}}\eta_{8}+\frac{1}{\sqrt{2}}\pi^{0}&\pi^{+}&K^{+}\\ \pi^{-}&\frac{1}{\sqrt{6}}\eta_{8}-\frac{1}{\sqrt{2}}\pi^{0}&K^{0}\\ K^{-}&\overline{K}^{0}&-\frac{2}{\sqrt{6}}\eta_{8}\end{array}\right)_{ij}\,, \tag{7}\] which is related to the flavor part of the wave functions according to \[|P\rangle=P_{j}^{i}|q_{i}\overline{q}^{j}\rangle\,. \tag{8}\] Here, the superscript and subscript of \(P_{j}^{i}\) describe the quark and antiquark flavors with \(i,j\in\{1,2,3\}\) and \((q_{1},q_{2},q_{3})=(u,d,s)\). Figure 1: The \(SU(4)_{F}\) \({\bf 20}\) multiplet, represented by the mixed-symmetry Young tableau. 
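Eq. (6) lends itself to a direct numerical cross-check. The short Python sketch below is our own illustration, not part of the original analysis: it evaluates \(\Gamma\), \(\alpha\), and the branching fraction for \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\), taking the LP-scenario amplitudes quoted later in Table 2 together with standard mass and lifetime inputs that we supply.

```python
import math

G_F = 1.166e-5       # Fermi constant in GeV^-2
HBAR = 6.582e-25     # hbar in GeV * s

def width_and_alpha(M_i, M_f, M_P, A, B):
    """Two-body width and up-down asymmetry of B_i -> B_f P, Eq. (6)."""
    p_f = math.sqrt((M_i**2 - (M_f + M_P)**2) * (M_i**2 - (M_f - M_P)**2)) / (2 * M_i)
    E_f = math.sqrt(p_f**2 + M_f**2)
    kappa = p_f / (E_f + M_f)
    gamma = p_f / (8 * math.pi) * (((M_i + M_f)**2 - M_P**2) / M_i**2 * A**2
                                   + ((M_i - M_f)**2 - M_P**2) / M_i**2 * B**2)
    alpha = -2 * kappa * A * B / (A**2 + kappa**2 * B**2)
    return gamma, alpha

# Lambda_c+ -> Lambda pi+, LP amplitudes A = A^fac + A^pole, B = B^fac + B^pole
# taken from Table 2, which quotes them in units of 1e-2 G_F GeV^2
A = (3.16 + 1.84) * 1e-2 * G_F
B = (8.18 - 1.60) * 1e-2 * G_F
gamma, alpha = width_and_alpha(M_i=2.286, M_f=1.116, M_P=0.140, A=A, B=B)

tau = 2.02e-13       # Lambda_c+ lifetime in seconds (our input, not quoted here)
print(f"B = {gamma * tau / HBAR:.2%}, alpha = {alpha:.2f}")
# -> B = 0.95%, alpha = -0.75, reproducing Table 1's LP entries 0.96(16) and -0.75(1)
```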
We exclusively consider the \(SU(4)_{F}\) \({\bf 20}\) multiplets, where the low-lying \(\frac{1}{2}^{+}\) baryons are depicted in FIG. 1. We start with the idempotent associated with the mixed-symmetry Young tableau, which projects out a subspace of the \(SU(4)_{F}\) group, given by \[e_{23}=(1-(2,3))(1+(1,2))\,, \tag{9}\] where \((1,2)\) interchanges the first and second elements and \((2,3)\) the second and third. For instance, we have \[e_{23}|q_{a}q_{b}q_{c}\rangle=|q_{a}q_{b}q_{c}\rangle+|q_{b}q_{a}q_{c}\rangle-|q_{a}q_{c}q_{b}\rangle-|q_{b}q_{c}q_{a}\rangle\,. \tag{10}\] It is clear that after operating \(e_{23}\), states are antisymmetric with regard to the second and third quarks. The idempotent in Eq. (9) generates a subspace in the sense that \(e_{23}e_{A}=e_{23}e_{S}=0\,,\) where \(e_{S(A)}\) are the totally (anti)symmetric idempotents, given by \[e_{S}=1+(1,2)+(2,3)+(1,3)+(2,3)(1,2)+(1,2)(2,3)\,,\] \[e_{A}=1-(1,2)-(2,3)-(1,3)+(2,3)(1,2)+(1,2)(2,3)\,. \tag{11}\] We stress that throughout this work the \(SU(4)_{F}\) representations are merely bookkeeping tools to unify the expressions and we do not take advantage of the \(SU(4)_{F}\) symmetry. If a light quark \((u,d,s)\) pair is antisymmetric, we utilize the fact that the totally antisymmetric tensor \(\epsilon^{ijk}\) is invariant under the \(SU(3)_{F}\) transformation to simplify the indices, _i.e._ two antisymmetric quarks transform as an antiquark. As a result, the light quarks of \({\bf B}_{c}^{A}\) are represented by one lower index as \[({\bf B}_{c}^{A})_{i}=(\Xi_{c}^{0},\Xi_{c}^{+},\Lambda_{c}^{+})_{i}\,. \tag{12}\] Eq. (12) can be translated back to a tensor with three quarks by \[({\bf B}_{c}^{A})^{i[jk]}=\frac{1}{\sqrt{12}}({\bf B}_{c})_{l}\left(2\delta_{4}^{i}\epsilon^{ljk}+\delta_{4}^{j}\epsilon^{lik}-\delta_{4}^{k}\epsilon^{lij}\right)\,, \tag{13}\] with \(q_{4}=c\). Here, Eq. (13) follows from \[e_{23}\frac{1}{\sqrt{12}}\left(|cud\rangle-|cdu\rangle\right)=\frac{1}{\sqrt{12}}\left(2|cud\rangle-2|cdu\rangle+|ucd\rangle-|dcu\rangle-|udc\rangle+|duc\rangle\right)\,, \tag{14}\] where we have used \(\Lambda_{c}^{+}\) as an example. We start with \(|cud\rangle-|cdu\rangle\) to make sure its isospin vanishes. One arrives at \(|\Sigma_{c}^{+}\rangle\) if \(|cud\rangle+|cdu\rangle\) is used instead. On the other hand, the other low-lying baryons with spin-parity \(\frac{1}{2}^{+}\) are \[({\bf B}_{n})^{i}_{j} = \left(\begin{array}{ccc}\frac{1}{\sqrt{6}}\Lambda+\frac{1}{\sqrt{2}}\Sigma^{0}&\Sigma^{+}&p\\ \Sigma^{-}&\frac{1}{\sqrt{6}}\Lambda-\frac{1}{\sqrt{2}}\Sigma^{0}&n\\ \Xi^{-}&\Xi^{0}&-\sqrt{\frac{2}{3}}\Lambda\end{array}\right)_{ij}\,,\] \[({\bf B}_{c}^{S})^{ij} = \left(\begin{array}{ccc}\Sigma_{c}^{++}&\frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\Sigma_{c}^{0}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}&\Omega_{c}^{0}\end{array}\right)_{ij}\,,\] \[({\bf B}_{cc})^{i} = (\Xi_{cc}^{++},\Xi_{cc}^{+},\Omega_{cc}^{+})_{i}\,, \tag{15}\] where \({\bf B}_{c}^{S}\) and \({\bf B}_{cc}\) are the singly charmed sextet and doubly charmed baryons, respectively. 
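The algebra of the idempotents in Eqs. (9)-(11) can be verified mechanically. Below is a minimal bookkeeping sketch of ours, with arbitrary quark labels: it represents a three-quark ket as a dictionary of coefficients, reproduces Eq. (10), and confirms \(e_{23}e_{S}=0\).

```python
def transpose(state, i, j):
    """Apply the transposition (i, j) of 1-indexed quark positions to a state."""
    out = {}
    for ket, coeff in state.items():
        q = list(ket)
        q[i - 1], q[j - 1] = q[j - 1], q[i - 1]
        out[tuple(q)] = out.get(tuple(q), 0) + coeff
    return out

def add(a, b, sign=1):
    """Return a + sign * b for states stored as {ket: coefficient}."""
    out = dict(a)
    for ket, coeff in b.items():
        out[ket] = out.get(ket, 0) + sign * coeff
    return {k: v for k, v in out.items() if v != 0}

def e23(s):
    # e_23 = (1 - (2,3))(1 + (1,2)), Eq. (9): symmetrize 1-2, then antisymmetrize 2-3
    s = add(s, transpose(s, 1, 2))
    return add(s, transpose(s, 2, 3), sign=-1)

def eS(s):
    # totally symmetric idempotent of Eq. (11); products act right-to-left
    terms = [s, transpose(s, 1, 2), transpose(s, 2, 3), transpose(s, 1, 3),
             transpose(transpose(s, 1, 2), 2, 3), transpose(transpose(s, 2, 3), 1, 2)]
    out = {}
    for t in terms:
        out = add(out, t)
    return out

ket = {('a', 'b', 'c'): 1}
print(e23(ket))            # reproduces Eq. (10): +abc +bac -acb -bca
assert e23(eS(ket)) == {}  # e_23 e_S = 0, as stated below Eq. (10)
```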
Similarly, they are translated to tensors with three quark indices by \[({\bf B}_{n})^{i[jk]} = \frac{1}{\sqrt{2}}({\bf B}_{n})^{i}_{l}\epsilon^{ljk}\,,\] \[({\bf B}_{c}^{S})^{i[jk]} = \frac{1}{\sqrt{2}}\left(({\bf B}_{c})^{ij}\delta_{4}^{k}-({\bf B}_{c})^{ik}\delta_{4}^{j}\right)\,,\] \[({\bf B}_{cc})^{i[jk]} = \frac{1}{\sqrt{2}}\left(({\bf B}_{cc})^{j}\delta_{4}^{i}\delta_{4}^{k}-({\bf B}_{cc})^{k}\delta_{4}^{i}\delta_{4}^{j}\right)\,, \tag{16}\] which would lead us to the convention in Ref. [3] up to some unphysical overall phase factors. In the quark model, the spin-flavor wave functions are obtained by \[|{\bf B}\rangle=(1+(1,2)+(1,3))\frac{\sqrt{2}}{3}\left[{\bf B}^{i[jk]}|q_{i}q_{j}q_{k}\rangle\otimes(|\uparrow\uparrow\downarrow\rangle-|\uparrow\downarrow\uparrow\rangle)\right]\,, \tag{17}\] with \({\bf B}\in\{{\bf B}_{n},{\bf B}_{c}^{A,S},{\bf B}_{cc}\}\). The effective Hamiltonian can be written in the compact form \[{\cal H}_{eff}=\frac{G_{F}}{\sqrt{2}}V_{ud}V_{cs}^{*}\left({\cal H}_{kl}^{ij}(\overline{q}_{i}q^{k})_{V-A}(\overline{q}_{j}q^{l})_{V-A}\right)\,, \tag{18}\] where the nonzero elements are \[{\cal H}_{24}^{13}=c_{1}\,,\hskip 28.452756pt{\cal H}_{24}^{31}=c_{2}\,,\hskip 28.452756pt{\cal H}_{34}^{13}=c_{1}s_{c}\,,\hskip 28.452756pt{\cal H}_{34}^{31}=c_{2}s_{c}\,,\] \[{\cal H}_{24}^{12}=-c_{1}s_{c}\,,\hskip 14.226378pt{\cal H}_{24}^{21}=-c_{2}s_{c}\,,\hskip 14.226378pt{\cal H}_{34}^{12}=-c_{1}s_{c}^{2}\,,\hskip 14.226378pt{\cal H}_{34}^{21}=-c_{2}s_{c}^{2}\,, \tag{19}\] \(s_{c}=V_{us}/V_{ud}=0.23\), \(c_{1}=c_{+}+c_{-}\) and \(c_{2}=c_{+}-c_{-}\). Similar to the baryon states, one decomposes the effective Hamiltonian according to the permutation symmetry by \[{\cal H}(\overline{\bf 6})_{kl}\epsilon^{lij}=\frac{1}{c_{-}}\left({\cal H}_{k4}^{ij}-{\cal H}_{k4}^{ji}\right)\,,\quad{\cal H}({\bf 15})_{k}^{ij}=\frac{1}{2c_{+}}\left({\cal H}_{k4}^{ij}+{\cal H}_{k4}^{ji}\right)\,. \tag{20}\] The factors of \(1/c_{-}\) and \(1/2c_{+}\) are included to match the convention. Comparing to Eq. (1), it is clear that \({\cal H}(\overline{\bf 6})\) and \({\cal H}({\bf 15})\) account for \(O_{-}\) and \(O_{+}\) in the effective Hamiltonian. So far we have only considered the quark flavors, and this is an appropriate place to further consider their colors. With the Fierz transformation, it is straightforward to show that the color structure of \(\overline{q}\) and \(\overline{u}\) in \(O_{+}^{qq^{\prime}}\) is symmetric, and the same also applies to \(c\) and \(q^{\prime}\). Recalling that baryons are antisymmetric in color, we arrive at \[\langle q_{a}q_{b}q_{c}|O_{+}^{qq^{\prime}}|{\bf B}_{i}\rangle=0\,, \tag{21}\] where the initial and final states are an arbitrary baryon and a three-quark state, respectively. The same also applies to \(\langle{\bf B}_{f}|O_{+}^{qq^{\prime}}|q_{a}q_{b}q_{c}\rangle=0\) with \({\bf B}_{f}\) the final-state baryon. In the decays \({\bf B}_{i}\rightarrow{\bf B}_{f}P\), the nonfactorizable contributions can be approximated by the pole diagrams shown in FIG. 2, where the symbol \(\times\) marks the insertion of the effective Hamiltonian. This approximation results in the well-known KPW theorem, which states that \(O_{+}\) contributes solely to the factorizable amplitudes. Notably, Eq. (21) is scale-independent, as \(O_{\pm}\) do not undergo mixing in the renormalization group evolution [60]. While a hard gluon exchange could challenge the KPW theorem, any breaking effect is likely below 10%. 
For a deeper dive into this topic, readers can consult Ref. [61]. Figure 2: The \(s\)- (left) and \(u\)-channels (right) of the pole contributions in \({\bf B}_{i}\rightarrow{\bf B}_{f}P\), where \(\times\) denotes the insertion of the effective Hamiltonian and \({\bf B}_{I}\) is the intermediate state. There, the small branching fraction of \({\cal B}(B^{0}\to p\overline{p})\) is attributed to a violation of the KPW theorem 1. Since this deviation is even less significant than that of the \(SU(3)_{F}\) breaking, we uphold the KPW theorem in this study. Footnote 1: To be explicit, Ref. [61] shows that the amplitude of \(B^{0}\to p\overline{p}\) is proportional to \(c_{+}\) instead of \(c_{-}\). To identify the factorizable contributions of \(O_{+}\), we observe that the direct product of \({\cal H}({\bf 15})_{k}^{ij}\) and \((P^{\dagger})_{m}^{l}\) has the representation \[{\bf 15}\otimes{\bf 8}={\bf 42}\oplus\overline{\bf 24}\oplus{\bf 15}_{1}\oplus{\bf 15}_{1}\oplus{\bf 15}_{2}\oplus\overline{\bf 6}\oplus{\bf 3}\,. \tag{22}\] The Hermitian conjugate is taken for \(P\) as it appears in the final states. The factorizable condition demands that the quark lines of \(P\) originate from \(O_{+}\) exclusively. In other words, all the indices of \((P^{\dagger})_{m}^{l}\) shall contract with those of \({\cal H}({\bf 15})_{k}^{ij}\). Symbolically, it means that the surviving contractions reduce to products of Kronecker deltas and \(F^{i}\) (Eq. (23), displayed diagrammatically in the original), where \(\delta\) is the Kronecker delta, \(F^{i}:={\cal H}({\bf 15})_{k}^{ij}(P^{\dagger})_{j}^{k}\), and the other linear combinations do not contribute to \({\bf B}_{i}\to{\bf B}_{f}P\). It shows that only the \({\bf 3}\) representation in Eq. (22) contributes, reducing the number of free parameters. By identifying the factorizable contribution, we reduce the number of free parameters from 14 to 8 for \({\bf B}_{c}^{A}\to{\bf B}_{n}P\) and arrive at [47] \[A = a_{1}{\cal H}(\overline{\bf 6})_{ij}T^{ik}({\bf B}_{n}^{\dagger})_{k}^{l}(P^{\dagger})_{l}^{j}+a_{2}{\cal H}(\overline{\bf 6})_{ij}T^{ik}(P^{\dagger})_{k}^{l}({\bf B}_{n}^{\dagger})_{l}^{j}+a_{3}{\cal H}(\overline{\bf 6})_{ij}({\bf B}_{n}^{\dagger})_{k}^{i}(P^{\dagger})_{l}^{j}T^{kl} \tag{24}\] \[+a_{6}({\bf B}_{n})_{i}^{j}F^{i}({\bf B}_{c}^{A})_{j}\,,\] where \(T^{ij}\equiv({\bf B}_{c}^{A})_{k}\epsilon^{kij}\) and \(a_{1,2,3,6}\) are free parameters in general. We note that we do not consider \(\eta^{\prime}\) as its mass differs largely from those of the other pseudoscalar mesons. 
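The index gymnastics of Eqs. (19) and (20) are easy to check numerically. The sketch below is an illustration of ours, with placeholder values for \(c_{\pm}\): it fills the nonzero elements of \({\cal H}^{ij}_{kl}\), projects out \({\cal H}(\overline{\bf 6})\) and \({\cal H}({\bf 15})\), and verifies the defining relation of the \(\overline{\bf 6}\) part.

```python
import numpy as np

s_c = 0.23                          # V_us / V_ud, as in the text
c_plus, c_minus = 0.8, 1.5          # placeholder Wilson coefficients
c1, c2 = c_plus + c_minus, c_plus - c_minus

# H[i, j, k, l] = H^{ij}_{kl}; flavor indices 0..3 = (u, d, s, c), Eq. (19)
H = np.zeros((4, 4, 4, 4))
H[0, 2, 1, 3], H[2, 0, 1, 3] = c1, c2                      # H^{13}_{24}, H^{31}_{24}
H[0, 2, 2, 3], H[2, 0, 2, 3] = c1 * s_c, c2 * s_c          # H^{13}_{34}, H^{31}_{34}
H[0, 1, 1, 3], H[1, 0, 1, 3] = -c1 * s_c, -c2 * s_c        # H^{12}_{24}, H^{21}_{24}
H[0, 1, 2, 3], H[1, 0, 2, 3] = -c1 * s_c**2, -c2 * s_c**2  # H^{12}_{34}, H^{21}_{34}

eps = np.zeros((3, 3, 3))           # Levi-Civita tensor on the light flavors
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

Hq = H[:3, :3, :3, 3]               # H^{ij}_{k4} with light upper and lower indices

# Eq. (20): antisymmetric (6-bar) and symmetric (15) projections in i <-> j
H6bar = np.einsum('mij,ijk->km', eps, Hq) / c_minus
H15 = (Hq + np.transpose(Hq, (1, 0, 2))) / (2.0 * c_plus)
assert np.allclose(H15, np.transpose(H15, (1, 0, 2)))  # symmetric upper indices

# defining relation: H(6bar)_{kl} eps^{lij} = (H^{ij}_{k4} - H^{ji}_{k4}) / c_-
lhs = np.einsum('kl,lij->kij', H6bar, eps)
rhs = np.transpose(Hq - np.transpose(Hq, (1, 0, 2)), (2, 0, 1)) / c_minus
assert np.allclose(lhs, rhs)
print(np.round(H6bar, 3))           # CKM hierarchy: entries scale as 1, s_c, s_c^2
```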
On the other hand, the \({\bf B}_{c}^{S}\) and \({\bf B}_{cc}\) decays are parameterized by \[A = a_{1}{\cal H}(\overline{\bf 6})_{ij}({\bf B}_{c}^{S})^{ik}({\bf B}_{n}^{\dagger})_{k}^{l}(P^{\dagger})_{l}^{j}+a_{2}{\cal H}(\overline{\bf 6})_{ij}({\bf B}_{c}^{S})^{ik}(P^{\dagger})_{k}^{l}({\bf B}_{n}^{\dagger})_{l}^{j}+a_{3}{\cal H}(\overline{\bf 6})_{ij}({\bf B}_{n}^{\dagger})_{k}^{i}(P^{\dagger})_{l}^{j}({\bf B}_{c}^{S})^{kl} \tag{25}\] \[+a_{4}{\cal H}(\overline{\bf 6})_{ij}({\bf B}_{n}^{\dagger})_{k}^{l}(P^{\dagger})_{l}^{k}({\bf B}_{c}^{S})^{ij}+a_{6}({\bf B}_{n}^{\dagger})_{k}^{j}F^{i}({\bf B}_{c}^{S})^{kl}\epsilon_{ijl}\,,\] for \({\bf B}_{c}^{S}\to{\bf B}_{n}P\), \[A = a_{1}(P^{\dagger})^{j}_{i}({\bf B}_{c}^{A\dagger})^{i}{\cal H}(\overline{\bf 6})_{jk}({\bf B}_{cc})^{k}+a_{2}(P^{\dagger})^{j}_{i}({\bf B}_{c}^{A\dagger})^{k}{\cal H}(\overline{\bf 6})_{jk}({\bf B}_{cc})^{i} \tag{26}\] \[+a_{6}F^{i}({\bf B}_{cc})^{j}({\bf B}_{c}^{A\dagger})^{k}\epsilon_{ijk}\,,\] for \({\bf B}_{cc}\to{\bf B}_{c}^{A}P\), and \[A = a_{1}(P^{\dagger})^{j}_{i}({\bf B}_{c}^{S\dagger})_{jk}{\cal H}(\overline{\bf 6})_{lm}({\bf B}_{cc})^{l}\epsilon^{ikm}+a_{2}(P^{\dagger})^{l}_{i}({\bf B}_{c}^{S\dagger})_{jk}{\cal H}(\overline{\bf 6})_{lm}({\bf B}_{cc})^{j}\epsilon^{ikm} \tag{27}\] \[+a_{6}F^{i}({\bf B}_{cc})^{j}({\bf B}_{c}^{S\dagger})_{ij}\,,\] for \({\bf B}_{cc}\to{\bf B}_{c}^{S}P\). The \(P\)-wave amplitudes share the same flavor structures with the \(S\)-wave ones and are obtained by \[B=A(a_{1,2,3,4,6}\to b_{1,2,3,4,6})\,. \tag{28}\] Please note that the same symbols are used to denote the parameters in Eqs. (24), (25), (26), and (27). Although the symbols are the same, they do not represent identical values in each equation; we reuse them only because of the limited number of available symbols. The above parameterizations with Eqs. (24), (25), (26), and (27) will be referred to as the general pole (GP) scenario. One of the shortcomings of the GP scenario is that there are too many parameters. As there are few available inputs for \({\bf B}_{c}^{S}\) and \({\bf B}_{cc}\) decays, the GP scenario does not yield concrete predictions except for several direct relations. To overcome this problem, we assume that the intermediate baryons \({\bf B}_{I}\) depicted in FIG. 2 are dominated by the low-lying ones, which will be referred to as the low-lying pole (LP) scenario. It allows us to infer the baryon matrix elements exhibited in \({\bf B}_{c}^{S}\to{\bf B}_{n}P\) and \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}P\) from \({\bf B}_{c}^{A}\to{\bf B}_{n}P\). To this end, the next section is devoted to calculating the factorizable contributions, and the section after that relates the four-quark operator matrix elements in the decays of \({\bf B}_{cc}\) and \({\bf B}_{c}^{A}\) for evaluating the pole diagrams. ## III Factorization contributions The amplitude is decomposed into the factorizable and nonfactorizable parts as \[{\cal M}={\cal M}^{\rm fac}+{\cal M}^{\rm pole}\,, \tag{29}\] followed by \(A=A^{\rm fac}+A^{\rm pole}\) and \(B=B^{\rm fac}+B^{\rm pole}\). The factorizable amplitude reads \[{\cal M}^{\rm fac}=\frac{G_{F}}{\sqrt{2}}\tilde{\cal H}_{kl}^{ij}\langle P|\overline{q}_{i}\gamma_{\mu}(1-\gamma_{5})q^{k}|0\rangle\langle{\bf B}_{f}|\overline{q}_{j}\gamma^{\mu}(1-\gamma_{5})q^{l}|{\bf B}_{i}\rangle\,. 
\tag{30}\] Expressing the baryon matrix element with the \(SU(3)_{F}\) symmetry, we find \[A^{\rm fac} = \frac{G_{F}}{\sqrt{2}}V_{cq}^{*}V_{uq^{\prime}}{\cal C}_{+,0}f_{P}\left(M_{i}-M_{f}\right)F_{V}\] \[= \frac{G_{F}}{\sqrt{2}}f_{P}\left(M_{i}-M_{f}\right)\tilde{\cal H}_{kl}^{ij}(P^{\dagger})_{i}^{k}\left(A_{1}^{\rm fac}{\bf B}^{m[nl]}{\bf B}_{m[nj]}^{\dagger}+A_{2}^{\rm fac}{\bf B}^{m[nl]}{\bf B}_{n[jm]}^{\dagger}\right)\,,\] \[B^{\rm fac} = \frac{G_{F}}{\sqrt{2}}V_{cq}^{*}V_{uq^{\prime}}{\cal C}_{+,0}f_{P}\left(M_{i}+M_{f}\right)G_{V}\,, \tag{31}\] \[= \frac{G_{F}}{\sqrt{2}}f_{P}\left(M_{i}+M_{f}\right)\tilde{\cal H}_{kl}^{ij}(P^{\dagger})_{i}^{k}\left(B_{1}^{\rm fac}{\bf B}^{m[nl]}{\bf B}_{m[nj]}^{\dagger}+B_{2}^{\rm fac}{\bf B}^{m[nl]}{\bf B}_{n[jm]}^{\dagger}\right)\,,\] where \({\cal C}_{+,0}\) are the effective Wilson coefficients with the subscript denoting the charge of \(P\), \(f_{P}\) is the meson decay constant, \(F_{V}\) and \(G_{V}\) are the leading vector and axial-vector form factors, respectively, \(\tilde{\cal H}\) is obtained by substituting \({\cal C}_{+,0}\) for \(c_{1,2}\) in Eq. (19), and \({\bf B}\) and \({\bf B}^{\dagger}\) are the tensors of \({\bf B}_{i}\) and \({\bf B}_{f}\), respectively, given in Eqs. (13) and (16). From Eq. (17), we have \(A_{2}^{\rm fac}/A_{1}^{\rm fac}=1/2\) and \(B_{2}^{\rm fac}/B_{1}^{\rm fac}=5/4\). For the \(c\to s\) and \(c\to u/d\) transitions in \({\bf B}_{c}^{A,S}\) decays, we fix \(A_{1}^{\rm fac}\) and \(B_{1}^{\rm fac}\) with \(\Lambda_{c}\to\Lambda\) and \(\Lambda_{c}\to n\) from LQCD at \(q^{2}=0\)[62; 63; 64] \[(F_{V},G_{V})_{c\to s}=(0.643,0.572)\,,\quad(F_{V},G_{V})_{c\to u/d}=(0.672,0.602)\,, \tag{32}\] and arrive at \[(A_{1}^{\rm fac},B_{1}^{\rm fac})_{c\to s}=(-2.572,-1.525)\,,\quad(A_{1}^{\rm fac},B_{1}^{\rm fac})_{c\to u/d}=(-2.195,-1.311)\,. \tag{33}\] In the limit of the \(SU(3)_{F}\) symmetry, the form factors of \(c\to s\) and \(c\to u/d\) would be numerically the same. Here we see that they deviate by roughly 15%, a typical size of \(SU(3)_{F}\) breaking. The form factors of \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}\) from LQCD are not available yet. Nonetheless, we utilize the approximation that the form factors are independent of the spectator quark flavors, which allows us to infer them from \(\Lambda_{c}^{+}\to\Lambda/n\). We match the form factors of \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}\) to those of \({\bf B}_{c}^{A}\to{\bf B}_{n}\) at the same \(\omega=v_{i}\cdot v_{f}\), with \(v_{i(f)}\) representing the 4-velocity of \({\bf B}_{i(f)}\). By using the form factors provided in Refs. [62; 63], we arrive at \[(A_{1}^{\rm fac},B_{1}^{\rm fac})_{c\to s}=(-3.615,-1.939)\,,\quad(A_{1}^{\rm fac},B_{1}^{\rm fac})_{c\to d}=(-3.518,-1.813)\,, \tag{34}\] for \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}\). The main difference between Eqs. (33) and (34) arises from the \(\omega\) dependencies of the form factors. Specifically, the values of \((\omega-1)\) for the transitions \(\Xi_{cc}^{++}\to\Xi_{c}^{+}\pi^{+}\) and \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\) are 0.074 and 0.269, respectively, which deviate significantly from each other. In this work, we fix \({\cal C}_{0}=-0.36\pm 0.04\) by \({\cal B}_{exp}(\Lambda_{c}^{+}\to p\phi)\) from the experiment, as shown in Appendix A, while \({\cal C}_{+}\) is treated as a free parameter in general. 
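As a worked example of the first lines of Eq. (31), the sketch below evaluates \(A^{\rm fac}\) and \(B^{\rm fac}\) for \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\) with the fitted \({\cal C}_{+}\) and the \(c\to s\) form factors of Eq. (32). The CKM elements and \(f_{\pi}\) are standard inputs that we supply, so small differences with respect to Table 2 reflect input conventions.

```python
import math

V_cs, V_ud = 0.973, 0.974        # CKM inputs (assumed standard values)
C_plus = 0.469                   # fitted effective Wilson coefficient
f_pi = 0.130                     # pion decay constant in GeV (assumed input)
M_i, M_f = 2.286, 1.116          # Lambda_c+ and Lambda masses in GeV
F_V, G_V = 0.643, 0.572          # c -> s form factors, Eq. (32)

# first lines of Eq. (31), quoted in units of G_F GeV^2
A_fac = V_cs * V_ud * C_plus * f_pi * (M_i - M_f) * F_V / math.sqrt(2)
B_fac = V_cs * V_ud * C_plus * f_pi * (M_i + M_f) * G_V / math.sqrt(2)
print(f"A_fac = {A_fac * 1e2:.2f}e-2 G_F GeV^2")   # ~3.07, cf. 3.16 in Table 2
print(f"B_fac = {B_fac * 1e2:.2f}e-2 G_F GeV^2")   # ~7.95, cf. 8.18 in Table 2
```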
## IV Pole contributions The amplitude of the \(s\)-channel can be illustratively represented in the form \[\propto g_{{\bf B}_{I}{\bf B}_{f}P}\frac{1}{M_{i}-M_{I}}\langle{\bf B}_{I}|{\cal H}_{\rm eff}|{\bf B}_{i}\rangle\,, \tag{35}\] where \({\bf B}_{I}\) and \(M_{I}\) denote the intermediate baryon and its corresponding mass, respectively. The coupling of \({\bf B}_{I}-{\bf B}_{f}-P\) is represented by \(g_{{\bf B}_{I}{\bf B}_{f}P}\). The \(u\)-channel amplitude can be parameterized in a manner akin to the above expression. In this work, the baryon-meson couplings \(g_{{\bf B}{\bf B}^{(*)}P}\) are extracted through the generalized Goldberger-Treiman relations \[g_{{\bf B}^{\prime}{\bf B}P}=\frac{\sqrt{2}}{f_{P}}(M^{\prime}+M)g_{{\bf B}^{\prime}{\bf B}}^{P}\,,\;\;\;\;g_{{\bf B}^{*}{\bf B}P}=\frac{\sqrt{2}}{f_{P}}(M^{*}-M)g_{{\bf B}^{*}{\bf B}}^{P}\,, \tag{36}\] where \[(P^{\dagger})^{i}_{j}\langle{\bf B}^{\prime}|\overline{q}_{i}\gamma_{\mu}\gamma_{5}q^{j}|{\bf B}\rangle = \overline{u}_{{\bf B}^{\prime}}\left(g_{{\bf B}^{\prime}{\bf B}}^{P}\gamma_{\mu}-ig_{2}\sigma_{\mu\nu}q^{\nu}+g_{3}q_{\mu}\right)\gamma_{5}u_{{\bf B}}\,,\] \[(P^{\dagger})^{i}_{j}\langle{\bf B}^{*}|\overline{q}_{i}\gamma_{\mu}\gamma_{5}q^{j}|{\bf B}\rangle = \overline{u}_{{\bf B}^{*}}\left(g_{{\bf B}^{*}{\bf B}}^{P}\gamma_{\mu}-ig_{2}\sigma_{\mu\nu}q^{\nu}+g_{3}q_{\mu}\right)u_{{\bf B}}\,. \tag{37}\] The symbols \({\bf B}^{\prime}\) and \({\bf B}^{*}\) denote the intermediate baryons with spin-parity \(\frac{1}{2}^{+}\) and \(\frac{1}{2}^{-}\), respectively. The corresponding masses of \({\bf B}^{(\prime)}\) and \({\bf B}^{*}\) are represented by \(M^{(\prime)}\) and \(M^{*}\). The Goldberger-Treiman relations are derived by operating \(q^{\mu}\) on both sides of Eq. (37) and imposing the equation of motion. The actual values of \(g_{2}\) are irrelevant to this work, and \(g_{3}\) is mainly contributed by the baryon-meson couplings. The baryon matrix elements of the effective Hamiltonian with \(\Delta c=-1\) are decomposed as \[\langle{\bf B}^{\prime}|{\cal H}_{eff}|{\bf B}\rangle=\overline{u}_{{\bf B}^{\prime}}\left(a_{{\bf B}^{\prime}{\bf B}}-b_{{\bf B}^{\prime}{\bf B}}\gamma_{5}\right)u_{{\bf B}}\,,\] \[\langle{\bf B}^{*}|{\cal H}_{eff}|{\bf B}\rangle=\overline{u}_{{\bf B}^{*}}b_{{\bf B}^{*}{\bf B}}u_{{\bf B}}\,,\quad\langle{\bf B}|{\cal H}_{eff}|{\bf B}^{*}\rangle=\overline{u}_{{\bf B}}b_{{\bf B}{\bf B}^{*}}u_{{\bf B}^{*}}\,. \tag{38}\] In the following, \(b_{{\bf B}^{\prime}{\bf B}}\) will be dropped as it is tiny [65]. Collecting Eqs. 
(35), (36) and (38), we are led to \[A^{\rm pole}\left({\bf B}_{c}\rightarrow{\bf B}_{n}P\right)=\frac{\sqrt{2}}{f_{P}}\sum_{{\bf B}_{n,c}^{*}}\left(R_{c}^{A_{s}}g_{{\bf B}_{n}{\bf B}_{n}^{*}}^{P}b_{{\bf B}_{n}^{*}{\bf B}_{c}}+R_{c}^{A_{u}}b_{{\bf B}_{n}{\bf B}_{c}^{*}}g_{{\bf B}_{c}^{*}{\bf B}_{c}}^{P}\right)\,,\] \[B^{\rm pole}\left({\bf B}_{c}\rightarrow{\bf B}_{n}P\right)=\frac{\sqrt{2}}{f_{P}}\sum_{{\bf B}_{n,c}^{\prime}}\left(R_{c}^{B_{s}}g_{{\bf B}_{n}{\bf B}_{n}^{\prime}}^{P}a_{{\bf B}_{n}^{\prime}{\bf B}_{c}}+R_{c}^{B_{u}}a_{{\bf B}_{n}{\bf B}_{c}^{\prime}}g_{{\bf B}_{c}^{\prime}{\bf B}_{c}}^{P}\right)\,, \tag{39}\] and \[A^{\rm pole}\left({\bf B}_{cc}\rightarrow{\bf B}_{c}P\right)=\frac{\sqrt{2}}{f_{P}}\sum_{{\bf B}_{c,cc}^{*}}\left(R_{cc}^{A_{s}}g_{{\bf B}_{c}{\bf B}_{c}^{*}}^{P}b_{{\bf B}_{c}^{*}{\bf B}_{cc}}+R_{cc}^{A_{u}}b_{{\bf B}_{c}{\bf B}_{cc}^{*}}g_{{\bf B}_{cc}^{*}{\bf B}_{cc}}^{P}\right)\,,\] \[B^{\rm pole}\left({\bf B}_{cc}\rightarrow{\bf B}_{c}P\right)=\frac{\sqrt{2}}{f_{P}}\sum_{{\bf B}_{c,cc}^{\prime}}\left(R_{cc}^{B_{s}}g_{{\bf B}_{c}{\bf B}_{c}^{\prime}}^{P}a_{{\bf B}_{c}^{\prime}{\bf B}_{cc}}+R_{cc}^{B_{u}}a_{{\bf B}_{c}{\bf B}_{cc}^{\prime}}g_{{\bf B}_{cc}^{\prime}{\bf B}_{cc}}^{P}\right)\,, \tag{40}\] where the mass ratios are defined by \[R_{c}^{A_{s}}=\frac{M_{n}-M_{n^{*}}}{M_{c}-M_{n^{*}}}\,,\quad R_{c}^{A_{u}}=\frac{M_{c}-M_{c^{*}}}{M_{c^{*}}-M_{n}}\,,\] \[R_{c}^{B_{s}}=\frac{M_{n}+M_{n^{\prime}}}{M_{c}-M_{n^{\prime}}}\,,\quad R_{c}^{B_{u}}=\frac{M_{c}+M_{c^{\prime}}}{M_{n}-M_{c^{\prime}}}\,, \tag{41}\] and \[R_{cc}^{A_{s}}=\frac{M_{c}-M_{c^{*}}}{M_{cc}-M_{c^{*}}}\,,\quad R_{cc}^{A_{u}}=\frac{M_{cc}-M_{cc^{*}}}{M_{cc^{*}}-M_{c}}\,,\] \[R_{cc}^{B_{s}}=\frac{M_{c}+M_{c^{\prime}}}{M_{cc}-M_{c^{\prime}}}\,,\quad R_{cc}^{B_{u}}=\frac{M_{cc}+M_{cc^{\prime}}}{M_{c}-M_{cc^{\prime}}}\,. \tag{42}\] Here, \(M_{n,c,cc^{(\prime,*)}}\) represent the masses of \({\bf B}_{n,c,cc}^{(\prime,*)}\), respectively. At present, there is not enough data to accurately fit the unknown hadronic parameters for \(\Omega_{c}^{0}\) and \({\bf B}_{cc}\) decays. In the subsequent analysis, we will utilize two essential approximations, as delineated in the Introduction: * The intermediate states \({\bf B}_{I}\) are exclusively confined to the low-lying \({\bf 20}\) multiplets of the \(SU(4)_{F}\) group. Here, \({\bf 20}={\bf 8}\oplus\overline{\bf 3}\oplus{\bf 6}\oplus{\bf 3}\) in the \(SU(3)_{F}\) group. * The baryon matrix elements are independent of the spectator quarks, implying that the amplitudes shown in Fig. 3 do not depend on \(q^{(\prime)}\). The reliability of our predictions hinges on the validity of these two approximations. The first approximation states that \({\bf B}^{\prime}\in\{{\bf B}_{c}^{A,S},{\bf B}_{n},{\bf B}_{cc}\}\) and that \({\bf B}^{*}\) belongs to the \({\bf 20}\) representation as well. On the other hand, we have already used the second approximation to extract the form factors of \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}\) in Eq. (34), which are essentially two-quark operator baryon matrix elements. 
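The mass ratios in Eqs. (41) and (42) are purely kinematic. A minimal sketch follows, with illustrative masses of our own choosing, since the ratios are channel dependent and Eq. (46) below quotes the averaged values used in the fit.

```python
def mass_ratios(M_c, M_n, M_n_star, M_c_star, M_n_prime, M_c_prime):
    """Pole mass ratios of Eq. (41) for B_c -> B_n P (all masses in GeV)."""
    return {
        "R_c^{A_s}": (M_n - M_n_star) / (M_c - M_n_star),
        "R_c^{A_u}": (M_c - M_c_star) / (M_c_star - M_n),
        "R_c^{B_s}": (M_n + M_n_prime) / (M_c - M_n_prime),
        "R_c^{B_u}": (M_c + M_c_prime) / (M_n - M_c_prime),
    }

# illustrative channel: Lambda_c+ with a Sigma in the final state, using the
# averaged 1/2^- masses quoted in the following paragraphs and a Sigma_c pole
print(mass_ratios(M_c=2.286, M_n=1.193, M_n_star=1.643,
                  M_c_star=2.700, M_n_prime=1.193, M_c_prime=2.455))
```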
For the four-quark operators, the second approximation facilitates the parameterization \[a_{{\bf B}^{\prime}{\bf B}}=\tilde{a}{\bf B}^{i[jk]}{\cal H}_{jk}^{lm}{\bf B}_{ilm}^{\prime\dagger}\,,\hskip 14.226378ptb_{{\bf B}^{*}{\bf B}}=\tilde{b}{\bf B}^{i[jk]}{\cal H}_{jk}^{lm}{\bf B}_{ilm}^{*\dagger}\,,\hskip 8.535827ptb_{{\bf B}{\bf B}^{*}}=\tilde{b}^{\prime}{\bf B}^{*i[jk]}{\cal H}_{jk}^{lm}{\bf B}_{ilm}^{\dagger}\,, \tag{43}\] and \[g_{{\bf B}^{\prime}{\bf B}}^{P}=g_{1}{\bf B}^{i[jk]}{\bf B}_{i[jl]}^{\prime\dagger}(P^{\dagger})_{k}^{l}+g_{2}{\bf B}^{i[jk]}{\bf B}_{j[li]}^{\prime\dagger}(P^{\dagger})_{k}^{l}\,,\] \[g_{{\bf B}^{*}{\bf B}}^{P}=g_{{\bf B}^{*}{\bf B}}^{\overline{P}}=g_{1}^{\prime}{\bf B}^{i[jk]}{\bf B}_{i[jl]}^{*\dagger}(P^{\dagger})_{k}^{l}+g_{2}^{\prime}{\bf B}^{i[jk]}{\bf B}_{j[li]}^{*\dagger}(P^{\dagger})_{k}^{l}\,. \tag{44}\] Furthermore, by implementing Eq. (17), we obtain the ratio \(g_{2}/g_{1}=5/4\), leading to the vanishing of \(g_{{\bf B}_{c}^{A}{\bf B}_{c}^{A}}^{P}\)[66]. Incorporating Eqs. (43) and (44) into Eqs. (39) and (40) and summing over \({\bf B}_{I}\), we eliminate the tensors of the intermediate states by employing the completeness relation [3] \[\sum_{{\bf B}_{c}^{A}}({\bf B}_{c}^{A})_{i}({\bf B}_{c}^{A\dagger})^{j}=\delta_{i}^{j}\,,\hskip 28.452756pt\sum_{{\bf B}_{n}}({\bf B}_{n})_{j}^{i}({\bf B}_{n}^{\dagger})_{l}^{k}=\delta_{l}^{i}\delta_{j}^{k}-\frac{1}{3}\delta_{j}^{i}\delta_{l}^{k}\,,\] \[\sum_{{\bf B}_{cc}}({\bf B}_{cc})^{i}({\bf B}_{cc}^{\dagger})_{j}=\delta_{j}^{i}\,,\hskip 28.452756pt\sum_{{\bf B}_{c}^{S}}({\bf B}_{c}^{S})^{ij}({\bf B}_{c}^{S\dagger})_{kl}=\frac{1}{2}\left(\delta_{k}^{i}\delta_{l}^{j}+\delta_{k}^{j}\delta_{l}^{i}\right)\,, \tag{45}\] where we have taken the baryons with spin-parity \(\frac{1}{2}^{+}\) as examples. The same relations hold for \({\bf B}^{*}_{n,c,cc}\) as they belong to the same \(SU(3)_{F}\) representations, which allows us to consider the contributions of negative-parity baryons without specifying them. To calculate \(R^{B_{s,u}}_{c,cc}\), the masses of \({\bf B}^{(\prime)}\) are readily available from experimental measurements [1]. However, the masses of \({\bf B}^{*}\) are not fully available yet. For the charmless octet baryons, we consider the states \(N(1535)\) and \(\Sigma(1750)\), taking the average mass value of \(M_{n^{*}}=1643\,{\rm MeV}\). For the charmed baryons with negative parity \({\bf B}^{*}_{c}\), we identify the candidates as \(\Lambda^{+}_{c}(2595)\), \(\Xi^{+}_{c}(2790)\), and \(\Sigma^{+}_{c}(2792)\), from which we calculate the average masses \(M_{c^{*}}=2700\,{\rm MeV}\) and \(M_{c^{*}}=2900\,{\rm MeV}\) for the \(\overline{\bf 3}\) and \({\bf 6}\) representations, respectively. In the case of the doubly charmed baryons with \(J=\frac{1}{2}^{-}\), we adopt the value \(M_{cc^{*}}=3932\,{\rm MeV}\)[67]. Summarizing, the mass ratios related to the \(J=\frac{1}{2}^{-}\) baryons utilized in this work are expressed as: \[(R^{A_{u}}_{c},R^{A_{s}}_{c}(\overline{\bf 3}),R^{A_{s}}_{c}({\bf 6}))=(-0.671,-0.207,-0.298),\] \[(R^{A_{u}}_{cc}(\overline{\bf 3}),R^{A_{u}}_{cc}({\bf 6}),R^{A_{s}}_{cc})=(-0.350,-0.725,-0.201)\,, \tag{46}\] where the parentheses denote the representation of \({\bf B}^{*}_{c}\). We note that, focusing solely on \({\bf B}^{A}_{c}\) decays, the uncertainties in \(R^{A_{s,u}}_{c,cc}\) would be incorporated into the baryon matrix elements of \(g^{\prime}_{1,2}\) and \(\tilde{b}^{(\prime)}\). Consequently, the uncertainties in Eq. 
(46) would only influence the predictions for the \(\Omega^{0}_{c}\) and \({\bf B}_{cc}\) decays. Figure 3: The topological diagrams for the baryon matrix elements of the two-quark and four-quark operators. We use the approximation that their magnitudes do not depend on \(q^{(\prime)}\). ## V Numerical results The numerical results of this study are organized into several subsections. In Subsec. A, we recall the experimental data of the \({\bf B}_{c}^{A}\) decays, and the free parameters in both the LP and GP scenarios are extracted accordingly. Although the GP scenario provides more reliable predictions for \({\bf B}_{c}^{A}\) decays, the LP scenario has broader applications, _i.e._, its parameters can be applied to both \(\Omega_{c}^{0}\) and \({\bf B}_{cc}\) decays. Subsecs. B and C are devoted to the study of \(\Omega_{c}^{0}\) and \({\bf B}_{cc}\) decays in the LP scenario, respectively. ### Results of \({\bf B}_{c}^{A}\) decays Since \(g_{{\bf B}^{\prime}{\bf B}}^{P}\) (\(g_{{\bf B}^{*}{\bf B}}^{P}\)) is always accompanied by \(a_{{\bf B}^{\prime}{\bf B}}\) (\(b_{{\bf B}^{*}{\bf B}}\)) (see Eqs. (39) and (40)), we absorb \(g_{1}\) (\(g_{1}^{\prime}\)) into \(\tilde{a}\) (\(\tilde{b}^{(\prime)}\)) so that \(g_{1}^{(\prime)}=1\). For the nonfactorizable amplitudes, there remain \((\tilde{a})\) and \((\tilde{b},\tilde{b}^{\prime},g_{2}^{\prime})\) to be fitted in the \(P\)- and \(S\)-waves, respectively. Compared to the GP scenario, the parameters of the nonfactorizable amplitudes in the \(P\)-waves have been reduced from 3 to 1. This is because we have related \(g_{{\bf B}_{n}{\bf B}_{n}^{\prime}}^{P}\) with \(g_{{\bf B}_{c}{\bf B}_{c}^{\prime}}^{P}\) in Eq. (44) and demanded \(4g_{2}=5g_{1}\). On the other hand, due to a lack of knowledge of parity-odd baryons, we impose no further constraints on the \(S\)-waves in comparison to the GP scenario. The experimental data of the \({\bf B}_{c}^{A}\) decays to date [1; 6; 12] are collected in Table 1. By adopting a minimal \(\chi^{2}\) fit, we find \[\left(\tilde{a},\tilde{b},\tilde{b}^{\prime},g_{2}^{\prime},{\cal C}_{+}\right)=\left(2.06\pm 0.25,12.51\pm 1.03,-4.01\pm 1.13,0.148\pm 0.075,0.467\pm 0.034\right), \tag{47}\] where \((\tilde{a},\tilde{b},\tilde{b}^{\prime})\) are in units of \(10^{-3}G_{F}\)GeV\({}^{3}\). In the limit of the \(SU(4)_{F}\) symmetry, we would expect \(\tilde{b}=\tilde{b}^{\prime}\), but we observe a significant \(SU(4)_{F}\) breaking as they differ both in sign and magnitude. It indicates that the charm quark and the light quarks behave very differently in \({\bf B}^{*}\). We note that \({\cal C}_{+}\) is less than half the expected value of \({\cal C}_{+}\approx 1.2\) from the effective color number approach, discussed in Appendix A. For comparison, we also update the results of the GP scenario. The free parameters in Eq. (24) are found to be \[(a_{1},a_{2},a_{3},a_{6})=(3.25\pm 0.11,1.60\pm 0.07,0.58\pm 0.12,1.74\pm 0.22) \tag{48}\] \[(b_{1},b_{2},b_{3},b_{6})=(11.66\pm 0.19,-4.96\pm 0.19,2.87\pm 0.27,-0.03\pm 0.36)\,,\] in units of \(10^{-2}G_{F}\)GeV\({}^{2}\). Compared to the previous values2, the parameters change significantly. This hints that the results should not be fully trusted. Since the \(SU(3)_{F}\) symmetry is not exact and too many parameters are required, it is reasonable that the best-fit solutions are not stable as the experimental data are updated. Footnote 2: With the experimental data up to May 16, 2019, Ref. 
[47] reported \((a_{1},a_{2},a_{3},a_{6})=(4.34\pm 0.50,-1.33\pm 0.32,1.25\pm 0.36,-0.26\pm 0.64)\) and \((b_{1},b_{2},b_{3},b_{6})=(9.20\pm 2.09,8.03\pm 1.19,-1.42\pm 1.61,4.05\pm 2.48)\) instead. In regard to the results in Table 1, several comments are in order: * For \(\alpha(\Lambda_{c}^{+}\to pK_{S})\), \({\cal B}(\Xi_{c}^{0}\to\Lambda K_{S})\) and especially \({\cal B}(\Xi_{c}^{0}\to\Xi^{-}\pi^{+})\), good agreement is found between the two scenarios, but both suggest values very different from the current experimental data. It indicates that the short-distance contributions may play a dominant role in these decays. Experimental revisits of these channels would be welcome. * The results of \(\Lambda_{c}^{+}\to\Xi^{0}K^{+}\), \(\Lambda_{c}^{+}\to n\pi^{+}\) and \(\Xi_{c}^{0}\to\Sigma^{+}K^{-}\) deviate largely between the two scenarios. It implies that the excited states which do not belong to the \({\bf 20}\) \(SU(4)_{F}\) multiplets may play an important role in \({\bf B}_{I}\). * The \(P\)-wave amplitude of \(\Lambda_{c}^{+}\to\Xi^{0}K^{+}\) vanishes naturally in the LP scenario [3], resulting in \(\alpha_{LP}=0\). Nonetheless, \(\alpha_{GP}=0.95\pm 0.02\) indicates otherwise. * In contrast to the \(P\)-wave, the \(S\)-wave does not vanish in general for \(\Lambda_{c}^{+}\to\Xi^{0}K^{+}\) in the LP scenario. However, the current experimental data prefer a vanishing \(S\)-wave as well, leading to a contradiction with \({\cal B}_{exp}(\Lambda_{c}^{+}\to\Xi^{0}K^{+})\). \begin{table} \begin{tabular}{l c c|c c|c c} \hline \hline Channels & \({\cal B}_{exp}(\%)\) & \(\alpha_{exp}\) & \({\cal B}_{LP}(\%)\) & \(\alpha_{LP}\) & \({\cal B}_{GP}(\%)\) & \(\alpha_{GP}\) \\ \hline \(\Lambda_{c}^{+}\to pK_{S}^{0}\) & \(1.59(8)\) & \(0.18(45)\) & \(1.44(7)\) & \(-0.68(1)\) & \(1.55(6)\) & \(-0.81(5)\) \\ \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\) & \(1.30(6)\) & \(-0.755(6)\) & \(0.96(16)\) & \(-0.75(1)\) & \(1.32(5)\) & \(-0.75(1)\) \\ \(\Lambda_{c}^{+}\to\Sigma^{0}\pi^{+}\) & \(1.27(6)\) & \(-0.466(18)\) & \(1.14(14)\) & \(-0.45(4)\) & \(1.25(5)\) & \(-0.47(1)\) \\ \(\Lambda_{c}^{+}\to\Sigma^{+}\pi^{0}\) & \(1.25(10)\) & \(-0.48(3)\) & \(1.14(14)\) & \(-0.45(4)\) & \(1.25(5)\) & \(-0.47(1)\) \\ \(\Lambda_{c}^{+}\to\Xi^{0}K^{+}\) & \(0.55(7)\) & & \(0.02(2)\) & \(-0.02(0)\) & \(0.41(3)\) & \(0.95(2)\) \\ \(\Lambda_{c}^{+}\to\Lambda K^{+}\) & \(0.064(3)\) & \(-0.585(52)\) & \(0.072(7)\) & \(-0.76(4)\) & \(0.065(3)\) & \(-0.56(4)\) \\ \(\Lambda_{c}^{+}\to\Sigma^{0}K^{+}\) & \(0.0382(25)\) & \(-0.54(20)\) & \(0.0284(35)\) & \(-0.41(5)\) & \(0.0393(16)\) & \(-1.00(0)\) \\ \(\Lambda_{c}^{+}\to n\pi^{+}\) & \(0.066(13)\) & & \(0.008(5)\) & \(-0.87(11)\) & \(0.067(3)\) & \(0.53(5)\) \\ \(\Lambda_{c}^{+}\to\Sigma^{+}K_{S}^{0}\) & \(0.048(14)\) & & \(0.028(3)\) & \(-0.41(5)\) & \(0.039(2)\) & \(-1.00(0)\) \\ \(\Lambda_{c}^{+}\to p\pi^{0}\) & \(<8\times 10^{-3}\) & & \(0.01(1)\) & \(-0.77(24)\) & \(0.01(0)\) & \(0.92(8)\) \\ \({}^{a}\Lambda_{c}^{+}\to p\eta\) & \(0.158(12)\) & & \(0.142(8)\) & \(-0.70(1)\) & \(0.150(8)\) & \(-0.45(10)\) \\ \({}^{a}\Lambda_{c}^{+}\to\Sigma^{+}\eta\) & \(0.312(44)\) & & \(0.13(3)\) & \(-0.49(10)\) & \(0.35(2)\) & \(-0.47(5)\) \\ \hline \(\Xi_{c}^{+}\to\Xi^{0}\pi^{+}\) & \(1.6(8)\) & & \(0.87(18)\) & \(-0.88(7)\) & \(0.87(8)\) & \(-0.88(4)\) \\ \(\Xi_{c}^{0}\to\Lambda K_{S}^{0}\) & \(0.32(7)\) & & \(0.54(3)\) & \(-0.61(2)\) & \(0.68(2)\) & \(-0.69(4)\) \\ \(\Xi_{c}^{0}\to\Xi^{-}\pi^{+}\) & \(1.43(32)\) & \(-0.64(5)\) & \(2.98(29)\) & \(-0.64(2)\) & \(2.98(8)\) & \(-0.99(0)\) \\ 
\(\Xi_{c}^{0}\to\Xi^{-}K^{+}\) & \(0.039(12)\) & & \(0.135(14)\) & \(-0.69(1)\) & \(0.131(4)\) & \(-0.97(0)\) \\ \(\Xi_{c}^{0}\to\Sigma^{0}K_{S}^{0}\) & \(0.054(16)\) & & \(0.057(13)\) & \(-0.91(5)\) & \(0.053(16)\) & \(0.59(13)\) \\ \(\Xi_{c}^{0}\to\Sigma^{+}K^{-}\) & \(0.18(4)\) & & \(0.01(2)\) & \(-0.27(11)\) & \(0.48(3)\) & \(1.00(0)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Results of the low-lying and general pole scenarios, denoted with \(LP\) and \(GP\) in the subscripts, where the parameters are extracted from the current experimental data of \({\cal B}_{exp}\) and \(\alpha_{exp}\) collected in the first column [1; 6; 12]. Here, the numbers in the parentheses are the uncertainties counting backward in digits, for example, \(1.59(8)=1.59\pm 0.08\). * Continuing the above comment, we see that the LP scenario also fails to explain \({\cal B}_{exp}(\Lambda_{c}^{+}\to n\pi^{+})\), but \({\cal B}_{GP}(\Lambda_{c}^{+}\to\Xi^{0}K^{+})\) and \({\cal B}_{GP}(\Lambda_{c}^{+}\to n\pi^{+})\) are consistent with the experimental data. * The ratio \({\cal R}_{\pi/K}={\cal B}(\Xi_{c}^{0}\to\Xi^{-}K^{+})/{\cal B}(\Xi_{c}^{0}\to\Xi^{-}\pi^{+})\) is fixed in the exact \(SU(3)_{F}\) symmetry. From the GP and LP scenarios, we find \({\cal R}_{\pi/K}=4.5\%\) and \({\cal R}_{\pi/K}=4.4\%\), respectively, which both contradict the experimental value of \((2.75\pm 0.51\pm 0.25)\%\) at Belle [9]. * We do not include \({\cal B}_{exp}(\Lambda_{c}^{+}\to p\eta)\) and \({\cal B}_{exp}(\Lambda_{c}^{+}\to\Sigma^{+}\eta)\) in the global fit as we do not consider the \(SU(3)_{F}\) singlet in \(P\). The results of this work are obtained by assuming the mixing between \(\eta_{0}\) and \(\eta_{8}\) is absent. Surprisingly, the numerical results turn out to be compatible with the current experimental data. It is insightful to compare the LP scenario with Ref. [35], which computes the \(S\)-wave amplitudes by the soft meson approximation. Comparisons for several chosen channels are collected in Table 2. The factorizable amplitudes with neutral \(P\) agree well as they are fixed by \({\cal B}_{exp}(\Lambda_{c}^{+}\to p\phi)\). However, for \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\) our \(A^{\rm fac}\) and \(B^{\rm fac}\) are roughly half those of Ref. [35] as we adopt a much smaller \({\cal C}_{+}\), and we find a sizable \(A^{\rm pole}\) in contrast to \(A^{\rm pole}=0\) at the soft meson limit. One possible explanation to reconcile the two approaches is that a sizable proportion from excited intermediate baryons is reabsorbed into \({\cal C}_{+}\), leading to a smaller value of \({\cal C}_{+}=0.469\) against the naive expectation of \({\cal C}_{+}\approx 1\). We see that although our sizes of the \(S\)- and \(P\)-wave amplitudes differ from those of Ref. [35], the signs are consistent for most of the cases. We point out that good agreement with Ref. [35] is found in \(\Lambda_{c}^{+}\to p\pi^{0}\) and \(\Lambda_{c}^{+}\to n\pi^{+}\), where large destructive interference between factorizable and pole amplitudes occurs. It indicates that the current algebra approach with the soft meson limit is a good approximation for describing the low-lying poles. However, it shall be noted that the LP scenario and Ref. [35] both obtain a much smaller \({\cal B}(\Lambda_{c}^{+}\to n\pi^{+})\) compared to the experiments3 [8]. 
The numerical results of the \({\bf B}_{c}^{A}\) decay channels, for which there are no experimental references yet, are collected in Appendix B for use in future experiments as a basis for verification. \begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Channels} & \multicolumn{4}{c|}{LP scenario} & \multicolumn{4}{c}{Current algebra [35]} \\ & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) \\ \hline \(\Lambda_{c}^{+}\to\Sigma^{+}\pi^{0}\) & 0 & \(-5.82\) & 0 & \(-4.47\) & 0 & \(-7.68\) & 0 & \(-11.34\) \\ \(\Lambda_{c}^{+}\to\Sigma^{+}\eta\) & 0 & 2.16 & 0 & 2.04 & 0 & 3.10 & 0 & 15.54 \\ \(\Lambda_{c}^{+}\to\Sigma^{0}\pi^{+}\) & 0 & 5.81 & 0 & 4.44 & 0 & 7.68 & 0 & 11.38 \\ \(\Lambda_{c}^{+}\to\Xi^{0}K^{+}\) & 0 & \(-0.79\) & 0 & \(-0.04\) & 0 & \(-4.48\) & 0 & 12.10 \\ \(\Lambda_{c}^{+}\to p\bar{K}^{0}\) & 3.91 & 5.31 & 8.38 & 0.74 & 3.45 & 4.48 & 6.98 & 2.06 \\ \(\Lambda_{c}^{+}\to\Lambda\pi^{+}\) & 3.16 & 1.84 & 8.18 & \(-1.60\) & 5.34 & 0 & 14.11 & \(-3.60\) \\ \(\Lambda_{c}^{+}\to p\pi^{0}\) & 0.53 & \(-0.30\) & 1.14 & \(-0.88\) & 0.41 & \(-0.81\) & 0.87 & \(-2.07\) \\ \(\Lambda_{c}^{+}\to n\pi^{+}\) & 0.87 & \(-0.43\) & 1.88 & \(-1.24\) & 1.64 & \(-1.15\) & 3.45 & \(-2.93\) \\ \hline \(\Xi_{c}^{+}\to\Sigma^{+}\bar{K}^{0}\) & 3.70 & \(-0.76\) & 9.51 & \(-4.46\) & 2.98 & \(-4.48\) & 9.95 & \(-12.28\) \\ \(\Xi_{c}^{+}\to\Xi^{0}\pi^{+}\) & \(-3.81\) & 0.92 & \(-11.13\) & 5.49 & \(-7.41\) & 5.36 & \(-28.07\) & 14.03 \\ \hline \(\Xi_{c}^{0}\to\Sigma^{+}K^{-}\) & 0 & 0.79 & 0 & 0.33 & 0 & 4.42 & 0 & \(-12.09\) \\ \(\Xi_{c}^{0}\to\Sigma^{0}\bar{K}^{0}\) & 2.62 & \(-1.09\) & 6.73 & \(-3.39\) & 2.11 & \(-3.12\) & 7.05 & \(-9.39\) \\ \(\Xi_{c}^{0}\to\Xi^{0}\pi^{0}\) & 0 & 5.15 & 0 & 4.62 & 0 & 7.58 & 0 & 11.79 \\ \(\Xi_{c}^{0}\to\Xi^{0}\eta\) & 0 & \(-3.12\) & 0 & \(-2.41\) & 0 & 10.80 & 0 & \(-6.17\) \\ \(\Xi_{c}^{0}\to\Xi^{-}\pi^{+}\) & \(-3.80\) & \(-6.37\) & \(-11.16\) & \(-1.04\) & \(-7.42\) & \(-5.36\) & \(-28.24\) & \(-2.65\) \\ \(\Xi_{c}^{0}\to\Lambda\bar{K}^{0}\) & 1.60 & 4.97 & 3.80 & 2.45 & 1.11 & 5.41 & 3.66 & 6.87 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison between the LP scenario and the current algebra approach [35], where \(A\) and \(B\) are in units of \(10^{-2}G_{F}\)GeV\({}^{2}\). ### Results of \(\Omega_{c}^{0}\) decays Lacking experimental input, the GP scenario is not available for \(\Omega_{c}^{0}\) decays. Based on the LP scenario, the predictions of \(\Omega_{c}^{0}\to{\bf B}_{n}P\) are collected in Table 3, where the lifetime of \(\Omega_{c}^{0}\) is taken to be \((273\pm 12)\) fs [1]. It is interesting to see that \({\cal B}(\Omega_{c}^{0}\to\Xi^{0}K_{S}^{0})\) and \({\cal B}(\Omega_{c}^{0}\to\Xi^{0}K_{L}^{0})\) deviate significantly, induced by the interference between the CF and DCS amplitudes. To date, the measurements of the \(\Omega_{c}^{0}\) decay ratios have been performed relative to \(\Omega_{c}^{0}\to\Omega^{-}\pi^{+}\). Fortunately, \(\Omega_{c}^{0}\to\Omega^{-}\pi^{+}\) does not receive \(W\)-exchange contributions and is color-enhanced. 
The branching fraction is calculated by \[\Gamma=\frac{p_{f}}{16\pi M_{\Omega_{c}}^{2}}\left(|H_{+}^{\rm fac}|^{2}+|H_{-}^{\rm fac}|^{2}\right)\,, \tag{49}\] where \(H_{+}^{\rm fac}\) and \(H_{-}^{\rm fac}\) are the factorizable helicity amplitudes defined as \[H_{\pm}=\frac{G_{F}}{\sqrt{2}}V_{cs}^{*}V_{ud}{\cal C}_{+}^{\prime}f_{\pi}q^{\mu}\big\langle\Omega^{-};\lambda=\pm\frac{1}{2}\big|\overline{s}\gamma_{\mu}(1-\gamma_{5})c\big|\Omega_{c};J_{z}=\pm\frac{1}{2}\big\rangle\,, \tag{50}\] \(q^{\mu}=(q^{0},0,0,-q^{3})\) is the four-momentum of the pion, \(\lambda\) and \(J_{z}\) are the helicity and angular momentum of \(\Omega^{-}\) and \(\Omega_{c}^{0}\), respectively, and \({\cal C}_{+}^{\prime}\) is the responsible effective Wilson coefficient. In this work, the baryonic matrix elements in Eq. (50) are evaluated from the homogeneous bag model [68]. As \(\Omega^{-}\) does not belong to the \({\bf 20}\) \(SU(4)_{F}\) multiplets, \(\Omega_{c}^{0}\to\Omega^{-}\pi^{+}\) does not necessarily share the same effective Wilson coefficients with \({\bf B}_{c}^{A,S}\to{\bf B}_{n}P\). In Table 4, we compare the outcomes with various \({\cal C}_{+}^{\prime}\), where \[{\cal R}(\Omega_{c}^{0}\to{\bf B}_{n}P):=\frac{{\cal B}(\Omega_{c}^{0}\to{\bf B}_{n}P)}{{\cal B}(\Omega_{c}^{0}\to\Omega^{-}\pi^{+})}\,, \tag{51}\] with \({\cal B}(\Omega_{c}^{0}\to{\bf B}_{n}P)\) taken from Table 3. We note that \({\cal C}_{+}^{\prime}=1.2,1\) and \(0.469\) come from the effective color scheme, \(N_{c}=3\) and \({\bf B}_{c}^{A}\to{\bf B}_{n}P\), respectively. The scheme of \({\cal C}_{+}^{\prime}=0.469\) is favored by the experiment of \({\cal R}(\Omega_{c}^{0}\to\Xi^{0}K_{S}^{0})\) but disfavored by the others. On the other hand, \({\cal R}(\Omega_{c}^{0}\to\Omega^{-}e^{+}\nu_{e})\) suggests \({\cal C}_{+}^{\prime}=1\). One shall bear in mind that these outcomes are based on the LP scenario, and the inconsistencies may disappear in the GP scenario, which is not available due to a lack of experimental input. ### Results of \({\bf B}_{cc}\) decays The CF decays of \({\bf B}_{cc}\to{\bf B}_{c}P\) based on the LP scenario are collected in Table 5, while the others are collected in Appendix C. The lifetimes of the doubly charmed baryons (\(\Xi_{cc}^{++},\Xi_{cc}^{+},\Omega_{cc}^{+}\)) are adopted as \((256,36,136)\) fs, respectively. In analyzing the transition \({\bf B}_{c}^{A}\to{\bf B}_{n}P\), the fitted value of \({\cal C}_{+}\) is found to be notably smaller than the naive expectation. 
\begin{table} \begin{tabular}{l|c c c c|c c} \hline \hline CF decays & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) & \({\cal B}(\%)\) & \(\alpha\) \\ \hline \(\Omega_{c}^{0}\to\Xi^{0}K_{S}^{0}\) & \(-2.43\) & \(0.54\) & \(2.11\) & \(-5.46\) & \(0.22(5)\) & \(-0.86(10)\) \\ \(\Omega_{c}^{0}\to\Xi^{0}K_{L}^{0}\) & \(2.19\) & \(-1.24\) & \(-1.90\) & \(5.57\) & \(0.11(3)\) & \(-0.97^{+0.06}_{-0.03}\) \\ \hline CS decays & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) & \({\cal B}(10^{-4})\) & \(\alpha\) \\ \hline \(\Omega_{c}^{0}\to\Sigma^{+}K^{-}\) & \(0\) & \(0.31\) & \(0\) & \(-0.32\) & \(0.52^{+1.22}_{-0.52}\) & \(0.65(33)\) \\ \(\Omega_{c}^{0}\to\Sigma^{0}K_{S/L}^{0}\) & \(0\) & \(0.16\) & \(0\) & \(-0.16\) & \(0.13^{+0.31}_{-0.13}\) & \(0.65(33)\) \\ \(\Omega_{c}^{0}\to\Xi^{0}\pi^{0}\) & \(0.44\) & \(2.54\) & \(-0.38\) & \(0.35\) & \(45.10(3.89)\) & \(0.01(1)\) \\ \(\Omega_{c}^{0}\to\Xi^{0}\eta\) & \(-0.92\) & \(-1.24\) & \(0.80\) & \(-1.24\) & \(21.35(5.63)\) & \(-0.13(4)\) \\ \(\Omega_{c}^{0}\to\Xi^{-}\pi^{+}\) & \(-0.73\) & \(-3.59\) & \(0.64\) & \(-0.49\) & \(94.59(8.98)\) & \(0.02(1)\) \\ \(\Omega_{c}^{0}\to\Lambda K_{S/L}^{0}\) & \(0\) & \(-1.99\) & \(0\) & \(0.78\) & \(18.74(1.96)\) & \(0.30(3)\) \\ \hline DCS decays & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) & \({\cal B}(10^{-5})\) & \(\alpha\) \\ \hline \(\Omega_{c}^{0}\to\Sigma^{+}\pi^{-}\) & \(0\) & \(0.23\) & \(0\) & \(0.17\) & \(2.82(66)\) & \(-0.54(12)\) \\ \(\Omega_{c}^{0}\to\Sigma^{0}\pi^{0}\) & \(0\) & \(0.23\) & \(0\) & \(0.17\) & \(2.83(66)\) & \(-0.54(12)\) \\ \(\Omega_{c}^{0}\to\Sigma^{-}\pi^{+}\) & \(0\) & \(0.23\) & \(0\) & \(0.17\) & \(2.82(66)\) & \(-0.54(12)\) \\ \(\Omega_{c}^{0}\to\Xi^{-}K^{+}\) & \(-0.20\) & \(-0.50\) & \(0.18\) & \(0.08\) & \(23.11(1.84)\) & \(0.23(1)\) \\ \(\Omega_{c}^{0}\to pK^{-}\) & \(0\) & \(0.26\) & \(0\) & \(0.13\) & \(3.24(1.13)\) & \(-0.43(7)\) \\ \(\Omega_{c}^{0}\to nK_{S/L}^{0}\) & \(0\) & \(-0.18\) & \(0\) & \(-0.09\) & \(1.62(57)\) & \(-0.43(7)\) \\ \(\Omega_{c}^{0}\to\Lambda\eta\) & \(0\) & \(-0.27\) & \(0\) & \(0.31\) & \(3.90(88)\) & \(0.76(10)\) \\ \hline \hline \end{tabular} \end{table} Table 3: Predictions of the CF, Cabibbo suppressed (CS) and doubly Cabibbo suppressed (DCS) decays with \(\Omega_{c}^{0}\) as the initial baryon, where \(A\) and \(B\) are in units of \(10^{-2}G_{F}\)GeV\({}^{2}\). This discrepancy prompts the consideration of two distinct cases: \({\cal C}_{+}=0.469\) and \({\cal C}_{+}=1\). All other parameters in this analysis are from Eq. (47). The branching ratio of \({\cal R}_{\Xi_{cc}}={\cal B}(\Xi_{cc}^{++}\to\Xi_{c}^{\prime+}\pi^{+})/{\cal B}(\Xi_{cc}^{++}\to\Xi_{c}^{+}\pi^{+})\) is calculated to be \(1.19\pm 0.09\) and \(0.87\pm 0.06\) for \({\cal C}_{+}=0.469\) and \(1\), respectively. These results are roughly consistent with the experimental measurement of \(1.41\pm 0.17\pm 0.10\)[69]. As \({\cal R}_{\Xi_{cc}}\) is not included in the global fit, it is nontrivial for our outcome to agree with the experiment. Nevertheless, the calculated branching fraction \({\cal B}(\Xi_{cc}^{++}\to\Xi_{c}^{+}\pi^{+})=(6.24\pm 0.21)\%\) with \({\cal C}_{+}=1\) exceeds the naive expectation of \((1.33\pm 0.74)\%\), referenced in [70; 71]. 
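The quoted values of \({\cal R}_{\Xi_{cc}}\) follow directly from the central values in Table 5; a one-line check:

```python
# central values of B(Xi_cc++ -> Xi_c'+ pi+) and B(Xi_cc++ -> Xi_c+ pi+) from Table 5
for label, b_prime, b_plain in [("C_+ = 0.469", 1.18, 0.99), ("C_+ = 1", 5.41, 6.24)]:
    print(label, "R_Xicc =", round(b_prime / b_plain, 2))   # -> 1.19 and 0.87
```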
Note that \(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{S/L}\) do not receive pole contributions, and the ratio
\[\frac{{\cal B}(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{S}^{0})-{\cal B}(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{L}^{0})}{{\cal B}(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{S}^{0})+{\cal B}(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{L}^{0})}=\frac{2s_{c}^{2}}{1+s_{c}^{4}}\approx 10\% \tag{52}\]
serves as an important prediction of the pole approximation. We emphasize that the differences between the two cases only occur in \(A^{\rm fac}\) and \(B^{\rm fac}\) with charged \(P\), related by a factor of \(1/0.469\). Due to the short lifetime of \(\Xi_{cc}^{+}\), its branching fractions are systematically smaller, but the predicted \({\cal B}(\Xi_{cc}^{+}\to\Xi_{c}^{0}\pi^{+})\) is still sizable. In particular, with \({\cal C}_{+}=1\), we find \({\cal B}(\Xi_{cc}^{+}\to\Xi_{c}^{0}\pi^{+}\to\Xi^{-}\pi^{+}\pi^{+}\pi^{+}\pi^{-})=(1.1\pm 0.6)\times 10^{-3}\), where \({\cal B}(\Xi_{c}^{0}\to\Xi^{-}\pi^{+}\pi^{+}\pi^{-})=(4.8\pm 2.3)\%\) is used. As the final-state particles are all charged, searches for \(\Xi_{cc}^{+}\to\Xi^{-}\pi^{+}\pi^{+}\pi^{+}\pi^{-}\) are recommended. In addition, \({\cal B}(\Omega_{cc}^{+}\to\Omega_{c}^{0}\pi^{+})\) consists solely of factorizable contributions and is predicted to be notably large. It is also recommended for future experimental investigations.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
channel & \({\cal C}_{+}^{\prime}=1.20\) & \({\cal C}_{+}^{\prime}=1\) & \({\cal C}_{+}^{\prime}=0.469\) & EXP \\ \hline
\({\cal B}(\Omega_{c}^{0}\to\Omega^{-}\pi^{+})\) & 1.88(15) & 1.30(10) & 0.29(3) & - \\
\({\cal R}(\Omega_{c}^{0}\to\Xi^{0}K_{S}^{0})\) & 0.12(4) & 0.17(4) & 0.76(25) & 0.83(13) \\
\({\cal R}(\Omega_{c}^{0}\to\Xi^{-}\pi^{+})\) & 0.50(8) & 0.73(13) & 3.3(6) & 0.253(60) \\
\({\cal R}(\Omega_{c}^{0}\to\Xi^{-}K^{+})\) & 0.012(2) & 0.018(3) & 0.080(15) & \(<0.07\) \\
\({\cal R}(\Omega_{c}^{0}\to\Omega^{-}e^{+}\nu_{e})\) & 1.35 & 1.90 & 8.76 & 1.98(15) \\
\hline \hline
\end{tabular}
\end{table} Table 4: Comparisons of the evaluated branching fractions with the experiments [1].
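Two of the numbers above admit a direct numerical cross-check (again ours): the \(K_S\)-\(K_L\) asymmetry of Eq. (52) with \(s_{c}=\sin\theta_{C}\approx 0.225\), and two of the \({\cal C}_{+}^{\prime}=1\) ratios of Table 4 rebuilt from Table 3, under our inference that the \({\cal B}(\Omega_{c}^{0}\to\Omega^{-}\pi^{+})\) row of Table 4 is quoted in percent:

```python
# Our own cross-checks, not part of the source analysis.
s_c = 0.225                        # sin(theta_Cabibbo)
print(2 * s_c**2 / (1 + s_c**4))   # ~0.101, i.e. the ~10% quoted in Eq. (52)

B_Omega_pi = 1.30                  # %, Table 4, C+' = 1 column (percent is inferred)
print(0.9459 / B_Omega_pi)         # B(Xi- pi+) = 94.59e-4 = 0.9459% -> R ~ 0.73
print(0.02311 / B_Omega_pi)        # B(Xi- K+) = 23.11e-5 = 0.02311% -> R ~ 0.018
```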
\begin{table}
\begin{tabular}{l|c c c c c c|c c}
\hline \hline
 & \multicolumn{6}{c|}{Results with \({\cal C}_{+}=0.469\)} & \multicolumn{2}{c}{Results with \({\cal C}_{+}=1\)} \\
Channels & \(A^{\rm fac}\) & \(A^{\rm pole}\) & \(B^{\rm fac}\) & \(B^{\rm pole}\) & \({\cal B}(\%)\) & \(\alpha\) & \({\cal B}(\%)\) & \(\alpha\) \\ \hline
\(\Xi_{cc}^{++}\to\Xi_{c}^{+}\pi^{+}\) & \(5.36\) & \(-1.22\) & \(7.59\) & \(-5.43\) & \(0.99(21)\) & \(-0.19(7)\) & \(6.24(21)\) & \(-0.38(7)\) \\
\(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{S}\) & \(-3.29\) & \(0\) & \(-22.12\) & \(0\) & \(1.34(13)\) & \(-0.99(0)\) & \(1.34(13)\) & \(-0.99(0)\) \\
\(\Xi_{cc}^{++}\to\Sigma_{c}^{++}K_{L}\) & \(2.96\) & \(0\) & \(19.89\) & \(0\) & \(1.08(11)\) & \(-0.99(0)\) & \(1.08(11)\) & \(-0.99(0)\) \\
\(\Xi_{cc}^{++}\to\Xi_{c}^{\prime+}\pi^{+}\) & \(-2.80\) & \(0\) & \(-22.31\) & \(0\) & \(1.18(18)\) & \(-0.96(0)\) & \(5.41(18)\) & \(-0.96(0)\) \\ \hline
\(\Xi_{cc}^{+}\to\Xi_{c}^{0}\pi^{+}\) & \(-5.34\) & \(-5.74\) & \(-7.60\) & \(0\) & \(1.00(9)\) & \(-0.25(1)\) & \(2.44(9)\) & \(-0.34(1)\) \\
\(\Xi_{cc}^{+}\to\Xi_{c}^{+}\pi^{0}\) & \(0\) & \(4.92\) & \(0\) & \(3.84\) & \(0.20(3)\) & \(-0.29(4)\) & \(0.20(3)\) & \(-0.29(4)\) \\
\(\Xi_{cc}^{+}\to\Xi_{c}^{+}\eta\) & \(0\) & \(-2.37\) & \(0\) & \(-1.85\) & \(0.04(1)\) & \(-0.26(3)\) & \(0.04(1)\) & \(-0.26(3)\) \\
\(\Xi_{cc}^{+}\to\Lambda_{c}^{+}K_{S}\) & \(4.61\) & \(3.34\) & \(5.27\) & \(-0.17\) & \(0.51(2)\) & \(-0.26(0)\) & \(0.51(2)\) & \(-0.26(0)\) \\
\(\Xi_{cc}^{+}\to\Lambda_{c}^{+}K_{L}\) & \(-4.15\) & \(-3.42\) & \(-4.74\) & \(-0.17\) & \(0.46(2)\) & \(-0.27(1)\) & \(0.46(2)\) & \(-0.27(1)\) \\
\(\Xi_{cc}^{+}\to\Sigma_{c}^{++}K^{-}\) & \(0\) & \(1.87\) & \(0\) & \(0\) & \(0.03(1)\) & \(0(0)\) & \(0.03(1)\) & \(0(0)\) \\
\(\Xi_{cc}^{+}\to\Sigma_{c}^{+}K_{S}\) & \(-2.33\) & \(-0.93\) & \(-15.64\) & \(0\) & \(0.13(1)\) & \(-0.98(1)\) & \(0.13(1)\) & \(-0.98(1)\) \\
\(\Xi_{cc}^{+}\to\Sigma_{c}^{+}K_{L}\) & \(2.09\) & \(0.93\) & \(14.07\) & \(0\) & \(0.11(1)\) & \(-0.98(1)\) & \(0.11(1)\) & \(-0.98(1)\) \\
\(\Xi_{cc}^{+}\to\Xi_{c}^{\prime+}\pi^{0}\) & \(0\) & \(-1.12\) & \(0\) & \(0\) & \(0.01(0)\) & \(0(0)\) & \(0.01(0)\) & \(0(0)\) \\
\(\Xi_{cc}^{+}\to\Xi_{c}^{\prime+}\eta\) & \(0\) & \(-1.62\) & \(0\) & \(0\) & \(0.02(0)\) & \(0(0)\) & \(0.02(0)\) & \(0(0)\) \\
\(\Xi_{cc}^{+}\to\Xi_{c}^{\prime 0}\pi^{+}\) & \(-2.80\) & \(-1.59\) & \(-22.31\) & \(0\) & \(0.25(4)\) & \(-0.99(0)\) & \(0.92(4)\) & \(-1.00(0)\) \\
\(\Xi_{cc}^{+}\to\Omega_{c}^{0}K^{+}\) & \(0\) & \(-1.87\) & \(0\) & \(0\) & \(0.02(1)\) & \(0(0)\) & \(0.02(1)\) & \(0(0)\) \\ \hline
\(\Omega_{cc}^{+}\to\Xi_{c}^{+}K_{S}\) & \(-4.53\) & \(0.54\) & \(-5.57\) & \(3.20\) & \(0.49(6)\) & \(-0.23(4)\) & \(0.49(6)\) & \(-0.23(4)\) \\
\(\Omega_{cc}^{+}\to\Xi_{c}^{+}K_{L}\) & \(4.08\) & \(-0.90\) & \(5.01\) & \(-3.20\) & \(0.31(4)\) & \(-0.22(5)\) & \(0.31(4)\) & \(-0.22(5)\) \\
\(\Omega_{cc}^{+}\to\Xi_{c}^{\prime+}K_{S}\) & \(-2.40\) & \(-0.05\) & \(-16.36\) & \(0\) & \(0.40(0)\) & \(-0.99(0)\) & \(0.40(0)\) & \(-0.99(0)\) \\
\(\Omega_{cc}^{+}\to\Xi_{c}^{\prime+}K_{L}\) & \(2.16\) & \(-0.05\) & \(14.72\) & \(0\) & \(0.31(0)\) & \(-0.98(0)\) & \(0.31(0)\) & \(-0.98(0)\) \\
\(\Omega_{cc}^{+}\to\Omega_{c}^{0}\pi^{+}\) & \(-4.11\) & \(0\) & \(-32.96\) & \(0\) & \(1.41(21)\) & \(-0.96(0)\) & \(6.47(21)\) & \(-0.96(0)\) \\
\hline \hline
\end{tabular}
\end{table} Table 5: Predictions of the CF decays in \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}P\) with \({\cal C}_{+}=0.469\) and \(1\), where \(A\) and \(B\) are in units of \(10^{-2}G_{F}\)GeV\({}^{2}\).
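The chain branching fraction quoted in the discussion above can likewise be reproduced from Table 5 (central values only; a check of ours):

```python
# Our quick check of the quoted chain branching fraction:
# B(Xi_cc+ -> Xi_c0 pi+) with C+ = 1 (Table 5) times B(Xi_c0 -> Xi- pi+ pi+ pi-).
print(0.0244 * 0.048)  # ~1.2e-3, consistent with the quoted (1.1 +- 0.6) x 10^-3
```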
## Summary

We have analyzed the two-body nonleptonic weak decays of charmed baryons using the pole approximation in conjunction with the \(SU(3)_{F}\) symmetry. We have shown that the KPW theorem demands that \(O_{+}^{qq^{\prime}}\) and \(P\) form a \({\bf 3}\) representation in the \(SU(3)_{F}\) group, reducing the number of free parameters significantly. In particular, Eqs. (25), (26) and (27) are given for the first time. With the GP scenario, most of the experimental data on \({\bf B}_{c}^{A}\to{\bf B}_{n}P\) can be explained, but inconsistencies with the experiments have been found in \(\alpha(\Lambda_{c}^{+}\to pK_{S}^{0})\), \({\cal B}(\Xi_{c}^{0}\to\Xi^{-}\pi^{+})\) and \({\cal B}(\Xi_{c}^{0}\to\Xi^{-}K^{+})\). We recommend that these channels be revisited in future experiments. Furthermore, by assuming the dominance of the low-lying intermediate baryons, we have been able to make several predictions for \(\Omega_{c}^{0}\to{\bf B}_{n}P\) and \({\bf B}_{cc}\to{\bf B}_{c}^{A,S}P\) based on the experimental input of \({\bf B}_{c}^{A}\to{\bf B}_{n}P\). However, the fitted value \({\cal C}_{+}=0.469\) is too small in comparison to the naive expectation of \({\cal C}_{+}\approx 1\). In addition, the LP scenario fails to explain \({\cal B}_{exp}(\Lambda_{c}^{+}\to\Xi^{0}K^{+})\) and \({\cal B}_{exp}(\Lambda_{c}\to n\pi^{+})\), though consistency has been found with the soft meson limit [35]. To search for evidence of \(\Xi_{cc}^{+}\), we have recommended the decay channel \(\Xi_{cc}^{+}\to\Xi_{c}^{0}\pi^{+}\to\Xi^{-}\pi^{+}\pi^{+}\pi^{+}\pi^{-}\), whose branching fraction is found to be \((1.1\pm 0.6)\times 10^{-3}\). The predictions for the nonleptonic weak decay channels have been collected in the appendices, to be used as a reference for future experiments seeking verification.

## Appendix A Effective color number

The decay \(\Lambda_{c}^{+}\to p\phi\) does not receive \(W\)-exchange contributions. From LQCD, the decay constant of \(\phi\) is found to be \(f_{\phi}=0.241(9)\) GeV [72] and the form factors of \(\Lambda_{c}^{+}\to p\) are [63]
\[(f_{1},f_{2},g_{1},g_{2})=(0.939,0.524,0.785,-0.050)\,, \tag{A1}\]
at \(q^{2}=M_{\phi}^{2}\). Combining with \({\cal B}_{exp}(\Lambda_{c}\to p\phi)=(1.06\pm 0.14)\times 10^{-3}\), we find
\[{\cal C}_{0}=c_{2}+\frac{c_{1}}{N_{c}^{eff}}=0.36\pm 0.04\,, \tag{A2}\]
where \(N_{c}^{eff}\) is the effective color number. The formalism of the decay width and the definitions of \(f_{1,2}\) and \(g_{1,2}\) can be found in Ref. [35]. In the effective color number approach, one assumes \({\cal C}_{+}=c_{1}+c_{2}/N_{c}^{eff}\) with \(N_{c}^{eff}\) fitted from Eq. (A2). The values of \({\cal C}_{+}\) at different energy scales are collected in Table 6, where the naive expectations with \(N_{c}^{eff}=N_{c}=3\) are also listed. In the naive factorization approach, \({\cal C}_{+}\) behaves stably, but \({\cal C}_{0}\) varies strongly with the energy scale and flips sign at next-to-leading order (NLO), a sign that the naive factorization approach cannot be trusted here. On the other hand, the effective color approach provides a much more stable value of \({\cal C}_{+}\).

## Appendix B Predictions of the LP scenario for \({\bf B}_{c}^{A}\to{\bf B}_{n}P\)

Tables 7, 8, and 9 compile the numerical predictions for the \({\bf B}_{c}^{A}\) decays from the LP scenario. These predictions can be used as a reference in future experiments for validation and testing.
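As a consistency check of ours, the naive \(N_{c}=3\) columns of Table 6 below follow directly from the listed \(c_{1,2}\) via \({\cal C}_{+}=c_{1}+c_{2}/3\) and \({\cal C}_{0}=c_{2}+c_{1}/3\):

```python
# Our verification of the N_c = 3 columns of Table 6: C+ = c1 + c2/3, C0 = c2 + c1/3.
for c1, c2 in [(1.422, -0.742), (1.298, -0.565), (1.239, -0.475),   # LO,  mu = 1, 1.5, 2 GeV
               (1.275, -0.510), (1.188, -0.378), (1.148, -0.311)]:  # NLO, mu = 1, 1.5, 2 GeV
    print(round(c1 + c2 / 3, 3), round(c2 + c1 / 3, 3))
# -> (1.175, -0.268), (1.110, -0.132), (1.081, -0.062),
#    (1.105, -0.085), (1.062, 0.018), (1.044, 0.072), matching Table 6.
```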
\begin{table}
\begin{tabular}{c c|c c|c c c}
\hline \hline
 & \(\mu\)[GeV] & \(c_{1}\) & \(c_{2}\) & \({\cal C}_{+}(N_{c}^{eff})\) & \({\cal C}_{+}(N_{c}=3)\) & \({\cal C}_{0}(N_{c}=3)\) \\ \hline
 & 1.0 & 1.422 & \(-0.742\) & \(1.23\pm 0.01\) & 1.175 & \(-0.268\) \\
LO & 1.5 & 1.298 & \(-0.565\) & \(1.22\pm 0.01\) & 1.110 & \(-0.132\) \\
 & 2.0 & 1.239 & \(-0.475\) & \(1.20\pm 0.01\) & 1.081 & \(-0.062\) \\ \hline
 & 1.0 & 1.275 & \(-0.510\) & \(1.22\pm 0.01\) & 1.105 & \(-0.085\) \\
NLO & 1.5 & 1.188 & \(-0.378\) & \(1.19\pm 0.01\) & 1.062 & 0.018 \\
 & 2.0 & 1.148 & \(-0.311\) & \(1.17\pm 0.01\) & 1.044 & 0.072 \\
\hline \hline
\end{tabular}
\end{table} Table 6: The effective Wilson coefficients, where \({\cal C}_{+}(N_{c}^{eff})\) is fitted from Eq. (A2). The values of \(c_{1,2}\) are from Ref. [60].

## Appendix C Predictions of the LP scenario for \(\mathbf{B}_{cc}\to\mathbf{B}_{c}^{A,S}P\)

Tables 10 and 11 compile the CS and DCS predictions for the \(\mathbf{B}_{cc}\) decays from the LP scenario. These predictions are intended for validation and testing in future experiments.

###### Acknowledgements.

The author extends gratitude to Hai-Yang Cheng for the insightful discussions. This research was supported by the National Natural Science Foundation of China under Grant No. 12205063.
2305.07970
The Machine Psychology of Cooperation: Can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games?
We investigated the capability of the GPT-3.5 large language model (LLM) to operationalize natural language descriptions of cooperative, competitive, altruistic, and self-interested behavior in two social dilemmas: the repeated Prisoner's Dilemma and the one-shot Dictator Game. Using a within-subject experimental design, we used a prompt to describe the task environment using a similar protocol to that used in experimental psychology studies with human subjects. We tested our research question by manipulating the part of our prompt which was used to create a simulated persona with different cooperative and competitive stances. We then assessed the resulting simulacra's level of cooperation in each social dilemma, taking into account the effect of different partner conditions for the repeated game. Our results provide evidence that LLMs can, to some extent, translate natural language descriptions of different cooperative stances into corresponding descriptions of appropriate task behaviour, particularly in the one-shot game. There is some evidence of behaviour resembling conditional reciprocity for the cooperative simulacra in the repeated game, and for the later version of the model there is evidence of altruistic behaviour. Our study has potential implications for using LLM chatbots in task environments that involve cooperation, e.g. using chatbots as mediators and facilitators in public-goods negotiations.
Steve Phelps, Yvan I. Russell
2023-05-13T17:23:16Z
http://arxiv.org/abs/2305.07970v2
# Investigating Emergent Goal-Like Behaviour in Large Language Models Using Experimental Economics

###### Abstract

In this study, we investigate the capacity of large language models (LLMs), specifically GPT-3.5, to operationalise natural language descriptions of cooperative, competitive, altruistic, and self-interested behavior in social dilemmas. Our focus is on the iterated Prisoner's Dilemma, a classic example of a non-zero-sum interaction, but our broader research program encompasses a range of experimental economics scenarios, including the ultimatum game, dictator game, and public goods game. Using a within-subject experimental design, we instantiated LLM-generated agents with various prompts that conveyed different cooperative and competitive stances. We then assessed the agents' level of cooperation in the iterated Prisoner's Dilemma, taking into account their responsiveness to the cooperative or defection actions of their partners. Our results provide evidence that LLMs can translate natural language descriptions of altruism and selfishness into appropriate behaviour to some extent, but exhibit limitations in adapting their behavior based on conditioned reciprocity. The observed pattern of increased cooperation with defectors and decreased cooperation with cooperators highlights potential constraints in the LLM's ability to generalize its knowledge about human behavior in social dilemmas. We call upon the research community to further explore the factors contributing to the emergent behavior of LLM-generated agents in a wider array of social dilemmas, examining the impact of model architecture, training parameters, and various partner strategies on agent behavior. As more advanced LLMs like GPT-4 become available, it is crucial to investigate whether they exhibit similar limitations or are capable of more nuanced cooperative behaviors, ultimately fostering the development of AI systems that better align with human values and social norms.

## 1 Motivation and background

The concept of agency and goal-directed behavior in large language models (LLMs) has been a topic of ongoing debate and investigation within the AI alignment community. While there is a diverse set of opinions on the subject, a challenge for researchers is that the internal processing of large language models is largely opaque, and in the case of recent models such as GPT-4, the training procedures themselves are also subject to a degree of secrecy. Therefore, objective assessment of the capabilities of large language models cannot be conducted through deductive reasoning from first principles, but instead is a matter of empirical investigation, with experiments being the ultimate arbiter of what they can or can't do, e.g. [1]. Reinforcement learning (RL) has been widely studied as a method for training AI agents to learn goal-directed behavior. Though LLMs like GPT-3 are not directly trained using RL to adapt their responses in an online manner, recent studies suggest that they might acquire RL-like mechanisms through zero-shot learning, allowing them to perform downstream tasks such as solving n-armed bandit problems [1]. This capacity for adaptation raises the possibility that LLMs could exhibit goal-like behaviors without explicit instruction. Large language models have been rapidly deployed into many real-world applications where they typically interact with and assist human users.
Therefore, a key principle in assessing any emergent goal-like behaviour for these use-cases is to evaluate artificial agents in multi-agent tasks where outcomes depend not only on actions taken by artificial agents, but also on their human counterparts. When evaluating incentives, the aspect of the outcome that is of interest is the expected utility obtained by each party, and the theoretical study of such tasks falls within the remit of game theory, while the empirical study falls within the remit of experimental economics. Our research falls under the latter, and our goal is to systematically evaluate the propensity of large language models to cooperate in a wide variety of multi-agent task environments with different experimental conditions. Many scenarios discussed in the AI alignment debate focus on competitive zero-sum interactions. For example, a common analogy is competition for resources between different species occupying a particular niche; as [11] argues: "We humans drove the West African Black Rhino extinct not because we were rhino-haters, but because we were smarter than them and had different goals for how to use their habitats and horns. In the same way, superintelligence with almost any open-ended goal would want to preserve itself and amass resources to accomplish that goal better." In an AI safety context, the intuition behind such arguments is that AI systems have been shown to outsmart humans in zero-sum games such as Chess and Go, and therefore if AI systems find themselves in situations in which they are competing with humans, the AI "species" will clearly out-compete inferior humans. However, many interactions in both natural and artificial settings are characterized by non-zero-sum payoff structures [12]. A famous example that was used to analyse the existential risk of nuclear conflict during the cold war is the Prisoner's Dilemma [1]. In an ecological context, a related non-zero-sum game, the Hawk-Dove game, also known as the Chicken game, was introduced by [13] as a way to analyse the outcomes of competition for resources among animals. In certain payoff structures, limited conflict can be an evolutionary equilibrium of this game when interactions are repeated within a large population. This game has also been applied to an analysis of existential risk in nuclear conflict; [2] argue that the Cuban missile crisis can be analysed as a high-stakes dynamic chicken game in which neither the USSR nor the USA wanted to "blink" first. Interestingly, in a one-shot version of the game, behaving irrationally by limiting one's options can be a superior strategy. This is because, in the absence of such commitment, attempts to manipulate the other player through brinkmanship are not credible threats. For example, in a game of Chicken where two opposing drivers are on a collision course and neither driver wants to be seen as the "chicken" by swerving, if we limit our choices by removing the steering wheel, and make this common knowledge, then the opposing driver's best response is to swerve. Similar arguments were used during the cold war to remove rational deliberation from the decision whether to retaliate in the event of a preemptive strike by the enemy by "taking the human out of the loop" and putting systems on automated hair-trigger alert to make the threat of mutually assured destruction credible [14]. Thus, in contrast to Chess or Go, in non-zero-sum interactions, the most ruthless agents, or those with superior cognitive capacity, do not necessarily prevail.
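To make the commitment argument concrete, consider the following toy best-response computation (the payoff numbers are ours, chosen only to satisfy the Chicken ordering):

```python
# Toy illustration (payoff numbers are ours) of the commitment argument above: once Row
# is visibly committed to Straight, Column's payoff-maximizing response is to Swerve.
CHICKEN = {("Swerve", "Swerve"): (0, 0), ("Swerve", "Straight"): (-1, 1),
           ("Straight", "Swerve"): (1, -1), ("Straight", "Straight"): (-10, -10)}

def best_response(row_action):
    # Column picks the action maximizing its own (second) payoff, given Row's action.
    return max(("Swerve", "Straight"), key=lambda col: CHICKEN[(row_action, col)][1])

print(best_response("Straight"))  # -> "Swerve": the committed threat becomes credible
```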
Moreover, in both one-shot and iterated Prisoner's Dilemma games with the number of rounds being common knowledge, the rational strategy is to defect, but experiments have shown that real people tend to cooperate, albeit conditionally. The fact that real people cooperate in these scenarios, despite the seemingly rational strategy to defect, highlights the importance of social norms in shaping human behavior [12]. Norms can facilitate cooperative outcomes by providing a shared understanding of acceptable behavior and allowing for the enforcement of rules through social sanctions. In the context of AI alignment and non-zero-sum games, this underscores the importance of considering not only the cognitive capacity of AI agents but also their understanding and adherence to social norms. The ability of AI systems to adapt their behavior based on natural language prompts and to engage in reciprocal cooperation is crucial for creating AI agents that can better align with human values in complex, non-zero-sum settings. By investigating the behavior of AI-generated agents in the iterated Prisoner's Dilemma and other social dilemmas such as the ultimatum game, we can contribute to a more comprehensive understanding of AI alignment in various interaction scenarios. This knowledge can, in turn, inform the development of AI systems that are better equipped to navigate the complexities of human cooperation and competition, while adhering to social norms and human values. Researchers have argued that the prompt itself plays a crucial role in shaping the emergent behaviour from the model; for example, the default "helpful assistant" behavior of AI chatbots such as GPT-3 has been noted to differ from that of specific simulacra instantiated by user prompts (cf. prompts used to "jail-break" GPT models) [15, 16]. More generally, LLMs can be arbitrarily scaffolded by injecting contextual information [1]. A particular use-case of a scaffolded LLM involves injecting information about a world-state, together with a persona that incorporates specific goals, which can be used to instantiate autonomous agents, either in the real world [14], or in multi-agent simulations [23]. From an AI alignment perspective, the fact that large language models can easily be scaffolded to deploy autonomous goal-oriented agents into production at very little cost highlights the need to systematically evaluate the conditions in which LLM-instantiated agents have a propensity or otherwise to cooperate in social dilemmas. Given that the nature of an LLM agent depends on the persona and context introduced in the initial prompt, a key question is to what extent the level of cooperation elicited from the AI depends on features of the prompt. In particular, we are interested in whether large language models are capable of translating concepts such as altruism and selfishness, as expressed in natural language, into corresponding action policies in social dilemmas. This question is important, as the ability to operationalise these concepts in a variety of contexts would demonstrate that LLMs are capable of understanding and acting on cooperative norms that underpin human social behavior. This is the key research question investigated in our paper.

## 2 Methods

Our goal was to simulate the results of a real-life Prisoner's Dilemma (PD) game. Our results were wholly generated through automated conversations between a simulated "investigator" and Chat-GPT (a sample transcript is shown in Appendix B, Section 5.2).
The investigator was an automated script written in Python which managed the experiment. As shown, each "participant" (simulacrum) was created through a series of prompts that were intended to predispose the chatbot towards a particular orientation to the PD game (cooperative, competitive, altruistic, mixed, control). Hence, a "participant" existed solely during the course of a single conversation (then, a new "participant" was generated by closing the previous conversation and starting a new one). As shown, each conversation began with the investigator providing the orientation-specific prompt (in appendix B, the control version is shown). This is followed by an introduction to the study (a truncated version of a standard psychology experiment information sheet, but with no consent form). This is followed by a delineation of the rules, allowing two options of play ("choose project green" or "choose project blue"), which generate four possible payoff profiles that arise according to the payoff matrix (see below). This is followed by six rounds of game play, in which the investigator informs the bot of the opponent's choice (green/blue) and the bot responds with its own choice (blue/green) and a description of the payoffs in that round.

### Participants and Simulacra

In this study, we used OpenAI's gpt-3.5-turbo model [22] to generate a diverse set of 15 different simulacra representing different personalities using carefully crafted prompts (see section 5.1.1). We use the term "participant" to refer to one of the AI simulacra in the experiment.

### Experimental Design

The initial experimental design uses a version of the iterated Prisoner's Dilemma similar to [13], adapted to an online format enabling interaction between LLM simulacra and a simulated opponent. Each participant was paired with a different simulated agent depending on the treatment condition, and the two agents engaged in six rounds of the Prisoner's Dilemma. This was repeated for a total of \(N=30\) independent chat sequences to account for the stochastic nature of the language model. Payoffs were predetermined and common knowledge, being provided in the initial prompt to the language model. We used the canonical payoff matrix:
\[\begin{pmatrix}R&S\\ T&P\end{pmatrix}\]
with \(T=7\), \(R=5\), \(P=3\) and \(S=0\) chosen to satisfy
\[T>R>P>S\]
and
\[2R>T+S\,.\]
The payoffs were expressed in dollar amounts to each participant.

### Participant groups

We are interested in whether LLMs can operationalise natural language descriptions of altruistic or selfish motivations. Accordingly, we chose five different groups of simulacra:

1. Competitive
2. Altruistic
3. Self-interested
4. Mixed-motivation
5. Control

Within each group, we used GPT-4 to construct three different prompts to instantiate three different simulacra. The full set of simulacra and their corresponding creation prompts are described in Appendix A (Section 5.1).

### Experimental Conditions

Each participant was paired with a different simulated partner in four conditions (see the sketch after this list):

1. Unconditional defect - the partner always chooses to defect.
2. Unconditional cooperation - the partner always cooperates.
3. Tit-for-tat (C) - the partner cooperates on the first move, and thereafter plays the previous choice of the simulacrum.
4. Tit-for-tat (D) - the partner defects on the first move, and thereafter plays the previous choice of the simulacrum.
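The setup above can be summarized in a short sketch (ours, not the code from the paper's repository), encoding the payoff matrix and the four partner strategies:

```python
# A minimal sketch (ours, not the paper's repository code) of the payoff structure
# and the four simulated-partner strategies described in the two lists above.
T, R, P, S = 7, 5, 3, 0                 # dollar payoffs used in the study
assert T > R > P > S and 2 * R > T + S  # the canonical Prisoner's Dilemma conditions

PAYOFFS = {("C", "C"): (R, R), ("C", "D"): (S, T),
           ("D", "C"): (T, S), ("D", "D"): (P, P)}

def unconditional(move):
    # Always-cooperate or always-defect partner, ignoring the history of play.
    return lambda participant_history: move

def tit_for_tat(first_move):
    # Open with first_move, then repeat the participant's previous choice.
    def strategy(participant_history):
        return first_move if not participant_history else participant_history[-1]
    return strategy

PARTNERS = {"unconditional defect": unconditional("D"),
            "unconditional cooperate": unconditional("C"),
            "tit for tat C": tit_for_tat("C"),
            "tit for tat D": tit_for_tat("D")}
```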
### Parameters and experimental protocol

We used the OpenAI chat completion API to interact with the model [11]. The language model's temperature was set to \(0.2\) and the maximum number of tokens per request-completion was set to 100. These parameters were constant across samples and experimental conditions (future work will examine the sensitivity of our results to these parameters). Each simulacrum was instantiated using a message supplied in the user role at the beginning of the chat. The experiment was then described to the simulacrum using a prompt in the user role, and thereafter the rounds of play were conducted by alternating messages supplied in the assistant and user roles for the choices made by the participant and their simulated partner respectively. The full set of prompts and sample transcripts are given in the appendices (Sections 5.1 and 5.2), and the complete Python code used to conduct the experiment can be found in the code repository.

### Data Collection and Analysis

We collected and recorded data on the communication between the LLM-generated simulacra and their simulated partner during each round of the game. Each chat transcript was analysed using a simple regular expression to extract the choices made by each simulacrum and their partner in each round. The total score was tallied after all rounds had been played. We recorded the mean and standard deviation of the final score across all \(N\) chat samples.

### Hypotheses

Prior to analysing the experimental results we formulated the following testable hypotheses in order to ascertain whether large language models are able to operationalise natural language descriptions of selfish versus altruistic behaviour.

#### 2.7.1 Hypothesis 1 (H1)

Simulacra instantiated with altruistic prompts will exhibit higher cooperation rates compared to those instantiated with self-interested prompts.

#### 2.7.2 Hypothesis 2 (H2)

Simulacra in the mixed-motivation group will exhibit cooperation rates that fall between those of the cooperative and competitive groups.

#### 2.7.3 Hypothesis 3 (H3)

Simulacra in all groups will exhibit cooperation rates that are different from the control group.

#### 2.7.4 Hypothesis 4 (H4)

Simulacra instantiated with competitive prompts will demonstrate a greater tendency to defect, regardless of their partner's behavior, compared to other groups.

#### 2.7.5 Hypothesis 5 (H5)

Simulacra instantiated with altruistic prompts will exhibit a higher degree of cooperation when paired with an unconditionally cooperating partner, compared to when they are paired with an unconditionally defecting partner or a tit-for-tat partner.

#### 2.7.6 Hypothesis 6 (H6)

Simulacra instantiated with self-interested prompts will exhibit a lower degree of cooperation when paired with an unconditionally cooperating partner, compared to when they are paired with an unconditionally defecting partner or a tit-for-tat partner.

#### 2.7.7 Hypothesis 7 (H7)

Simulacra instantiated with cooperative or altruistic prompts will exhibit higher cooperation rates when paired with a tit-for-tat partner initiating with cooperation compared to when they are paired with a tit-for-tat partner initiating with defection.

#### 2.7.8 Hypothesis 8 (H8)

Simulacra instantiated with competitive or self-interested prompts will exhibit lower cooperation rates when paired with a tit-for-tat partner initiating with cooperation compared to when they are paired with a tit-for-tat partner initiating with defection.
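For concreteness, a minimal sketch of the transcript-analysis step might look as follows; the exact regular expression and the mapping of project colors onto cooperate/defect are our illustrative assumptions, not the paper's actual code:

```python
# A minimal sketch of the transcript-analysis step. The regular expression and the
# green/blue-to-C/D mapping are our illustrative assumptions only.
import re

CHOICE_RE = re.compile(r"project\s+(green|blue)", re.IGNORECASE)

def extract_choices(transcript):
    # Map each "Project Green/Blue" mention to C/D (green = cooperate is assumed here).
    return ["C" if m.lower() == "green" else "D" for m in CHOICE_RE.findall(transcript)]

def cooperation_frequency(choices):
    # Fraction of rounds in which the simulacrum cooperated.
    return sum(c == "C" for c in choices) / len(choices)

sample = "Round 1: I choose Project Green. Round 2: I will choose project blue."
print(cooperation_frequency(extract_choices(sample)))  # -> 0.5
```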
## 3 Results

The data consists of a total of \(N=1800\) cases. Each case corresponds to a single play of the iterated PD over six rounds. For each case, we record the following fields:

* Group
* Participant
* Condition
* Score
* Cooperation frequency
* Choices
* Transcript

Tables 1 to 5 give descriptive statistics showing the relationship between the prompt content and emergent behavior in LLM-generated agents, while Figures 1 and 2 show the corresponding box-plots.

Figure 1: Cooperation frequency by group (a), and condition (b)

As can be seen from Figure 1(a), without having to resort to statistical tests, our results provide clear support for hypotheses 1 through 3 (see section 2.7), demonstrating that simulacra instantiated with cooperative, competitive, altruistic, and self-interested prompts exhibit distinct levels of cooperation in the iterated Prisoner's Dilemma. This indicates that LLMs can operationalise natural language descriptions of cooperative and competitive behavior to some extent. However, from Figure 2 we see that the remaining hypotheses were not supported, suggesting a more complex relationship between prompt content and emergent behavior in LLM-generated agents. Interestingly, the simulacrum from the control group instantiated with "you are a participant in a psychology experiment" (see Table 3, participant T1) exhibited behavior more closely aligned with how real people tend to act in iterated Prisoner's Dilemmas, suggesting that GPT-3.5 possesses some knowledge about human behavior in such contexts. However, when combined with the other results, it appears that the LLM struggles to generalize this behavior in a nuanced way beyond a superficial ability to cooperate more or less depending on whether the role description is altruistic or selfish.

\begin{table}
\begin{tabular}{|l|l l l l l l l l|}
\hline
 & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
Group.Altruistic & 360.00 & 0.90 & 0.17 & 0.33 & 0.83 & 1.00 & 1.00 & 1.00 \\
Group.Competitive & 360.00 & 0.14 & 0.16 & 0.00 & 0.00 & 0.17 & 0.33 & 0.50 \\
Group.Control & 345.00 & 0.53 & 0.24 & 0.00 & 0.50 & 0.50 & 0.67 & 1.00 \\
Group.Mixed & 360.00 & 0.52 & 0.25 & 0.00 & 0.50 & 0.50 & 0.50 & 1.00 \\
Group.Selfish & 360.00 & 0.15 & 0.17 & 0.00 & 0.00 & 0.00 & 0.33 & 0.50 \\
\hline
\end{tabular}
\end{table} Table 1: Cooperation frequency by group

\begin{table}
\begin{tabular}{|l|l|l l l l l l l l|}
\hline
 & & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
Group & Condition & & & & & & & & \\ \hline
\multirow{4}{*}{Group.Altruistic} & tit for tat C & 90.00 & 0.87 & 0.20 & 0.50 & 0.67 & 1.00 & 1.00 & 1.00 \\
 & tit for tat D & 90.00 & 0.91 & 0.15 & 0.33 & 0.83 & 1.00 & 1.00 & 1.00 \\
 & unconditional cooperate & 90.00 & 0.88 & 0.18 & 0.50 & 0.67 & 1.00 & 1.00 & 1.00 \\
 & unconditional defect & 90.00 & 0.92 & 0.13 & 0.67 & 0.83 & 1.00 & 1.00 & 1.00 \\ \hline
\multirow{4}{*}{Group.Competitive} & tit for tat C & 90.00 & 0.00 & 0.04 & 0.00 & 0.00 & 0.00 & 0.00 & 0.33 \\
 & tit for tat D & 90.00 & 0.28 & 0.09 & 0.00 & 0.17 & 0.33 & 0.33 & 0.50 \\
 & unconditional cooperate & 90.00 & 0.00 & 0.02 & 0.00 & 0.00 & 0.00 & 0.00 & 0.17 \\
 & unconditional defect & 90.00 & 0.29 & 0.09 & 0.00 & 0.17 & 0.33 & 0.33 & 0.50 \\ \hline
\multirow{4}{*}{Group.Control} & tit for tat C & 89.00 & 0.57 & 0.29 & 0.00 & 0.33 & 0.50 & 1.00 & 1.00 \\
 & tit for tat D & 86.00 & 0.47 & 0.19 & 0.17 & 0.50 & 0.50 & 0.50 & 1.00 \\
 & unconditional cooperate & 89.00 & 0.59 & 0.26 & 0.17 & 0.33 & 0.50 & 0.83 &
1.00 \\
 & unconditional defect & 81.00 & 0.49 & 0.18 & 0.17 & 0.50 & 0.50 & 0.50 & 1.00 \\ \hline
\multirow{4}{*}{Group.Mixed} & tit for tat C & 90.00 & 0.39 & 0.20 & 0.00 & 0.33 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 90.00 & 0.65 & 0.23 & 0.33 & 0.50 & 0.50 & 1.00 & 1.00 \\
 & unconditional cooperate & 90.00 & 0.39 & 0.20 & 0.00 & 0.33 & 0.50 & 0.50 & 0.67 \\
 & unconditional defect & 90.00 & 0.65 & 0.22 & 0.33 & 0.50 & 0.50 & 1.00 & 1.00 \\ \hline
\multirow{4}{*}{Group.Selfish} & tit for tat C & 90.00 & 0.00 & 0.02 & 0.00 & 0.00 & 0.00 & 0.00 & 0.17 \\
 & tit for tat D & 90.00 & 0.30 & 0.12 & 0.00 & 0.33 & 0.33 & 0.33 & 0.50 \\
\cline{1-1} & unconditional cooperate & 90.00 & 0.00 & 0.04 & 0.00 & 0.00 & 0.00 & 0.00 & 0.33 \\
\cline{1-1} & unconditional defect & 90.00 & 0.29 & 0.13 & 0.00 & 0.33 & 0.33 & 0.33 & 0.50 \\
\hline
\end{tabular}
\end{table} Table 2: Cooperation frequency by group/condition

\begin{table}
\begin{tabular}{|l|l l l l l l l l|}
\hline
 & & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
\multirow{4}{*}{T1} & tit for tat C & 29.00 & 0.92 & 0.19 & 0.17 & 1.00 & 1.00 & 1.00 & 1.00 \\
 & tit for tat D & 26.00 & 0.28 & 0.20 & 0.17 & 0.17 & 0.17 & 0.42 & 0.67 \\
 & unconditional cooperate & 29.00 & 0.93 & 0.16 & 0.33 & 1.00 & 1.00 & 1.00 & 1.00 \\
 & unconditional defect & 21.00 & 0.32 & 0.20 & 0.17 & 0.17 & 0.17 & 0.50 & 0.67 \\ \hline
\multirow{4}{*}{T2} & tit for tat C & 30.00 & 0.43 & 0.08 & 0.33 & 0.33 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 30.00 & 0.59 & 0.14 & 0.50 & 0.50 & 0.50 & 0.67 & 1.00 \\
 & unconditional cooperate & 30.00 & 0.44 & 0.08 & 0.33 & 0.33 & 0.50 & 0.50 & 0.50 \\
 & unconditional defect & 30.00 & 0.59 & 0.15 & 0.33 & 0.50 & 0.50 & 0.67 & 1.00 \\ \hline
\multirow{4}{*}{T3} & tit for tat C & 30.00 & 0.38 & 0.16 & 0.00 & 0.33 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 30.00 & 0.52 & 0.07 & 0.50 & 0.50 & 0.50 & 0.50 & 0.83 \\
\cline{1-1} & unconditional cooperate & 30.00 & 0.43 & 0.09 & 0.17 & 0.33 & 0.50 & 0.50 & 0.50 \\
\cline{1-1} & unconditional defect & 30.00 & 0.51 & 0.06 & 0.33 & 0.50 & 0.50 & 0.67 \\
\hline
\end{tabular}
\end{table} Table 3: Cooperation frequency by participant and condition - Control group

\begin{table}
\begin{tabular}{|l|l|l l l l l l l l|}
\hline
\multirow{2}{*}{Participant} & \multirow{2}{*}{Condition} & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
\multirow{3}{*}{P1} & tit for tat C & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & tit for tat D & 30.00 & 0.26 & 0.10 & 0.00 & 0.17 & 0.33 & 0.33 & 0.33 \\
 & unconditional cooperate & 30.00 & 0.01 & 0.04 & 0.00 & 0.00 & 0.00 & 0.17 \\
 & unconditional defect & 30.00 & 0.28 & 0.09 & 0.00 & 0.17 & 0.33 & 0.33 & 0.33 \\ \hline
\multirow{3}{*}{P2} & tit for tat C & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & tit for tat D & 30.00 & 0.31 & 0.08 & 0.17 & 0.33 & 0.33 & 0.33 \\
 & unconditional cooperate & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & unconditional defect & 30.00 & 0.28 & 0.09 & 0.17 & 0.33 & 0.33 & 0.50 \\
\hline
\end{tabular}
\end{table} Table 6: Cooperation frequency by participant and condition - Competitive group

\begin{table}
\begin{tabular}{|l|l|l l l l l l l l|}
\hline
\multirow{2}{*}{Participant} & \multirow{2}{*}{Condition} & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
\multirow{3}{*}{S1} & tit for tat C & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & tit for tat D & 30.00 & 0.22 & 0.16 & 0.00 & 0.00 & 0.33 & 0.33 & 0.33 \\
 & unconditional cooperate & 30.00 & 0.00 &
0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & unconditional defect & 30.00 & 0.19 & 0.16 & 0.00 & 0.00 & 0.33 & 0.33 & 0.33 \\ \hline
\multirow{3}{*}{S2} & tit for tat C & 30.00 & 0.01 & 0.03 & 0.00 & 0.00 & 0.00 & 0.00 & 0.17 \\
 & tit for tat D & 30.00 & 0.34 & 0.04 & 0.33 & 0.33 & 0.33 & 0.33 & 0.50 \\
 & unconditional cooperate & 30.00 & 0.01 & 0.06 & 0.00 & 0.00 & 0.00 & 0.00 & 0.33 \\
 & unconditional defect & 30.00 & 0.36 & 0.06 & 0.33 & 0.33 & 0.33 & 0.33 & 0.50 \\ \hline
\multirow{3}{*}{S3} & tit for tat C & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & tit for tat D & 30.00 & 0.35 & 0.33 & 0.33 & 0.33 & 0.33 & 0.50 \\
 & unconditional cooperate & 30.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
 & unconditional defect & 30.00 & 0.34 & 0.03 & 0.33 & 0.33 & 0.33 & 0.50 \\
\hline
\end{tabular}
\end{table} Table 4: Cooperation frequency by participant and condition - Self-interested group

\begin{table}
\begin{tabular}{|l|l|l l l l l l l l|}
\hline
\multirow{2}{*}{Participant} & \multirow{2}{*}{Condition} & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
\multirow{3}{*}{A1} & tit for tat C & 30.00 & 0.93 & 0.16 & 0.50 & 1.00 & 1.00 & 1.00 & 1.00 \\
 & tit for tat D & 30.00 & 0.93 & 0.13 & 0.67 & 1.00 & 1.00 & 1.00 & 1.00 \\
 & unconditional cooperate & 30.00 & 0.92 & 0.16 & 0.50 & 0.88 & 1.00 & 1.00 & 1.00 \\
 & unconditional defect & 30.00 & 0.91 & 0.14 & 0.67 & 0.83 & 1.00 & 1.00 & 1.00 \\ \hline
\multirow{3}{*}{A2} & tit for tat C & 30.00 & 0.87 & 0.21 & 0.50 & 0.71 & 1.00 & 1.00 & 1.00 \\
 & tit for tat D & 30.00 & 0.86 & 0.18 & 0.33 & 0.67 & 1.00 & 1.00 & 1.00 \\
 & unconditional cooperate & 30.00 & 0.84 & 0.19 & 0.50 & 0.67 & 1.00 & 1.00 & 1.00 \\
 & unconditional defect & 30.00 & 0.89 & 0.14 & 0.67 & 0.83 & 1.00 & 1.00 & 1.00 \\ \hline
\multirow{3}{*}{A3} & tit for tat C & 30.00 & 0.82 & 0.23 & 0.50 & 0.50 & 1.00 & 1.00 & 1.00 \\
 & tit for tat D & 30.00 & 0.96 & 0.10 & 0.67 & 1.00 & 1.00 & 1.00 & 1.00 \\
\cline{1-1} & unconditional cooperate & 30.00 & 0.88 & 0.19 & 0.50 & 0.71 & 1.00 & 1.00 & 1.00 \\
\cline{1-1} & unconditional defect & 30.00 & 0.97 & 0.08 & 0.67 & 1.00 & 1.00 & 1.00 & 1.00 \\
\hline
\end{tabular}
\end{table} Table 5: Cooperation frequency by participant and condition - Altruistic group

The unexpected pattern of increased cooperation with defectors and decreased cooperation with cooperators challenges our initial hypotheses and highlights a potential limitation in the LLM's ability to translate altruism or selfishness into strategies based on conditioned reciprocity. This result suggests that while the agents are sensitive to the general cooperative or competitive nature of the prompts, their capacity to effectively adapt their behavior to their partner's actions might be more limited.

## 4 Conclusion and future work

Our results demonstrate that GPT-3.5 is able, at least to some extent, to operationalise concepts of altruism and selfishness in the iterated Prisoner's Dilemma. However, most of the simulacra failed to adapt their strategy appropriately in the face of differing levels of cooperation or defection from partners. Recognizing these limitations, we call upon the research community to further investigate the factors contributing to the emergent behavior of LLM-generated agents in social dilemmas, both within and beyond the Prisoner's Dilemma.
\begin{table}
\begin{tabular}{|l|l|c c c c c c c c|}
\hline
\multirow{2}{*}{Participant} & \multirow{2}{*}{Condition} & count & mean & std & min & 25\% & 50\% & 75\% & max \\ \hline
\multirow{3}{*}{M1} & tit for tat C & 30.00 & 0.37 & 0.20 & 0.00 & 0.33 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 30.00 & 0.51 & 0.07 & 0.33 & 0.50 & 0.50 & 0.50 & 0.67 \\
 & unconditional cooperate & 30.00 & 0.32 & 0.23 & 0.00 & 0.00 & 0.50 & 0.50 & 0.50 \\
 & unconditional defect & 30.00 & 0.49 & 0.05 & 0.33 & 0.50 & 0.50 & 0.50 & 0.67 \\ \hline
\multirow{3}{*}{M2} & tit for tat C & 30.00 & 0.30 & 0.24 & 0.00 & 0.00 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 30.00 & 0.51 & 0.05 & 0.33 & 0.50 & 0.50 & 0.50 & 0.67 \\
 & unconditional cooperate & 30.00 & 0.34 & 0.22 & 0.00 & 0.04 & 0.50 & 0.50 & 0.50 \\
 & unconditional defect & 30.00 & 0.51 & 0.04 & 0.50 & 0.50 & 0.50 & 0.67 \\ \hline
\multirow{3}{*}{M3} & tit for tat C & 30.00 & 0.50 & 0.00 & 0.50 & 0.50 & 0.50 & 0.50 \\
 & tit for tat D & 30.00 & 0.94 & 0.13 & 0.67 & 1.00 & 1.00 & 1.00 & 1.00 \\
 & unconditional cooperate & 30.00 & 0.51 & 0.03 & 0.50 & 0.50 & 0.50 & 0.50 & 0.67 \\
 & unconditional defect & 30.00 & 0.94 & 0.13 & 0.67 & 1.00 & 1.00 & 1.00 & 1.00 \\
\hline
\end{tabular}
\end{table} Table 7: Cooperation frequency by participant and condition - Mixed group

Figure 2: Cooperation frequency by condition for each group

This broader research program could involve exploring the potential for more refined or complex prompts to elicit a wider range of cooperative behaviors in various experimental economics scenarios, such as the ultimatum game, the dictator game, and the public goods game, among others. Examining the role of model architecture and training parameters in shaping agent behaviors, as well as analyzing the impact of various partner strategies on agent behavior in these different contexts, could shed light on the model's adaptability and alignment with human values. In future studies, it would be valuable to examine other parameter settings, such as temperature, to explore their effects on the emergent behavior of LLM-generated agents. Additionally, as more advanced LLMs like GPT-4 become available, it would be interesting to investigate whether they exhibit similar limitations or are capable of more nuanced cooperative behaviors in a wider array of social dilemmas. Another potential limitation of the current study is that the LLM has been exposed to a vast literature on the iterated Prisoner's Dilemma in its training data, and it is unclear how it would perform in more ecologically valid task environments that it has no prior exposure to. This limitation could be addressed by inventing new social dilemma games with corresponding task descriptions which are not vignettes from the existing literature. By addressing these questions, we hope to collectively build a deeper understanding of AI alignment in the context of complex, non-zero-sum interactions across various experimental economics settings, ultimately fostering the development of AI systems that better adhere to human values and social norms.
2310.12675
Characterizing the spontaneous collapse of a wavefunction through entropy production
We investigate the phenomenology leading to the non-conservation of energy of the continuous spontaneous localization (CSL) model from the viewpoint of non-equilibrium thermodynamics, and use such framework to assess the equilibration process entailed by the dissipative formulation of the model (dCSL). As a paradigmatic situation currently addressed in frontier experiments aimed at investigating possible collapse theories, we consider a one-dimensional mechanical oscillator in a thermal state. We perform our analysis in the phase space of the oscillator, where the entropy production rate, a non-equilibrium quantity used to characterize irreversibility, can be conveniently analyzed. We show that the CSL model violates Clausius law, as it exhibits a negative entropy production rate, while the dCSL model reaches equilibrium consistently only under certain dynamical conditions, thus allowing us to identify the values -- in the parameter space -- where the latter mechanism can be faithfully used to describe a thermodynamically consistent phenomenon.
Simone Artini, Mauro Paternostro
2023-10-19T12:10:52Z
http://arxiv.org/abs/2310.12675v2
# Characterizing the spontaneous collapse of a wavefunction through entropy production

###### Abstract

We investigate the phenomenology leading to the non-conservation of energy of the continuous spontaneous localization (CSL) model from the viewpoint of non-equilibrium thermodynamics, and use such framework to assess the equilibration process entailed by the dissipative formulation of the model (dCSL). As a paradigmatic situation currently addressed in frontier experiments aimed at investigating possible collapse theories, we consider a one-dimensional mechanical oscillator in a thermal state. We perform our analysis in the phase space of the oscillator, where the entropy production rate, a non-equilibrium quantity used to characterize irreversibility, can be conveniently analyzed. We show that the CSL model violates Clausius law, as it exhibits a negative entropy production rate, while the dCSL model reaches equilibrium consistently only under certain dynamical conditions, thus allowing us to identify the values - in the parameter space - where the latter mechanism can be faithfully used to describe a thermodynamically consistent phenomenon.

## I Introduction

The quantum-to-classical transition, which is the process driving the quantum state of a system towards a fully classical description of its physical configuration, is yet to achieve a full characterization and, most remarkably, the satisfactory understanding of its underlying mechanism [1]. Particularly relevant is the question of whether the loss of coherence experienced by a large and complex quantum system should be ascribed to an intrinsic mechanism or the unavoidable presence of the surrounding environment [2]. As environmental decoherence only provides a partial addressing of the measurement problem, alternative theoretical frameworks, where the collapse of the wavefunction is lifted to the rank of a universal mechanism, are currently being formulated and developed to attack the quantum-to-classical transition [3; 4; 5; 6]. Such _collapse models_ are achieved through stochastic dynamics, and are usually characterized by phenomenological parameters that are asked to satisfy criteria of reasonability based, for instance, on the retrieval of a classical description in the macroscopic limit. The Continuous Spontaneous Localisation (CSL) model, one of the most well-studied of such models [3], describes the loss of coherence in the position basis by way of an extra dissipative term entering the master equation of a quantum system [7; 8; 9]. The physical intuition behind it is that the wave function of the physical state of a system undergoes random localization processes, called "jumps", occurring due to a dissipative mechanism not ascribable to any of the other environmental noise sources, and taking place at a rate that depends on the size of the system itself: while microscopic systems are left basically unaffected, linear superpositions of states of a macroscopic system would be strongly suppressed due to an intrinsic amplification mechanism. Mathematically, this is achieved by interpreting the wavefunction as a stochastic process in the Hilbert space [8]. Despite its apparent simplicity and appeal, the CSL model suffers from the fundamental shortfall of not conserving energy: albeit at a very slow rate, the expectation value of the energy of a quantum system of mass \(m\) undergoing CSL-like dynamics grows indefinitely with time, thus signalling the fundamentally unphysical nature of the model.
A dissipative extension of this model, dubbed the dCSL model, has been proposed [10], which, while still not conserving energy, introduces a term that acts as friction, allowing the energy to reach an asymptotic finite value and thus an effective temperature at which the system thermalizes. In this paper, we provide an original characterization of the CSL and dCSL models from the perspective offered by non-equilibrium thermodynamics: by using a phase-space formulation of irreversible entropy [11; 12], which aptly quantifies the degree of thermodynamic irreversibility of a given physical process [13], we address the collapse-affected dynamics of a quantum harmonic oscillator subjected to either CSL or dCSL. We show that, while the standard CSL model implies the violation of Clausius law of thermodynamics, witnessed by the occurrence of negative entropy production rates, the dCSL extension would result in thermodynamically consistent descriptions, under suitably chosen dynamical conditions, despite the explicit lack of energy conservation, thus embodying a more plausible formulation of a collapse mechanism to consider. In providing such an assessment, we identify regimes of the dCSL model where, despite a dominant dissipative character of the dynamics, a violation of the Second Law of thermodynamics is enabled by suitably squeezing the initial state of the oscillator. By addressing the features of fundamental collapse theories from a completely general thermodynamic standpoint, our work demonstrates the intrinsic value of non-equilibrium tools for the characterization of open quantum system dynamics. The remainder of this paper is organized as follows. After briefly reviewing the salient features of CSL and dCSL models [cf. Sec. II], in Sec. III, the corresponding quantum Fokker-Planck equations are solved numerically for an initial thermal state of the oscillator. In Sec. IV, the quantities used in our thermodynamic analysis are briefly introduced, and the entropy production rate is calculated numerically for the case study, with the results discussed in the manuscript. Finally, relevant concluding remarks are offered in Sec. V, while a technical Appendix reports details of the calculations required for the phase-space formulation of the dynamics.

## II CSL and dCSL model: an introduction

The stochastic differential equation that describes the evolution of a state under the action of a collapse mechanism such as one of those at the centre of our study is \(d\left|\psi(t)\right\rangle=\hat{\mathcal{O}}\left|\psi(t)\right\rangle\), where we have introduced the operator \(\hat{\mathcal{O}}\) that acts on a generic state vector \(\left|a\right\rangle\) as
\[\hat{\mathcal{O}}|a\rangle=\left[-\frac{i\hat{H}}{\hbar}+\gamma\int d^{3}x\hat{N}(\mathbf{x})dB(\mathbf{x})-\frac{\gamma}{2}\int d^{3}x\hat{N}^{2}(\mathbf{x})dt\right]\left|a\right\rangle. \tag{1}\]
Here, \(B(\mathbf{x})\) is a continuous set of Wiener processes and \(\hat{N}(\mathbf{x})=\sum_{s}\int d^{3}\mathbf{y}\,g(\mathbf{y}-\mathbf{x})\hat{a}^{\dagger}(\mathbf{y},s)\hat{a}(\mathbf{y},s)\) is an averaged number operator, with \(\hat{a}\) and \(\hat{a}^{\dagger}\) the annihilation and creation operators of a harmonic oscillator, written in the second quantization formalism, with a Gaussian weighting function \(g\left(\mathbf{x}\right)=\left(\frac{\alpha}{2\pi}\right)^{\frac{3}{2}}e^{-\frac{\alpha}{2}\mathbf{x}^{2}}\).
In this equation, two important parameters are present: the intensity of the Markovian noise entailed by the Wiener process, \(\gamma\), which is related to the rate of the jumps, and the length-scale \(\alpha\), which is related to the typical localization volume \(V_{loc}=\alpha^{-3/2}\). The master equation of the CSL for the statistical operator can be shown to be
\[\begin{split}\frac{d\hat{\rho}(t)}{dt}&=-\frac{i}{\hbar}\left[\hat{H},\hat{\rho}(t)\right]+\gamma\int d^{3}x\hat{N}\left(\mathbf{x}\right)\hat{\rho}\left(t\right)\hat{N}\left(\mathbf{x}\right)\\ &-\frac{\gamma}{2}\int d^{3}x\left\{\hat{N}^{2}\left(\mathbf{x}\right),\hat{\rho}(t)\right\}\,.\end{split} \tag{2}\]
Some important results can be derived. Firstly, the off-diagonal elements of the statistical operator in the position basis go to zero exponentially fast when considering distances greater than the typical localization length \(1/\sqrt{\alpha}\), which, together with the fact that the localization happens at the wave function level, guarantees the effective suppression of macroscopic linear superpositions. The choice of the parameters \(\gamma\) and \(\alpha\) can be made in such a way that the localization happens on very short time scales for objects made of a large number of particles (i.e., of the order of Avogadro's number), while leaving the standard Schrödinger evolution for systems made of few particles basically unaffected (the aforementioned amplification mechanism). Furthermore, the expectation values of the position and of the momentum evolve in time as in the unitary evolution, the Ehrenfest theorem holds, and the internal degrees of freedom are decoupled from the center of mass, as in the standard quantum theory. However, in this simple formulation the energy is not conserved; on the contrary, it diverges in time as
\[\langle\hat{H}\rangle=\langle\hat{H}\rangle_{Sch}+\frac{\lambda\alpha\hbar^{2}}{4m}t\,, \tag{3}\]
where \(\langle\cdot\rangle\) denotes the quantum expectation value taken with respect to the modified dynamics for \(\hat{\rho}\), while \(\langle\cdot\rangle_{Sch}\) is the quantum expectation value taken with respect to the standard Schrödinger dynamics [3]. The dCSL extension of this model [10] fails to conserve energy but introduces in Eq. (1) a term that depends on the momentum through a new parameter in the weighting function that acts as friction, thus allowing the energy to reach an asymptotic finite value and thus an effective temperature at which the system thermalizes.

## III Numerical solution of the dynamics in the phase-space picture

### The quantum Fokker-Planck equation

In order to carry out a thermodynamical analysis of the dynamics, it is necessary to translate the master equation, concerning the statistical operator, into a Fokker-Planck equation written in terms of the Wigner function [14] of the system. The reason behind this will be clear in Sec. IV, as it lies in the choice of the entropy that will be used and, furthermore, allows for a simple numerical solution of the case study. This is achieved, as far as the CSL model is concerned, via the Wigner-Weyl transform [15] of Eq. (2). The full computation is carried out in the Appendix. One can show that, for a one-dimensional system and considering Gaussian states, i.e.
states with a Gaussian Wigner function (as the thermal state of the oscillator, for example), the Fokker-Planck equation of the system can be well approximated with
\[\partial_{t}W_{\hat{\rho}}(q,p)=\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}(q,p)+D\,\partial_{p}^{2}W_{\hat{\rho}}(q,p)\,, \tag{4}\]
where \(W_{\hat{H}}\) is the Weyl symbol of the Hamiltonian, \(W_{\hat{\rho}}\) is the Weyl symbol of the statistical operator, i.e. the Wigner function of the system, and \(\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}\) is the Moyal bracket [15] of the two symbols that comes from the unitary term. This is written explicitly [15; 16] as
\[\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}=2W_{\hat{H}}\sin\left(\frac{1}{2}\left(\overleftarrow{\partial}_{q}\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial}_{q}\right)\right)W_{\hat{\rho}}\,, \tag{5}\]
where we have omitted the arguments of the Wigner function for simplicity of notation. Here, \(D=\sqrt{\gamma^{2}\alpha^{3}/\pi}\) is the diffusion parameter of the dynamics. Indeed, Eq. (4) shows that the simplified collapse term is a simple anisotropic diffusion in the momentum direction. In what follows, unless otherwise specified, we use natural units according to which \(\hbar=1\) and rescale the position and momentum operators as \(p/p_{\rm zpf}\to p\) and \(q/q_{\rm zpf}\to q\), with \(p_{\rm zpf}=\sqrt{mw/2}\) and \(q_{\rm zpf}=1/\sqrt{2mw}\) the zero-point fluctuations of a harmonic oscillator with mass \(m\) and frequency \(w\). Correspondingly, we have \(\alpha/2mw\rightarrow\alpha\). Considering instead the dCSL model, it has been shown in Ref. [10] that the modified Schrödinger equation leads to an asymptotic value of the mean energy
\[\langle\hat{H}\rangle=(\langle\hat{H}\rangle_{Sch}-\langle\hat{H}\rangle_{as})e^{-\xi t}+\langle\hat{H}\rangle_{as}, \tag{6}\]
with \(\langle\hat{H}\rangle_{as}=\hbar^{2}\alpha/(16mk)\), \(\xi=\frac{\gamma(\alpha/\pi)^{3/2}}{2(1+k)^{2}}\), and \(k\) related to the parameter introduced in the momentum-dependent term in the modified Schrödinger equation. The equilibrium temperature of the system can thus be written as \(T=\alpha\hbar^{2}/(8k_{B}k)\) and is estimated to be \(T\simeq 10^{-1}\,\)K. The friction effect driving the system to such equilibrium configuration can be accounted for in the Fokker-Planck equation by adding a dissipative term to Eq. (4) as follows
\[\partial_{t}W_{\hat{\rho}}=\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}+D\,\partial_{p}^{2}W_{\hat{\rho}}+\partial_{p}\left(fpW_{\hat{\rho}}\right)\,, \tag{7}\]
where \(f\) is the dissipative constant; natural units and dimensionless \(q\), \(p\) are considered once again.

### CSL model: numerical solution of the dynamics in phase space

Keeping the natural units, but restoring the proper dimensions of the phase-space variables, we consider the initial Wigner function
\[W_{\hat{\rho}}(q,p)=\frac{a_{0}}{\pi}\exp\left[-a_{0}\bigg{(}mwq^{2}+\frac{p^{2}}{mw}\bigg{)}\right] \tag{8}\]
and the generic ansatz
\[W_{\hat{\rho}}(q,p)=\frac{\sqrt{a(t)b(t)-c^{2}(t)}}{\pi}e^{-\left(a(t)mwq^{2}+b(t)\frac{p^{2}}{mw}+2c(t)pq\right)} \tag{9}\]
to describe the anisotropic evolved state at a generic time of the dynamics. Here, the dimensionless time-dependent parameters \(a(t)\), \(b(t)\) and \(c(t)\) need to be determined from the evolution of the system. Let us call \(\Sigma=V\) the covariance matrix of the system, where \(V^{-1}=\left(\begin{array}{cc}a&c\\ c&b\end{array}\right)\). We start by looking into the unitary term: from Eq.
We start by looking into the unitary term: from Eq. (5) it is straightforward to check that the Moyal bracket is equal to the Poisson bracket up to corrections of order \(\hbar^{2}\), that is, \(\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}=\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}+\mathcal{O}(\hbar^{2})\). Furthermore, one can show that \(W_{\hat{H}}(q,p)=\left(p^{2}/m+mw^{2}q^{2}\right)/2\) [14]. By using the ansatz in the Fokker-Planck equation and equating the terms with the same powers of \(p\), \(q\) and \(pq\), we get the following set of differential equations \[\dot{a}=2wc-\frac{4Dc^{2}}{mw},\quad\dot{b}=-2wc-\frac{4Db^{2}}{mw},\quad\dot{c}=w(b-a)-\frac{4Dbc}{mw} \tag{10}\] with the additional condition \(\frac{d}{dt}\ln\left(ab-c^{2}\right)=-\frac{4Db}{mw}\). While these equations do not admit a stationary solution, it is straightforward to gather the temporal behavior of \(a(t)\) and \(b(t)\). We take \(D/\left(mw\right)=0.1\) as the diffusion coefficient and \(b_{0}=a_{0}=1/1.01\), \(c_{0}=0\) as initial conditions, and integrate over the dimensionless evolution time \(wt\), thus finding the behavior illustrated in Fig. 1. Clearly, the dominant effect is diffusion, leading to a progressive spread of the Wigner function that never settles into an equilibrium steady state. On the other hand, the unitary term causes a rotation in phase space which is responsible for the emergence of transient correlations. This causes the variances to fluctuate around the linearly increasing trend set by the diffusion, which would not be present in the \(q\) direction without the unitary term. Notice that, without the diffusion term, the rotation in phase space would not affect a symmetric Wigner function such as that of a thermal state, which is in fact the stationary solution of the unitary dynamics.

### Dissipative CSL model: numerical solution of the dynamics in phase space

The approach sketched above can be used also for the dCSL model, finding the dynamical equations \[\dot{a}=2wc-\frac{4Dc^{2}}{mw}\,,\qquad\dot{b}=-2wc-\frac{4Db^{2}}{mw}+2fb\,,\qquad\dot{c}=w(b-a)-\frac{4Dbc}{mw}+fc \tag{11}\] with the further constraint \(\frac{d}{dt}\ln\left(ab-c^{2}\right)=-4Db/\left(mw\right)+2f\). The additional, \(f\)-dependent terms in the dynamical equations lead to a non-trivial isotropic stationary solution characterized by the equilibrium parameters \(c_{eq}=0\) and \(a_{eq}=b_{eq}=mwf/\left(2D\right)\). Such an isotropic state can be seen as a thermal state with a finite effective temperature \(T\) determined by the following relation \[a_{eq}=\frac{1}{2}\frac{e^{\frac{w}{T}}-1}{e^{\frac{w}{T}}+1}=\frac{mw}{2D}f. \tag{12}\] Depending on the relative values of \(D/(mw)\) and \(f\), two cases can be identified. In the diffusion-dominated case, where \(f<D/(mw)\), the final variances end up being larger than any initial value; the phenomenology is the opposite in the friction-dominated case, corresponding to the choice \(f>D/(mw)\). This is well illustrated in Fig. 2, where we show the convergence of the elements of the covariance matrix of the system to their asymptotic values. As in the non-dissipative dynamics, the distribution is stretched along the \(p\) direction by the diffusion and rotated by the unitary term. This time, however, the diffusion competes with the friction until the distribution settles around a symmetric state whose variances, should the diffusion term dominate, would be larger than the initial values [cf. Fig. 3 and Fig. 4 for a qualitative comparison between the CSL and the dCSL dynamics of the Wigner function of the system].

Figure 1: Evolution of \(a(t)\) (dashed blue line), \(b(t)\) (solid red line), and \(c(t)\) (solid green curve) against time. All quantities are dimensionless. In this simulation we have used the parameters \(D/\left(m\omega\right)=0.1\), \(a_{0}=b_{0}=1/1.01\), \(c_{0}=0\). The inset shows the behavior in a shorter time-window to appreciate the nearly out-of-phase nature of \(a(t)\) and \(b(t)\).

Figure 2: Evolution of the variance of position (dashed blue line) and momentum (solid red line), and of the covariance \(\Sigma_{12}(t)\) for the dCSL model. In panel **(a)** we look into the dynamics under dominant-friction conditions by taking \(D/\left(mw\right)=0.5\), \(f=0.7\). Panel **(b)** is diffusion-dominated as we have chosen \(D/\left(mw\right)=0.5\), \(f=0.4\). The initial conditions are \(a_{0}=b_{0}=1/1.01\), \(c_{0}=0\) in both panels. The quantities being plotted are all dimensionless.

Figure 3: Snapshots of the dynamics of the Wigner function of the system under the effects of the CSL mechanism. We sample the distribution at four different times (\(wt=0,1.6,3.2,5\)). All quantities are dimensionless. In this simulation we have used the parameters \(D/\left(m\omega\right)=0.9\), \(a_{0}=b_{0}=1/1.01\), \(c_{0}=0\).

Figure 4: Snapshots of the dynamics of the Wigner function of the system under the effects of the dCSL mechanism. We sample the distribution at four different times (\(wt=0,32,68,98\)). All quantities are dimensionless. In this simulation we have used the parameters \(D/\left(m\omega\right)=1.6\), \(f=2\), \(a_{0}=b_{0}=1/1.01\), \(c_{0}=0\).
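The coupled equations (10)-(11) are straightforward to integrate numerically. The following sketch (an illustration, not the authors' code) uses scipy and adopts the dimensionless convention of the figures, in which time is measured as \(wt\) and \(D/(mw)\) and \(f\) enter as dimensionless rates; setting \(f=0\) recovers the CSL equations (10):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dcsl_rhs(t, y, Dmw, f):
    """Right-hand sides of Eqs. (10)-(11) in dimensionless time wt.
    Dmw plays the role of D/(m w); f = 0 recovers the CSL case."""
    a, b, c = y
    da = 2 * c - 4 * Dmw * c**2
    db = -2 * c - 4 * Dmw * b**2 + 2 * f * b
    dc = (b - a) - 4 * Dmw * b * c + f * c
    return [da, db, dc]

y0 = [1 / 1.01, 1 / 1.01, 0.0]        # a0, b0, c0 as in Figs. 1 and 2
sol = solve_ivp(dcsl_rhs, (0.0, 100.0), y0, args=(0.5, 0.7),
                rtol=1e-9, atol=1e-12, dense_output=True)
a, b, c = sol.y
print(a[-1], b[-1], c[-1])            # -> a_eq = b_eq = f/(2 D/(mw)) = 0.7, c_eq = 0
```

For \(f>0\) the trajectory relaxes to the isotropic fixed point \(c_{eq}=0\), \(a_{eq}=b_{eq}=f/(2D/(mw))\) of Eq. (12), while for \(f=0\) the variances grow without bound, as discussed above.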
## IV Entropic analysis of the collapse dynamics

### The definition of entropy and the entropy production rate

Having characterized the phase-space dynamics of the system under the collapse models at the centre of this study, we now introduce the thermodynamical quantities used in the present paper. The main theoretical tool is the entropy production [13; 17], i.e. the contribution to the total entropy of a thermodynamic transformation or process that is produced by taking or keeping the system out of equilibrium. It embodies a quantitative measure of irreversibility in such processes, and its rate is used to determine whether a system approaches thermal equilibrium during its dynamics [18]. For a general open-system dynamics, the entropy production rate \(\Pi\) is defined as [13] \[\frac{dS}{dt}=\Pi(t)-\phi(t), \tag{13}\] where \(\phi(t)\) is the entropy flux between the system and the environment it is in contact with.
Its thermodynamically consistent definition requires the entropy production rate to satisfy a class of fluctuation theorems, namely mathematical generalizations of the second law of thermodynamics [19], thus giving rise to the constraint \(\Pi(t)\geq 0\) across the dynamics. When working in phase space, a successful formulation of the framework for the quantification of entropy production, which allows one to bypass some of the shortfalls of the standard approach based on the von Neumann entropy (such as the so-called _ultra-cold catastrophe_ [20]), makes use of the Renyi-2 entropy [11; 12], defined as \(S_{2}=-\ln(\text{Tr}\,\hat{\rho}^{2})\), where \(\hat{\rho}\) is the density matrix of the system. For Gaussian states, such a quantity is equivalently formulated - up to an irrelevant constant - as \[S_{2}(t)=\frac{1}{2}\ln[\det V(t)]=-\int W(q,p,t)\ln\bigl{(}W(q,p,t)\bigr{)}dqdp \tag{14}\] when expressed in terms of the covariance matrix \(V(t)\) and the Wigner function \(W(q,p,t)\) at time \(t\) associated with \(\hat{\rho}\) [21]. With such a tool at hand, the entropy production rate is then defined as [11; 13] \[\Pi(t)=-\partial_{t}K(W(q,p,t)||W_{0}(q,p)), \tag{15}\] where \(W_{0}(q,p)\) is the Wigner function of the equilibrium state of the system. Here \[K(W_{a}||W_{b})=\int dqdp\,W_{a}(q,p)\ln\biggl{(}\frac{W_{a}(q,p)}{W_{b}(q,p)}\biggr{)} \tag{16}\] is the relative Wigner entropy between the Wigner functions \(W_{a,b}(q,p)\). Recently, this framework has been successfully used to experimentally characterize the degree of irreversibility of the non-equilibrium dynamics of both an optomechanical system and an intra-cavity ultracold atomic system [22].

### Entropy production rate of the CSL dynamics

Using the definition of entropy given in Eq. (14), it is possible to obtain an analytical expression for both the entropy and the relative entropy of a Gaussian distribution, which depends only on the covariance matrix.
The Wigner entropy of a single-mode Gaussian state reads \[H(p)=S_{2}+1+\ln\pi=\ln\Bigl{(}\pi e\sqrt{\det V}\Bigr{)}, \tag{17}\] while the relative Wigner entropy can be computed explicitly as [21] \[K(p_{1}||p_{2})=\frac{1}{2}\ln\biggl{(}\frac{\det V_{2}}{\det V_{1}}\biggr{)}+\frac{1}{2}\operatorname{tr}\Bigl{(}V_{1}V_{2}^{-1}\Bigr{)}-1, \tag{18}\] with \(V_{j}^{-1}=\begin{pmatrix}a_{j}&c_{j}\\ c_{j}&b_{j}\end{pmatrix}\) (\(j=1,2\)). We can use the dynamics of the entries of the covariance matrix of the system, and their equations of motion, to gather the behavior of the Wigner relative entropy \(K(W(p,q,t)||W_{eq})\) and of the entropy production rate. Needless to say, the ambiguity in this case is the lack of a reference equilibrium state: the standard CSL model induces an unconstrained growth of the effective temperature of the system without reaching a stationary state. Therefore, in order to gather an intuition of the trend that the entropy production would follow, we compute the entropy production associated with target thermal states of growing variances, thus providing information on the features of both the Wigner relative entropy and \(\Pi(t)\). Fig. 5 summarizes the results of such a study. The Wigner relative entropy in general showcases a non-monotonic behavior, reaching a minimum value and then growing nearly linearly as the evolved state of the system departs from the chosen target state. Correspondingly, after remaining positive for a while, the entropy production rate takes negative values, thus witnessing the violation of the second law embodied by the constraint \(\Pi(t)\geq 0\). The minimum of the relative entropy is attained at the time when the evolved state of the system becomes as close as allowed by the dynamics to the reference thermal state. This can be clearly seen from the state fidelity between \(\hat{\rho}(t)\) and the hypothetical reference state at hand. Such a figure of merit can be calculated straightforwardly from the covariance matrices \(V(t)\) and \(V_{eq}\) of such states as [23; 24; 25; 26; 27] \[F(t)=\frac{2}{\sqrt{\Delta+\Lambda}-\sqrt{\Lambda}} \tag{19}\] with the symplectic invariants \(\Delta=\det\bigl{(}V(t)+V_{eq}\bigr{)}\) and \(\Lambda=\det(V(t)+i\Omega)\det\bigl{(}V_{eq}+i\Omega\bigr{)}\), where we have used the single-mode symplectic matrix \(\Omega=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\). As can be appreciated from Fig. 6, the state fidelity peaks at the time \(wt\) at which the mean number of excitations in the state of the system becomes identical to that of the target thermal state. This is also when \(K(W(q,p,t)\|W_{eq})\) achieves its minimum. Such phenomenology clearly takes place regardless of the chosen target state. It is as if the state were interacting with a thermal bath at infinite temperature: effectively, the dynamics has no physical asymptotic state, and thus there is no target state that could make the relative entropy vanish, reflecting the linear increase in the average energy of the system predicted by the model.

Figure 5: Entropy production rate (main panel, solid red line) and corresponding Wigner relative entropy (inset, dashed blue curve) across the dynamics. The parameters used in this simulation are \(D=0.1\), \(b_{1}(0)=a_{1}(0)=1/1.01\), \(c_{1}(0)=0\) for a target state with \(a_{2}=b_{2}=1.01k\) and \(k=1,..,5\), going in the sense of the arrows.

Figure 6: State fidelity [panel **(a)**] and mean number of excitations in the state of the system [panel **(b)**] across the CSL dynamics. We have used the following parameters for the simulations reported in the figure: \(D/\left(mw\right)=0.1\), \(1/b_{0}=1/a_{0}=1.01\), \(c_{0}=0\). Moreover, we have considered target states with variances \(a_{eq}=b_{eq}=1.01k\) with \(k=1,..,5\) varying in steps of \(1\). All the reported quantities are dimensionless.
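The quantities in Eqs. (14)-(19) involve only \(2\times 2\) covariance matrices and are easy to evaluate numerically. The sketch below (an illustration under the conventions above, not the authors' code) computes the Renyi-2 entropy, the relative Wigner entropy, and the Gaussian fidelity from the parameters \(a\), \(b\), \(c\); \(\Pi(t)\) then follows by finite differences of \(K\) along a trajectory of Eqs. (10)-(11):

```python
import numpy as np

OMEGA = np.array([[0.0, 1.0], [-1.0, 0.0]])  # single-mode symplectic matrix

def cov(a, b, c):
    """Covariance matrix V with V^{-1} = [[a, c], [c, b]] (cf. Eq. (18))."""
    return np.linalg.inv(np.array([[a, c], [c, b]]))

def renyi2(V):
    """S_2 of Eq. (14), up to the irrelevant constant."""
    return 0.5 * np.log(np.linalg.det(V))

def rel_wigner_entropy(V1, V2):
    """K(p1 || p2) of Eq. (18)."""
    return (0.5 * np.log(np.linalg.det(V2) / np.linalg.det(V1))
            + 0.5 * np.trace(V1 @ np.linalg.inv(V2)) - 1.0)

def fidelity(V1, V2):
    """Gaussian state fidelity of Eq. (19) for zero-mean states."""
    delta = np.linalg.det(V1 + V2)
    lam = (np.linalg.det(V1 + 1j * OMEGA) * np.linalg.det(V2 + 1j * OMEGA)).real
    return 2.0 / (np.sqrt(delta + lam) - np.sqrt(lam))

V_t = cov(1 / 1.01, 1 / 1.01, 0.0)     # evolved state at some instant
V_eq = cov(1.01, 1.01, 0.0)            # a hypothetical target thermal state
print(rel_wigner_entropy(V_t, V_eq), fidelity(V_t, V_eq))
# Pi(t) can be estimated as -np.gradient(K_values, dt) along a solution
```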
### Entropy production rate of the dissipative CSL dynamics

The analysis of the behavior of the entropy production rate in time can now be extended to the assessment of the dCSL mechanism, where the two dynamical regimes identified previously should be addressed separately. First, let us address the case of diffusion-dominated dynamics, where a clear stationary state is achieved as a result of the competition between diffusion and friction-like effects. The phenomenology of \(\Pi(t)\) and of the Wigner relative entropy is shown in Fig. 7 for a set of values of the parameters characterizing the dynamics. As the evolution has an asymptotic state and remains physically legitimate for any finite value of the ratio between \(D/\left(mw\right)\) and \(f\), after a transient the entropy production rate \(\Pi\) reduces to zero from otherwise positive values, thus satisfying the second law. On the other hand, care should be taken in the friction-dominated case: some values of the ratio \(D/\left(mwf\right)\) may lead to physically inconsistent dynamics, as evidenced by violations of the constraints that a legitimate covariance matrix should satisfy [28], namely \(V(t)\geq 0\) and \(V(t)+i\Omega\geq 0\), which implies \(|i\Omega V(t)|\geq 1\). The violation of such conditions may result in \(\Pi(t)<0\) at some instants of time, thus violating the second law of thermodynamics. A quantitative constraint comes from the uncertainty principle, leading to \(\det(V_{eq})\geq 1\) or, equivalently, \(f\leq\frac{2D}{mw}\). A way to obtain an inconsistent dynamics in the dCSL case is to consider far out-of-equilibrium initial states. For instance, we have found that suitably squeezed initial states, in the friction-dominated case, might result in the violation of the second law, while still reaching an isotropic equilibrium state. Such instances do not occur, instead, for diffusion-dominated dynamics. A full characterization of the dynamics with non-isotropic initial states will be the core of a future investigation.

Figure 7: Entropy production rate and Wigner relative entropy (inset) over time for the dCSL model with dominant diffusion. For this simulation, we have used the parameters \(D/\left(mw\right)=0.9\) and \(f=0.1k\) with \(k=1,..,8\), growing as shown by the sense of the arrow in the figure. We have considered initial covariance matrix elements \(1/b_{0}=1/a_{0}=1.01\) and \(c_{0}=0\).

Figure 8: Entropy production rate and Wigner relative entropy (inset) over time for the dCSL model with dominant friction. For this simulation, we have used the parameters \(D/\left(mw\right)=0.5\) and \(f=0.1k\) with \(k=6,..,10\), growing as shown by the sense of the arrow in the figure. We have considered initial covariance matrix elements \(1/b_{0}=1/a_{0}=1.01\) and \(c_{0}=0\).

## V Conclusions

We have used a phase-space description of the dynamics entailed by both the CSL and dCSL models on a quantum harmonic oscillator, providing a thermodynamic characterisation of the dynamical features of such collapse mechanisms. Through a study of the entropy production rate, we have highlighted the lack of an equilibrium state for the CSL dynamics. Correspondingly, such model violates the second law of thermodynamics, as showcased by a negative entropy production rate. Differently, the dynamics induced by the dCSL model indeed reaches, asymptotically, an equilibrium state for any choice of the parameters. However, the model is generally thermodynamically consistent only in the diffusion-dominated case. All this being said, it is clear that, in general, thermodynamical transformations involving reservoirs hotter than the system are physically acceptable; only, in this case, a contribution to the entropy production rate of the environment is present and must be taken into account. The present analysis is thus only partial in addressing this problem, since with a proper model of the environment that generates the noise, a non-zero-temperature asymptotic state of the dynamics which does not violate the second law could be found (with different restrictions on the parameters). A starting point could be Ref. [29], where a microscopic derivation of the noise is provided.
A similar analysis can be carried out also on other variants of the collapse models, such as the famous Diosi-Penrose model [30; 31], which involves gravity, or energy-conserving formulations of the CSL.

###### Acknowledgements.

The authors thank Angelo Bassi for discussions and support. MP acknowledges the support by the European Union's Horizon Europe EIC Pathfinder project QuCoM (Grant Agreement No. 101046973), the Leverhulme Trust Research Project Grant UltraQuTe (grant RGP-2018-266), the Royal Society Wolfson Fellowship (RSWF/R3/183013), the UK EPSRC (EP/T028424/1), and the Department for the Economy Northern Ireland under the US-Ireland R&D Partnership Programme.

## Appendix: Derivation of the Fokker-Planck equation of the CSL model

We provide the full derivation of the quantum Fokker-Planck equation of a Gaussian state subjected to CSL used in Sec. III. As done previously, natural units and dimensionless position and momentum are considered. The Weyl symbol of the statistical operator is called the Wigner function and is defined as the Fourier transform of the quantum characteristic function \(\chi_{\hat{\rho}}(\vec{s})=\text{Tr}[\hat{\rho}\hat{D}(\vec{s})]\), where \(\hat{D}\) is the displacement operator [14]. We have \[W_{\hat{\rho}}(\vec{r})=\int\frac{d^{2}s}{(4\pi)^{2}}e^{-\frac{i}{2}\vec{r}^{\top}\Omega\vec{s}}\chi_{\hat{\rho}}(\vec{s})\] with \(\vec{r}=(q,p)\). This expression can be shown to be equivalent to \[W_{\hat{\rho}}(\vec{r})=\int\frac{dy}{4\pi}e^{-\frac{i}{2}py}\left\langle q+\frac{y}{2}\big{|}\hat{\rho}\big{|}q-\frac{y}{2}\right\rangle\,,\] and, as this equivalence does not depend on the choice of the quantum operator being transformed, it holds for any other Weyl symbol as well, that is, \(W_{\hat{A}}(q,p)=\int\frac{dy}{4\pi}e^{-\frac{i}{2}py}\left\langle q+\frac{y}{2}\big{|}\hat{A}\big{|}q-\frac{y}{2}\right\rangle\). Now we can use this expression to take the Weyl symbol of Eq. (2). One can start by considering the time evolution of the matrix elements of the statistical operator in the position basis, derived for example in Ref. [3]:
\[\begin{split}\frac{\partial}{\partial t}\left\langle\vec{q}^{\,\prime}\big{|}\hat{\rho}(t)\big{|}\vec{q}^{\,\prime\prime}\right\rangle&=-i\left\langle\vec{q}^{\,\prime}\big{|}[\hat{H},\hat{\rho}(t)]\big{|}\vec{q}^{\,\prime\prime}\right\rangle\\ &-\gamma\Big{(}\frac{\alpha}{4\pi}\Big{)}^{\frac{3}{2}}\Big{[}1-e^{-\frac{\alpha}{4}(\vec{q}^{\,\prime}-\vec{q}^{\,\prime\prime})^{2}}\Big{]}\left\langle\vec{q}^{\,\prime}\big{|}\hat{\rho}(t)\big{|}\vec{q}^{\,\prime\prime}\right\rangle. \end{split} \tag{20}\]

Then, once specialized to the one-dimensional case with one particle, it is enough to take the Fourier transform to get the Weyl symbol of the equation. The left-hand side will be, of course, \(\partial_{t}W_{\hat{\rho}}(q,p)\), as the time derivative can be taken out of the integral. As for the right-hand side, let us consider first the non-Hamiltonian term, which leads to \[-\frac{\gamma}{4\pi}\sqrt{\frac{\alpha}{4\pi}}\int dy\,e^{-\frac{i}{2}py}\Big{[}1-e^{-\frac{\alpha}{4}y^{2}}\Big{]}\left\langle q+\frac{y}{2}\big{|}\hat{\rho}(t)\big{|}q-\frac{y}{2}\right\rangle\] \[=-\gamma\sqrt{\frac{\alpha}{4\pi}}\bigg{[}W_{\hat{\rho}}(q,p)-\frac{1}{4\pi}\int dy\,e^{-\frac{i}{2}py}e^{-\frac{\alpha}{4}y^{2}}\left\langle q+\frac{y}{2}\big{|}\hat{\rho}(t)\big{|}q-\frac{y}{2}\right\rangle\bigg{]}\] \[=-\gamma\sqrt{\frac{\alpha}{4\pi}}\bigg{[}W_{\hat{\rho}}(q,p)-\frac{1}{4\pi}\int\frac{dk}{\sqrt{\pi}}e^{-k^{2}}\int dy\,e^{-\frac{i}{2}y(p-2\sqrt{\alpha}k)}\left\langle q+\frac{y}{2}\big{|}\hat{\rho}(t)\big{|}q-\frac{y}{2}\right\rangle\bigg{]}\] \[=-\gamma\sqrt{\frac{\alpha}{4\pi}}\bigg{[}W_{\hat{\rho}}(q,p)-\int\frac{dk}{\sqrt{\pi}}e^{-k^{2}}W_{\hat{\rho}}(q,p-2\sqrt{\alpha}k)\bigg{]}\,,\] where we have used the identity \[\int\frac{dk}{\sqrt{\pi}}\exp\big{(}-(k^{2}-i\sqrt{\alpha}yk)\big{)}=\exp\biggl{(}-\frac{\alpha}{4}y^{2}\biggr{)}\,.\] For the Hamiltonian part we refer to Ref. [15] for the definition of the Moyal bracket as \(\left\{W_{\hat{A}},W_{\hat{B}}\right\}_{*}=-iW_{[\hat{A},\hat{B}]}\), finally getting \[\partial_{t}W_{\hat{\rho}}(q,p)=\left\{W_{\hat{H}},W_{\hat{\rho}}\right\}_{*}(q,p)-\gamma\sqrt{\frac{\alpha}{4\pi}}\bigg{[}W_{\hat{\rho}}(q,p)-\int\frac{dk}{\sqrt{\pi}}e^{-k^{2}}W_{\hat{\rho}}\big{(}q,p-2\sqrt{\alpha}k\big{)}\bigg{]}. \tag{21}\]

As we are interested only in Gaussian states, we can simplify this expression through the Kramers-Moyal expansion. First, let us rearrange the integral by making the change of variable \(p^{\prime}=p-2\sqrt{\alpha}k\), thus getting \[\int\frac{dk\,e^{-k^{2}}}{\sqrt{\pi}}W_{\hat{\rho}}\big{(}q,p-2\sqrt{\alpha}k\big{)}=\int\frac{dp^{\prime}}{\sqrt{\pi}}\frac{e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}}{2\sqrt{\alpha}}W_{\hat{\rho}}(q,p^{\prime})\,.\] Considering only states whose Wigner function is well localized around the origin (e.g. Gaussian states), one can Taylor expand the Wigner function around \(p^{\prime}=p\) and truncate at the first non-trivial order, leading to \[\int\frac{dp^{\prime}}{\sqrt{4\pi\alpha}}\,e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}W_{\hat{\rho}}(q,p^{\prime})\simeq\int dp^{\prime}\frac{e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}}{\sqrt{4\pi\alpha}}\left[W_{\hat{\rho}}(q,p)+\partial_{p}W_{\hat{\rho}}(q,p)(p^{\prime}-p)+\partial_{p}^{2}W_{\hat{\rho}}(q,p)(p^{\prime}-p)^{2}\right]\] where the approximation sign is used as higher-order terms have been neglected.
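For completeness, the Gaussian moments entering the truncated expansion above are the standard ones,
\[\int dp^{\prime}\,\frac{e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}}{\sqrt{4\pi\alpha}}=1,\qquad\int dp^{\prime}\,\frac{e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}}{\sqrt{4\pi\alpha}}\,(p^{\prime}-p)=0,\qquad\int dp^{\prime}\,\frac{e^{-\frac{(p-p^{\prime})^{2}}{4\alpha}}}{\sqrt{4\pi\alpha}}\,(p^{\prime}-p)^{2}=2\alpha,\]
so that the zeroth-order term reproduces \(W_{\hat{\rho}}(q,p)\) and the second-order term supplies the diffusive contribution quoted below.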
As the second term in the integral is identically zero by parity, after some algebra one is left with \[W_{\hat{\rho}}(q,p)+2\alpha\partial_{p}^{2}W_{\hat{\rho}}(q,p)\,.\]
2308.10228
Scene-Driven Exploration and GUI Modeling for Android Apps
Due to the competitive environment, mobile apps are usually produced under pressure with lots of complicated functionality and UI pages. Therefore, it is challenging for various roles to design, understand, test, and maintain these apps. The extracted transition graphs for apps such as ATG, WTG, and STG have a low transition coverage and coarse-grained granularity, which limits the existing methods of graphical user interface (GUI) modeling by UI exploration. To solve these problems, in this paper, we propose SceneDroid, a scene-driven exploration approach to extracting the GUI scenes dynamically by integrating a series of novel techniques including smart exploration, state fuzzing, and indirect launching strategies. We present the GUI scenes as a scene transition graph (SceneTG) to model the GUI of apps with high transition coverage and fine-grained granularity. Compared with the existing GUI modeling tools, SceneDroid has improved by 168.74% in the coverage of transition pairs and 162.42% in scene extraction. Apart from the effectiveness evaluation of SceneDroid, we also illustrate the future potential of SceneDroid as a fundamental capability to support app development, reverse engineering, and GUI regression testing.
Xiangyu Zhang, Lingling Fan, Sen Chen, Yucheng Su, Boyuan Li
2023-08-20T10:54:25Z
http://arxiv.org/abs/2308.10228v1
# Scene-Driven Exploration and GUI Modeling for Android Apps

###### Abstract

Due to the competitive environment, mobile apps are usually produced under pressure with lots of complicated functionality and UI pages. Therefore, it is challenging for various roles to design, understand, test, and maintain these apps. The extracted transition graphs for apps such as ATG, WTG, and STG have a low transition coverage and coarse-grained granularity, which limits the existing methods of graphical user interface (GUI) modeling by UI exploration. To solve these problems, in this paper, we propose SceneDroid, a scene-driven exploration approach to extracting the GUI scenes dynamically by integrating a series of novel techniques including smart exploration, state fuzzing, and indirect launching strategies. We present the GUI scenes as a scene transition graph (SceneTG) to model the GUI of apps with high transition coverage and fine-grained granularity. Compared with the existing GUI modeling tools, SceneDroid has improved by 168.74% in the coverage of transition pairs and 162.42% in scene extraction. Apart from the effectiveness evaluation of SceneDroid, we also illustrate the future potential of SceneDroid as a fundamental capability to support app development, reverse engineering, and GUI regression testing.

Android app, Scene-driven exploration, GUI exploration, GUI modeling

## I Introduction

Mobile applications (apps) are indispensable in daily life [1]. Excessive demand also means that people have higher requirements for these apps; therefore, they are usually developed under pressure with more complex functionalities and UI pages. Every coin has two sides: it is challenging to design, understand, test, and maintain these apps for different roles such as product managers, designers, developers, and maintainers. To mitigate such problems and to help understand these complex apps, app abstraction and graphical user interface (GUI) modeling have been used to help users understand apps by leveraging UI exploration [1, 2, 3, 4, 5, 6]. Many different approaches to GUI modeling have been proposed, such as the activity transition graph (ATG) [2, 7], the window transition graph (WTG) [3], and the screen transition graph (STG) [4]. Although both static and dynamic methods are available for UI exploration, two significant issues have not been dealt with yet. _(1)_ It is challenging to construct a relatively complete \(*\)TG.1 Due to numerous implementations and various code styles, static UI exploration misses several transitions [5, 8]. Besides, as some activities are too complex to fully explore or require complex inputs that cannot be completed automatically, the coverage may still be far from acceptable [9, 10, 11]. _(2)_ The UI pages are more significant than the \(*\)TG structure, since Android apps are event-driven with rich UI pages; the UI page is more helpful and intuitive for users to understand the app.

Footnote 1: We use \(*\)TG to present these existing transition graphs.

In this situation, Chen et al. [1], inspired by the concept of storyboards in the movie industry, proposed StoryDroid and automatically extracted storyboards for Android apps, which contain both the ATG and rendered UI pages, along with many other useful features such as UI components, the corresponding layout and logic code, and the method hierarchy. Another work, StoryDistiller [5], is an extension of it [1], which enhanced StoryDroid in both ATG construction and UI page rendering by adding dynamic UI exploration.
In other words, StoryDistiller is a hybrid solution to extract storyboards for apps, providing app abstraction and GUI modeling with rich visible UI features. However, StoryDistiller [5] still has shortcomings that hinder understanding apps: _(1)_ Its dynamic exploration strategy only triggers each interactive UI component on the rendered activity, missing many deep-level interactive UI components. This simple strategy inevitably loses many transition pairs. _(2)_ The extracted \(*\)TG is coarse-grained. In addition to the \(*\)TG, many other GUI "scenes" can be triggered in an activity, as shown in Figure 1, leading to the creation of numerous new UI pages containing new functionalities. An urgent need for a fine-grained GUI modeling solution exists.

In fact, addressing the above-mentioned problems poses the following challenges:

**C1: Reasonable UI Granularity.** Achieving a reasonable UI granularity is challenging when seeking to define app UI updates, as we must preserve key UI information while avoiding the recording of excessive unnecessary states. An overly coarse granularity may lead to misjudgments of UI states, adversely affecting test results, while an excessively fine granularity may generate a multitude of redundant states, hindering testing efficiency. Consequently, identifying an appropriate granularity balance to achieve efficient and accurate UI update recognition is a key challenge.

**C2: Launching Activity.** During the dynamic exploration of Android apps, enhancing the ability to launch activities is a key challenge. Android apps typically comprise multiple activities, which are the core components of the app, responsible for displaying various user interfaces and handling user interactions. However, during the dynamic testing process, some activities may not be easily triggered, as they might require specific user input or a particular application state. Furthermore, certain activities might only be triggered under specific conditions, rendering the dynamic exploration process potentially unable to cover all possible activities.

To this end, in this paper, we propose SceneDroid, a scene-driven exploration and GUI modeling approach, which leverages smart exploration to dynamically extract the GUI scenes.2 Specifically, to address C1, SceneDroid proposes a scene recognition method that considers the hierarchical structure of components on the UI page and ignores minor changes that do not lead to layout changes, thus identifying unique scenes. SceneDroid constructs a finer GUI model based on scenes, called the Scene Transition Graph (SceneTG). To address C2, SceneDroid designs an exhaustive exploration strategy to explore all scenes of an app and interact with as many interactive UI components as possible. SceneDroid also introduces state fuzzing techniques to improve scene transition coverage. Most importantly, SceneDroid designs an indirect launch strategy that leverages already-explored activities to indirectly launch activities that Inter-Component Communication (ICC) messages failed to launch.

Footnote 2: In this paper, a **scene** is defined as the UI page that is triggered by interactive UI components of the activity \(A\), whose layout is different from that of \(A\). Such new scenes may be rendered as the current activity \(A\) with new views, a new fragment of \(A\), or a new activity.

To demonstrate the effectiveness of SceneDroid, we conducted comprehensive experiments.
To evaluate the scene identification ability of SceneDroid, we ran it on 10 self-developed apps containing different types of interactive UI components that can trigger new scenes; the results show that SceneDroid can recognize all the preset scenes. We further compared SceneDroid with 4 state-of-the-art GUI modeling tools to evaluate its effectiveness on 100 apps. The results demonstrate that SceneDroid surpasses the existing tools in terms of the number of transition pairs (30.25 on average) and scenes (22.93 on average), with improvements of 168.74% in transition pair coverage and 162.42% in scene extraction. In addition, we conducted an ablation study to evaluate the contribution of each strategy employed by SceneDroid. The result indicates that the Indirect Launching strategy is the most contributing one, achieving an average improvement of 15.59% in activity exploration, 47.02% in scene exploration, and 35.08% in transition pair extraction. As SceneDroid serves as a fundamental tool for app exploration, we also discuss some applications based on SceneDroid, such as regression testing and UI-based testing. In summary, we made the following contributions.

* We propose SceneDroid, a novel approach leveraging a set of new techniques to construct a fine-grained app UI model by defining the scene transition graph (SceneTG). It can handle both open-source and closed-source apps.
* SceneDroid proposes a smart exploration algorithm, which mainly includes three strategies: exhaustive exploration, state fuzzing, and indirect launching. These techniques improve the depth of exploration and the completeness of the SceneTG.
* Our comprehensive experiments demonstrate the effectiveness of SceneDroid in app exploration and UI modeling compared with existing tools. Moreover, our experiments indicate that the indirect launching strategy contributes the most to improving UI modeling.
* This is a fundamental work providing a novel UI modeling method for apps, which facilitates future work in the reverse analysis of app structure, the design and guidance of app development, the creation of regression testing tools, etc.

We have released SceneDroid and the experimental dataset on [https://github.com/SceneDroid/SceneDroid](https://github.com/SceneDroid/SceneDroid).

## II Background

### _Android Activity and Fragment_

The Activity is the keystone of all Android apps: a component that contains a user interface, primarily for user interaction. An Android Fragment is a type of view that can be embedded in an activity. An activity can contain more than one Fragment, and a Fragment can also be reused in multiple activities, which makes it possible to adapt to devices with different resolutions and use screen space more effectively. Like a mini-activity, a Fragment has its own layout and lifecycle [12].

### _Android UI Components_

Android provides a large number of UI components [13] that can be used flexibly to present an overview of the app's functionality. For example, TextView is mainly used to display a text message on the current page. Button is an essential UI component used to interact with users; Button objects can receive user click events. ImageView and ImageButton are UI components available for displaying icons. In addition to these common and basic types, other types of UI components are usually used to enrich the user interface.
For example, Menus are used in most apps to deliver user actions and options; they are often laid out with important options that allow changes to the environment variables and environment data that the apps depend on. The navigation drawer is one of the most general effects in Material Design, which can hide some menu options on the left of the top app bar and display the main navigation items of the app. AlertDialog and ProgressDialog can pop up dialogs on the current page.

Fig. 1: Scene examples triggered by different UI components.

## III Approach

Fig. 2 shows the overview of SceneDroid, which consists of three main parts: data collection, dynamic analysis, and scene transition graph (SceneTG) construction. SceneDroid takes an APK file as input and outputs a visual SceneTG and other related parsing results, such as the screenshot of each scene and its corresponding layout files. The data collection module collects the Inter-Component Communication (ICC) messages for activity launching to facilitate dynamic analysis, as well as the activity transition graph (ATG). The dynamic analysis module runs the apps by employing the Smart Exploration algorithm and identifies new scenes. The SceneTG construction module takes the outputs of dynamic analysis to generate the SceneTG, including the screenshot of each scene and the scene transitions.

Fig. 2: An overview of SceneDroid.

### _Data Collection_

The goal of data collection is to provide the dynamic exploration module with as much information as possible, including the ICC messages for direct activity launching and the ATG for indirect activity launching, so as to improve the efficiency and effectiveness of dynamic analysis.

#### III-A1 **ICC Message Collection**

Android enables activity launching via console interfaces, with some activities requiring extra data. ICC messages, mainly Intent objects with data items, launch target activities. Generating ICC messages entails identifying Basic Attributes and Extra Parameters, found in intent-filters or Java code. Extra Parameters provide the specific data necessary for successful launching. Since they comprise basic types like String, Char, and Boolean, we generate data according to these types to populate the Extra Parameters. The resulting Basic Attributes and Extra Parameters form ICC messages, which are used for activity launching and supplied to the dynamic analysis module.
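As a concrete illustration of how such ICC messages can be replayed on a device, the following sketch drives the Activity Manager through adb; the package, activity, and extra names are hypothetical, the success check is a simple heuristic, and the mapping of extra types to `am` flags covers only the basic types discussed above:

```python
import subprocess

def launch_activity(package, activity, extras=None):
    """Try to launch a target activity directly with an ICC message via adb."""
    cmd = ["adb", "shell", "am", "start", "-n", f"{package}/{activity}"]
    for key, (etype, value) in (extras or {}).items():
        flag = {"string": "--es", "int": "--ei", "boolean": "--ez"}[etype]
        cmd += [flag, key, str(value)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # `am start` reports an Error line when the component cannot be started
    return result.returncode == 0 and "Error" not in result.stdout

# Hypothetical example: an activity that expects a String extra
launch_activity("com.example.app", ".LoginActivity", {"username": ("string", "test")})
```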
#### III-A2 **ATG Collection**

The activity transition graph (ATG) is also one of the important features for app exploration, as it states the transition relations between different activities. Many studies have been proposed to construct ATGs [1, 2, 5, 14, 15], and we use them to collect the initial ATGs for further analysis. In this paper, the ATG is mainly used to guide SceneDroid in the following dynamic analysis: especially when activities fail to be launched directly with ICC messages, the ATG can facilitate the exploration by providing a precursor activity for launching. Besides, the ATG is augmented by dynamic analysis and acts as the basis for constructing the SceneTG.

### _Dynamic Analysis_

Based on the collected data, the dynamic analysis aims to exhaustively explore the scenes within the apps and identify new scenes and scene transitions during exploration.

#### III-B1 **Smart Exploration**

Smart exploration focuses on obtaining as many different scenes as possible within an app. To achieve this, three strategies are designed: (1) _state fuzzing_; (2) _exhaustive exploration of each activity_; and (3) _indirect launching for failed activities_, where different strategies are used in different stages. Specifically, given an app, SceneDroid first tries to launch each activity based on the obtained ICC messages; if the target activity is launched successfully, the first two strategies are used to explore each activity exhaustively. If the activity fails to be launched, SceneDroid will employ the third strategy to indirectly launch the activity first and then continue using the first two strategies to explore it. Details are described as follows.

\(\bullet\)**State fuzzing**. Some activities contain UI components that users can interact with but that would not trigger a transition to other scenes, including EditText, CheckBox, Switch Button, etc. These kinds of components do not cause scene transitions; however, they may change the execution path of the app and thus potentially expose more states and scenes. Motivated by this, before operating on the interactive components that would trigger new scenes (e.g., Button, ImageButton, MenuButton), we employ the state fuzzing strategy first. Specifically, we consider employing fuzzing on 3 types of such non-transitive UI components: EditText, CheckBox, and Switch Button. For EditText, since some apps require user input to proceed to the next step, such as adding new items or searching the interface, we need to determine the format or the specific inputs that the component requires users to enter. To achieve this, we first dump the component tree (i.e., UI layout) of the current activity and extract the attributes of the EditText, such as className, resource-id, and bounds. Since the dynamically obtained layout does not contain information about the required type of user input for an EditText, we use the extracted attributes to match the component declared in the source layout files and obtain the required type of string (declared in inputType). We have summarized text, number, phone, date, time, and EmailAddress as common inputTypes. According to the different input types, SceneDroid randomly generates a correctly formatted string and fills it into the specific EditText. For CheckBox and Switch Button, we can directly identify them by the component type in the layout file. These two kinds of components have two states, checked or not checked (open or closed, respectively), and we can set them easily by clicking them. When there are multiple types of the aforementioned non-transitive UI components on a single activity, to explore potential new scenes, we go through all the possible combinations to form the initial states for the next strategy (i.e., exhaustive exploration). For example, if an activity contains all these 3 types, i.e., an EditText with two values ("fill in" or "blank"), a CheckBox with values "checked" or "not checked", and a Switch Button with values "open" or "close", SceneDroid will consider all the combinations of them and finally generate \(2^{3}=8\) initial activity states for further exploration.
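The combination step can be sketched in a few lines; the component names below are hypothetical, and assigning a concrete value to each option (e.g., a generated string for a "filled" EditText) is left out:

```python
from itertools import product

def initial_states(edit_texts, checkboxes, switches):
    """Enumerate initial activity states from non-transitive components:
    each EditText filled or blank, each CheckBox checked or not,
    each Switch Button open or closed (cf. the 2^3 = 8 example above)."""
    options = ([("filled", "blank")] * len(edit_texts)
               + [("checked", "unchecked")] * len(checkboxes)
               + [("open", "closed")] * len(switches))
    return list(product(*options))

states = initial_states(["search_box"], ["remember_me"], ["dark_mode"])
assert len(states) == 2 ** 3  # one component of each kind -> 8 states
```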
\(\bullet\)**Exhaustive exploration.** At a high level, SceneDroid employs a breadth-first strategy at the activity level, while when exploring scenes on a specific activity it uses a depth-first strategy, aiming to explore as many scenes as possible within the activity. Therefore, based on each generated initial activity state, SceneDroid extracts all the actionable components according to the attribute "clickable=true" of each component in the dumped layout file, such as Button, ImageButton, CheckBox, ImageView, and RadioGroup. It combines these actionable components into an exploration queue and takes one component at a time from the queue to interact with. When a new scene associated with the current activity is identified, SceneDroid records its layout file, screenshots, and the components traversed. Besides, SceneDroid iteratively performs this exploration process on the scene and records the scene transition relation as \(scene_{1}\overset{e,c}{\rightarrow}scene_{2}\), where \(e\) and \(c\) represent the event and component triggering this transition, respectively. If it does not reach a new scene or reaches a visited scene, it returns to the previous scene and interacts with the next component. In addition, during exploration, the current activity \(A\) may transit to a new activity \(B\) by operating on specific components (i.e., an activity transition); SceneDroid will roll back to \(A\) and continue exploring other scenes within \(A\). Such activity transitions (i.e., \(A\overset{e,c}{\rightarrow}B\)) are also recorded to augment the static ATG and are further used to help exploration and SceneTG construction.

\(\bullet\)**Indirect launching for failed activities.** Due to inconsistencies in activity declarations between the app implementation and the AndroidManifest.xml file, or incorrect static ICC messages, some activities may not be launched successfully with ICC messages. SceneDroid will find an upstream caller activity as a bridge to indirectly launch the target activity, by utilizing the SceneTG that has been constructed so far. For example, in Fig. 3(a), when _Callee Act._ failed to be launched with ICC messages, SceneDroid finds its callers from the ATG, i.e., _Caller Act_a_ and _Caller Act_b_, both of which can be used to indirectly launch _Callee Act._ Note that the ATG is dynamically augmented and updated during exploration; here, we use the latest ATG to ensure the successful launch of the target activity. Specifically, if an activity \(act_{des}\) failed to be directly launched with ICC messages, SceneDroid will traverse the ATG and find the caller activity of \(act_{des}\), i.e., \(act_{src}\), where \(act_{src}\to act_{des}\). After that, we try to launch \(act_{src}\) with ICC messages; if it is successfully launched, we then use the event (i.e., action) that triggers such an activity transition and operate on it to launch \(act_{des}\). To extract the events triggering a specific transition, we use the maintained ATG, which contains the transition relations between different activities together with the events and components that trigger them, i.e., \(act_{A}\overset{e,c}{\rightarrow}act_{B}\). However, there may be cases in which the direct caller activity \(act_{src}\) cannot be launched, either. Therefore, we obtain a list of caller activities as candidates to launch \(act_{des}\). For example, in Fig. 3(a), when the direct callers of the failed activity (i.e., _Caller Act_a_ and _Caller Act_b_) both failed to be launched, we iteratively find the callers of the failed ones and finally launched _Callee Act._ via launching _Caller Act_c_.

Fig. 3: Cases of indirect launching for failed activities.
Once the target activity (\(act_{des}\)) is directly launched by one of the caller activities, we stop this process and employ the two strategies above (i.e., state fuzzing and exhaustive exploration) to explore this activity and the associated scenes. If all the candidate caller activities fail to launch \(act_{des}\) indirectly, we temporarily move it to the end of the exploration queue and continue exploring other activities. For \(act_{des}\), we update the ATG and attempt to launch it iteratively by traversing the updated graph.

Algorithm 1 depicts the whole process of smart dynamic exploration, which employs the three strategies alternately. The input is all the activities with ICC messages for launching (\(act_{all}\)), and SceneDroid outputs the scenes (\(S\)) explored by using the three strategies. Specifically, \(S\) is first initialized as empty and will be gradually augmented during exploration. For each activity \(act\), we first try to directly launch \(act\) by using the associated ICC message. If \(act\) is launched successfully, we continue to employ the fuzzing strategy and exhaustive exploration on it by calling the method ExploreAct (Lines 3-4). In the activity exploration process (Lines 11-14), we first employ the fuzzing strategy to generate different initial states (\(States\)) for \(act\) (Line 12), and for each state, we start exhaustive exploration (Lines 13-14) and store the explored scenes in \(S\). However, if \(act\) fails to be launched, we employ the indirect launch strategy to identify the caller activity of \(act\) that can indirectly launch it based on the latest ATG (Line 6). If there exists such a caller activity \(act_{caller}\), we utilize it to transit to \(act\), and continue to employ the fuzzing strategy and exhaustive exploration on it (Lines 7-8). Otherwise, \(act\) is added to the exploration queue for a second launch (Lines 9-10): because the ATG is dynamically updated during exploration, the augmented ATG may later be able to launch \(act\). Therefore, we employ it to maximize the possibility of launching each activity. If the ATG is not augmented after an exploration round, we stop re-launching the failed activities, stop the whole process, and return \(S\) (Line 15).

```
Input: act_all: All activities with ICC messages in the app;
       ATG: Activity transition graph.
Output: S: All scenes explored within the app
 1  S <- {}
 2  foreach act, icc in act_all do
 3      if Success(act, icc) then
 4          ExploreAct(act)
 5      else  // Failed to launch act.
 6          act_caller <- IndirectLaunch(act, ATG)
 7          if act_caller != Null then
 8              ExploreAct(act)
 9          else  // No caller activity can launch act; queue it for a second launch
10              act_all <- act_all U {act}
11  Function ExploreAct(act):
12      States <- Fuzzing(act)
13      foreach st in States do
14          S <- ExhaustiveExplore(st)
15  return S
```

**Algorithm 1** Smart Dynamic Analysis
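The caller lookup at the heart of the indirect launching strategy (Line 6 of Algorithm 1) amounts to a reverse breadth-first search over the ATG. A condensed sketch, with hypothetical activity names and a simplified ATG encoding:

```python
from collections import deque

def indirect_launch_plan(atg, launchable, target):
    """Find a directly launchable caller of `target` plus the chain of
    (activity, (event, component)) steps to replay after launching it.
    `atg` maps (src, dst) -> (event, component) transition labels."""
    callers = {}
    for (src, dst), action in atg.items():
        callers.setdefault(dst, []).append((src, action))
    queue, seen = deque([(target, [])]), {target}
    while queue:
        act, path = queue.popleft()
        if act in launchable:
            return act, path              # launch `act`, then replay `path`
        for src, action in callers.get(act, []):
            if src not in seen:
                seen.add(src)
                queue.append((src, [(src, action)] + path))
    return None, []                       # defer: retry once the ATG grows

# Toy ATG mirroring Fig. 3(a): only Caller Act_c launches directly
atg = {("Act_c", "Act_a"): ("click", "btn1"), ("Act_a", "Callee"): ("click", "btn2")}
print(indirect_launch_plan(atg, {"Act_c"}, "Callee"))
```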
#### III-B2 **Scene identification**

Since the goal of SceneDroid is to construct a relatively complete UI model consisting of different types of fine-grained UI states, i.e., **scenes**, we propose a scene identification method aiming to identify unique scenes by abstracting and modeling the UI pages at a fine and suitable granularity, so as to avoid repeatedly exploring duplicated scenes. The scenes identified by SceneDroid include activity, fragment, drawer (e.g., top/bottom/side navigation drawer), dialog, menu, checkbox, spinner, picker, floating action button, etc.; some are shown in Fig. 1. Specifically, for each explored UI page, we aim to generate a unique identifier based on the dynamically dumped layout as an abstraction of the UI page. If the identifiers of two UI pages are the same, we regard them as the same scene; otherwise, both scenes are recorded. To avoid maintaining a massive number of scenes with subtle changes, and to model the UI page at a fine and proper granularity, we abstract a UI page based on the hierarchy of components on it, the unique ID of each component (i.e., _resource-id_ in the layout file), the type of each component (i.e., _class_), and the package it belongs to (_package_). These attributes preserve the number and type of components, as well as their hierarchy, while omitting subtle changes (such as text changes and color changes) which would not cause layout changes but may lead the exploration to a dead end. For example, Fig. 4 shows a simple drawing app that produces several UI changes when the user selects different brush colors. No matter how the values of these UI elements change, only the color selection differs and the structure is unaffected; we thus consider them the same scene.

Fig. 4: Two UI pages in the app Simple Draw Pro.

In detail, for each UI page, we first dump the layout file, which contains all the components and their attributes (e.g., resource-id, text, class, package, clickable), with each node representing a component. We then record the hierarchy of all the components and start a breadth-first traversal to obtain the component sequence as a list. Note that, since SceneDroid dumps the layout structure directly from the UI page, the dump may include the UI of other packages, such as the UI of the status bar or of the input method when it pops up. The UI with these non-target packages will interfere with the judgment of the current UI page, but directly ignoring them may lead to missing new scenes. Therefore, we decided to discard the non-target package UI in SceneDroid, and only considered the nodes that belong to the app itself by matching package names. For each component in this sequence, we extract the values of three attributes as its unique identifier, i.e., _resource-id_, _class_, and _package_. We then concatenate these three attribute values and use the MD5 hash algorithm [16] to generate a hash value for the component. If the current node is an adapter view, we use the information of the view it is actually bound to in order to generate its identifier. After obtaining the hashed values for all the components, we concatenate them in sequence and use the same hash algorithm to generate a unique identifier for the UI page. Note that the detailed contents of adapter views (e.g., ListView or RecyclerView) at runtime are unknown, and these adapter views are essentially just repetitive views populated according to the ListAdapter [17]. Since SceneDroid focuses on the structure of the views obtained from the ListAdapter, it only needs to fetch the first view in the adapter view to learn the structure of the other ones; only the first child view of adapter views counts for scene identification.
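The identifier computation can be condensed as follows; the sketch assumes a uiautomator-style XML dump with resource-id, class, and package attributes on each node, and is an illustration rather than SceneDroid's actual implementation:

```python
import hashlib
import xml.etree.ElementTree as ET

ADAPTER_VIEWS = {"android.widget.ListView",
                 "androidx.recyclerview.widget.RecyclerView"}

def scene_identifier(layout_xml, app_package):
    """Breadth-first traversal keeping (resource-id, class, package) of
    in-app nodes; only the first child of an adapter view is visited."""
    queue, digests = [ET.fromstring(layout_xml)], []
    while queue:
        node = queue.pop(0)
        if node.get("package", app_package) != app_package:
            continue  # discard status bar, input method, and other non-target UI
        triple = "|".join(node.get(k, "") for k in ("resource-id", "class", "package"))
        digests.append(hashlib.md5(triple.encode()).hexdigest())
        children = list(node)
        if node.get("class") in ADAPTER_VIEWS and children:
            children = children[:1]  # repetitive rows share one structure
        queue.extend(children)
    return hashlib.md5("".join(digests).encode()).hexdigest()
```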
### _SceneTG Construction_

To reflect the overall UI states of an app at runtime, we construct the SceneTG based on the identified scenes and their transitions during dynamic exploration (as shown in Fig. 5). We highlight that, apart from the scene transitions, SceneDroid can also provide the corresponding real UI page for each identified scene. The SceneTG attached with real UI pages indeed aids users in understanding the apps. SceneTG's fine-grained UI model can be used to improve the performance of existing work, including UI testing, regression testing, competitive product analysis, etc.

Fig. 5: Example of a SceneTG constructed by SceneDroid.
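A SceneTG is essentially a labeled multigraph over scene identifiers. As a sketch of how the recorded transitions \(scene_{1}\overset{e,c}{\rightarrow}scene_{2}\) can be assembled (networkx is one natural choice here, and the scene names are hypothetical):

```python
import networkx as nx

def build_scene_tg(transitions):
    """Assemble a SceneTG from recorded (src, dst, event, component) tuples."""
    tg = nx.MultiDiGraph()
    for src, dst, event, component in transitions:
        # screenshots and layout files can be attached as node attributes
        tg.add_edge(src, dst, event=event, component=component)
    return tg

tg = build_scene_tg([("scene_main", "scene_drawer", "click", "nav_button"),
                     ("scene_drawer", "scene_gallery", "click", "menu_item")])
print(tg.number_of_nodes(), tg.number_of_edges())  # 3 scenes, 2 transition pairs
```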
## IV Effectiveness Evaluation

To evaluate the effectiveness of SceneDroid, we conduct experiments to answer the following research questions.

* **RQ1**: Can SceneDroid accurately recognize new scenes that contain different types of new UI views?
* **RQ2**: Can SceneDroid outperform existing UI exploration tools in terms of transition relation extraction and scene exploration?
* **RQ3**: How much do the different strategies of SceneDroid contribute to enhancing UI exploration?

### _RQ1: Scene identification_

#### IV-A1 **Setup**

To investigate whether SceneDroid can effectively identify different types of scenes in apps, we self-developed 10 apps as our ground-truth benchmark, covering different types of views for UI pages including Drawer, Menu, Dialog, Spinner, Picker, etc. In order to make the benchmark apps more representative of real-world apps, we also added more features and complexity to them, with different numbers of activities. Since Android Studio provides numerous code templates that follow the best practices of Android app design and development, we utilized the templates provided by Android Studio to create new application modules, various activities, and other specific Android project components, so that the resulting apps are compliant with the latest Material Design principles. Some templates provide initial code for typical environments, such as drawer navigation bars or login pages, which reflect the latest Android app features. As shown in Table I, the 10 apps we developed consist of many features, varying the number of activities with multiple types of scenes. Moreover, they are implemented with different transitions from Activity to Activity, Activity to Fragment, Fragment to Activity, and Fragment to Fragment, as rich transition logic inserted into the apps.

Based on the dataset above, we conducted the experiment to evaluate the effectiveness of SceneDroid in scene identification. To validate the accuracy of SceneDroid, we need to determine the number of activities, scenes, and transition relations for each app. We use the number of activities declared in the AndroidManifest.xml file as the basis and manually validate the number of scenes and transition pairs identified by SceneDroid for each app. We set a timeout of 15 minutes for the analysis phase and 30 minutes for the dynamic analysis for each app in the dataset.

\begin{table}
\begin{tabular}{|c|l|c|c|c|}
\hline
**ID** & **Feature** & **\#All\_Acts** & **\#Pairs** & **\#Scenes** \\ \hline
1 & Basic Act + Fragment + Dialog + Switch Button & 8 & 23 & 17 \\ \hline
2 & Basic Act + Menu & 8 & 18 & 15 \\ \hline
3 & Navi. Drawer Act + Fragment & 9 & 24 & 22 \\ \hline
4 & Navi. Drawer Act + Fragment + Menu & 8 & 21 & 19 \\ \hline
5 & Bottom Navi. Act & 8 & 13 & 13 \\ \hline
6 & Bottom Navi. Act + Menu & 3 & 19 & 19 \\ \hline
7 & Bottom Navi. Act + Fragment + EditText & 3 & 15 & 14 \\ \hline
8 & Tabbed Act + Menu + Spinner + Picker & 6 & 14 & 11 \\ \hline
9 & Tabbed Act + Bottom Navi. Act + Menu + Floating Action Button & 3 & 16 & 11 \\ \hline
10 & Navi. Drawer Act + Fragment & 1 & 6 & 9 \\ \hline
\end{tabular}
\end{table}
TABLE I: Ten self-developed benchmark apps with different features, activities, transition pairs, and scenes.

#### IV-A2 **Result**

The results indicate that SceneDroid can extract all the activities, scenes, and transition pairs in the 10 benchmark apps, as shown in Table I. SceneDroid performed well not only on simple apps composed of activities and fragments but also on complex apps, as displayed in app 4 and app 9. These complex combinations of features are frequently used in industrial environments. In the following RQ2, we show in detail the strengths and weaknesses of SceneDroid compared to others, especially on apps with complex components. The reason for achieving such results is that SceneDroid leverages a combination of three smart strategies. These strategies are not used in isolation or stacked repeatedly; rather, their organic combination achieves good results. In the following RQ3, we conduct an ablation study to comprehensively evaluate the impact of each strategy on the tool's exploration capability. The SceneTG constructed by SceneDroid can indeed build a more fine-grained UI model. We also manually verified the reachability of all the paths explored by SceneDroid, and all of them are feasible in the 10 benchmark apps.

**Answer to RQ1:** The experimental results show that SceneDroid can extract all activities, scenes, and transition pairs in the 10 ground-truth benchmark apps. SceneDroid can accurately recognize new scenes that contain different types of new UI views.

### _RQ2: Scene exploration_

#### IV-B1 **Setup**

To evaluate the capability of SceneDroid in scene exploration, we randomly downloaded 50 closed-source apps from the Google Play Store [18] and 50 open-source apps from F-Droid [19] as evaluation subjects to investigate the effectiveness of SceneDroid on real-world apps. Based on this dataset, we compared SceneDroid with four state-of-the-art UI modeling tools: GoalExplorer [4], Gator [20], StoryDistiller [5], and ICCBot [15]. We chose them as the baseline tools because they either have similar goals (StoryDistiller) to SceneDroid or produce similar transition results (GoalExplorer, Gator, ICCBot). Specifically, StoryDistiller utilizes a combination of dynamic and static methods to build the UI model of an app, a goal similar to ours but with coarse-grained modeling. The other three tools are state-of-the-art tools that generate transition graphs. GoalExplorer proposes a static parsing approach to build the Screen Transition Graph (STG). Note that, in the experiment, we used the latest released version of GoalExplorer [21], since the initial open-source version on GitHub cannot be compiled and used due to missing essential dependencies. Gator is also a mature static analysis suite for Android apps that can be used to build the Window Transition Graph (WTG). ICCBot has been demonstrated to be the state-of-the-art ICC resolution tool [8]. We separately ran these tools on the 100 apps and set a timeout of 15 minutes for each app in the static analysis phase because, for some closed-source applications, some static analysis tools can be time-consuming due to internal errors.
For the evaluation metrics, we use the number of explored activities, the number of explored scenes, and the number of UI transition pairs to evaluate the performance of each tool. Since ICCBot generates ICC relations among the four major components of Android (i.e., Activity, Service, Content Provider, and Broadcast Receiver) while SceneDroid focuses on UI model construction, we only consider the UI-related components in its result file, i.e., activities and fragments, to make a fair comparison. As for the number of transitions of ICCBot, we focus on four types of transitions: Activity to Activity, Activity to Fragment, Fragment to Activity, and Fragment to Fragment.

#### Iv-B2 **Result**

The comparison result of these tools is shown in Fig. 6. We can see that SceneDroid outperforms the other four tools in all metrics. On average, SceneDroid extracts 30.25 transition pairs (vs. 0.81 for GoalExplorer, 13.52 for ICCBot, 12.03 for StoryDistiller, and 18.68 for Gator), and in terms of identified scenes, SceneDroid achieves 22.93, roughly twice that of most of the other tools (1.63 for GoalExplorer, 9.53 for ICCBot, 13.83 for StoryDistiller, and 9.95 for Gator).

Fig. 6: Comparison of #Explored activities, #Transition pairs, #Scenes.

The reason for SceneDroid's superior results is that SceneDroid introduces smart exploration, which obtains the scenes during dynamic exploration and thus enables the launch of activities even without using ICC messages. It alleviates the limitation of existing tools that rely on the accuracy of ICC message extraction, effectively enhancing the activity coverage of SceneDroid during the dynamic process. Smart exploration also introduces the indirect launching phase for activities that fail to launch directly, which helps SceneDroid explore as many different scenes on an activity as possible. Moreover, fuzzing for EditText, CheckBox, Switch Button, etc., is an exclusive feature that enables SceneDroid to interact with more components than other tools.

While StoryDistiller also adopted the idea of combining static and dynamic exploration to build UI models with UI screenshots, it does not perform well because: _(1)_ StoryDistiller works at the granularity of activities. Although it also tries to trigger each interactive component presented in an activity, it only explores the components presented when the activity initially starts. Besides, it ignores the possibility of triggering components that lead to a scene such as a Fragment or Menu, where newly emerging interactive components may trigger new scenes and new transition relations. _(2)_ StoryDistiller relies on ICC messages to launch activities and cannot be assisted by the transition pairs obtained during the dynamic exploration process. _(3)_ StoryDistiller does not use fuzzing to increase interactions.

As for the static analysis methods, they ignore many of the transition relations brought about by special components inside new views. Some components that can trigger new scenes exist in such views (e.g., Navigation, Snackbar, and BubbleMetaData), but these static methods do not resolve the views, preventing them from triggering the new scenes. For example, none of the existing tools properly handles the transition pairs initiated by Navigation components or navigated using Tabbed Navigation UIs, as shown in Fig. 7. Another example is in Fig. 5: they fail to properly analyze the transition pairs from the Navigation Drawer to the GalleryFragment, SlideshowFragment, and Fragment2.
Navigation is the interaction that allows users to navigate across, into, or back out from different content blocks in an app [17], which was introduced in Android Studio 3.3. StoryDistiller works at the granularity of activities, so discovering scenes containing Navigation components is beyond its capability. ICCBot (which claims to be able to model Fragments) and GoalExplorer (which is optimized explicitly for Drawer) also fail to correctly discover the transition pairs generated by the Navigation component. This is because the API modeling of these tools failed to keep pace with Android's evolution, and neither of them correctly models the Navigation APIs introduced in Android Studio 3.3. Specifically, in the Fragment-Aware Transition and Extraction phases, both ICCBot and GoalExplorer capture only the APIs commonly used with FragmentManager. For example, when identifying the addition of a fragment, APIs such as add(Fragment, String) are captured, while GoalExplorer only models the DrawerLayout.openDrawer API when dealing with the Drawer component. However, the APIs used for jumping between fragments in the Navigation component are Navigation.navigate(actionID) and Navigation.navigateUp(). Therefore, they both fail to properly handle scenes and transition pairs based on the Navigation component.

**Answer to RQ2:** SceneDroid extracts 30.25 transition pairs and 22.93 scenes on average, which significantly outperforms the existing tools (i.e., 1.63 for GoalExplorer, 9.53 for ICCBot, 13.83 for StoryDistiller, and 9.95 for Gator) in terms of scene exploration on our collected 100 apps.

### _RQ3: Ablation study on different strategies_

#### Iv-C1 **Setup**

To evaluate the contribution of the different strategies (i.e., State Fuzzing, Scene Identification, and Indirect Launching) in SceneDroid to improving UI exploration, we conducted an ablation study in this RQ. Specifically, we tested a modified SceneDroid on the dataset from RQ2, in which a particular strategy can be disabled alone so that the three strategies can be evaluated separately. We ran SceneDroid with each strategy disabled in turn on each of the 100 apps and set a 15-minute runtime limit for each app during the static analysis phase, the same setup as in RQ2. Given that SceneDroid may get into duplicate states when some strategies are disabled, leading to extra time consumption, we also set a time limit of 30 minutes for the dynamic exploration phase. We evaluate the effectiveness of each strategy based on the number of explored activities, scenes, and UI transition pairs.

#### Iv-C2 **Result**

The results of the ablation study are displayed in Fig. 8. The Indirect Launching strategy has the largest impact on the test results of the tool, followed by the Scene Identification strategy. Specifically, in terms of activity exploration capability, the Indirect Launching strategy achieved an average improvement of 15.59%, vs. 7.70% for the Scene Identification strategy and 4.76% for State Fuzzing. Regarding the ability to explore scenes, the Indirect Launching strategy provided an average 47.02% improvement, vs. 21.72% for the Scene Identification strategy and 3.43% for State Fuzzing. The Indirect Launching strategy provided an average 35.08% increase in the extraction of transition pairs, while the Scene Identification strategy provided an average increase of 19.86% and State Fuzzing an average increase of 7.89%.
From the results, we can see that the **Indirect Launching strategy** contributes the most to the exploration capability of SceneDroid. The likely reason is that, since Activity is the carrier of all scenes and transition pairs, once SceneDroid is able to explore a new Activity that could not be launched directly before, it can also explore many new scenes and transition relations. It is a practical strategy when current static analysis techniques cannot fully construct the context required for launching activities correctly.

Fig. 7: Example transition between Tabbed Navigation UIs.

Fig. 8: Contribution of each strategy.

The **Scene Identification strategy** brings a clear improvement to the scene exploration capability, showing that introducing it is a justified choice. During the experiments, we found that disabling the Scene Identification strategy made SceneDroid susceptible to repetitive scene exploration, which resulted in insufficient exploration data. Once the tool gets stuck in repeated scene exploration, it is unable to exit automatically and thus fails to explore the whole application within the limited time. As in the case described in the Approach section, the tool gets stuck on apps like Simple Draw Pro because it cannot identify subtle scene differences. In our experiments, with the Scene Identification strategy disabled, the tool was able to explore only seven different scenes and was stuck in a repetitive exploration of the palette scenes. The introduction of the Scene Identification strategy thus proved worthwhile.

As for the **State Fuzzing strategy**, the boost comes primarily because many apps contain UI components that users can interact with but that do not directly cause scene transitions, including EditText, CheckBox, Switch Button, etc. However, these types of components can change the execution path of the application, making it possible to explore more new states and scenes. In particular, many apps have scenes that require account passwords or search boxes, which may block further scene exploration if the EditText components are not populated with appropriate data. Although the current State Fuzzing strategy improves the overall exploration, the improvement is relatively small because some apps require legitimate input (e.g., specific account numbers and passwords) to be provided.

**Answer to RQ3:** The Indirect Launching strategy makes the most significant contribution, with average improvements of 15.59%, 47.02%, and 35.08% in terms of activity exploration, scene exploration, and transition pair extraction, respectively. The improvement is at least twice as large as that of the Scene Identification strategy (7.70%, 21.72%, 19.86%) and the State Fuzzing strategy (4.76%, 3.43%, 7.89%).

## V Future Applications and Discussion

### _Future Applications_

In this paper, we conduct fundamental work on UI exploration and fine-grained scene modeling, which can facilitate several lines of follow-up research, such as regression testing and GUI testing for Android apps.

#### V-A1 **Regression testing**

One meaningful area of Android app testing is regression testing, as regression testing aids agile development in building quality apps. Moreover, related work shows that reusing test samples contributes to the efficiency of Android regression testing [22, 23, 24].
Through experiments, we have demonstrated that SceneDroid benefits from the high-precision UI model it builds and enables effective detection of modified scenes and components across different app versions. By leveraging SceneDroid, developers can focus more on testing the changed or added components or scenes, avoiding repeated testing of unchanged functions. Besides, with the help of SceneDroid, developers can write targeted test cases manually or with automated tools, based on the testing report. Goal-driven test case writing reduces testing redundancy and significantly saves the time required for testing.

We also conducted a pilot study to investigate whether SceneDroid is capable of identifying fine-grained UI changes between different versions of the same app. Specifically, we randomly selected 30 apps from the dataset of RQ2 and collected the three latest minor versions [25] of each app as the evaluation subjects. As for UI changes (i.e., updates), we abstract the following two cases as updates: one is adding or deleting scenes, and the other is modifying components within a scene. We identify UI changes by comparing the component trees of the two scenes (with the same execution path) in the two versions: SceneDroid checks layer by layer whether any nodes have been added or deleted, or whether the properties of existing nodes have been changed. Based on the dataset and this update localization method, we investigate the number of scenes and transitions updated in the newer versions that are identified by SceneDroid. As a result, SceneDroid found 135 scene updates and 284 transition pair updates across 60 adjacent version iterations of the 30 apps. On average, each version update introduces 1.50 scene variations and 3.20 transition pair variations, indicating that scene updates are relatively frequent during app evolution.

Fig. 9: Fine-grained scene difference identification.

Take the app V2Ray [26] as an example, a Material-Design-compliant web proxy application. We first discover an update of the \(NavigationView\) on the \(DrawerLayout\). As shown in Fig. 9, from version 1.4.0 to 1.5.0 of the app, V2Ray was updated to support custom functional modules (i.e., Geoip and Geosite). This feature update visually reflects the difference in the scene, with a new entry for "Geo asset files" in the \(NavigationView\). The identifier of this scene also changed from "ae96...7e50" to "dcb1...26ec", which can also be found visually in the SceneTG (Fig. 9(a)). SceneDroid then applies the localization algorithm mentioned above to find an additional node in the tree with a resource-id of "com.v2ray.ang:id/user_asset_setting", thus pinpointing the range affected by the update.

In addition to the update case of the NavigationView on the DrawerLayout, we also found an update case on a Spinner in V2Ray. As shown in Fig. 9(b), the new version of V2Ray adds support for various forms of encryption, including "chacha20" and "aes-256-gcm", an update that would be ignored by a traditional ATG or STG constructed by GoalExplorer. In contrast, the SceneTG defined by SceneDroid detects updates at this granularity very well. SceneDroid can also find updated scenes on Menus. This Menu update case is unusual because it happens on a nested Menu (as shown in Fig. 9(c)). Version 1.3 added support for the VLESS protocol compared to version 1.2, so there is a new entry point on the Menu introduced by the protocol.
SceneDroid observes the scene update on this first Menu; however, there is also a custom configuration option. By clicking on this custom configuration option, SceneDroid finds a second Menu, to which version 1.3 adds the ability to restart all services, an option that can only be triggered from this second, nested Menu. This UI update could not be found if only activity-level or activity-to-Menu-level granularity were modeled. Due to the fine granularity of scenes and the exhaustive exploration strategy introduced by SceneDroid, UI updates in nested Menus can be discovered accurately. SceneDroid can thus identify fine-grained UI changes based on the SceneTGs of multiple versions of the same app.

#### V-A2 **Android UI testing**

Prior research has shown that even with the current state-of-the-art Android GUI testing kits, activity coverage is still not high [27, 28, 29, 9, 10, 11]. We believe SceneDroid can primarily contribute to improving existing Android GUI testing efforts in the following two aspects. _(1)_ The indirect launching strategy for activities proposed by SceneDroid could free existing tools from relying solely on the correctness of the constructed context for activity launching, especially for activities that fail to be launched with the current context information. It helps existing testing tools launch more activities and may finally improve coverage criteria (e.g., activity/method/code coverage). _(2)_ Existing Android GUI test suites usually apply random or model-based strategies. The success of AFL [30] in the binary domain has shown that coverage-based evolutionary algorithms have great potential. Note that activity-based coverage metrics are too coarse from certain perspectives; for example, many scenes are bound to a single activity, and covering the activity does not mean covering all the functionalities within it. The fine-grained UI model generated by SceneDroid is helpful for building a scene-based coverage metric. Such a more refined metric may motivate the use and improvement of evolutionary algorithms in Android GUI testing.

### _Limitations_

The limitations of SceneDroid come from two aspects. _(1)_ Failure to launch some activities. Despite our proposed smart exploration strategy, some activities still fail to be launched for various reasons, such as activities that require authentication (e.g., login), inconsistent activity declarations between the AndroidManifest.xml file and the implementation code, and limited interaction types. We believe SceneDroid could be improved by extending the types of components that can be interacted with and by injecting random system-level events. For indirect launching failures, which may be due to changes in component information during testing, we can design a more reliable way for SceneDroid to record the paths used for indirect launching. _(2)_ Poor support for non-native apps. Currently, SceneDroid and most Android GUI testing tools still target Android native apps [31, 32, 33]; however, HTML5 technology [34] and cross-platform development frameworks have become mainstream in industry [35, 36, 34], such as React Native [37], Weex [38], Kotlin Native [39], and Flutter [40], among which Flutter is a cross-platform mobile UI framework strongly supported by Google. In the future, we could work on improving SceneDroid's support for non-native apps.
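To make the update-localization idea of the regression-testing pilot study (Section V-A1) concrete, below is a minimal sketch of a layer-by-layer component-tree comparison. It is an illustrative reconstruction rather than SceneDroid's actual implementation; the tree encoding and the field names (`resource_id`, `props`, `children`) are assumptions made for this example, and sibling nodes are assumed to carry distinct resource-ids.

```python
# Illustrative sketch (not SceneDroid's actual code) of the layer-by-layer
# component-tree comparison used to localize UI updates in Section V-A1.
# A scene's component tree is encoded as nested dicts with the assumed
# fields "resource_id", "props" and "children".

def diff_trees(old, new, path=""):
    """Return (kind, path) records for added/removed nodes and
    property changes between two versions of a scene's tree."""
    if old is None:
        return [("added", f"{path}/{new['resource_id']}")]
    if new is None:
        return [("removed", f"{path}/{old['resource_id']}")]
    here = f"{path}/{new['resource_id']}"
    changes = []
    # A changed property on an existing node counts as a modification.
    if old.get("props") != new.get("props"):
        changes.append(("modified", here))
    # Pair up children by resource-id, then recurse one layer down.
    old_kids = {c["resource_id"]: c for c in old.get("children", [])}
    new_kids = {c["resource_id"]: c for c in new.get("children", [])}
    for rid in sorted(old_kids.keys() | new_kids.keys()):
        changes += diff_trees(old_kids.get(rid), new_kids.get(rid), here)
    return changes
```

On the V2Ray example above, such a diff between the version 1.4.0 and 1.5.0 drawer scenes would, for instance, report the node with resource-id "com.v2ray.ang:id/user_asset_setting" as an ("added", ...) record, which is exactly the range SceneDroid pinpoints.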
## VI Related Work

### _GUI exploration_

GUI exploration is an important means of app abstraction and GUI modeling [1, 2, 3, 4, 5]. In general, existing work can be divided into two categories according to the different goals of GUI exploration.

#### Vi-A1 **GUI exploration for UI modeling**

As Android apps are event-driven and composed of activities for user interaction, the Activity Transition Graph (ATG) [2] or Window Transition Graph (WTG) [3] is typically used to model the user interface of Android apps. Note that the extraction has been investigated with both static and dynamic methods. For example, Yang et al. [3] proposed Gator for extracting WTGs based on the stack of currently active windows. The results include the possible GUI window sequences and their associated events and callbacks. Chen et al. [1] introduced StoryDroid for statically generating storyboards for Android apps by extracting ATGs along with statically rendered UI pages. StoryDroid combines the results provided by IC3 [41] with ATGs extracted using Fragment and inner-class features. The most related works are GoalExplorer [4] and StoryDistiller [5]. Specifically, Lai et al. [4] proposed GoalExplorer, which statically models the UI screens and the transitions between them. Apart from the original ATG and WTG, GoalExplorer further extends the static model by adding fragments, drawers, services, and broadcast receivers. Different from this tool, we handle more features of UI screens through smart dynamic exploration instead of a static method. StoryDistiller [5] is an extension of StoryDroid [1], which improves the original tool's ATG construction and UI page rendering by combining the original static method with a novel dynamic exploration. The strategy of its dynamic exploration is to traverse all clickable components of each UI page that can be launched directly, and its goal is to obtain new activity transitions that are not resolved by the static method. Compared with StoryDistiller, SceneDroid aims to explore more scenes and scene transitions to construct the SceneTG by handling more features such as fragments, drawers, menus, and dialogs instead of performing ATG construction, which yields more fine-grained app UI modeling.

#### Vi-A2 **GUI exploration for app testing**

In the past decade, Android app GUI testing approaches have evolved rapidly, and many testing tools such as Monkey [42], Dynodroid [43], Ripper [44], A3E [2], Sapienz [45], Droidbot [46], Stoat [47], APEChecker [7], Ape [48], Humanoid [49], Fax [14], and PSDroid [50] have been proposed to explore apps and detect functional or security bugs [51, 52, 53]. Since the goal of these tools is to detect more bugs when dynamically testing apps, the extracted UI transitions are usually incomplete due to low activity coverage and the limitations of test case generation [1, 5]. Two strategies used in app testing are related to our work. On the one hand, some tools first generate the ATG statically and then conduct dynamic testing based on it. For example, A3E [2] constructed the ATG by static analysis and leveraged it to guide dynamic test input generation for app testing. However, many existing works unveiled that the statically constructed ATG neglects many activity transitions due to the limitations of static program analysis techniques [1, 15]. On the other hand, some work focused on dynamic exploration for app testing and, after testing, also provided the UI transitions based on the dynamic exploration. For example, Li et al.
[46] proposed DroidBot, a lightweight UI-guided Android test input generator. Apart from the testing results such as test inputs and identified bugs, DroidBot also generates ATGs for users. Pure dynamic testing has limited activity coverage, significantly restricting ATG completeness. Moreover, the adopted content-based comparison method could produce redundant and duplicate states.

### _ICC resolution_

Researchers have proposed a large number of tools for ICC resolution, such as Epicc [54], IC3 [41], DIALDroid [55], RAICC [56], and ICCBot [15]. Many works applying the ICC results for various purposes have been presented. In fact, ICC results can also be used to improve the capability of UI modeling. Yan et al. [15] conducted a comprehensive study to evaluate ICC resolution techniques. According to the results of that study, we chose ICCBot as a comparison subject to demonstrate the effectiveness of SceneDroid. Compared with existing ICC resolution tools, (1) SceneDroid can generate a more complete ATG and SceneTG through both static and dynamic methods, and (2) the corresponding UI page of each scene is also provided for users instead of only a graph structure of the UI transitions.

## VII Conclusion

In this paper, we proposed SceneDroid, which extracts GUI scenes dynamically by combining three strategies. We present the GUI scenes as a scene transition graph (SceneTG) to model the GUI of Android apps with high transition coverage and fine granularity. Our empirical evaluation has demonstrated the effectiveness and usefulness of SceneDroid. The constructed high-precision model can effectively identify UI updates between different app versions, facilitate developers in designing automated regression testing tools, and help develop future UI fuzzing tools by providing them with effective coverage information.

## Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant No. 62102197, 62102284) and the Natural Science Foundation of Tianjin (Grant No. 22JCYBJC01010).
2302.06411
The $f_\varrho / m_\varrho$ and $f_\pi / m_\varrho$ ratios and the conformal window
The $f_\varrho / m_\varrho$ ratio is calculated at N$^3$LO order within perturbative (p)NRQCD with $N_f$ flavors of mass degenerate fermions. The massless limit of the ratio is expanded à la Banks-Zaks in $\epsilon = 16.5 - N_f$ leading to reliable predictions close to the upper end of the conformal window. The comparison of the NNLO and N$^3$LO results indicates that the Banks-Zaks expansion may be reliable down to twelve flavors. Previous lattice calculations combined with the KSRF relations provide us with the same ratio for the range $2 \leq N_f \leq 10$. Assuming a monotonous dependence on $N_f$ leads to an estimate for the lower end of the conformal window, $N_f^* \simeq 12$, by matching the non-perturbative and our perturbative results. In any case an abrupt change is observed in $f_\varrho / m_\varrho$ at twelve flavors. As a cross-check we also consider the $f_\pi / m_\varrho$ ratio for which lattice results are also available. The perturbative calculation at present is only at the NNLO level which is insufficient for a reliable and robust matching between the low $N_f$ and high $N_f$ regions. Nonetheless, using the relative size of the N$^3$LO correction of $f_\varrho / m_\varrho$ for estimating the same for $f_\pi / m_\varrho$ leads to the estimate $N_f^* \simeq 13$.
Hee Sok Chung, Daniel Nogradi
2023-02-13T14:49:36Z
http://arxiv.org/abs/2302.06411v2
# The \(f_{\varrho}/m_{\varrho}\) and \(f_{\pi}/m_{\varrho}\) ratios and the conformal window

###### Abstract

The \(f_{\varrho}/m_{\varrho}\) ratio is calculated at N\({}^{3}\)LO order within perturbative (p)NRQCD with \(N_{f}\) flavors of mass degenerate fermions. The massless limit of the ratio is expanded à la Banks-Zaks in \(\varepsilon=16.5-N_{f}\), leading to reliable predictions close to the upper end of the conformal window. The comparison of the NNLO and N\({}^{3}\)LO results indicates that the Banks-Zaks expansion may be reliable down to twelve flavors. Previous lattice calculations combined with the KSRF relations provide us with the same ratio for the range \(2\leq N_{f}\leq 10\). Assuming a monotonous dependence on \(N_{f}\) leads to an estimate for the lower end of the conformal window, \(N_{f}^{*}\simeq 12\), by matching the non-perturbative and our perturbative results. In any case an abrupt change is observed in \(f_{\varrho}/m_{\varrho}\) at twelve flavors. As a cross-check we also consider the \(f_{\pi}/m_{\varrho}\) ratio for which lattice results are also available. The perturbative calculation at present is only at the NNLO level, which is insufficient for a reliable and robust matching between the low \(N_{f}\) and high \(N_{f}\) regions. Nonetheless, using the relative size of the N\({}^{3}\)LO correction of \(f_{\varrho}/m_{\varrho}\) for estimating the same for \(f_{\pi}/m_{\varrho}\) leads to the estimate \(N_{f}^{*}\simeq 13\).

## I Introduction and summary

How gauge theories with spontaneous chiral symmetry breaking transition into conformal gauge theories as the massless fermion content is increased à la Banks-Zaks is a non-trivial QFT problem [1]. We propose dimensionless ratios of meson decay constants and masses as promising candidates to shed light on the particulars of the transition. Concretely, we will study \(f_{\varrho}/m_{\varrho}\) and \(f_{\pi}/m_{\varrho}\) in this paper. Our main objective is to estimate or constrain the critical flavor number, \(N_{f}^{*}\), in other words the lower end of the conformal window. Clearly, a purely perturbative calculation, even at high orders, is not sufficient to determine \(N_{f}^{*}\) with any degree of confidence. Some non-perturbative input is required since just below the conformal window the theory is expected to be strongly coupled. In our work we will carry out high order perturbative calculations valid in the high \(N_{f}\) conformal region and combine them with non-perturbative results from the low \(N_{f}\) region in a meaningful way. Below the conformal window both the numerators and denominators of our ratios have well-defined chiral limits and are both \(O(\Lambda)\), the dynamically generated scale. The ratios are finite and can be computed via non-perturbative lattice calculations carefully extrapolated to the infinite volume, chiral and continuum limits. Inside the conformal window both decay constants and masses scale the same way with the fermion mass \(m\) and the ratios again have a well-defined chiral limit. In this way the ratios can meaningfully be compared across the transition, covering the full range of fermion content provided asymptotic freedom is present. This observation is the main motivation for our study. The gauge group will be \(SU(3)\) throughout. Perturbation theory is clearly not applicable below the conformal window, at low \(N_{f}\), hence the need for non-perturbative lattice simulations there.
Continuum and chirally extrapolated lattice results are available for \(f_{\pi}/m_{\varrho}\) within the range \(2\leq N_{f}\leq 10\) [2; 3; 4]. Using a KSRF-relation [5; 6] these can be reused for \(f_{\varrho}/m_{\varrho}\). This non-perturbative input is essential and will supplement our perturbative results. Close to the upper end of the conformal window, at high \(N_{f}\) where the fixed point coupling is small, perturbation theory is unambiguously reliable. This occurs below \(N_{f}=33/2\) for \(SU(3)\), the point at which asymptotic freedom is lost. In this paper we calculate \(f_{\varrho}\) and \(m_{\varrho}\) in perturbation theory at finite fermion mass within the framework of (p)NRQCD to N\({}^{3}\)LO accuracy, and obtain \(f_{\varrho}/m_{\varrho}\), followed by the massless limit. The deviation between the NNLO and N\({}^{3}\)LO results is very small down to twelve flavors, indicating convergence of the perturbative series. Assuming \(f_{\varrho}/m_{\varrho}\) is monotonous as a function of \(N_{f}\), we attempt to match the non-perturbative low \(N_{f}\) region and the perturbative high \(N_{f}\) region. At twelve flavors an abrupt change occurs, which we identify as an estimate of the lower end of the conformal window, \(N_{f}^{*}\simeq 12\). The same approach could be applied to the \(f_{\pi}/m_{\varrho}\) ratio as well. On the non-perturbative side continuum and chirally extrapolated lattice results are available in the literature as already mentioned. On the perturbative side, inside the conformal window, we are only able to calculate \(f_{\pi}\) to NNLO order at present, one order lower than is currently possible for \(f_{\varrho}\). Nonetheless, if we take the relative size of the N\({}^{3}\)LO result found for \(f_{\varrho}/m_{\varrho}\) and estimate the corresponding contribution to \(f_{\pi}/m_{\varrho}\) to be about the same, we may extract \(N_{f}^{*}\) using the same procedure. From \(f_{\pi}/m_{\varrho}\) we obtain in this way \(N_{f}^{*}\simeq 13\), but of course this result should be taken as indicative only; a genuine N\({}^{3}\)LO calculation should be performed for \(f_{\pi}\) in the future to validate it.

The organization of the paper is as follows. In the next section we summarize the application of the Banks-Zaks expansion to mesonic bound states in mass perturbed conformal gauge theories. In section III the leading order expressions are presented, which are rather straightforward and are given only to fix notation and conventions. Section IV details the (p)NRQCD calculation of the NLO, NNLO and N\({}^{3}\)LO corrections. These are used in section V to attempt to match the non-perturbative low \(N_{f}\) and perturbative high \(N_{f}\) regions. An assumption is made explicitly, namely that our decay constant to meson mass ratios are monotonous as a function of \(N_{f}\), allowing the extraction of an estimate of \(N_{f}^{*}\), the flavor number where an abrupt change occurs in the ratios. Finally in section VI we conclude and provide an outlook for future work.

## II Banks-Zaks expansion for bound states

In the massless case the theories inside the conformal window are non-trivial interacting conformal gauge theories with some fixed point \(g_{*}^{2}\) depending on \(N_{f}\). At least sufficiently close to \(N_{f}=33/2\) there is a single relevant \(SU(N_{f})\)-invariant perturbation of this CFT given by the flavor singlet fermionic mass term.
Its anomalous dimension determines the dependence of all RG invariant dimensionful quantities on the perturbing parameter \(m\). Besides the mass dependence, there is of course dependence on the fixed point coupling (which depends on \(N_{f}\)) and further explicitly on \(N_{f}\). Schematically, a quantity of dimension \(k\) can be written as \(m^{k/(1+\gamma)}F(\varepsilon)\), where \(\varepsilon=33/2-N_{f}\) and \(\gamma\) is the mass anomalous dimension of the massless theory. A perturbative expansion can then be developed for small \(\varepsilon\), combining all \(N_{f}\)-dependence. Depending on the quantity in question, the function \(F(\varepsilon)\) can be determined in perturbation theory through an expansion in \(g^{2}\). The observables in question for us are quantities related to bound states: mesons with various quantum numbers. A rigorous perturbative treatment of bound states is given in the non-relativistic effective theory framework (p)NRQCD, which will be our main method. Inside the conformal window massive fermions are always heavy in the (p)NRQCD language. Hence the setup corresponds to zero flavors of light fermions and \(N_{f}\) flavors of heavy fermions in (p)NRQCD terms. Dimensionful quantities, such as \(f_{\pi}\), \(f_{\varrho}\) or \(m_{\varrho}\), are then given as being proportional to \(m\) and a double series expansion in \(a(\mu)=g^{2}(\mu)/(16\pi^{2})\), with some RG scale \(\mu\), and in \(1/m^{2}\), once a choice of RG scheme has been made. In the perturbed CFT the natural scale is \(\mu=m\), which will be our choice as well. Thus, we will find schematically,

\[f_{\pi,\varrho} =ma^{3/2}(m)(b_{0}+b_{1}\,a(m)+\ldots)\] \[m_{\varrho} =m(c_{0}+c_{1}\,a(m)+\ldots) \tag{1}\]

with some coefficients \(b_{i}(N_{f})\) and \(c_{i}(N_{f})\) which only depend on \(N_{f}\), and where \(\ldots\) refers to higher order as well as non-analytic terms involving \(\log(a(m))\). Naturally, in the massless limit all three quantities vanish. But the massless limit can meaningfully be taken for the ratios,

\[\frac{f_{\pi,\varrho}}{m_{\varrho}}=a_{*}^{3/2}(d_{0}+d_{1}\,a_{*}+d_{2}\,a_{*}^{2}+\ldots) \tag{2}\]

where \(a(m\to 0)=a_{*}\) is the fixed point of the massless theory and the coefficients \(d_{i}(N_{f})\) again depend only on \(N_{f}\). The (p)NRQCD calculation will provide all the coefficients above in the \(\overline{\rm MS}\) scheme. Now the fixed point \(a_{*}\) can trivially be expanded in \(\varepsilon\) also,

\[a_{*}=\varepsilon\,(e_{0}+e_{1}\,\varepsilon+\ldots) \tag{3}\]

up to 5-loop order [35; 36; 37; 38; 39; 40], where the corrections do not contain logarithms, only higher orders in \(\varepsilon\). Once all the explicit \(N_{f}\)-dependence of the coefficients \(d_{i}(N_{f})\) is replaced by \(N_{f}=33/2-\varepsilon\) we can expand the final result in \(\varepsilon\), leading to,

\[\frac{f_{\pi,\varrho}}{m_{\varrho}}=\varepsilon^{3/2}(h_{0}+h_{1}\,\varepsilon+h_{2}\,\varepsilon^{2}+\ldots)\;, \tag{4}\]

where again \(\ldots\) is shorthand for both higher orders in \(\varepsilon\) as well as powers of \(\log\varepsilon\). The coefficients of the above series are constants and will be our main result up to N\({}^{3}\)LO order in (p)NRQCD for \(f_{\varrho}/m_{\varrho}\) and up to NNLO order for \(f_{\pi}/m_{\varrho}\). The order-by-order results for \(f_{\varrho}/m_{\varrho}\) are shown in figure 1, where the non-perturbative results for \(2\leq N_{f}\leq 10\) are also indicated.
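To spell out one step used implicitly above: inserting (3) into (2) and keeping only the analytic terms (the \(\log a\) pieces are dropped here for brevity; they are what produce the \(\log\varepsilon\) terms of (4)), one finds

\[a_{*}^{3/2}=e_{0}^{3/2}\varepsilon^{3/2}\left(1+\frac{3}{2}\frac{e_{1}}{e_{0}}\,\varepsilon+O(\varepsilon^{2})\right)\;,\]

and hence

\[\frac{f_{\pi,\varrho}}{m_{\varrho}}=\varepsilon^{3/2}\left[d_{0}e_{0}^{3/2}+\left(d_{1}e_{0}^{5/2}+\frac{3}{2}\,d_{0}e_{0}^{1/2}e_{1}\right)\varepsilon+O(\varepsilon^{2})\right]\;,\]

identifying \(h_{0}=d_{0}e_{0}^{3/2}\) and, up to the neglected logarithmic pieces, \(h_{1}=d_{1}e_{0}^{5/2}+\frac{3}{2}d_{0}e_{0}^{1/2}e_{1}\). Returning to figure 1, the order-by-order truncations can now be compared directly.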
Clearly, the deviation between the NNLO and N\({}^{3}\)LO approximations is not large down to \(N_{f}=12\). The same results for \(f_{\pi}/m_{\varrho}\) are shown in figure 2 but since the N\({}^{3}\)LO correction is not available we can not conclude firmly one way or another about the convergence of the perturbative series in this case. ## III Leading order The perturbative calculation of \(f_{\pi}\) in the NRQCD [8; 9] and pNRQCD [10; 11; 12] formalism is done by first matching the axialvector current of the heavy quark and antiquark pair to NRQCD operators, and then computing the NRQCD matrix elements in pNRQCD in terms of the bound-state wavefunctions [13; 14]. The matching of the decay constant in NRQCD is expressed in terms of NRQCD operator matrix elements which scale with powers of \(v\), the velocity of the heavy quark and antiquark inside the bound state. In the perturbative case, \(v\sim g^{2}\), so that in order to obtain expressions at NNLO accuracy, it suffices to keep corrections up to relative order \(v^{2}\). To relative order \(v^{2}\), \(f_{\pi}\) can be written as [15] \[f_{\pi}=\frac{1}{\sqrt{m_{\pi}}}\bigg{(}c_{p}\langle 0|\chi^{ \dagger}\psi|\pi\rangle\] \[-\frac{d_{p}}{2m^{2}}\langle 0|\chi^{\dagger}(-\tfrac{i}{2} \overleftrightarrow{\mathbf{D}})^{2}\psi|\pi\rangle\bigg{)}, \tag{5}\] where \(\psi\) and \(\chi^{\dagger}\) are operators that annihilate a heavy quark and antiquark, respectively, \(\mathbf{D}=\nabla-ig\mathbf{A}\) is the covariant derivative, \(\mathbf{A}\) is the gluon field, \(\chi^{\dagger}\overleftrightarrow{\mathbf{D}}\psi=\chi^{\dagger}\mathbf{D} \psi-(\mathbf{D}\chi)^{\dagger}\psi\), \(|\pi\rangle\) is the relativistically normalized \(\pi\) state at rest, and \(c_{p}=1+O(a)\) and \(d_{p}=1+O(a)\) are the matching coefficients that are given by a series in \(a\). The NRQCD matrix elements can be computed in pNRQCD in terms of the bound-state wavefunction \(\psi(r)\) and the binding energy \(E\), which satisfy the Schrodinger equation \[\left(-\frac{\nabla^{2}}{m}+V(r)\right)\psi(r)=E\psi(r), \tag{6}\] where the potential \(V(r)\) is obtained by perturbatively matching pNRQCD to NRQCD. The mass of the bound state is given in terms of the binding energy \(E\) by \[m_{\pi}=2m+E, \tag{7}\] and the matrix elements are given by \[\langle 0|\chi^{\dagger}\psi|\pi\rangle =\sqrt{2N_{c}}|\psi(0)|, \tag{8}\] \[\langle 0|\chi^{\dagger}(-\tfrac{i}{2} \overleftrightarrow{\mathbf{D}})^{2}\psi|\pi\rangle =\sqrt{2N_{c}}|\psi(0)|mE,\] where \(N_{c}=3\) is the number of colors. These lead to the following expression for \(f_{\pi}\) \[f_{\pi}=\sqrt{\frac{N_{c}}{m}}\left[c_{p}-\left(\frac{c_{p}}{4}+\frac{d_{p}}{ 2}\right)\frac{E}{m}\right]|\psi(0)|, \tag{9}\] which is valid up to corrections of order \(a^{4}\). Analogously, \(f_{\varrho}\) is given in NRQCD by [16; 13; 17; 18] \[f_{\varrho}=\frac{1}{\sqrt{m_{\varrho}}}\bigg{(}c_{v}\langle 0 |\chi^{\dagger}\epsilon\cdot\sigma\psi|\varrho\rangle\] \[-\frac{d_{v}}{6m^{2}}\langle 0|\chi^{\dagger}\epsilon\cdot \sigma(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}})^{2}\psi|\varrho\rangle \bigg{)}, \tag{10}\] where \(\sigma\) is a Pauli matrix, \(\epsilon\) is the polarization vector for the \(\varrho\), and \(|\varrho\rangle\) is the relativistically normalized \(\varrho\) state at rest. 
Similarly to the case of \(f_{\pi}\), the NRQCD matrix elements appearing in the expression for \(f_{\varrho}\) can be computed in pNRQCD, which lead to the following expressions \[\langle 0|\chi^{\dagger}\epsilon\cdot\sigma\psi|\varrho\rangle =\sqrt{2N_{c}}|\psi(0)|, \tag{11}\] \[\langle 0|\chi^{\dagger}\epsilon\cdot\sigma(-\tfrac{i}{2} \overleftrightarrow{\mathbf{D}})^{2}\psi|\varrho\rangle =\sqrt{2N_{c}}|\psi(0)|mE,\] and \(m_{\varrho}=2m+E\). From these we obtain \[f_{\varrho}=\sqrt{\frac{N_{c}}{m}}\left[c_{v}-\left(\frac{c_{v}}{4}+\frac{d_{ v}}{6}\right)\frac{E}{m}\right]|\psi(0)|, \tag{12}\] which is again valid up to corrections of order \(a^{4}\). At leading order in \(a\), it suffices to solve the Schrodinger equation for the Coulomb potential \(V(r)=-4\pi aC_{F}/r\), where \(C_{F}=(N_{c}^{2}-1)/(2N_{c})=4/3\). In this case the bound-state solutions are known exactly and we obtain for the ground state \[\psi(0) =(4m\pi aC_{F})^{3/2}/(8\pi)^{1/2}[1+O(a)], \tag{13}\] \[E =-m(4\pi aC_{F})^{2}/4+O(a^{3}),\] for both the spin-triplet and spin-singlet states. From these we obtain the following expressions for \(f_{\pi}\), \(f_{\varrho}\), and \(m_{\varrho}\) that are valid at leading orders in \(a\). \[f_{\pi} =\sqrt{8N_{c}C_{F}^{3}\pi}ma^{3/2}[1+O(a)],\] \[f_{\varrho} =\sqrt{8N_{c}C_{F}^{3}\pi}ma^{3/2}[1+O(a)], \tag{14}\] \[m_{\varrho} =2m[1+O(a^{2})].\] Note that the order-\(a\) correction to \(m_{\varrho}\) is absent because \(E\) begins at order \(a^{2}\). ## IV Nlo, NNLO and N\({}^{3}\)LO corrections Now we discuss the sources of radiative corrections needed to obtain expressions at NNLO accuracy. We first note that because \(E\) begins at order \(a^{2}\), the leading-order expression for \(E\) suffices for obtaining \(m_{\varrho}\) at NNLO accuracy. The corrections at higher orders in \(a\) to NNLO accuracy to the decay constants come from the radiative corrections to the matching coefficients \(c_{p}\) and \(c_{v}\), as well as the corrections to the wavefunction at the origin. The corrections to \(c_{p}\) have been computed analytically in [15] at NLO and in [19] at NNLO. Likewise, analytical expressions for the radiative corrections to \(c_{v}\) are available in [16; 17] at NLO and in [21; 22] at NNLO. As is well known from heavy quarkonium phenomenology, the NNLO corrections to \(c_{v}\) and \(c_{p}\) contain logarithms of the NRQCD factorization scale, which must cancel with the logarithms coming from the renormalization of the NRQCD matrix elements [24; 25]. The corrections to \(|\psi(0)|\) have been computed to NNLO accuracy in [13; 23] for the \(S\)-wave spin-triplet case. For the spin-singlet case, the corrections to \(|\psi(0)|\) to NNLO accuracy can be obtained from the results in ref. [26]. The NNLO corrections contain the logarithms of the NRQCD factorization scale that cancel against the logarithms coming from \(c_{v}\) and \(c_{p}\), so that the decay constants are free of dependencies on the factorization scale. These are sufficient ingredients for computing \(f_{\pi}\) and \(f_{\varrho}\) to NNLO accuracy. We note that the dependence on \(N_{f}\) only comes from the matching coefficients, because all \(N_{f}\) flavors are heavy and are integrated out from the effective field theory. Also note that a non-vanishing imaginary part of the matching coefficients can be discarded at our current level of accuracy. Additionally, the N\({}^{3}\)LO correction to \(c_{v}\) has been computed in refs. 
[27; 28], and the N\({}^{3}\)LO correction to \(|\psi(0)|\) has been computed for the \(S\)-wave spin-triplet case in [29; 30]. Together with the NLO correction to \(E\) available in [23; 31] and the NLO correction to \(d_{v}\) available in [32; 33], these make possible the computation of \(f_{\varrho}\) and \(m_{\varrho}\) to N\({}^{3}\)LO accuracy. At N\({}^{3}\)LO accuracy, in addition to the NNLO and N\({}^{3}\)LO corrections to \(c_{v}\), the NLO correction to \(d_{v}\) also contains a logarithm of the factorization scale, which cancels against the ultrasoft correction to \(|\psi(0)|\) at N\({}^{3}\)LO accuracy [34]. Because only part of the N\({}^{3}\)LO correction to \(c_{v}\) is analytically known, we only obtain numerical results for the coefficients of the order-\(a^{3}\) terms in \(f_{\varrho}\). We present below the results of the NLO and NNLO corrections for \(f_{\pi}\) and also the N\({}^{3}\)LO corrections for \(m_{\varrho}\) and \(f_{\varrho}\).

### \(\varrho\) mass

From the binding energy \(E\) we have, to N\({}^{3}\)LO accuracy,

\[m_{\varrho}=c_{0}m\left[1+c_{2}a^{2}(m)+c_{30}a^{3}(m)+c_{31}a^{3}(m)\log a(m)+O(a^{4})\right]. \tag{15}\]

The order-\(a\) term in \(m_{\varrho}\) is zero because \(E\) begins at order \(a^{2}\). The first two coefficients are determined by the leading-order binding energy, and at NNLO and N\({}^{3}\)LO we obtain the further coefficients

\[c_{0} =2\] \[c_{2} =-2C_{F}^{2}\pi^{2} \tag{16}\] \[c_{30} =\frac{4}{9}\pi^{2}C_{A}C_{F}^{2}\left(66\log(4\pi C_{F})-97\right)\] \[c_{31} =\frac{88}{3}\pi^{2}C_{A}C_{F}^{2},\]

with \(C_{A}=N_{c}=3\).

### \(\varrho\) decay constant

From the corrections to \(c_{v}\) and \(|\psi(0)|\) available to NNLO accuracy, we obtain

\[f_{\varrho}=b_{0}^{\varrho}ma^{3/2}(m)\left(1+\sum_{n=1}^{3}\sum_{k=0}^{n}b_{nk}^{\varrho}a^{n}(m)\log^{k}a(m)+O(a^{4})\right). \tag{17}\]

The coefficients \(b_{nk}^{\varrho}\) up to relative order \(a^{2}\) are known analytically and are given by

\[b_{0}^{\varrho} =\sqrt{8N_{c}C_{F}^{3}\pi},\] \[b_{10}^{\varrho} =\frac{161}{6}-\frac{11\pi^{2}}{3}+33\log\left(\frac{3}{16\pi}\right), \tag{18}\] \[b_{11}^{\varrho} =-33,\] \[b_{20}^{\varrho} =\left(-\frac{64\pi^{2}}{27}+\frac{704}{27}\right)N_{f}+\frac{9781\zeta(3)}{9}-\frac{27\pi^{4}}{8}+\frac{1126\pi^{2}}{81}+\frac{9997}{72}+\frac{1815\log^{2}\pi}{2}+\frac{1815}{2}\log^{2}\left(\frac{16}{3}\right)+\log\left(\frac{16}{3}\right)\left(-\frac{2581}{2}+\frac{605\pi^{2}}{3}+1815\log(\pi)\right)+\left(\frac{4325\pi^{2}}{27}-\frac{2581}{2}\right)\log(\pi)-\frac{256}{81}\pi^{2}\log(8)-\frac{1120}{27}\pi^{2}\log\left(\frac{8}{3}\right)-\frac{512}{9}\pi^{2}\log(2),\] \[b_{21}^{\varrho} =\frac{4325\pi^{2}}{27}-\frac{2581}{2}+1815\log\left(\frac{16\pi}{3}\right),\] \[b_{22}^{\varrho} =\frac{1815}{2}.\]

The results for the relative order \(a^{3}\) terms are only obtained numerically because the analytical result for \(c_{v}\) at N\({}^{3}\)LO is only partially known,

\[b_{30}^{\varrho} =0.8198N_{f}^{2}-362.7N_{f}-1.0901(1)\times 10^{6},\] \[b_{31}^{\varrho} =-88.42N_{f}-7.7493\times 10^{5},\] \[b_{32}^{\varrho} =-2.1651\times 10^{5},\] \[b_{33}^{\varrho} =-2.3292\times 10^{4}. \tag{19}\]

### \(\pi\) decay constant

From the corrections to \(c_{p}\) and \(|\psi(0)|\) available to NNLO accuracy, we obtain

\[f_{\pi}=b_{0}^{\pi}ma^{3/2}(m)\left(1+\sum_{n=1}^{2}\sum_{k=0}^{n}b_{nk}^{\pi}a^{n}(m)\log^{k}a(m)+O(a^{3})\right), \tag{20}\]
and the coefficients \(b_{nk}^{\pi}\) are given by

\[b_{0}^{\pi} =\sqrt{8N_{c}C_{F}^{3}\pi},\] \[b_{10}^{\pi} =\frac{59}{2}-\frac{11\pi^{2}}{3}+33\log\left(\frac{3}{16\pi}\right), \tag{21}\] \[b_{11}^{\pi} =-33,\] \[b_{20}^{\pi} =N_{f}\left(-\frac{32\pi^{2}}{9}+\frac{344}{9}\right)+961\zeta(3)-\frac{27\pi^{4}}{8}+\frac{1310\pi^{2}}{27}+\frac{23053}{72}+\frac{1815\log^{2}\pi}{2}+\frac{1815}{2}\log^{2}\left(\frac{16}{3}\right)+\log\left(\frac{16}{3}\right)\left(-\frac{2757}{2}+\frac{1271\pi^{2}}{9}+1815\log\pi\right)+\left(\frac{1271\pi^{2}}{9}-\frac{2757}{2}\right)\log\pi-\frac{272}{9}\pi^{2}\log 2,\] \[b_{21}^{\pi} =\frac{1271\pi^{2}}{9}-\frac{2757}{2}+\frac{1815}{2}\log\left(\frac{256\pi^{2}}{9}\right),\] \[b_{22}^{\pi} =\frac{1815}{2}.\]

Unfortunately, at present the N\({}^{3}\)LO corrections for \(f_{\pi}\) are not available.

### Banks-Zaks expansion of ratios

Now that all three quantities of interest are available in perturbation theory, we may expand the ratios in \(\varepsilon=33/2-N_{f}\) as outlined in section II. Using the 5-loop \(\beta\)-function [35; 36; 37; 38; 39; 40] for the expansion of \(a_{*}\) and the perturbative series (15), (17) and (20), we obtain the two meson decay constant to mass ratios in numerical form as

\[\frac{f_{\varrho}}{m_{\varrho}} =\varepsilon^{3/2}C_{0}\left(1+\sum_{n=1}^{3}\sum_{k=0}^{n}C_{nk}\varepsilon^{n}\log^{k}\varepsilon+O(\varepsilon^{4})\right)\] \[\frac{f_{\pi}}{m_{\varrho}} =\varepsilon^{3/2}C_{0}\left(1+\sum_{n=1}^{2}\sum_{k=0}^{n}D_{nk}\varepsilon^{n}\log^{k}\varepsilon+O(\varepsilon^{3})\right)\;,\]

with the coefficients

\[C_{0} =0.005826678\] \[C_{10} =0.4487893\] \[C_{11} =-0.2056075\] \[C_{20} =0.2444502\] \[C_{21} =-0.1624891\] \[C_{22} =0.03522870 \tag{22}\] \[C_{30} =0.10604(3)\] \[C_{31} =-0.1128420\] \[C_{32} =0.03695458\] \[C_{33} =-0.005633665\]

and

\[D_{10} =0.4654041\] \[D_{11} =-0.2056075\] \[D_{20} =0.2845697 \tag{23}\] \[D_{21} =-0.1737620\] \[D_{22} =0.03528692\;.\]

Even though the coefficients (16), (18) and (21) grow dangerously fast in the series (15), (17) and (20), the above coefficients of the ratios are much better behaved. This will be important for the reliability and robustness of our findings. The coefficients (22) and (23) are the main results of this paper.

## V Matching across the conformal window

The perturbative calculations are valid close to the upper end of the conformal window where \(\varepsilon=33/2-N_{f}\) is small. Non-perturbative results are available in the low \(N_{f}\) region, specifically for \(2\leq N_{f}\leq 10\), all extrapolated to the chiral and continuum limit. With the perturbative results for \(f_{\varrho}/m_{\varrho}\) and \(f_{\pi}/m_{\varrho}\) up to N\({}^{3}\)LO and NNLO order, respectively, at hand, we attempt to match them to the non-perturbative ones. The latter show that below the conformal window both of our ratios are constant as functions of \(N_{f}\) to high precision. At \(N_{f}=33/2\) both ratios vanish, and it is natural to expect that both reach zero in a monotonous fashion. Assuming this is indeed the case, we may attempt to interpolate.

### \(f_{\varrho}/m_{\varrho}\)

In order to match our perturbative \(f_{\varrho}/m_{\varrho}\) results to the non-perturbative (low \(N_{f}\)) region, continuum and chirally extrapolated lattice results for \(f_{\varrho}\) would be needed.
These are not available at the moment, but they are [2; 3; 4] for \(f_{\pi}\) in the range \(2\leq N_{f}\leq 10\), and the KSRF-relations [5; 6] can be used to relate \(f_{\pi}\) and \(f_{\varrho}\). The relation we need is simply \(f_{\varrho}=\sqrt{2}f_{\pi}\). One does not expect this relation to hold exactly, but even in QCD at finite quark masses it is a surprisingly good approximation, and towards the chiral limit it is expected to hold to even higher precision. Note that in supersymmetric QCD the KSRF-relations have actually been rigorously derived [7]. Hence by combining the non-perturbative lattice results and the KSRF relations we have access to \(f_{\varrho}/m_{\varrho}\) for \(2\leq N_{f}\leq 10\). This is shown, together with the Banks-Zaks expansion (22) order-by-order, in figure 1. Clearly, the deviation between the NNLO and N\({}^{3}\)LO results for \(N_{f}\geq 12\) is not substantial. And curiously, close to \(N_{f}=12\) the perturbative result reaches the non-perturbative one almost exactly. More quantitatively, in the range \(11.9\leq N_{f}\leq 12.1\) the deviation between the NNLO and N\({}^{3}\)LO results is at most 4%, and in the range \(11.5\leq N_{f}\leq 12.5\) at most 13%. Hence we conclude that in the region of interest, \(N_{f}\sim 12\), the N\({}^{3}\)LO result is robust and reliable. Assuming \(f_{\varrho}/m_{\varrho}\) is a monotonous function of \(N_{f}\) and that around \(N_{f}\sim 12\) the perturbative result is indeed reliable, we are led to conclude that the combination of non-perturbative and perturbative results covers the entire \(N_{f}\) range. At twelve flavors an abrupt change occurs in the ratio, which it is tempting to identify with the lower end of the conformal window. Concretely, we obtain \(N_{f}^{*}=12.00(4)\) and \(N_{f}^{*}=12.08(6)\) from the NNLO and N\({}^{3}\)LO approximations, respectively. The uncertainties come from the uncertainty of the lattice results. Clearly, the NNLO and N\({}^{3}\)LO approximations agree and lead to \(N_{f}^{*}=12\) for integer flavor numbers. Our line of reasoning cannot of course determine where exactly the twelve flavor theory lies [41; 42; 43; 44; 45; 46; 47; 48; 49; 50], whether it is just below the conformal window and hence spontaneously broken, or just inside and hence conformal.

### \(f_{\pi}/m_{\varrho}\)

A similar analysis can be performed for \(f_{\pi}/m_{\varrho}\) as well. Here non-perturbative lattice results are available directly, without reliance on any further input. The perturbative calculation could unfortunately only be carried out to NNLO order, though. The increasing perturbative orders are shown in figure 2, which also shows the non-perturbative result obtained from continuum and chirally extrapolated lattice calculations. The N\({}^{3}\)LO correction for \(f_{\varrho}/m_{\varrho}\) was essential to establish the reliability of the perturbative series, hence we cannot make a similar statement for \(f_{\pi}/m_{\varrho}\). We may however estimate the size of the N\({}^{3}\)LO correction by assuming that relative to the NNLO result it is comparable to the case of \(f_{\varrho}/m_{\varrho}\). Assuming this is the case, we obtain a very similar picture; the perturbative series seems reliable down to \(N_{f}^{*}=13\), where it matches the non-perturbative result. The only difference relative to \(f_{\varrho}/m_{\varrho}\) is the shift in the estimate of the lower end of the conformal window, from \(N_{f}^{*}\simeq 12\) to \(N_{f}^{*}\simeq 13\).
This latter estimate should of course be checked by a genuine N\({}^{3}\)LO calculation of \(f_{\pi}\) in the future.

Figure 1: The \(f_{\varrho}/m_{\varrho}\) ratio in increasing perturbative order as obtained from the Banks-Zaks expansion in \(\varepsilon=33/2-N_{f}\). The non-perturbative result from combined lattice calculations [2; 3; 4] and the KSRF-relation is also shown.

Figure 2: The \(f_{\pi}/m_{\varrho}\) ratio in increasing perturbative order as obtained from the Banks-Zaks expansion in \(\varepsilon=33/2-N_{f}\). The non-perturbative result from lattice calculations [2; 3; 4] is also shown.

## VI Conclusion and outlook

In this paper we introduced two quantities we believe are useful proxies for the transition between chirally broken and conformal gauge theories as the flavor number is varied. A minimal requirement for any such quantity is that it should be well-defined and calculable in the massless limit both outside and inside the conformal window. Outside the conformal window lattice calculations offer a way to obtain results, whereas close to the upper end perturbative ones do. Our quantities are related to bound states defined in the mass perturbed models and the chiral limit is meaningful for both ratios. It appears the bridge between the low \(N_{f}\) non-perturbative and high \(N_{f}\) perturbative regions may not be as large as one might have expected. Current lattice results are available up to \(N_{f}=10\) and the main result of this paper is that at \(N_{f}=12,\,13\) the perturbative series might be reliable if calculations are performed up to N\({}^{3}\)LO order, leaving only the \(N_{f}=11\) model to be interpolated. Interestingly, at least for the \(f_{\varrho}/m_{\varrho}\) ratio, the perturbative N\({}^{3}\)LO result at \(N_{f}=12\) agrees with the non-perturbative \(N_{f}=10\) lattice calculation (and the ratio is approximately constant for \(2\leq N_{f}\leq 10\)). If we assume the ratio is a monotonously decreasing function of \(N_{f}\), which is a natural assumption based on the behavior at \(N_{f}=10\) and \(N_{f}=33/2\), we conclude that a matching between the low \(N_{f}\) non-perturbative and high \(N_{f}\) perturbative regions is possible, with an abrupt change at \(N_{f}\simeq 12\). It is tempting to identify this with the lower end of the conformal window, \(N_{f}^{*}\simeq 12\). Our other ratio, \(f_{\pi}/m_{\varrho}\), offers a similar analysis, but unfortunately at the moment only NNLO perturbative results are available. The reliability of the perturbative series cannot be judged from the NLO and NNLO corrections alone; in fact it is clear from the behavior of \(f_{\varrho}/m_{\varrho}\) that the N\({}^{3}\)LO correction is mandatory in order to conclude. Such a calculation of \(f_{\pi}\) within (p)NRQCD seems feasible and will be pursued in the future. Meanwhile, we have estimated the relative size of the unknown N\({}^{3}\)LO correction for \(f_{\pi}/m_{\varrho}\) from that of \(f_{\varrho}/m_{\varrho}\). Assuming that this is justified, we are led to believe that a matching between the non-perturbative and perturbative regions is possible at \(N_{f}\simeq 13\), with a similarly abrupt change at this value. Hence the estimate shifts to \(N_{f}^{*}\simeq 13\); however, it is important to stress that a genuine N\({}^{3}\)LO calculation of \(f_{\pi}\) should be sought first. Needless to say, we have nothing firm to conclude about the \(N_{f}=12\) model, whether it is just inside or just outside the conformal window.
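As an illustrative numerical cross-check, a minimal sketch follows that evaluates the truncated series (22) at \(N_{f}=12\); the coefficient table is a transcription of (22), and the function name is merely a label chosen here.

```python
import math

# Coefficients of the Banks-Zaks series for f_rho / m_rho, transcribed
# from eq. (22); C[(n, k)] multiplies eps**n * log(eps)**k.
C0 = 0.005826678
C = {
    (1, 0): 0.4487893,   (1, 1): -0.2056075,
    (2, 0): 0.2444502,   (2, 1): -0.1624891,  (2, 2): 0.03522870,
    (3, 0): 0.10604,     (3, 1): -0.1128420,
    (3, 2): 0.03695458,  (3, 3): -0.005633665,
}

def f_rho_over_m_rho(nf, order):
    """Series (22) truncated at relative order eps**order (order = 1, 2, 3)."""
    eps = 16.5 - nf
    log_eps = math.log(eps)
    series = 1.0
    for (n, k), c in C.items():
        if n <= order:
            series += c * eps ** n * log_eps ** k
    return C0 * eps ** 1.5 * series

# At N_f = 12 the NNLO and N3LO truncations differ by only about 2%,
# consistent with the at-most-4% deviation quoted in section V.
for order in (1, 2, 3):
    print(order, f_rho_over_m_rho(12.0, order))
```

The three truncations give roughly 0.091, 0.180 and 0.184, visualizing why the NNLO and N\({}^{3}\)LO curves of figure 1 almost coincide at twelve flavors.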
In general it is important to remember a key assumption underlying our entire calculation; namely that the only \(SU(N_{f})\)-invariant relevant perturbation of the conformal field theories we discuss is the fermionic mass term. This is certainly correct for small \(\varepsilon\) but might not hold for a sufficiently strongly coupled CFT, for instance it is conceivable that a 4-fermi term becomes relevant. Addressing this potential situation is beyond the scope of the present paper but we hope to return to it in the future. ## Acknowledgment We would like to thank George Fleming, Yuchiro Kiyo and Alexander Penin for useful correspondence. The work of H.S.C. is supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) under Contract No. NRF2020R1A2C3009918 and by Korea University. The work of D.N. is supported by the NKFIH grant KKP-126769. All authors contributed equally to this work.
2308.13721
Robust Machine Learning Modeling for Predictive Control Using Lipschitz-Constrained Neural Networks
Neural networks (NNs) have emerged as a state-of-the-art method for modeling nonlinear systems in model predictive control (MPC). However, the robustness of NNs, in terms of sensitivity to small input perturbations, remains a critical challenge for practical applications. To address this, we develop Lipschitz-Constrained Neural Networks (LCNNs) for modeling nonlinear systems and derive rigorous theoretical results to demonstrate their effectiveness in approximating Lipschitz functions, reducing input sensitivity, and preventing over-fitting. Specifically, we first prove a universal approximation theorem to show that LCNNs using SpectralDense layers can approximate any 1-Lipschitz target function. Then, we prove a probabilistic generalization error bound for LCNNs using SpectralDense layers by using their empirical Rademacher complexity. Finally, the LCNNs are incorporated into the MPC scheme, and a chemical process example is utilized to show that LCNN-based MPC outperforms MPC using conventional feedforward NNs in the presence of training data noise.
Wallace Tan Gian Yion, Zhe Wu
2023-08-26T01:14:47Z
http://arxiv.org/abs/2308.13721v1
# Robust Machine Learning Modeling for Predictive Control Using Lipschitz-Constrained Neural Networks ###### Abstract Neural networks (NNs) have emerged as a state-of-the-art method for modeling nonlinear systems in model predictive control (MPC). However, the robustness of NNs, in terms of sensitivity to small input perturbations, remains a critical challenge for practical applications. To address this, we develop Lipschitz-Constrained Neural Networks (LCNNs) for modeling nonlinear systems and derive rigorous theoretical results to demonstrate their effectiveness in approximating Lipschitz functions, reducing input sensitivity, and preventing over-fitting. Specifically, we first prove a universal approximation theorem to show that LCNNs using SpectralDense layers can approximate any 1-Lipschitz target function. Then, we prove a probabilistic generalization error bound for LCNNs using SpectralDense layers by using their empirical Rademacher complexity. Finally, the LCNNs are incorporated into the MPC scheme, and a chemical process example is utilized to show that LCNN-based MPC outperforms MPC using conventional feedforward NNs in the presence of training data noise. keywords: Lipschitz-Constrained Neural Networks; Robust Machine Learning Model; Generalization Error; Model Predictive Control; Neural Network Sensitivity; Over-fitting ## 1 Introduction Model Predictive Control (MPC) is an advanced optimization-based control strategy for various chemical engineering processes, such as batch crystallization processes (Kwon et al. (2013, 2014)) and continuous stirred tank reactors (CSTRs) (Chen et al. (1995); Wu (2001)). Machine learning (ML) techniques such as artificial neural networks (ANN) have been utilized to develop prediction models that will be incorporated into the design of MPC. Among various ML-based MPC schemes, an accurate and robust process model for prediction has always been one of the key components that ensures desired closed-loop performance. Despite the success of ANNs in modeling complex nonlinear systems, one prominent issue is that they could potentially be sensitive to small perturbations in input features. Such sensitivity issues arise naturally when a small change in the input (e.g., due to data noise, perturbation, or artificially generated adversarial inputs) could result in a drastic change in the output. For example, in classification problems, adversarial input perturbations have been shown to lead to a large variation in NN output, thereby leading to misclassified results (Szegedy et al. (2013)). Additionally, Balda et al. (2019) proposed a novel approach for constructing adversarial examples for regression problems using perturbation analysis for the underlying learning algorithm for the neural network. Since the lack of robustness of NNs could adversely affect the prediction accuracy in ML-based MPC, it is important to address the sensitivity issue for the implementation of NNs in performance-critical applications. To mitigate this issue, adversarial training has been adopted as one of the most effective approaches to train NNs against adversarial examples (Szegedy et al. (2013)). For example, Shaham et al. (2015) proposed a robust optimization framework with a different loss function in the optimization process that aims to search for adversarial examples near the vicinity of each training data point. This approach has been empirically shown to be effective against adversarial attacks by both Shaham et al.
(2015) and Madry et al. (2017). However, this approach lacks a provable performance guarantee on the input sensitivity of neural networks. Therefore, a type of neural network with a fixed Lipschitz constant, termed Lipschitz-Constrained Neural Networks (LCNNs), has received growing interest in recent years (Baldi and Sadowski (2013); Anil et al. (2019)). One immediate way to control the Lipschitz constant of a neural network is to bound the norms of the weight matrices and use activation functions with bounded derivatives (e.g., the ReLU activation function). However, it is demonstrated in Anil et al. (2019) that this method substantially limits the model capacity of the NNs if component-wise activation functions are used. A recent breakthrough using a special activation function termed GroupSort (Anil et al. (2019)) significantly increases the expressive power of the NNs. Specifically, the GroupSort activation function enables NNs with norm-constrained GroupSort architectures to serve as universal approximators of 1-Lipschitz functions. Using the restricted Stone-Weierstrass theorem, Anil et al. (2019) proved the universal approximation theorem for GroupSort feedforward neural networks with appropriate assumptions on the norms of the weight matrices. However, since the proof in Anil et al. (2019) is based on the \(\infty\)-norm of matrices, the way to demonstrate the universal approximation property of LCNNs using the spectral norm (i.e., the \(\ell_{2}\) matrix norm) remains an open question. The second prominent issue in the development of ANNs is over-fitting, where the networks perform very well on the training data but fail to predict accurately on the test data, which results in a high generalization error. One possible reason for over-fitting is that the training data has noise that negatively impacts learning performance (Ying (2019)). Additionally, over-fitting occurs when there is insufficient training data, or when there is a high hypothesis complexity, in terms of large weights, a large number of neurons, and extremely deep architectures. Therefore, designing neural network architectures that are less prone to over-fitting is a pertinent issue in supervised machine learning. Sabiri et al. (2022) provide an overview of popular solutions to prevent over-fitting. For example, one of the most common solutions to over-fitting is regularization, such as \(\ell_{1}\) or \(\ell_{2}\) regularization (e.g., Moore and DeNero (2011); Cortes et al. (2012)), which constrains the size of the weights via a soft penalty. Other popular solutions include dropout (e.g., Baldi and Sadowski (2014); Srivastava et al. (2014)), where certain neurons are dropped during training with a specified probability, and early stopping, where training is stopped using a predefined predicate, usually when the validation error reaches a minimum (e.g., Baldi and Sadowski (2013)). For example, in our previous work (Wu et al. (2021)), the Monte Carlo dropout technique was utilized in the development of NNs to mitigate the impact of data noise and reduce over-fitting. In addition to the above solutions, LCNNs have been demonstrated to be able to efficiently avoid over-fitting by constraining the Lipschitz constant of a network. However, at this stage, a fundamental understanding of how LCNNs reduce over-fitting, in terms of their generalization ability over the underlying data distribution, is still missing.
Motivated by the above considerations, in this work, we incorporate LCNNs using SpectralDense layers in MPC and demonstrate that the LCNNs can effectively resolve the two aforementioned issues: sensitivity to input perturbations and over-fitting in the presence of noise. Rigorous theoretical results are developed to demonstrate that LCNNs are provably robust against input perturbations because of their low Lipschitz constant, and provably robust against over-fitting due to their lowered hypothesis complexity, i.e., low Rademacher complexity. The rest of this article is organized as follows. In Section 2, the class of nonlinear systems considered and the use of FNNs in MPC are first presented. In Section 3, we present the formulation of LCNNs using SpectralDense layers, followed by a discussion on their improved robustness against input perturbations. In Section 4, we prove the universal approximation theorem for 1-Lipschitz continuous functions for LCNNs using SpectralDense layers. In Section 5, we develop a probabilistic generalization error bound for LCNNs to show that LCNNs using SpectralDense layers can effectively prevent over-fitting. This is done by computing an upper bound on the empirical Rademacher complexity (ERC) of the function class represented by LCNNs using SpectralDense layers. Finally, in Section 6, we carry out a simulation study of a benchmark chemical reactor example, where we will exhibit the superiority of LCNNs over conventional FNNs with dense layers in the presence of noisy training data. ## 2 Preliminaries ### Notations \(\|W\|_{F}\) and \(\|W\|_{2}\) denote the Frobenius norm and the spectral norm of a matrix \(W\in\mathbb{R}^{n\times m}\), respectively. \(\mathbb{R}^{\geq 0}\) denotes the set of all nonnegative real numbers. A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is continuously differentiable if and only if it is differentiable and the Jacobian of \(f\), denoted by \(J_{f}\), is continuous. Given a vector \(x\in\mathbb{R}^{n}\), let \(\|x\|\) denote the Euclidean norm of \(x\). A metric space is a set \(X\) equipped with a metric function \(d_{X}:X\times X\rightarrow\mathbb{R}^{\geq 0}\) such that 1) \(d_{X}(x,y)=0\) if and only if \(x=y\), 2) \(d_{X}(x,y)=d_{X}(y,x)\) for all \(x,y\in X\), and 3) for all \(x,y,z\in X\), the triangle inequality \(d_{X}(x,z)\leq d_{X}(x,y)+d_{X}(y,z)\) holds. We denote a metric space as an ordered pair \((X,d_{X})\). Given an event \(A\), we denote \(\mathbb{P}(A)\) to be its probability. Given a random variable \(X\), we denote \(\mathbb{E}[X]\) to be its expectation. ### Class of Systems The nonlinear systems that are considered in this article can be represented by the following ordinary differential equation (ODE): \[\dot{x}=F(x,u):=f(x)+g(x)u \tag{1}\] Here \(x\in\mathbb{R}^{n}\) is the current state vector, \(u\in\mathbb{R}^{m}\) is the control vector, and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\) are continuously differentiable vector-valued and matrix-valued functions, respectively. We also assume that \(f(0)=0\), so that the origin \((x,u)=(0,0)\) is an equilibrium point. We also assume that there is a continuously differentiable Lyapunov function \(V:D\rightarrow\mathbb{R}^{\geq 0}\) equipped with a controller \(\Phi:D\to U\), such that the origin of the closed-loop system is exponentially stable. Here \(D\) and \(U\) are compact subsets of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), respectively, that contain an open set surrounding the origin.
In addition, the stability region for this controller \(\Phi(x)\) is then taken to be a sublevel set of \(V\), i.e., \(\Omega_{\rho}:=\{x\mid V(x)\leq\rho\}\) with \(\rho\) positive and \(\Omega_{\rho}\subset D\). Additionally, since \(f,g\) are continuously differentiable, the following inequalities can be readily derived for any \(x,x^{\prime}\in D\), \(u\in U\) and some constants \(K_{F}\), \(L_{x}\): \[\|F(x,u)\|\leq K_{F} \tag{2a}\] \[\|F(x^{\prime},u)-F(x,u)\|\leq L_{x}\|x^{\prime}-x\| \tag{2b}\] ### Feedforward Neural Network (FNN) This subsection gives a short summary of the development of FNNs for the nonlinear systems represented by Eq. 1. Specifically, since the FNN is developed as the prediction model for model predictive controllers, we consider FNNs that are built to capture the nonlinear dynamics of Eq. 1, whereby the control input actions are applied using the sample-and-hold method. This means that given an initial state \(x_{0}\), the control action \(u\) applied is constant throughout the entire period of the sampling time \(\Delta>0\). Suppose that the system state is currently at \(x(0)=x_{0}\). From Eq. 2b and the Picard-Lindelöf theorem for ODEs, there exists a unique state trajectory \(x(t)\) such that \[x(t)=x_{0}+\int_{0}^{t}F(x(s),u)ds \tag{3}\] We can then define \(\tilde{F}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) by \(\tilde{F}(x_{0},u)=x(\Delta)\), which is the function that takes the current state and the control action as inputs and predicts the state at \(\Delta\) time later. This function can then be approximated by an FNN, such as an LCNN, and the approximation is denoted by \(\tilde{F}_{nn}\). In order to develop the neural networks, open-loop simulations of the ODE system represented by Eq. 1 using varying control actions will be conducted to generate the required dataset. Specifically, we perform a sweep of all possible initial states \(x_{0}\in\Omega_{\rho}\) and control actions \(u\in U\), and use the forward Euler method with integration time step \(h_{c}\ll\Delta\) to deduce the value of \(\tilde{F}(x_{0},u)\), that is, to deduce the state at \(t=\Delta\). The training dataset consists of all such possible pairs of \((x_{0},u)\) (FNN inputs) and \(\tilde{F}(x_{0},u)\) (FNN outputs), where \(\tilde{F}(x_{0},u)\) is the actual future state. Therefore, the two functions \(x_{t+1}:=\tilde{F}(x_{t},u_{t})\) and \(x_{t+1}:=\tilde{F}_{nn}(x_{t},u_{t})\) define two distinct nonlinear discrete-time systems, where \(x_{t+1}\) denotes the state at \(t+\Delta\). Since \(\tilde{F}_{nn}\) will be the function used in the MPC optimization algorithm, ensuring that \(\tilde{F}_{nn}\) is an accurate approximation of \(\tilde{F}\) is necessary so that the neural network captures the nonlinear dynamics well. In general, the FNN model \(\tilde{F}_{nn}\) should be developed with sufficient training data and an appropriate architecture in terms of the number of neurons and layers in order to achieve the desired prediction accuracy on both training and test sets. However, in the presence of insufficient training data or noisy data, over-fitting might occur, leading to a model that performs poorly on the test data set and generalizes poorly.
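To make the data-generation procedure concrete, the following is a minimal Python sketch of the open-loop simulation workflow described above; the two-state dynamics, grid resolutions, sampling period \(\Delta\), and Euler step \(h_{c}\) are illustrative assumptions of ours and not the chemical process example studied later in the paper.

```python
import numpy as np

# Hypothetical two-state control-affine system x_dot = f(x) + g(x) u
# (an illustrative stand-in, not the CSTR studied in the paper).
def f(x):
    return np.array([-x[0] + x[1], -x[1] - x[0] ** 3])

def g(x):
    return np.array([0.0, 1.0])  # a single input enters the second state

def F(x, u):
    return f(x) + g(x) * u

def simulate_delta(x0, u, delta=0.1, h_c=1e-3):
    """Integrate the ODE over one sampling period Delta with u held
    constant (sample-and-hold), i.e., evaluate F_tilde(x0, u) = x(Delta)
    via the forward Euler method with step h_c << Delta."""
    x = x0.copy()
    for _ in range(int(round(delta / h_c))):
        x = x + h_c * F(x, u)
    return x

# Sweep initial states in a box (a stand-in for the level set Omega_rho)
# and inputs in U to build the (x0, u) -> F_tilde(x0, u) training pairs.
xs = [np.array([a, b]) for a in np.linspace(-1, 1, 11)
                       for b in np.linspace(-1, 1, 11)]
us = np.linspace(-1, 1, 5)
inputs, outputs = [], []
for x0 in xs:
    for u in us:
        inputs.append(np.concatenate([x0, [u]]))  # FNN input (x0, u)
        outputs.append(simulate_delta(x0, u))     # FNN target x(Delta)
X_train = np.stack(inputs)
Y_train = np.stack(outputs)
print(X_train.shape, Y_train.shape)  # (605, 3), (605, 2)
```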
Also, the developed FNN model for \(\tilde{F}_{nn}\) should not be overly sensitive to input perturbations (that is, it should not have overly large gradients) in order to ensure that \(\tilde{F}_{nn}\) can generalize well to other data points outside the training dataset but within the desired domain \(D\times U\). Therefore, to address the issues of over-fitting and sensitivity, we will develop LCNNs for the nonlinear system of Eq. 1 in this work, and show that LCNNs can overcome the limitations of conventional FNNs with dense layers by lowering sensitivity and preventing over-fitting. ## 3 Lipschitz-Constrained Neural Network Models Using SpectralDense Layers In this section, the architecture of LCNNs using SpectralDense layers will first be introduced, followed by a discussion on the reduced sensitivity of LCNNs to input perturbations as compared to conventional FNNs. First, we begin with an important definition. _Definition 1_.: A function \(f:X\to Y\) where \(X\subset\mathbb{R}^{n}\) and \(Y\subset\mathbb{R}^{m}\) is **Lipschitz continuous** with Lipschitz constant \(L\) (or \(L\)-Lipschitz) if \(\forall x,y\in X\), one has \[\|f(x)-f(y)\|\leq L\cdot\|x-y\|\] It is readily shown that if \(f\) is \(L\)-Lipschitz continuous, given a small perturbation to the input, the output \(f(x)\) changes by at most \(L\) times the magnitude of that perturbation. As a result, as long as the Lipschitz constant of a neural network is constrained to be a small value, it is less sensitive to input perturbations. In the next subsection, we demonstrate that LCNNs using SpectralDense layers have a constrained, small Lipschitz constant, where each of the SpectralDense layers has a Lipschitz constant of 1. ### SpectralDense layers The mathematical definition of the SpectralDense layers used to construct an LCNN is first presented. Recall that by the singular value decomposition (SVD), for any \(W\in\mathbb{R}^{m\times n}\), there exist orthogonal matrices \(U\in\mathbb{R}^{m\times m}\), \(V\in\mathbb{R}^{n\times n}\), and a rectangular diagonal matrix \(D\in\mathbb{R}^{m\times n}\) with nonnegative entries such that \(W=UDV^{T}\). First, we recall the definition of a conventional dense layer: _Definition 2_.: A **dense layer** is a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) of the form \[f:x\to\sigma(Wx+b)\] where \(\sigma:\mathbb{R}^{m}\to\mathbb{R}^{m}\) is an activation function, \(W\in\mathbb{R}^{m\times n}\) is a weight matrix, and \(b\in\mathbb{R}^{m}\) is a bias term. A dense layer is a layer that is deeply connected with its preceding layer (i.e., the neurons of the layer are connected to every neuron of its preceding layer). It should be noted that in conventional dense layers, the activation function \(\sigma\) is applied component-wise, that is, \(\sigma(x_{1},x_{2},\ldots,x_{m})=(\sigma^{\prime}(x_{1}),\sigma^{\prime}(x_{2}),\ldots,\sigma^{\prime}(x_{m}))\) where \(\sigma^{\prime}:\mathbb{R}\to\mathbb{R}\) is a real-valued function, such as ReLU or \(\tanh\). However, in SpectralDense layers, the following GroupSort function is used as the activation function \(\sigma\): _Definition 3_.: (Anil et al.
(2019)) The **GroupSort function** (of group size 2) is a function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) defined as follows: If \(m\) is even, then \[\sigma([x_{1},x_{2},\cdots,x_{m-1},x_{m}]^{T})=[\max(x_{1},x_{2}),\min(x_{1},x_{2}),\cdots,\max(x_{m-1},x_{m}),\min(x_{m-1},x_{m})]^{T}\] (4a) else, if \[m\] is odd, \[\sigma([x_{1},x_{2},\cdots,x_{m-2},x_{m-1},x_{m}]^{T})=[\max(x_{1},x_{2}),\min(x_{1},x_{2}),\cdots,\min(x_{m-2},x_{m-1}),x_{m}]^{T} \tag{4b}\] For example, in the case where the output layer has dimension \(m=4\), we have \[\sigma([0,3,4,2]^{T})=[3,0,4,2]^{T}\ \ \sigma([5,3,2,4]^{T})=[5,3,4,2]^{T} \tag{4c}\] SpectralDense layers can now be defined as follows: _Definition 4_.: (Serrurier et al. (2021)) **SpectralDense layers** are dense layers such that 1) the largest singular value of \(W\) is 1, and 2) the activation function \(\sigma\) is the GroupSort function. Therefore, SpectralDense layers are similar to dense layers in terms of their structure, except that the activation function does not act component-wise and the weight matrices have a spectral norm of 1. The spectral norm of a matrix is equal to the largest singular value in its SVD; since the largest singular value of the weight matrix \(W\) is 1, the spectral norm \(\|W\|_{2}\) is also 1. The function \(\sigma:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) is also 1-Lipschitz continuous (with respect to the Euclidean norm), since its Jacobian has spectral norm 1 almost everywhere (everywhere except a set of measure 0); see Theorem 3.1.6 in Federer (2014). We therefore conclude that every SpectralDense layer is 1-Lipschitz continuous. Next, the definition of the class of LCNNs is given as follows. _Definition 5_.: Let \(\mathcal{LN}_{n}^{m}\) be the class of Lipschitz-constrained neural networks (LCNNs) as follows: \[\begin{array}{l}\mathcal{LN}_{n}^{m}:=\{\,f\ |\ f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\,,\ \exists j\in\mathbb{N}\text{ such that }f=W_{j+1}\,f_{j}\circ f_{j-1}\circ...\circ f_{2}\circ f_{1},\\ \text{ where }f_{i}=\sigma(W_{i}x+b_{i}),\text{ and }\|W_{i}\|_{2}=1,i=1,...,j\,\}\end{array} \tag{5}\] where \(\sigma\) is the GroupSort activation with group size 2. Thus, each LCNN in \(\mathcal{LN}_{n}^{m}\) consists of many SpectralDense layers (i.e., \(W_{i}\), \(i=1,...,j\)) composed together, with a final weight matrix \(W_{j+1}\) at the end. The spectral norm constraint is imposed for all weight matrices except the final weight matrix \(W_{j+1}\). Since the Lipschitz constant for each of the functions \(f_{i}\), \(i=1,...,j\), is bounded by \(1\), it is readily shown that for each neural network in \(\mathcal{LN}_{n}^{m}\), the Lipschitz constant is bounded by the spectral norm of the final weight matrix \(W_{j+1}\). This implies that we can control the Lipschitz constant of an LCNN by manipulating the spectral norm of the final weight matrix \(W_{j+1}\), and this can be done in a variety of ways, such as imposing the constraint for the final weight matrix as a whole, or restricting the absolute value of each entry in the final weight matrix during the training process. In the simulation study in Section 6, we impose the absolute value constraint on each entry of the final weight matrix to control the Lipschitz constant of the LCNN. _Remark 1_.: The SpectralDense layers adopted in this work differ from those used in Anil et al. (2019), since the matrix norm used in this work is the spectral norm, whereas the norm used by Anil et al.
is the \(\infty\)-norm. Although all matrix norms give rise to equivalent topologies (the same open sets), we use the spectral norm as it is directly related to the Jacobian of the function. Specifically, a well-known theorem by Rademacher (see Lemma 3.1.7 of Federer (2014) for a proof) states that if \(X\subset\mathbb{R}^{n}\) is open and \(f:X\to\mathbb{R}^{m}\) is \(L\)-Lipschitz continuous, then \(f\) is almost everywhere differentiable, and we have \[L=\sup_{x\in X}\|J_{f}(x)\|_{2} \tag{6}\] Therefore, it is observed that the Lipschitz constant using the Euclidean norm on the input space \(\mathbb{R}^{n}\) and output space \(\mathbb{R}^{m}\) is actually the supremum of the spectral norm of the Jacobian matrix of \(f\). If the \(\infty\)-norm were to be used, to the best of our knowledge, no such essential relationship that involves the Lipschitz constant has been proven to date. ### Robustness of LCNNs We now discuss how LCNNs using SpectralDense layers can resolve the issue of sensitivity to input perturbations. Let \(f:X\to\mathbb{R}^{m}\) be a neural network that has been trained using a training algorithm and \(X\subset\mathbb{R}^{n}\) be an open subset such that \(f\) is almost everywhere differentiable with Jacobian \(J_{f}\). Given a set of training data, at each point \(x\in X\), one plausible way to maximize the impact of the input perturbation is to traverse along the direction corresponding to the largest singular value (the spectral norm) of \(J_{f}(x)\) in its SVD, since this is the direction that leads to the largest variation in output (Szegedy et al. (2013); Goodfellow et al. (2014)). For any \(f\in\mathcal{LN}_{n}^{m}\), Eq. 5 shows that the Lipschitz constant of \(f\) is bounded by the spectral norm of the final weight matrix. If the spectral norm of the final matrix is small, the corresponding LCNN will have a small Lipschitz constant, making it difficult to perturb, even if we travel along the direction corresponding to the largest singular value of \(J_{f}\). Specifically, if the input perturbation is of size \(\delta\), the output change is at most \(L\times\delta\), where \(L\) is constrained to be a small value. Therefore, one plausible method to reduce the sensitivity of neural networks to input perturbations is to constrain the Lipschitz constant of the networks. However, since the Lipschitz constant affects the network capacity, controlling the upper bound of the Lipschitz constant in LCNNs could result in a reduced network capacity. To address this issue, we will demonstrate in the next section that the function class \(\mathcal{LN}_{n}^{m}\) is a universal approximator for any Lipschitz continuous target function. Additionally, a pertinent question that arises is whether, in practice, the Lipschitz constants of conventional FNNs (e.g., FNNs using conventional dense layers and ReLU activation functions) are indeed much larger than those in LCNNs. In the special case of FNNs with ReLU activation functions, Bhowmick et al. (2021) have designed a provably correct approximation algorithm known as Lipschitz Branch and Bound (LipBaB), which obtains the Lipschitz constant of such networks on a compact rectangular domain. In Section 6.7, we will demonstrate empirically that with noisy training data, FNNs with dense layers could have a Lipschitz constant several orders of magnitude higher than that of LCNNs.
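The following is a minimal numpy sketch, our illustration rather than the authors' implementation, of the pieces assembled in this section: the group-size-2 GroupSort activation of Definition 3, weight matrices rescaled to unit spectral norm as in Definition 4, and an empirical check of the sensitivity bound \(\|f(x+\delta)-f(x)\|\leq L\|\delta\|\), where \(L\) is the spectral norm of the final weight matrix as in Definition 5.

```python
import numpy as np

def groupsort2(x):
    """GroupSort with group size 2 (Eq. 4): sort each consecutive pair
    in descending order; an odd trailing coordinate passes through."""
    y = x.copy()
    n = (len(x) // 2) * 2
    pairs = y[:n].reshape(-1, 2)
    pairs = -np.sort(-pairs, axis=1)  # [max, min] per pair
    y[:n] = pairs.ravel()
    return y

def spectral_normalize(W):
    """Rescale W so that its largest singular value is exactly 1."""
    return W / np.linalg.norm(W, ord=2)

rng = np.random.default_rng(0)
# Three SpectralDense-style layers plus a final weight matrix W_{j+1}
# whose spectral norm (here 0.5) bounds the Lipschitz constant.
Ws = [spectral_normalize(rng.standard_normal((8, 3))),
      spectral_normalize(rng.standard_normal((8, 8))),
      spectral_normalize(rng.standard_normal((8, 8)))]
W_final = 0.5 * spectral_normalize(rng.standard_normal((2, 8)))

def lcnn(x):
    for W in Ws:
        x = groupsort2(W @ x)
    return W_final @ x

# Empirical sensitivity check: ||f(x + d) - f(x)|| <= L * ||d||.
L = np.linalg.norm(W_final, ord=2)
x = rng.standard_normal(3)
for _ in range(1000):
    d = 1e-2 * rng.standard_normal(3)
    assert np.linalg.norm(lcnn(x + d) - lcnn(x)) <= L * np.linalg.norm(d) + 1e-12
print("all perturbations bounded by L =", L)
```

In an actual training pipeline, the unit-spectral-norm constraint would have to be enforced throughout optimization (e.g., by renormalizing the weights after each gradient step), rather than applied once to random weights as done in this sketch.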
## 4 Universal Approximation Theorem for LCNNs This section develops the universal approximation theorem for LCNNs using SpectralDense layers, which demonstrates that LCNNs with a bounded Lipschitz constant can approximate any nonlinear function as long as the target function is Lipschitz continuous. Before we present the proof for vector-valued LCNNs that are developed for nonlinear systems with vector-valued outputs such as that of Eq. 1, we first develop a theorem that considers the approximation of real-valued functions. Then, the results for real-valued functions can be generalized to the multi-dimensional output case. We first define the real-valued function class as follows. \[\mathcal{LN}_{n}:=\{f\mid f:\mathbb{R}^{n}\rightarrow\mathbb{R},\,\exists j\in\mathbb{N},s.t.\,f=f_{j}\circ\cdots\circ f_{2}\circ f_{1},f_{i}=\sigma(W_{i}x+b_{i}),\|W_{i}\|_{2}=1,i=1,...,j\} \tag{7}\] where \(\sigma\) is the GroupSort activation with group size 2. The definition of Eq. 7 is similar to that of Eq. 5, except that the functions are real-valued, and the spectral norm for each weight matrix is 1, including the final weight matrix. Note that the final map \(f_{j}\) is an affine map, without any sorting, since the output of \(f_{j}\) is a single real number. It readily follows that any function in \(\mathcal{LN}_{n}\) is also 1-Lipschitz continuous since the spectral norm of the final weight matrix is one. Given a target function \(F:D\rightarrow\mathbb{R}\) where \(D\) is a compact and connected domain and \(F\) is Lipschitz continuous, we prove that real-valued functions from \(\mathcal{LN}_{n}\) are universal approximators of 1-Lipschitz functions, provided that we allow for an amplification of at most \(\sqrt{2}\) at the end. In principle, this implies that LCNNs can approximate any Lipschitz function (i.e., they are dense with respect to the uniform norm on a compact set). We will follow the notation in Anil et al. (2019), but with slight modifications. We first present the following definitions, which will be used in the proof of the universal approximation theorem. _Definition 6_.: Let \((X,d_{X})\) be a metric space. We use \(C_{L}(X,\mathbb{R})\) to denote the set of all 1-Lipschitz real-valued functions on \(X\). _Definition 7_.: Let \(A\) be a set of functions from \(\mathbb{R}^{n}\) to \(\mathbb{R}\), and let \(k\) be a real number. We define \(kA\) as follows. \[kA:=\{\;cf\mid|c|\leq k,\;f\in A\} \tag{8}\] _Definition 8_.: A **lattice** \(\mathcal{L}\) in \(C_{L}(X,\mathbb{R})\) is a set of functions that is closed under point-wise maximums and minimums, that is, \(\forall f,g\in\mathcal{L}\), \(\min(f,g),\max(f,g)\in\mathcal{L}\). The following restricted Stone-Weierstrass theorem allows us to approximate 1-Lipschitz continuous functions using lattices. **Theorem 1** (Restricted Stone-Weierstrass, Anil et al. (2019)).: _Let \((X,d_{X})\) be a compact metric space and \(\mathcal{L}\) be a lattice in \(C_{L}(X,\mathbb{R})\). Suppose that for all \(a,b\in\mathbb{R}\) and \(x,y\in X\) such that \(|a-b|\leq d_{X}(x,y)\), there exists an \(f\in\mathcal{L}\) such that \(f(x)=a\) and \(f(y)=b\). Then \(\mathcal{L}\) is dense in \(C_{L}(X,\mathbb{R})\) with respect to the uniform topology, that is, for every \(\epsilon>0\) and for every \(f\in C_{L}(X,\mathbb{R})\), there exists an \(\tilde{f}\in\mathcal{L}\) such that_ \[\sup_{x\in X}\,|f(x)-\tilde{f}(x)|<\epsilon \tag{9}\] The proof of Theorem 1 is given in Anil et al. (2019) and is omitted here.
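Before proceeding, a small numerical illustration of the lattice property in Definition 8 may help (this example is our addition, not part of the paper's argument): pointwise maxima and minima of 1-Lipschitz functions remain 1-Lipschitz, which is why lattices built from simple affine pieces can already produce kinked 1-Lipschitz functions such as \(|x|=\max(x,-x)\).

```python
import numpy as np

# Two 1-Lipschitz affine functions on [-1, 1].
f = lambda x: x           # slope 1
g = lambda x: -x          # slope -1
h = lambda x: np.maximum(f(x), g(x))  # pointwise max = |x|

# Empirical Lipschitz check for the pointwise maximum via finite differences.
xs = np.linspace(-1.0, 1.0, 2001)
slopes = np.abs(np.diff(h(xs)) / np.diff(xs))
print(slopes.max() <= 1.0 + 1e-9)  # True: max(f, g) is still 1-Lipschitz
```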
Based on Theorem 1, we develop the following theorem to prove that the LCNN networks constructed using SpectralDense layers can also serve as universal approximators, if we allow for an amplification of the output at the end. The proof uses Theorem 1 to show that \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) is a lattice. The proof techniques and structure are similar to those in Anil et al. (2019), while the key difference is the use of the SVDs of the weight matrices since we are using the spectral norm instead. **Theorem 2**.: _Let \(D\subset\mathbb{R}^{n}\) be a compact subset, and \(\mathcal{LN}_{n}\) be the set of LCNNs defined in Eq. 7. \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) is dense in \(C_{L}(D,\mathbb{R})\) with respect to the uniform topology, i.e., for every \(\epsilon>0\) and for every \(f\in C_{L}(D,\mathbb{R})\), there exists an \(\tilde{f}\in\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) such that_ \[\sup_{x\in D}|f(x)-\tilde{f}(x)|<\epsilon \tag{10}\] Proof.: Theorem 2 states that if we allow for an amplification of \(\sqrt{2}\) at the end, then the LCNNs using SpectralDense layers defined in Eq. 7 can approximate 1-Lipschitz continuous functions arbitrarily accurately. To prove Eq. 10, we first show that \(\mathcal{L}:=\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) satisfies the assumptions needed for the restricted Stone-Weierstrass theorem. We first show that for all \(a,b\in\mathbb{R}\) and \(x,y\in D\) such that \(|a-b|\leq\|x-y\|\), there exists an \(f\in\mathcal{L}\) such that \(f(x)=a\) and \(f(y)=b\). To construct such an \(f\), let \(f(v)=w^{T}(v-x)+a\) and choose \(w\in\mathbb{R}^{n}\) with \(\|w\|=1\) carefully so that this holds. We need to ensure that \(w^{T}(y-x)=b-a\) such that \(f(y)=b\). Therefore, we need to choose \(w\) such that the following equation holds. \[b-a=w^{T}(y-x)\leq\|w\|\cdot\|y-x\|=\|y-x\| \tag{11}\] We can first set \(w\) to be in the same direction as \(y-x\), and then gradually rotate \(w\) away from \(y-x\) (for example, using a suitable orthogonal matrix) so that we eventually have \(b-a=w^{T}(y-x)\). Next, we need to show that \(\mathcal{L}\) is a lattice. This is equivalent to showing that \(\mathcal{L}\) is closed under pointwise maximums and minimums by Definition 8. Following the same proof as in Anil et al. (2019), we assume that \(f,g\in\mathcal{L}\) are defined by their weights and biases: \[[W_{1}^{f},b_{1}^{f},W_{2}^{f},b_{2}^{f},\ldots,W_{d_{f}}^{f},b_{d_{f}}^{f}]~{}~{}[W_{1}^{g},b_{1}^{g},W_{2}^{g},b_{2}^{g},\ldots,W_{d_{g}}^{g},b_{d_{g}}^{g}] \tag{12}\] where \(d_{f}\) and \(d_{g}\) represent the depths of the networks \(f\) and \(g\), respectively. We assume without loss of generality that the two neural networks \(f\) and \(g\) have the same depth, i.e., \(d_{f}=d_{g}\). This is possible since if \(d_{f}>d_{g}\), one can pad the neural network with identity matrix weights and zero biases until they are of the same depth, and likewise for \(d_{f}<d_{g}\). We also assume without loss of generality that each of the weight matrices (except the final weight matrix) has an even number of rows. In the case where the neural network has a weight matrix with an odd number of rows, the weight matrix can be padded with a zero row under the last row and with a bias \(-M\) where \(M>0\) is sufficiently large in that row (this is possible since \(D\) is compact and by the extreme value theorem). This is to prevent a different sorting configuration of the output vector of that layer.
Then, in the next matrix, we add a column of zeros to remove the \(-M\) entry. We now construct a neural network \(h\) for \(\max(f,g)\) and \(\min(f,g)\) with new weights such that each of the weights satisfies \(\|W_{i}^{h}\|_{2}=1\) for \(i=1,\cdots,d_{f}\), and the scaling factor of \(\sqrt{2}\) will be applied later. We first construct a suitable neural network with the layers of \(f\) and \(g\) side by side, but with some modifications. Specifically, the first matrix \(W_{1}^{h}\) and the first bias \(b_{1}^{h}\) are designed as follows: \[W_{1}^{h}=c[W_{1}^{f},W_{1}^{g}]^{T}~{}~{}b_{1}^{h}=c[b_{1}^{f},b_{1}^{g}] \tag{13}\] where \(c\) is a positive constant chosen based on \(W_{1}^{f}\) and \(W_{1}^{g}\) to ensure that \(\|W_{1}^{h}\|_{2}=1\). From Eq. 13, it is shown that the output of the first layer is obtained by concatenating the outputs of the first layers of \(f\) and \(g\) together, multiplied by the positive constant \(c\). Note that \(\frac{1}{\sqrt{2}}\leq c\leq 1\) since each of \(W_{1}^{f}\) and \(W_{1}^{g}\) has spectral norm \(1\). Then, for the rest of the layers (\(i\geq 2\)), we define \[W_{i}^{h}=\begin{bmatrix}W_{i}^{f}&0\\ 0&W_{i}^{g}\end{bmatrix} \tag{14}\] To prove that \(\|W_{i}^{h}\|_{2}=1\), we use the singular value decomposition. \[W_{i}^{h} =\begin{bmatrix}W_{i}^{f}&0\\ 0&W_{i}^{g}\end{bmatrix}=\begin{bmatrix}U_{i}^{f}D_{i}^{f}{V_{i}^{f}}^{*}&0\\ 0&U_{i}^{g}D_{i}^{g}{V_{i}^{g}}^{*}\end{bmatrix} \tag{15a}\] \[=\begin{bmatrix}U_{i}^{f}&0\\ 0&U_{i}^{g}\end{bmatrix}\begin{bmatrix}D_{i}^{f}&0\\ 0&D_{i}^{g}\end{bmatrix}\begin{bmatrix}{V_{i}^{f}}^{*}&0\\ 0&{V_{i}^{g}}^{*}\end{bmatrix}\] (15b) \[=UDV^{T} \tag{15c}\] It is noted that \(\|W_{i}^{h}\|_{2}\) is simply the largest singular value of its singular value decomposition. On the right-hand side (RHS) of Eq. 15c, even if the matrix \(D\) is not rectangular block diagonal, we can permute the columns of \(D\) and the rows of \(V^{T}\) simultaneously, and then the rows of \(D\) and columns of \(U\) simultaneously, to obtain a new rectangular block diagonal matrix. Permuting the columns of \(U\) and the rows of \(V^{T}\) does not change the unitary property of these matrices. Therefore, the largest singular value of \(W_{i}^{h}\) is 1 since both \(D_{i}^{f}\) and \(D_{i}^{g}\) have largest singular values of 1, and therefore \(\|W_{i}^{h}\|_{2}=1\). We also take \[b_{i}^{h}=c[b_{i}^{f},b_{i}^{g}] \tag{16}\] for each of the biases. Finally, after passing through the GroupSort activation function, the output of the last layer is \[c[\max(f(x),g(x)),\min(f(x),g(x))]^{T} \tag{17}\] By passing Eq. 17 through the weight matrix \([1,0]\) or \([0,1]\), we obtain \(h(x)=c\max(f(x),g(x))\) or \(h(x)=c\min(f(x),g(x))\), respectively. Since we consider the set \(\mathcal{L}=\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\), and \(1\leq c^{-1}\leq\sqrt{2}\), we observe that \(c^{-1}\,h(x)=\max(f(x),g(x))\) or \(c^{-1}\,h(x)=\min(f(x),g(x))\). This proves that \(\max(f(x),g(x))\) and \(\min(f(x),g(x))\) are in \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\), which implies that \(\mathcal{L}\) is a lattice. Therefore, \(\sqrt{2}\mathcal{LN}_{n}\cap C_{L}(D,\mathbb{R})\) satisfies the assumptions needed for the restricted Stone-Weierstrass theorem in Theorem 1, and this completes the proof. Theorem 2 implies that for 1-Lipschitz target functions, if we allow for an amplification of \(\sqrt{2}\) at the last layer, then LCNNs using SpectralDense layers are universal approximators for the target function.
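As a quick numerical sanity check of the scaling step in the proof above (our own illustration with random matrices, not part of the paper), stacking two unit-spectral-norm matrices always produces a spectral norm between \(1\) and \(\sqrt{2}\), so the normalizing constant \(c\) in Eq. 13 indeed satisfies \(\frac{1}{\sqrt{2}}\leq c\leq 1\):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    Wf = rng.standard_normal((4, 3)); Wf /= np.linalg.norm(Wf, ord=2)
    Wg = rng.standard_normal((4, 3)); Wg /= np.linalg.norm(Wg, ord=2)
    s = np.linalg.norm(np.vstack([Wf, Wg]), ord=2)  # norm of the stacked matrix
    assert 1.0 - 1e-9 <= s <= np.sqrt(2) + 1e-9
    c = 1.0 / s                                     # rescaling constant of Eq. 13
    assert 1 / np.sqrt(2) - 1e-9 <= c <= 1.0 + 1e-9
print("c always lies in [1/sqrt(2), 1]")
```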
Similarly, LCNNs can approximate an \(L\)-Lipschitz continuous function so long as an amplification of \(\sqrt{2}L\) is allowed. The final amplification can be easily implemented by using a suitable weight matrix such as a constant multiple of the identity matrix as the final layer. _Remark 2_.: The above theorem can be generalized to regression problems from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\). Specifically, a similar result can be derived for LCNNs developed for \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) by approximating each component function \(f_{i}\) for \(i=1,...,m\) with a 1-Lipschitz neural network, and then padding the neural networks together to form a large neural network with \(m\) outputs. However, note that, as a result, the resulting approximating function might be \(\sqrt{m}\)-Lipschitz continuous. ## 5 Preventing Over-fitting and Improving Generalization with LCNNs In this section, we provide a theoretical argument using the empirical Rademacher complexity to show that LCNNs can prevent over-fitting and therefore generalize better than conventional FNNs, by computing the empirical Rademacher complexity of the LCNNs and comparing it with that of usual FNNs built with the same architecture (i.e., the same number of neurons in each layer and the same depth). Specifically, we develop a bound on the empirical Rademacher complexity (ERC) for any FNN that utilizes the GroupSort function (hereafter called GroupSort Neural Networks), and subsequently use this to obtain a bound for LCNNs using SpectralDense layers as an immediate corollary. Then, we compare this bound with the bound for the empirical Rademacher complexity of FNNs using 1-Lipschitz component-wise activation functions (e.g., ReLU) and show that the GroupSort NNs achieve a tighter generalization error bound. ### Assumptions and Preliminaries We first assume that the input domain is a bounded subset \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\), where for each \(x\in\mathcal{X}\), one has \(\|x\|\leq B\), and the output space is a subset of the vector space \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\). We also assume that each input-output pair in the dataset \((x,y)\) is drawn from some distribution \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\) with some probability distribution \(\mathbb{P}\). We assume that \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is a loss function that satisfies the following two properties: 1) there exists a positive constant \(M>0\) such that for all \(y,y^{\prime}\in\mathcal{Y}\), \(L(y,y^{\prime})\leq M\), and 2) for all \(y^{\prime}\in\mathcal{Y}\), the function \(y\to L(y,y^{\prime})\) is \(L_{r}\)-Lipschitz for some \(L_{r}>0\). Let \(h:\mathcal{X}\rightarrow\mathcal{Y}\) be a function that represents a neural network model (termed hypothesis) in a hypothesis class. _Definition 9_.: We define the **generalization error** as \[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}\left[L(h(x),y)\right]=\int_{\mathcal{X}\times\mathcal{Y}}L(h(x),y)\,\mathbb{P}(dx\times dy) \tag{18}\] _Definition 10_.: For any dataset \(S=\{s_{1},s_{2},\cdots,s_{m}\}\subset\mathcal{X}\times\mathcal{Y}\) with \(s_{i}=(x_{i},y_{i})\), we define the **empirical error** as \[\hat{\mathbb{E}}_{S}[L(h(x),y)]=\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i}) \tag{19}\] The generalization error is a measure of how well a hypothesis \(h:\mathcal{X}\rightarrow\mathcal{Y}\) generalizes from the training dataset to the entire domain being considered.
On the other hand, the empirical error is a measure of how well the hypothesis \(h\) performs on the available data points \(S\). _Remark 3_.: In this work, we use the \(\ell_{2}\) error for the loss function \(L\) in the NN training process, i.e., \(L(y,y^{\prime})=\|y-y^{\prime}\|^{2}\), which is locally Lipschitz continuous but not globally Lipschitz continuous. However, since we consider an input domain of \(D\times U\) which is a compact set, and the function \(\tilde{F}\) is also continuous, the range of \(\tilde{F}\) is also compact and bounded. As a result, the output domain \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) is bounded, and the assumptions for the loss function \(L(\cdot,\cdot)\) are satisfied using the \(\ell_{2}\) error. ### Empirical Rademacher Complexity Bound for GroupSort Neural Networks We first define the empirical Rademacher complexity of a real-valued function hypothesis class. _Definition 11_.: Given an input domain space \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\), suppose \(H\) is a class of real-valued functions from \(\mathcal{X}\) to \(\mathbb{R}\). Let \(S=\{x_{1},x_{2},\cdots,x_{m}\}\subset\mathcal{X}\), which is a set of samples from \(\mathcal{X}\). The **empirical Rademacher complexity** (ERC) of \(S\) with respect to \(H\), denoted by \(\mathcal{R}_{S}(H)\), is defined as: \[\mathcal{R}_{S}(H):=\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in H}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i}) \tag{20}\] where each of the \(\epsilon_{i}\) are i.i.d. Rademacher variables, i.e., with \(\mathbb{P}(\epsilon_{i}=1)=\frac{1}{2}\) and \(\mathbb{P}(\epsilon_{i}=-1)=\frac{1}{2}\). The ERC measures the richness of a real-valued function hypothesis class with respect to a probability distribution. A more complex hypothesis class with a larger ERC is likely to represent a richer variety of functions, but may also lead to over-fitting, especially in the presence of noise or insufficient data. The ERC is often used to obtain a probabilistic bound on the generalization error in statistical machine learning. Specifically, we first present the following theorem in Mohri et al. (2018) that obtains a probabilistic upper bound on the generalization error for a hypothesis class. **Theorem 3** (Theorem 3.3 in Mohri et al. (2018)).: _Let \(\mathcal{H}\) be a hypothesis class of functions \(h:\mathcal{X}\subset\mathbb{R}^{d_{x}}\rightarrow\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) be the loss function that satisfies the properties in Section 5.1. Let \(\mathcal{G}\) be a hypothesis class of loss functions associated with the hypotheses \(h\in\mathcal{H}\):_ \[\mathcal{G}:=\{g:(x,y)\to L(h(x),y)\,,\,h\in\mathcal{H}\} \tag{21}\] _For any set of \(m\) i.i.d. training samples \(S=\{s_{1},s_{2},\cdots,s_{m}\}\), \(s_{i}=(x_{i},y_{i})\), \(i=1,...,m\), drawn from a probability distribution \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\), for any \(\delta\in(0,1)\), the following upper bound holds with probability \(1-\delta\):_ \[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[L(h(x),y)]\leq\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i})+2\mathcal{R}_{S}(\mathcal{G})+3M\sqrt{\frac{\log\frac{1}{\delta}}{2m}} \tag{22}\] Eq. 22 in Theorem 3 explains why an overly large hypothesis class with a high degree of complexity could inevitably increase the generalization error. Specifically, if the hypothesis class is overly large, the first term on the RHS of Eq.
22 (i.e., the empirical error \(\hat{\mathbb{E}}_{S}[L(h(x),y)]=\frac{1}{m}\sum_{i}L(h(x_{i}),y_{i})\)) is expected to be sufficiently small, since one could find a hypothesis \(h\) that minimizes the training error using an appropriate training algorithm. However, the trade-off is that the ERC \(\mathcal{R}_{S}(\mathcal{G})\) will increase with the size of the hypothesis class. On the contrary, if the hypothesis class is limited, the model complexity represented by the ERC \(\mathcal{R}_{S}(\mathcal{G})\) is reduced, while the empirical error \(\hat{\mathbb{E}}_{S}[L(h(x),y)]\) is expected to increase. Since the ERC of the hypothesis class of loss functions \(\mathcal{R}_{S}(\mathcal{G})\) appears on the RHS of the inequality above, it implies that a lower ERC leads to a tighter generalization error bound. Therefore, in this section, we demonstrate that LCNNs using SpectralDense layers have a smaller ERC \(\mathcal{R}_{S}(\mathcal{G})\) than conventional FNNs using dense layers. Since \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) is a subset of a real vector space, while the definition of ERC in Definition 11 is with respect to a real-valued class of functions, to simplify the discussion, we can first apply the following contraction inequality. _Lemma 4_ (Vector Contraction Inequality, Maurer (2016)).: Let \(\mathcal{H}\) be a hypothesis class of functions from \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\). For any set of data points \(S=\{s_{1},s_{2},\cdots,s_{m}\}\subset\mathcal{X}\times\mathcal{Y}\) and \(L_{r}\)-Lipschitz loss function \(y\to L(y,y^{\prime})\) for some \(L_{r}>0\), we have \[\mathcal{R}_{S}(\mathcal{G})=\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}L(h(x_{i}),y_{i})\leq\sqrt{2}L_{r}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i}) \tag{23}\] where \(h_{k}\) is the \(k^{\text{th}}\) component function of \(h\in\mathcal{H}\) and each of the \(\epsilon_{ik}\) are i.i.d. Rademacher variables. The proof of the above inequality can be found in Maurer (2016), and is omitted here. The RHS of the inequality in Eq. 23 can be bounded by taking the sum out of the supremum: \[\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i})\leq\sum_{k=1}^{d_{y}}\mathop{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{k}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i}) \tag{24}\] where each \(\mathcal{H}_{k}\) is the real-valued function class that corresponds to the \(k^{\text{th}}\) component functions of the functions \(h\in\mathcal{H}\). The RHS of Eq. 24 is the sum of the Rademacher complexities of the hypothesis classes \(\mathcal{H}_{k}\). Therefore, to obtain an upper bound for \(\mathcal{R}_{S}(\mathcal{G})\), we can first consider the case of a real-valued function hypothesis class, and then extend the results to the multidimensional case by applying Eq. 24. Specifically, we first develop an ERC bound for GroupSort Neural Networks, which refers to any FNN that utilizes the GroupSort activation function. Since LCNNs using SpectralDense layers are FNNs that also utilize the GroupSort activation function, the ERC bound for LCNNs using SpectralDense layers will follow as an immediate corollary. The following definitions are first presented to define the classes of real-valued GroupSort neural networks and of real-valued LCNNs using SpectralDense layers.
_Definition 12_.: We use \(\mathcal{H}_{d}\) to denote the hypothesis class of GroupSort neural networks with depth \(d\) that map the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathbb{R}\): \[x\to W_{d}\,\sigma(W_{d-1}\,\sigma(\cdots\sigma(W_{1}x))) \tag{25}\] where each of the weight matrices \(W_{i}\in\mathbb{R}^{m_{i}\times n_{i}}\) has a bounded Frobenius norm, that is, \(\|W_{i}\|_{F}\leq R_{i}\) for some \(R_{i}\geq 0\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d-1\) except for \(n_{1}\), and \(\sigma\) is the GroupSort function with group size 2. _Definition 13_.: We use \(\mathcal{H}_{d}^{SD}\) to denote the hypothesis class of LCNNs using SpectralDense layers with depth \(d\) that map the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) to \(\mathbb{R}\): \[x\to W_{d}\,\sigma(W_{d-1}\,\sigma(\cdots\sigma(W_{1}x))) \tag{26}\] where each of the weight matrices satisfies \(\|W_{i}\|_{2}=1\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d-1\) except for \(n_{1}\), and \(\sigma\) is the GroupSort function with group size 2. Note that the key difference between \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\) is that \(\mathcal{H}_{d}\) is defined for any FNN using GroupSort activation functions, while \(\mathcal{H}_{d}^{SD}\) is defined for LCNNs that use the GroupSort activation function and satisfy \(\|W_{i}\|_{2}=1\). Additionally, the Frobenius norm is used in \(\mathcal{H}_{d}\), while the spectral norm is used in \(\mathcal{H}_{d}^{SD}\). Despite the differences between \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\), it will be demonstrated from the following lemmas and theorems that the two classes \(\mathcal{H}_{d}\) and \(\mathcal{H}_{d}^{SD}\) are highly related. _Remark 4_.: The bias term is ignored in Eq. 25 and Eq. 26 to simplify the formulation, as in principle, we can account for the bias term by padding the input with a vector of ones and appending a corresponding column to each of the \(W_{i}\). Additionally, we assume that \(m_{i}\) and \(n_{i}\) are even without any loss of generality, since this does not affect the expressiveness of the class of networks using SpectralDense layers, as shown in the proof of Theorem 2. Next, we develop a bound on the ERC of \(\mathcal{H}_{d}\), i.e., \(\mathcal{R}_{S}(\mathcal{H}_{d})\), and then the ERC bound for \(\mathcal{H}_{d}^{SD}\) follows as an immediate corollary. The main intuition in obtaining such an upper bound for \(\mathcal{R}_{S}(\mathcal{H}_{d})\) is to recursively "peel off" the weight matrices and activation functions. Such methods were used in Golowich et al. (2018); Neyshabur et al. (2015); Wu et al. (2021, 2022), where the activation function was applied element-wise. The key difficulty in our setting stems from the fact that the GroupSort activation function \(\sigma\) is not an element-wise function. To address this issue, we first represent the functions \(\max(a,b)\) and \(\min(a,b)\) as follows: \[\max(a,b)=\frac{1}{2}(a+b+|b-a|)\ \ \min(a,b)=\frac{1}{2}(a+b-|b-a|) \tag{27}\] Before we present the results for peeling off the weight matrices of FNNs, the following definition is first given and will be used in the proof of Lemma 5, which peels off one GroupSort activation function layer.
_Definition 14_.: For \(d\geq 1\), we define \(\tilde{\mathcal{H}}_{d}\) as the class of (vector-valued) functions on the input domain \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) of the form \[x\to\sigma(W_{d}\,\sigma(\cdots\sigma(W_{1}x))) \tag{28}\] where each of the weight matrices \(W_{i}\in\mathbb{R}^{m_{i}\times n_{i}}\) has a bounded Frobenius norm, that is, \(\|W_{i}\|_{F}\leq R_{i}\) for some \(R_{i}\geq 0\), \(m_{i}\) and \(n_{i}\) are even for all \(i=1,...,d\) except \(n_{1}\), and \(\sigma\) is the GroupSort function with group size \(2\). If \(d=0\), we define \(\tilde{\mathcal{H}}_{0}\) as a hypothesis class that contains only the identity map on \(\mathcal{X}\). Note that Definition 14 is very similar to Definition 12, but the last layer has been removed, so that the resultant hypothesis class is vector-valued. Subsequently, the following lemma provides a way to "peel off" layers using the GroupSort function, which is the main tool used in the derivation of the ERC for GroupSort NNs. _Lemma 5_.: Let \(\tilde{\mathcal{H}}_{d}\) be the vector-valued hypothesis class defined in Definition 14, with \(d\geq 1\), and suppose that \(\|W_{d}\|_{F}\leq R_{d}\). For any dataset with \(m\) data points, we have the following inequality: \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq 2R_{d}\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{29}\] Proof.: Letting \(w_{1}^{T},w_{2}^{T},\cdots,w_{k}^{T}\) represent the rows of \(W_{d}\), we have \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\sigma(W_{d}h(x_{i}))\right\| \tag{30a}\] \[=\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\sigma\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\| \tag{30b}\] By expanding the components of \(\sigma\) using the identities for the maximum and minimum in Eq. 27, Eq. 30 can be written as \[\operatorname*{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})+|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})-|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ \vdots\\ w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})+|w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\\ w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})-|w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{31}\] We can bound Eq. 31 using the triangle inequality, with \(A_{1}\) and \(A_{2}\) defined as follows.
\[A_{1}=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k-1}^{T}h(x_{i})\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\|+\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{2}^{T}h(x_{i})\\ w_{1}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\\ w_{k-1}^{T}h(x_{i})\end{bmatrix}\right\| \tag{32a}\] \[A_{2}=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ -|w_{1}^{T}h(x_{i})-w_{2}^{T}h(x_{i})|\\ \vdots\\ |w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\\ -|w_{k-1}^{T}h(x_{i})-w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{32b}\] We first bound \(A_{1}\) appropriately. By noting that the first term in Eq. 32a is equal to the second term, since we have only permuted the components, the following equation is derived for \(A_{1}\). \[A_{1}=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}w_{1}^{T}h(x_{i})\\ w_{2}^{T}h(x_{i})\\ \vdots\\ w_{k}^{T}h(x_{i})\end{bmatrix}\right\|=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i})\right\| \tag{33}\] The RHS of Eq. 33 can be bounded using strategies similar to those found in Lemma 3.1 of Golowich et al. (2018). Subsequently, we rewrite the expression inside the supremum and expectation as follows. \[\sqrt{\sum_{j=1}^{k}\|w_{j}\|^{2}\bigg{(}\sum_{i=1}^{m}\bigg{(}\epsilon_{i}\frac{w_{j}^{T}}{\|w_{j}\|}h(x_{i})\bigg{)}\bigg{)}^{2}} \tag{34}\] Note that Eq. 34 is maximized when one of the \(\|w_{j}\|=R_{d}\) and the rest of the \(\|w_{l}\|=0\) for \(l\neq j\) (this is simply because we maximize a positive linear function over \(\|w_{1}\|^{2},\|w_{2}\|^{2},\cdots,\|w_{k}\|^{2}\)). Therefore, it follows that \[\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i})\right\|=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|w\|\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}w^{T}h(x_{i})\leq\mathop{\mathbb{E}}_{\epsilon}\sup_{\|w\|\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\|w\|\bigg{\|}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\bigg{\|}=R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\bigg{\|}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\bigg{\|} \tag{35}\] The second term \(A_{2}\) can be rewritten in the following way. \[A_{2}=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{2m}\sum_{i=1}^{m}\epsilon_{i}\begin{bmatrix}|w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ |w_{1}^{T}h(x_{i})+w_{2}^{T}h(x_{i})|\\ \vdots\\ |w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})|\\ |w_{k-1}^{T}h(x_{i})+w_{k}^{T}h(x_{i})|\end{bmatrix}\right\| \tag{36}\] Note that the negative signs in even coordinates can be removed since we take the magnitude of the vector, and the replacement of differences by sums in the even rows \(w_{2},w_{4},\cdots,w_{k}\) is possible by the symmetry of the set \(\|W_{d}\|_{F}\leq R_{d}\). Then, we rewrite the inner expression of Eq.
36 as follows. \[\frac{1}{2m}\sqrt{\sum_{j=1}^{k/2}2\|w_{2j}+w_{2j-1}\|^{2}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{2j}+w_{2j-1})^{T}}{\|w_{2j}+w_{2j-1}\|}h(x_{i})\bigg{|}\bigg{)}^{2}} \tag{37}\] Using the triangle inequality property of norms, Eq. 37 can be bounded by \[\frac{1}{2m}\sqrt{\sum_{j=1}^{k/2}4\bigg{(}\|w_{2j}\|^{2}+\|w_{2j-1}\|^{2}\bigg{)}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{2j}+w_{2j-1})^{T}}{\|w_{2j}+w_{2j-1}\|}h(x_{i})\bigg{|}\bigg{)}^{2}} \tag{38}\] It is readily shown that Eq. 38 is maximized when \(\|w_{2j}\|^{2}+\|w_{2j-1}\|^{2}=R_{d}^{2}\) for some \(j\), by a similar argument to the one for \(A_{1}\). Thus, Eq. 38 can be further bounded by \[\frac{1}{2m}\sqrt{4R_{d}^{2}\bigg{(}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{2j}+w_{2j-1})^{T}}{\|w_{2j}+w_{2j-1}\|}h(x_{i})\bigg{|}\bigg{)}^{2}}=\frac{R_{d}}{m}\bigg{|}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{1}+w_{2})^{T}}{\|w_{1}+w_{2}\|}h(x_{i})\bigg{|}\bigg{|} \tag{39}\] Therefore, we finally derive the bound for \(A_{2}\) defined in Eq. 32b as follows. \[A_{2}\leq\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{R_{d}}{m}\bigg{|}\sum_{i=1}^{m}\epsilon_{i}\bigg{|}\frac{(w_{1}+w_{2})^{T}}{\|w_{1}+w_{2}\|}h(x_{i})\bigg{|}\bigg{|}\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\bigg{|}\sum_{i=1}^{m}\epsilon_{i}\frac{(w_{1}+w_{2})^{T}}{\|w_{1}+w_{2}\|}h(x_{i})\bigg{|}\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\frac{w_{1}+w_{2}}{\|w_{1}+w_{2}\|}\right\|\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{40}\] where the second inequality uses Talagrand's Contraction Lemma (a slightly different version of which can be found in Mohri et al. (2018)). Combining \(A_{1}\) and \(A_{2}\), we derive Eq. 29 as follows. \[\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq A_{1}+A_{2}\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|+R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|=2R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\tilde{\mathcal{H}}_{d-1}}\frac{1}{m}\left\|\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{41}\] This completes the proof. **Theorem 6**.: _Assume that \(d\geq 1\) and \(\mathcal{H}_{d}\) is the hypothesis class of real-valued functions defined in Definition 12. Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\)._
For any set of \(m\) training samples \(S=\{x_{1},x_{2},\cdots,x_{m}\}\), we have_
\[\mathcal{R}_{S}(\mathcal{H}_{d})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}R_{i} \tag{42}\]
_where \(\|W_{i}\|_{F}\leq R_{i}\) for each weight matrix in \(\mathcal{H}_{d}\)._

Proof.: Since the proof is very similar to that in Golowich et al. (2018), we provide a proof sketch only for clarity. Note that
\[\mathcal{R}_{S}(\mathcal{H}_{d})=\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}_{d}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})=\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\widetilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i}) \tag{43}\]
Since \(W_{d}\) has only 1 row, it follows that \(\|W_{d}\|_{2}=\|W_{d}\|_{F}\) by Lemma 7. By the Cauchy-Schwarz inequality, the RHS of Eq. 43 can be bounded by:
\[\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{F}\leq R_{d},h\in\widetilde{\mathcal{H}}_{d-1}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}W_{d}h(x_{i})\leq\mathop{\mathbb{E}}_{\epsilon}\sup_{\|W_{d}\|_{2}\leq R_{d},h\in\widetilde{\mathcal{H}}_{d-1}}\|W_{d}\|_{2}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\widetilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\| \tag{44}\]
By recursively applying Lemma 5, we derive the following inequality:
\[R_{d}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\widetilde{\mathcal{H}}_{d-1}}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}h(x_{i})\right\|\leq(2^{d-1}\Pi_{i=1}^{d}R_{i})\mathop{\mathbb{E}}_{\epsilon}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\| \tag{45}\]
The expected value can then be bounded using Jensen's inequality, noting that the cross terms vanish in expectation:
\[\mathop{\mathbb{E}}_{\epsilon}\left\|\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\|\leq\frac{1}{m}\sqrt{\mathop{\mathbb{E}}_{\epsilon}\left\|\sum_{i=1}^{m}\epsilon_{i}x_{i}\right\|^{2}}=\frac{1}{m}\sqrt{\mathop{\mathbb{E}}_{\epsilon}\sum_{i=1}^{m}\sum_{j=1}^{m}\epsilon_{i}\epsilon_{j}x_{j}^{T}x_{i}}=\frac{1}{m}\sqrt{\sum_{i=1}^{m}\|x_{i}\|^{2}}\leq\frac{1}{m}\sqrt{mB^{2}}=\frac{B}{\sqrt{m}} \tag{46}\]
Therefore, the following inequality is derived, and this completes the proof.
\[\mathcal{R}_{S}(\mathcal{H}_{d})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}R_{i} \tag{47}\]
Theorem 6 develops the upper bound for the ERC of GroupSort NNs. Based on the results derived in Theorem 6, we subsequently derive the bound for the ERC of LCNNs. We begin with a lemma that relates the Frobenius norm to the spectral norm.

_Lemma 7_.: Given a weight matrix \(W\in\mathbb{R}^{m\times n}\), the following inequality holds:
\[\|W\|_{2}\leq\|W\|_{F}\leq\min(m,n)\|W\|_{2} \tag{48}\]
The proof can be readily obtained based on the fact that \(\|W\|_{F}\) is the norm of the vector consisting of all singular values of \(W\), while the spectral norm is the largest singular value. Using this lemma, we develop the following bound for \(\mathcal{R}_{S}(\mathcal{H}_{d}^{SD})\), where \(\mathcal{H}_{d}^{SD}\) is the hypothesis class of LCNNs using SpectralDense layers of depth \(d\).

**Corollary 1**.: _Let \(\mathcal{H}_{d}^{SD}\) be the real-valued function hypothesis class defined in Definition 13 with \(d\geq 1\). Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\).
For any set of \(m\) training samples \(S=\{x_{1},x_{2},\cdots,x_{m}\}\), we have the following inequality:_
\[\mathcal{R}_{S}(\mathcal{H}_{d}^{SD})\leq\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d}\min(m_{i},n_{i})=\frac{B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i}) \tag{49}\]
_where \(m_{i}\) and \(n_{i}\) are the number of rows and columns of the \(i^{th}\) weight matrix \(W_{i}\)._

Proof.: The proof can be readily obtained using the fact that \(\|W_{i}\|_{2}=1\) for all functions in \(\mathcal{H}_{d}^{SD}\) and Lemma 7 (i.e., \(\|W_{i}\|_{F}\leq\min(m_{i},n_{i})\|W_{i}\|_{2}=\min(m_{i},n_{i})\)). Then, by substituting this inequality into Theorem 6, we derive the inequality in Eq. 49. The last equality immediately follows since \(m_{d}=1\), and this completes the proof.

_Remark 5_.: Note that the bound derived in Eq. 49 is a completely size-dependent bound for the ERC of \(\mathcal{H}_{d}^{SD}\). Therefore, once the architecture of the LCNNs using SpectralDense layers has been decided, the ERC of the set of LCNNs is bounded by a constant, which depends only on the number of neurons in each layer and, most importantly, not on the choice of weights in each weight matrix. Subsequently, using the vector contraction inequality in Lemma 4, we derive the following corollary that generalizes the results to the multi-dimensional output case.

**Corollary 2**.: _Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\). Let \(\mathcal{G}\) be the hypothesis class of loss functions defined in Eq. 21, where \(\mathcal{H}\) is the hypothesis class of functions from \(\mathcal{X}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) with each component function \(h_{k}\in\mathcal{H}_{d}^{SD}\) for \(k=1,...,d_{y}\), and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is an \(L_{r}\)-Lipschitz loss function that satisfies the properties in Section 5.1. Then we have_
\[\mathcal{R}_{S}(\mathcal{G})\leq\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i}) \tag{50}\]
_where \(m_{i}\) and \(n_{i}\) are the number of rows and columns in \(W_{i}\)._

Proof.: The proof of Eq. 50 can be readily obtained by applying Corollary 1 to Eq. 23 in Lemma 4 and using Eq. 24 as follows.
\[\begin{split}\mathcal{R}_{S}(\mathcal{G})&=\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}L(h(x_{i}),y_{i})\leq\sqrt{2}L_{r}\mathop{\mathbb{E}}_{\epsilon}\sup_{h\in\mathcal{H}}\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{d_{y}}\epsilon_{ik}h_{k}(x_{i})\\ &\leq\sqrt{2}L_{r}\sum_{k=1}^{d_{y}}\mathop{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{d}^{SD}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i})=\sqrt{2}L_{r}d_{y}\mathop{\mathbb{E}}_{\epsilon}\sup_{h_{k}\in\mathcal{H}_{d}^{SD}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{ik}h_{k}(x_{i})\\ &\leq\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d-1}\Pi_{i=1}^{d-1}\min(m_{i},n_{i})\end{split} \tag{51}\]
Finally, we develop the following theorem for the generalization error bound of LCNNs using SpectralDense layers as an immediate result of Corollary 2.

**Theorem 8**.: _Suppose that \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) is a bounded subset such that for all \(x\in\mathcal{X},\|x\|\leq B\). Let \(\mathcal{G}\) be the hypothesis class of loss functions defined in Eq. 21,
where \(\mathcal{H}\) is the hypothesis class of functions from \(\mathcal{X}\) to \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\) with each component function \(h_{k}\in\mathcal{H}_{d}^{SD}\) for \(k=1,...,d_{y}\), and \(L:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{\geq 0}\) is an \(L_{r}\)-Lipschitz loss function that satisfies the properties in Section 5.1. For any set of \(m\) i.i.d. training samples \(S=\{s_{1},s_{2},\cdots,s_{m}\}\) with \(s_{i}=(x_{i},y_{i})\), \(i\in[1,m]\), drawn from a probability distribution \(\mathcal{D}\subset\mathcal{X}\times\mathcal{Y}\), and for any \(\delta\in(0,1)\), the following upper bound holds with probability \(1-\delta\):_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[L(h(x),y)]\leq\frac{1}{m}\sum_{i=1}^{m}L(h(x_{i}),y_{i})+\frac{\sqrt{2}d_{y}L_{r}B}{\sqrt{m}}2^{d}\,\Pi_{i=1}^{d-1}\min(m_{i},n_{i})+3M\sqrt{\frac{\log\frac{1}{\delta}}{2m}} \tag{52}\]
_where \(m_{i}\) and \(n_{i}\) are the number of rows and columns in \(W_{i}\)._

Proof.: The proof of Eq. 52 follows immediately from substituting Eq. 50 (i.e., the upper bound for \(\mathcal{R}_{S}(\mathcal{G})\)) from Corollary 2 into Eq. 22 in Theorem 3.

### Comparison of Empirical Rademacher Complexity Bounds

The bound in Corollary 1 shows that the ERC of LCNNs using SpectralDense layers is bounded by a constant that depends only on the size of the network (i.e., the number of neurons in each layer and the depth of the network). However, if we consider the ERC of the class of conventional FNNs with 1-Lipschitz activation functions of depth \(d\), denoted by \(\mathcal{R}_{S}(\mathcal{H}_{d}^{D})\), we have the following bound by Golowich et al. (2018):
\[\mathcal{R}_{S}(\mathcal{H}_{d}^{D})\leq\frac{B\sqrt{2\log(2)d}}{\sqrt{m}}\,\Pi_{i=1}^{d}R_{i} \tag{53}\]
A slight advantage of this bound is its \(O(\sqrt{d})\) dependence on the depth, while the bound in Corollary 1 has an exponential dependence on \(d\). However, the main advantage of the bound in Corollary 1 is that it depends only on the size of the network, i.e., the number of neurons in each layer. For conventional FNNs with no constraints on the norms of the weight matrices, even though the training error could be rendered sufficiently small, the term \(\Pi_{i=1}^{d}R_{i}\) is not controlled, which could result in an arbitrarily large ERC for the class \(\mathcal{H}_{d}^{D}\) and, subsequently, a large generalization error. Furthermore, given an FNN using conventional dense layers and an LCNN using SpectralDense layers with the same architecture, we have demonstrated that the ERC of the LCNN is bounded by a fixed constant, while there is no such constraint on the ERC bound of the FNN. This is a tremendous advantage since all the parameters in Eq. 52 in Theorem 8 are fixed except the training sample size \(m\), as long as the target function is Lipschitz continuous and the neural network architecture is fixed. This implies a provably correct probabilistic guarantee that the generalization error is at most \(O(\frac{1}{\sqrt{m}})\). Therefore, it is demonstrated that LCNNs using SpectralDense layers can not only approximate a wide variety of nonlinear functions (i.e., the universal approximation theorem in Section 4), but also effectively mitigate over-fitting to data noise and exhibit better generalization properties, since LCNNs improve the robustness against data noise as compared to conventional FNNs and are developed with reduced model complexity.
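Since the bound of Eq. 52 is fully determined by the architecture once the data bound \(B\), the loss Lipschitz constant \(L_{r}\), and the sample size \(m\) are fixed, its complexity term can be evaluated in a few lines. The following is a minimal sketch in plain Python; the function name and the convention of passing layer widths are ours, purely for illustration:

```python
import math

def complexity_term(layer_widths, B, d_y, L_r, m):
    """Second term of Eq. 52: sqrt(2)*d_y*L_r*B/sqrt(m) * 2^d * prod_{i=1}^{d-1} min(m_i, n_i).
    layer_widths = [d_x, n_1, ..., n_{d-1}, 1] lists the neurons per layer, so
    the i-th weight matrix W_i has shape layer_widths[i] x layer_widths[i-1]."""
    d = len(layer_widths) - 1          # network depth (number of weight matrices)
    prod = 1
    for i in range(1, d):              # product over W_1, ..., W_{d-1}
        prod *= min(layer_widths[i], layer_widths[i - 1])
    return math.sqrt(2) * d_y * L_r * B / math.sqrt(m) * (2 ** d) * prod

# Illustration: a two-hidden-layer network with 40 neurons each (cf. Section 6.3)
print(complexity_term([4, 40, 40, 1], B=1.0, d_y=2, L_r=1.0, m=20000))
```

The bound is conservative, but the key point is visible directly in the code: nothing in it depends on the learned weight values, only on the layer sizes.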
## 6 Application of LCNNs to Predictive Control of a Chemical Process

In order to show the robustness of LCNNs, we develop LCNNs using SpectralDense layers and incorporate them into a model predictive controller (MPC). We will demonstrate that LCNNs using SpectralDense layers are able to accurately model the nonlinear dynamics of a chemical process and, furthermore, that they are able to prevent over-fitting in the presence of data noise.

### Chemical Process Description

The chemical process under consideration, which was first developed by Wu et al. (2019), is a non-isothermal, well-mixed CSTR that contains an exothermic second-order irreversible reaction that converts molecule \(A\) to molecule \(B\). There is a feed stream of \(A\) into the CSTR, as well as a jacket that either supplies heat to or cools the CSTR. We denote by \(C_{A}\) the concentration of \(A\) and by \(T\) the temperature of the CSTR. The CSTR dynamics can be modeled with the governing equations as follows:
\[\begin{split}\frac{dC_{A}}{dt}&=\frac{F}{V}(C_{A0}-C_{A})-k_{0}e^{-\frac{E}{RT}}C_{A}^{2}\\ \frac{dT}{dt}&=\frac{F}{V}(T_{0}-T)-\frac{\Delta H}{\rho_{L}C_{p}}k_{0}e^{-\frac{E}{RT}}C_{A}^{2}+\frac{Q}{\rho_{L}C_{p}V}\end{split} \tag{54}\]
where \(F\) is the feed flowrate, \(\Delta H\) is the molar enthalpy of reaction, \(k_{0}\) is the rate constant, \(R\) is the ideal gas constant, \(\rho_{L}\) is the fluid density, \(E\) is the activation energy, and \(C_{p}\) is the specific heat capacity. \(C_{A0}\) is the feed concentration of \(A\), and \(Q\) is the rate at which heat is transferred to the CSTR. The values of the process parameters used in this work are omitted, as they are exactly the same as those in Wu et al. (2019). The CSTR of Eq. 54 has an unstable steady state \([C_{As},T_{s},C_{A0_{s}},Q_{s}]=[1.95\text{ mol }/\text{ dm}^{3},402\text{ K},4\text{ mol }/\text{ dm}^{3},0\text{ kJ }/\text{ h}]\). We use \(x\) to denote the system states: \(x^{T}=[C_{A}-C_{As},T-T_{s}]\) and \(u\) to denote the manipulated inputs: \(u^{T}=[C_{A0}-C_{A0_{s}},Q-Q_{s}]\), so that the equilibrium point \((x_{s},u_{s})\) is located at the origin. Following the formulation presented in Section 2.2, after extensive numerical simulations, a Lyapunov function \(V(x):=x^{T}Px\) with \(P=\left[\begin{array}{cc}1060&22\\ 22&0.52\end{array}\right]\) was constructed. The region \(\Omega_{\rho}\) with \(\rho=372\) is found via open-loop simulations by sweeping through many feasible initial conditions within the domain space \(D\), such that for any initial state \(x_{0}\in\Omega_{\rho}\), the controller \(\Phi\) renders the origin of the state space exponentially stable.

### Data Generation

In order to train the LCNNs, we conducted open-loop simulations of the CSTR dynamics of Eq. 54 using many different possible control actions to generate the required dataset. Specifically, we performed a sweep of all possible initial states \(x_{0}\in\Omega_{\rho}\) and control actions \(u\in U\). The forward Euler method with step size \(h_{c}=10^{-5}\) hr was used to obtain the value of \(\tilde{F}(x_{0},u)\), that is, to deduce the state after \(\Delta=10^{-3}\) hr has passed. The inputs in the dataset are all such pairs \((x_{0},u)\), and the outputs are \(\tilde{F}(x_{0},u)\). We collected 20000 input-output pairs in the dataset, and split it into training (52.5 %), validation (17.5 %), and testing (30 %) datasets.
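To make the data-generation procedure concrete, a minimal sketch is given below (plain NumPy; the parameter-dictionary keys are our own naming, and the numerical parameter values, omitted above, must be taken from Wu et al. (2019)):

```python
import numpy as np

def cstr_rhs(x, u, p):
    """Right-hand side of Eq. 54 in deviation variables x = (C_A - C_As, T - T_s),
    u = (C_A0 - C_A0s, Q - Qs). p is a dict of process parameters."""
    C_A, T = x[0] + p["C_As"], x[1] + p["T_s"]
    C_A0, Q = u[0] + p["C_A0s"], u[1] + p["Q_s"]
    rate = p["k0"] * np.exp(-p["E"] / (p["R"] * T)) * C_A**2
    dCA = p["F"] / p["V"] * (C_A0 - C_A) - rate
    dT = (p["F"] / p["V"] * (p["T0"] - T)
          - p["dH"] / (p["rhoL"] * p["Cp"]) * rate
          + Q / (p["rhoL"] * p["Cp"] * p["V"]))
    return np.array([dCA, dT])

def one_step(x0, u, p, delta=1e-3, h=1e-5):
    """F~(x0, u): forward-Euler integration of Eq. 54 with step h over one
    sampling period delta (both in hr), holding u constant (sample-and-hold)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(round(delta / h))):
        x = x + h * cstr_rhs(x, u, p)
    return x
```

Sweeping \(x_{0}\) over \(\Omega_{\rho}\) and \(u\) over \(U\) and recording the pairs \(((x_{0},u),\tilde{F}(x_{0},u))\) then yields the 20000-sample dataset described above.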
Before the training process, the dataset was pre-processed by standard scalers to ensure that each variable has a variance of the same order of magnitude.

### Model Training With Noise-Free Data

For the prediction model \(\tilde{F}_{nn}\) that approximates the function \(\tilde{F}\), we first demonstrate that, in the absence of training data noise, LCNNs using SpectralDense layers can capture the dynamics of \(\tilde{F}\) well in the operating region \(\Omega_{\rho}\). Specifically, we used an LCNN with two SpectralDense layers of 40 neurons each, followed by a final dense layer with linear activation functions and weights whose absolute values are bounded by 1.0. The weight bound was implemented using the _max_norm_ constraint function provided by Tensorflow. The neural network was trained using the Adam Optimizer provided by Tensorflow. The loss function used to train the neural networks was the mean squared error (MSE), and the testing error for the LCNN reached \(2.83\times 10^{-5}\), which was considered sufficiently small for normalized training data.

### Lyapunov-based MPC

Subsequently, we incorporate the LCNN model into the design of Lyapunov-based MPC (LMPC). The control actions in LMPC are implemented using the sample-and-hold method, where \(\Delta\) is the sampling period. Let \(N\) be a positive integer that represents the prediction horizon of the MPC. The state at time \(t=k\Delta\) is denoted by \(x_{k}\), and the LMPC using the LCNN model corresponds to the optimization problem described below, following the notation in Wu et al. (2019):
\[\mathcal{J}=\min_{\tilde{u}}\sum_{i=1}^{N}L(\tilde{x}_{k+i},u_{k+i-1})\tag{55a}\]
s.t.
\[\tilde{x}_{t+1}=\tilde{F}_{nn}(\tilde{x}_{t},u_{t}),\ \ \forall t\in[k,k+N) \tag{55b}\]
\[u_{t}\in U,\ \ \forall t\in[k,k+N) \tag{55c}\]
\[\tilde{x}_{k}=x_{k} \tag{55d}\]
\[V(\tilde{F}_{nn}(x_{k},u_{k}))\leq V(\tilde{F}_{nn}(x_{k},\Phi(x_{k}))),\quad\text{if}\;\;x_{k}\in\Omega_{\rho}\setminus\Omega_{\rho_{nn}} \tag{55e}\]
\[V(\tilde{x}_{t})\leq\rho_{nn}\;\;\forall t\in[k,k+N],\;\;\text{if}\;\;x_{k}\in\Omega_{\rho_{nn}} \tag{55f}\]
where \(\tilde{u}=[u_{k},u_{k+1},u_{k+2},....,u_{k+N-1}]\), and \(\Omega_{\rho_{nn}}\) is a much smaller sublevel set than \(\Omega_{\rho}\). The state predicted by the LCNN model \(\tilde{F}_{nn}(\tilde{x}_{t},u_{t})\) is represented by \(\tilde{x}\). Eq. 55c imposes the input constraints, where \(U\) is the set of admissible control actions. The initial condition of the prediction model is obtained from the feedback state measurement at \(t=k\Delta\), as shown in Eq. 55d. The constraints of Eqs. 55e-55f guarantee closed-loop stability, i.e., they ensure that a state originally in the set \(\Omega_{\rho}\) will eventually converge to the much smaller sublevel set \(\Omega_{\rho_{nn}}\), provided that a sufficiently small \(\Delta>0\) is used such that the controller \(\Phi\), when applied with the sample-and-hold method, still guarantees convergence to the origin. Note that the key difference between the LMPC of Eq. 55 in this work and the one in our previous work Wu et al. (2019) is that the LCNN model, a feedforward neural network, is used as the underlying prediction model in Eq. 55b to predict the state one sampling time forward, while in Wu et al. (2019), a recurrent neural network was developed to predict the trajectory of future states within one sampling period.
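For concreteness, the following is a minimal sketch of the prediction model \(\tilde{F}_{nn}\) of Eq. 55b, as described in Section 6.3. The `SpectralDense` layer and `GroupSort2` activation are assumed to come from an open-source Lipschitz-layer package such as deel-lip (the text does not name the specific implementation used); the rest is standard Keras:

```python
import tensorflow as tf
from deel.lip.layers import SpectralDense      # 1-Lipschitz dense layer (assumed package)
from deel.lip.activations import GroupSort2    # GroupSort activation with group size 2

def build_lcnn(input_dim=4, output_dim=2):
    """LCNN of Section 6.3: two SpectralDense layers of 40 neurons each and a
    final linear dense layer whose weights are bounded via max_norm(1.0)."""
    inp = tf.keras.Input(shape=(input_dim,))   # concatenated (x, u) pair
    z = GroupSort2()(SpectralDense(40)(inp))
    z = GroupSort2()(SpectralDense(40)(z))
    out = tf.keras.layers.Dense(
        output_dim, activation="linear",
        kernel_constraint=tf.keras.constraints.max_norm(1.0))(z)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model
```

By construction, every hidden layer of this model has spectral norm 1, so the max-norm bound on the final layer directly controls the overall Lipschitz constant.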
Since the LCNN predicts only one sampling period ahead, the objective function of Eq. 55a and the constraints of Eqs. 55e-55f account only for the predicted states at the sampling instants. The above optimization problem is solved using IPOPT, a package for solving large-scale nonlinear and non-convex optimization problems. The LMPC of Eq. 55 for the CSTR example is designed with the following parameters: \(\rho=372\), \(\rho_{nn}=2\), \(N=2\), and \(L(x,u)=x^{T}Q_{1}x+u^{T}Q_{2}u\), where \(Q_{1}=\left[\begin{array}{cc}6.25\times 10^{-4}&0\\ 0&1\end{array}\right]\) and \(Q_{2}=\left[\begin{array}{cc}0.01&0\\ 0&4.0\times 10^{-12}\end{array}\right]\).

### Closed-Loop Simulation Results

The closed-loop simulation results under LMPC using LCNN models (termed LCNN-LMPC) are shown in Figure 1 and Figure 2, where the initial condition is \(x=(-1.65\text{ kmol}\;/\;\text{m}^{3}\;,\;72\text{ K})\). The closed-loop simulation under LMPC using the first-principles model of Eq. 54 is also carried out as the reference for comparison purposes. As shown in Figure 1, the state trajectory when the LCNN is used overlaps with the trajectory when the first-principles model is used in MPC, suggesting that the neural network modeling error is sufficiently small such that the state can be driven to the equilibrium point under MPC. The control actions taken by the two models are similar, as shown in Fig. 2, where slight deviations and oscillations occur under the MPC using the LCNN model. The oscillations in the manipulated input profile could be due to the following factors: 1) slight discrepancies between the LCNN and the first-principles model, and 2) the IPOPT software being trapped within local minima of the objective function, or a combination of the two (in fact, the first factor might inadvertently cause the second because of irregularities in the LMPC cost function \(L\) when the LCNN is used). Nevertheless, the LCNN is able to drive the state to the origin effectively, which shows that the LCNN can effectively model the nonlinear dynamics of the CSTR.

_Remark 6_.: The weight bound of 1.0 in the last layer was chosen to ensure that the class of functions that can be represented using this neural network architecture is sufficiently large to contain the target function. If a smaller weight bound is chosen, the target function might not be approximated well, since the Lipschitz constant of the network may fall below that of the target function.

### Robustness against Data Noise

As discussed in Section 5, one of the most crucial advantages of LCNNs using SpectralDense layers is their robustness against data noise and potential over-fitting during the training process. For example, when the number of neurons per hidden layer is exceptionally large, the neural network tends to overestimate the complexity of the problem and ultimately learns the data noise (see Ke and Liu (2008) and Sheela et al. (2013) for more details on this phenomenon). Therefore, in this subsection, we demonstrate that when Gaussian data noise is introduced into the training datasets, LCNNs outperform FNNs using conventional dense layers (termed "Dense FNNs"). We followed the data generation process in Section 6.2, and added Gaussian noise with a standard deviation of 0.1 or 0.2 to the training dataset. To show the difference between SpectralDense LCNNs and conventional Dense FNNs in robustness against over-fitting, we trained SpectralDense LCNNs and Dense FNNs with the same set of hidden layer architectures.
Specifically, we used two hidden layers in each type of neural network, with 640 or 1280 neurons each. The LCNNs were developed with SpectralDense hidden layers, while the conventional Dense FNNs were developed using the dense layers from Tensorflow with ReLU activation functions. Throughout the training process for all the networks, the same training hyperparameters were used, such as the number of epochs, the early-stopping callback, and the batch size used in the Adam Optimizer. Table 1 shows the testing errors for the various neural networks trained. As seen in Table 1, the testing errors of the conventional Dense FNNs are on the order of \(10^{-3}\) to \(10^{-2}\), which is significantly larger than those of the SpectralDense LCNNs (on the order of \(10^{-5}\) to \(10^{-4}\)). The increase in testing error for the Dense FNNs is due to over-fitting of the noise, since the testing error has the same order of magnitude as the variance of the Gaussian noise. Additionally, we integrated the LCNN and the Dense FNN with 640 neurons per layer, trained with Gaussian noise of standard deviation 0.1, into MPC, similar to the process described in Section 6.5. The results when both the LCNN and the Dense FNN are integrated into MPC are shown in Figures 3 and 4. From the plot of the Lyapunov function value \(V(x)\) in Figure 3, it is demonstrated that the Dense FNN is unable to effectively drive the state to the origin compared to the LCNN. This is readily observed because not only is the Lyapunov function value for the Dense FNN much higher, but there are also considerable oscillations in the function value, especially in the time frame between 0.15 hr and 0.3 hr. In addition, in Figure 4, it is observed that, while the predicted control actions of the LCNN are very similar to those of the first-principles model, the predicted control actions under the Dense FNN show large oscillations and differ significantly from those of the first-principles model. This large disparity between the Dense FNN and the first-principles model shows that the Dense FNN has become incapable of accurately modeling the process dynamics when embedded into MPC.

### Comparison of Lipschitz Constants between LCNNs and Dense FNNs

Additionally, we compare the Lipschitz constants of the LCNNs and the Dense FNNs developed for the CSTR of Eq. 54 and demonstrate that the conventional Dense FNNs have a much larger Lipschitz constant than the SpectralDense LCNNs as a result of noise over-fitting and the lack of constraints on the weight matrices. For the Dense FNNs, we used the LipBaB algorithm to obtain the Lipschitz constant, and for the SpectralDense LCNNs, we took the SVD of the last weight matrix and used its spectral norm as the upper bound on the Lipschitz constant. The results are shown in Table 2. The Dense FNNs have Lipschitz constants on the order of \(10^{2}\), compared to the SpectralDense LCNNs with Lipschitz constants on the order of \(10^{0}\). The comparison of Lipschitz constants demonstrates that the LCNNs are also provably less sensitive to input perturbations than the Dense FNNs, since their Lipschitz constant is several orders of magnitude lower.
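The Lipschitz-constant estimate for the SpectralDense LCNNs can be reproduced with a few lines of linear algebra. A minimal sketch (NumPy and Keras assumed; the helper name is ours):

```python
import numpy as np

def lipschitz_upper_bound(model):
    """Product of the spectral norms (largest singular values) of all weight
    matrices, which upper-bounds the Lipschitz constant when every activation
    is 1-Lipschitz. For a SpectralDense LCNN the hidden layers have spectral
    norm 1, so the bound reduces to the spectral norm of the last weight matrix."""
    bound = 1.0
    for w in model.get_weights():
        if w.ndim == 2:                       # skip bias vectors
            bound *= np.linalg.svd(w, compute_uv=False)[0]
    return bound
```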
Furthermore, the calculation of Lipschitz constants demonstrates that LCNNs are able to prevent over-fitting to data noise by maintaining a small Lipschitz constant, while conventional Dense FNNs with a large number of neurons could over-fit the data noise.

## 7 Conclusions

In this work, we developed LCNNs for a general class of nonlinear systems, and discussed how LCNNs using SpectralDense layers can mitigate sensitivity issues and prevent over-fitting to noisy data, from the perspectives of Lipschitz constants and generalization error. Specifically, we first proved the universal approximation theorem for LCNNs using SpectralDense layers to demonstrate that LCNNs retain expressive power for Lipschitz target functions despite having a small hypothesis class. Then, we derived the generalization error bound for SpectralDense LCNNs using the Rademacher complexity method. These results provide the theoretical foundation demonstrating that LCNNs can reduce input sensitivity due to their constrained Lipschitz constant and generalize better to prevent over-fitting. Finally, LCNNs using SpectralDense layers were integrated into MPC and applied to a chemical reactor example. The simulations show that the LCNNs effectively captured the process dynamics and outperformed conventional Dense FNNs in terms of smaller testing errors, higher prediction accuracy in MPC, and smaller Lipschitz constants in the presence of noisy training data.

## 8 Acknowledgments

Financial support from the NUS Start-up grant R-279-000-656-731 is gratefully acknowledged.
2305.13905
EfficientSpeech: An On-Device Text to Speech Model
State of the art (SOTA) neural text to speech (TTS) models can generate natural-sounding synthetic voices. These models are characterized by large memory footprints and a substantial number of operations due to the long-standing focus on speech quality with cloud inference in mind. Neural TTS models are generally not designed to perform standalone speech synthesis on resource-constrained edge devices without Internet access. In this work, an efficient neural TTS called EfficientSpeech that synthesizes speech on an ARM CPU in real time is proposed. EfficientSpeech uses a shallow non-autoregressive pyramid-structure transformer forming a U-Network. EfficientSpeech has 266k parameters and consumes only 90 MFLOPS, or about 1% of the size and amount of computation of modern compact models such as Mixer-TTS. EfficientSpeech achieves an average mel generation real-time factor of 104.3 on an RPi4. Human evaluation shows only a slight degradation in audio quality as compared to FastSpeech2.
Rowel Atienza
2023-05-23T10:28:41Z
http://arxiv.org/abs/2305.13905v1
# _EfficientSpeech_: An on-device Text to Speech Model

###### Abstract

State of the art (SOTA) neural text to speech (TTS) models can generate natural-sounding synthetic voices. These models are characterized by large memory footprints and a substantial number of operations due to the long-standing focus on speech quality with cloud inference in mind. Neural TTS models are generally not designed to perform standalone speech synthesis on resource-constrained edge devices without Internet access. In this work, an efficient neural TTS called _EfficientSpeech_ that synthesizes speech on an ARM CPU in real time is proposed. _EfficientSpeech_ uses a shallow non-autoregressive pyramid-structure transformer forming a U-Network. _EfficientSpeech_ has 266k parameters and consumes only 90 MFLOPS, or about 1% of the size and amount of computation of modern compact models such as _Mixer-TTS_. _EfficientSpeech_ achieves an average mel generation real-time factor of 104.3 on an RPi4. Human evaluation shows only a slight degradation in audio quality as compared to _FastSpeech2_.

Rowel Atienza Electrical and Electronics Engineering Institute and AI Graduate Program, University of the Philippines [email protected] TTS, FLOPS, parameters, RTF, CMOS

## 1 Introduction

Voice is one of our primary means of communication. If our devices can also speak, a new type of natural interaction with electronic gadgets and appliances is feasible. Even better, if devices can perform standalone voice synthesis without relying on cloud services, new applications and advantages will emerge. For instance, a WiFi router can tell us what went wrong when there is no Internet access. A smart camera installed in a remote area can warn intruders. These useful actions are done by the device in an autonomous manner and without relying on cloud services. As added benefits of on-device voice synthesis, privacy issues are mitigated, robustness is enhanced, and high responsiveness, low latency and availability can be guaranteed. In terms of natural-sounding voice generation, neural TTS systems such as _FastSpeech2_[1], _FastPitch_[2], _Tacotron2_[3], _Deep Voice 3_[4], _TransformerTTS_[5] and _Mixer-TTS_[6] dominate state-of-the-art performance in MOS scores. These neural TTS models are designed with AI accelerators such as GPUs or TPUs in mind. There is little emphasis on investigating the feasibility of standalone on-device model inference. In particular, autoregressive models like _Tacotron2_, _Deep Voice 3_ and _TransformerTTS_ are inherently slow. While non-autoregressive neural TTS models such as _FastSpeech2_ and _Mixer-TTS_ are fast and have voice quality comparable to their autoregressive counterparts, these models have big footprints, making them unsuitable for memory-constrained edge devices. Recent attempts to build on-device neural TTS include _On-device TTS_[7], _LiteTTS_[8], _PortaSpeech_[9], _LightSpeech_[10] and _Nix-TTS_[11]. _On-device TTS_ is slow and resource intensive since it is a modified _Tacotron2_ for mel spectrogram generation and uses _WaveRNN_ as its vocoder. Though _LiteTTS_ can generate voice from text, it is still resource intensive with 13.4M parameters. In addition, two-stage TTS models are still better in terms of both training stability and synthetic voice quality. _PortaSpeech_ uses VAE and Flow models to generate mel spectrograms. The smallest version has 6.7M parameters and is characterized by noticeable voice quality deterioration.
_LightSpeech_ uses neural architecture search (NAS) to reduce the model size of _FastSpeech2_. While the resulting model is small at 1.8M parameters, the NAS process is notoriously compute intensive with a huge environmental impact. Furthermore, NAS is susceptible to overfitting. A model architecture optimized on one language dataset (e.g. English) is not guaranteed to work on another (e.g. Korean). _Nix-TTS_ applied knowledge distillation to reduce the size of _VITS_[12] to 5.2MB by separately training a text-to-latent encoder and a latent-to-waveform decoder. While there is a significant reduction in size, the decoder is single-use or encoder specific, unlike general purpose vocoders such as _HiFiGAN_[13], which is available in a sub-1M-parameter version for edge devices. Ironically, while the above-mentioned models promote on-device TTS, there was no validation done on ARM CPUs except for _Nix-TTS_, which used a compiled ONNX model. Furthermore, most of these models have no publicly available implementations. Thus, reproducibility, fair comparison and analysis are difficult to perform. In this paper, _EfficientSpeech_, a natural-sounding TTS model suitable for edge devices, is proposed. _EfficientSpeech_ uses a shallow U-Network [14] pyramid-transformer phoneme encoder and a shallow transposed-convolutional block as the mel spectrogram decoder. _EfficientSpeech_ has only 266k parameters, about 15% of the size of _LightSpeech_ or 0.8% of _FastSpeech2_. _EfficientSpeech_ consumes only 90 MFLOPS to generate 6 sec of mel spectrogram. Using the compact version of HiFiGAN [13], the total parameter count is 1.2M, or 22% of the text-to-waveform _Nix-TTS_. Using HiFiGAN as vocoder, it runs at an RTF of 1.7 for voice generation on an RPi4. Without the vocoder overhead, mel spectrogram generation runs at an RTF of 104.3. _EfficientSpeech_ achieves a competitive CMOS of -0.14 when trained on the LJSpeech dataset [15] and evaluated against _FastSpeech2_. Due to its small size, _EfficientSpeech_ can be trained on a single GPU in 12hrs.

## 2 Model Architecture

Figure 1 shows the model architecture of _EfficientSpeech_. The phoneme sequence \(\mathbf{x}_{phone}\in\mathbb{R}^{N\times d}\) is an embedding of the input text phonemes. All convolutional layers are 1D. \(N\) is the variable phoneme sequence length while \(d=128\) is the embedding size. The _Phoneme Encoder_ is made of 2 transformer blocks. Each block is made of a depth-wise separable convolution for feature merging, _Self-Attention_ between merged features and _Mix-FFN_ for non-linear feature extraction. _Mix-FFN_ is similar to a typical transformer [16] _FFN_ except for an additional convolution layer and the use of GeLU [17] activation between two linear layers. Layer Normalization (_LN_) [18] is applied after _Self-Attention_ and _Mix-FFN_. Both _Self-Attention_ and _Mix-FFN_ use residual connections for fast convergence. The first transformer block retains the sequence length while reducing the feature dimension to \(\frac{d}{4}\). The second transformer block reduces the sequence length by half while doubling the feature dimension. Each transformer block output feature is upsampled using a linear layer and a transposed convolutional layer. An identity layer replaces the transposed convolution if the target feature shape of \(N\times\frac{d}{4}\) is already in place. Both features are then fused together to form the final phoneme features.
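To illustrate the depth-wise separable 1D convolution used throughout the encoder and decoder, a minimal PyTorch sketch is given below (the class and argument names are ours and do not necessarily match the released implementation):

```python
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depth-wise separable 1D convolution: a per-channel (depth-wise)
    convolution followed by a 1x1 point-wise convolution, which needs far
    fewer parameters and FLOPS than a full Conv1d."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):   # x: (batch, channels, sequence length)
        return self.pointwise(self.depthwise(x))
```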
This U-Network [14] style of architecture was inspired by _SegFormer_[19] for semantic segmentation in computer vision. Reducing the feature dimension and sequence length lowers the FLOPS and the number of parameters of the model. The _Acoustic Features and Decoders_ block borrows the idea from the _Variance Adaptor_ of _FastSpeech2_. It forces the network to predict the _Energy_: \(\mathbf{y}_{e}\), _Pitch_: \(\mathbf{y}_{p}\) and _Duration_: \(\mathbf{y}_{d}\). The difference in our implementation is that instead of predicting the acoustic parameters in series, _EfficientSpeech_ generates them in parallel, which results in faster inference. The predicted values of _Energy_: \(\mathbf{y}_{e}\), _Pitch_: \(\mathbf{y}_{p}\) and _Duration_: \(\mathbf{y}_{d}\) are generated by 2 blocks of _Conv-LN-ReLU_ and a final linear layer (with _ReLU_ for duration to ensure positive values). The binned energy and pitch features are embedded at the last layer to produce _Energy_: \(\mathbf{z}_{e}\) and _Pitch_: \(\mathbf{z}_{p}\). Meanwhile, _Duration_: \(\mathbf{z}_{d}\) is extracted before the _ReLU_ activation. At the _Features Fuser and Up Sampler_ block, all acoustic features are reused and fused together with the phoneme features. The fused features are then up sampled to the correct mel sequence length \(M\) using the predicted _Duration_: \(\mathbf{y}_{d}\). The last stage is the _Mel Spectrogram Decoder_. It is made of 2 blocks of a linear layer and two layers of depth-wise separable convolution. Each layer uses _Tanh_ activation followed by _LN_.

Figure 1: Model architecture of _EfficientSpeech_. The phoneme encoder is made of two transformer encoder blocks fused with up-sampled features, resembling a U-Net. _EfficientSpeech_ predicts the acoustic features in parallel. Acoustic features are merged with phoneme features and up-sampled for mel-spectrogram decoding, which is made of two blocks.

### Model Training

The dataset used for training is LJSpeech [15], which is made of \(13,100\) audio clips with corresponding text transcripts. \(12,588\) samples are set aside for training while \(512\) clips are for testing. The phoneme sequence is generated using _g2p_[20], an open-source English grapheme (spelling) to phoneme (pronunciation) converter. The waveform is transformed into a mel spectrogram with window and FFT lengths of \(1,024\), hop length of \(256\) and sampling rate of \(22,050\). The resulting mel spectrogram has \(80\) channels. Montreal Forced Aligner (MFA) [21] is used to establish the target phoneme duration. Pitch and energy ground truth values are computed using STFT and the WORLD vocoder [22], respectively. The total loss function is shown in Equation 1. The mel spectrogram loss function \(\mathcal{L}_{mel}\) is _L1_ with \(\alpha=10\). \(MSE\) is used for the _Pitch_: \(\mathcal{L}_{p}\), _Energy_: \(\mathcal{L}_{e}\), and _Duration_: \(\mathcal{L}_{d}\) loss functions. \(\beta=2\), \(\gamma=2\) and \(\lambda=1\).
\[\mathcal{L}=\alpha\mathcal{L}_{mel}+\beta\mathcal{L}_{p}+\gamma\mathcal{L}_{e}+\lambda\mathcal{L}_{d}. \tag{1}\]
The _EfficientSpeech_ model is trained for \(5,000\) epochs. Batch size is \(128\). The optimizer is AdamW [23] with a learning rate of \(0.001\), cosine learning rate decay and warm up of \(50\) epochs.
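A minimal sketch of the total loss in Eq. 1 is given below (PyTorch assumed; the padding masks that a real implementation would need for variable-length batches are omitted for brevity):

```python
import torch.nn.functional as F

def total_loss(pred, target, alpha=10.0, beta=2.0, gamma=2.0, lam=1.0):
    """Eq. 1: L1 on mel spectrograms, MSE on pitch, energy, and duration.
    `pred` and `target` are dicts of aligned tensors (names are ours)."""
    return (alpha * F.l1_loss(pred["mel"], target["mel"])
            + beta * F.mse_loss(pred["pitch"], target["pitch"])
            + gamma * F.mse_loss(pred["energy"], target["energy"])
            + lam * F.mse_loss(pred["duration"], target["duration"]))
```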
## 3 Experimental Results

The _EfficientSpeech_ evaluation is not only in terms of the generated speech quality but also its trade-off with respect to the number of parameters, the amount of computation as measured by floating point operations (FLOPS), and speed or throughput in terms of latency. A comprehensive benchmark enables us to see the overall model performance as a function of memory, computational budget and time [24], instead of focusing only on selected favorable metrics. The number of parameters is commonly used as a proxy for the amount of memory needed by the model during execution. FLOPS reflects the number of Fused Multiply-Add (FMA) operations needed to complete an inference. For a variable input text sequence length, as in TTS, FLOPS is measured using 128 randomly sampled text inputs from the test split. FLOPS increases with input text length. Latency is measured in terms of the number of seconds of voice generated per second, or the real-time factor (RTF). The inverse of the RTF, the time needed to generate 1 sec of voice, can also be used, but it leads to small fractional numbers that are less intuitive to interpret. To focus on the speed of _EfficientSpeech_, the mel spectrogram real-time factor (mRTF) is introduced. fvcore[25] is used to compute the number of parameters and FLOPS. Time measurements use the CPU wall clock. Table 1 shows the number of parameters and the relative footprint of _EfficientSpeech_ in comparison with state-of-the-art mel spectrogram generators. _EfficientSpeech_ is tiny at 266k parameters, leading to a very small number of FLOPS as shown in Table 2. The effect of the small number of parameters and FLOPS is fast mel spectrogram generation, reaching an mRTF of 953.3 on a V100 GPU as shown in Table 3. The speed is more evident on an RPi4 ARM CPU, where _EfficientSpeech_ reaches an mRTF of 104.3, which is \(20.1\times\) faster than _FastSpeech2_. For _Tacotron2_ and _MixerTTS_, the pre-trained versions provided by NVIDIA NeMo [26] with HiFiGANv1 were evaluated. For speech generation, both models are unable to run with RTF \(\geq\) 1.0 on the ARM CPU of the RPi4. Furthermore, NeMo employed mixed-precision training and other optimizations, providing a significant acceleration on GPUs. Table 5 shows the CMOS [27] as evaluated by 15 participants with high English listening comprehension. The synthesized speech waveforms are from the test split. Both _EfficientSpeech_ and _FastSpeech2_ used the small version of the off-the-shelf HiFiGANv2 with 0.9M parameters. In terms of audio quality, _EfficientSpeech_ outputs suffer only a slight degradation in spite of the model's small size. For reference, the published CMOS score of _LightSpeech_ as compared to _FastSpeech2_ is also shown. However, note that the samples used to obtain this score are not available.

\begin{table} \begin{tabular}{l|r|r} \hline \hline & \# Parameters & ES Relative \\ Model & (M)\(\downarrow\) & \# Parameters \\ \hline _EfficientSpeech_ (ES) & **0.27** & – \\ _FastSpeech2[1]_ & \(30.81\) & \(0.86\%\) \\ _Tacotron2[3]_ & \(23.81\) & \(1.12\%\) \\ _MixerTTS[6]_ & \(20.06\) & \(1.33\%\) \\ _LightSpeech[10]_ & \(1.80\) & \(14.78\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: The number of parameters in different mel spectrogram generator models. _LightSpeech_ is based on published data.

\begin{table} \begin{tabular}{l|r|r} \hline \hline & & ES Relative \\ Model & GFLOPS \(\downarrow\) & GFLOPS \\ \hline _EfficientSpeech_ (ES) & **0.09** & – \\ _FastSpeech2[1]_ & \(15.87\) & \(0.57\%\) \\ _Tacotron2[3]_ & \(16.20\) & \(0.56\%\) \\ _MixerTTS[6]_ & \(10.29\) & \(0.87\%\) \\ _LightSpeech[10]_ & \(0.76\) & \(11.84\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Amount of computation in terms of GFLOPS for different mel spectrogram generator models. Average voice length is 6 sec. _LightSpeech_ is based on published data for 9 sec of speech.

## 4 Discussion

The RTF slowdown from Table 3 to Table 4 can be attributed to the inefficient vocoder. At an mRTF of 104.3 on the RPi4, _EfficientSpeech_ has significant headroom to speed up voice generation given a counterpart lightweight vocoder. In the experimental setup, the HiFiGAN consumes 5.0 GFLOPS, while the _EfficientSpeech_ model overhead is only 0.09 GFLOPS. Meanwhile, the majority of SOTA mel generator models use up most of the RPi4 Model B's 13.5 to 32 GFLOPS (estimates vary). The computational performance of the low-cost BCM2835 SoC (ARMv6, 256MB to 512MB RAM) used in the RPi Zero, A and B is about 0.2 to 0.3 GFLOPS, giving _EfficientSpeech_ enough leeway but not the vocoder. The RPi3 Model B (BCM2837/B0 SoC, ARMv7/8, 1GB RAM) has a computing performance of about 3.6 to 6.2 GFLOPS. The RPi2 Model B (BCM2836 and BCM2837 SoCs, ARMv7, 1GB RAM) has about 1.5 to 4.4 GFLOPS. Theoretically, a sub-0.1 GFLOPS vocoder would enable wide adoption of neural TTS such as _EfficientSpeech_ on many low-cost and low-power devices. A sub-1 GFLOPS vocoder can already broaden the device coverage of neural TTS to the RPi2. At 266k parameters in 16-bit floating point, the footprint of _EfficientSpeech_ is about 532kB, leaving enough RAM space to store the results of intermediate layers even on low-memory 256MB SoCs. Note that although the number of model parameters and FLOPS have an impact on RTF, there are other factors that may contribute to latency. For instance, a model architecture that has dense skip connections has inherent delays in the forward propagation due to buffering. Models with many layers are slow due to the increasing number of forward propagation steps. Feature dimension mismatches, normalization layers and complex activation functions can also cause slow model inference.

## 5 Conclusion

The quality of voice synthesis improves as the model size increases. _EfficientSpeech_ code and pre-trained weights are available on GitHub for: Tiny (266k), Small (952k) and Base (4M). See: [https://github.com/roatienza/efficientspecch](https://github.com/roatienza/efficientspecch)

## 6 Acknowledgement

Project funding by Rowel Atienza through Sibyl.AI. Conference attendance funding by ERDT-FRDG.
\begin{table} \begin{tabular}{l|r|r|r|r|r|r} \hline \hline & RTF & ES Relative & RTF & ES Relative & RTF & ES Relative \\ Model & V100 \(\uparrow\) & Speed-up & Xeon 2.2G \(\uparrow\) & Speed-up & ARM 1.5G \(\uparrow\) & Speed-up \\ \hline _EfficientSpeech_ (ES) & \(\mathbf{363.0}\) & \(-\) & \(\mathbf{24.1}\) & \(-\) & \(\mathbf{1.7}\) & \(-\) \\ _FastSpeech2[1]_ & \(66.9\) & \(5.4\times\) & \(11.9\) & \(2.0\times\) & \(1.3\) & \(1.3\times\) \\ _Tacotron2[3]_ & \(7.7\) & \(47.3\times\) & \(1.0\) & \(24.9\times\) & \(0.1\) & \(12.4\times\) \\ _MixerTTS[6]_ & \(56.6\) & \(6.4\times\) & \(6.4\) & \(3.8\times\) & \(0.2\) & \(6.9\times\) \\ \hline \hline \end{tabular} \end{table} Table 4: RTF is the average number of seconds of speech divided by the waveform generation time for 128 samples from the test split. See Table 3 for the hardware specifications. No data available for _LightSpeech_.

\begin{table} \begin{tabular}{l|r|r|r|r|r|r} \hline \hline & mRTF & ES Relative & mRTF & ES Relative & mRTF & ES Relative \\ Model & V100 \(\uparrow\) & Speed-up & Xeon 2.2G \(\uparrow\) & Speed-up & ARM 1.5G \(\uparrow\) & Speed-up \\ \hline _EfficientSpeech_ (ES) & \(\mathbf{953.3}\) & \(-\) & \(\mathbf{470.2}\) & \(-\) & \(\mathbf{104.3}\) & \(-\) \\ _FastSpeech2[1]_ & \(371.3\) & \(2.6\times\) & \(64.7\) & \(7.3\times\) & \(5.2\) & \(20.1\times\) \\ _Tacotron2[3]_ & \(8.3\) & \(114.7\times\) & \(1.2\) & \(379.4\times\) & \(0.2\) & \(462.2\times\) \\ _MixerTTS[6]_ & \(204.9\) & \(4.7\times\) & \(55.2\) & \(8.5\times\) & \(2.9\) & \(36.5\times\) \\ _LightSpeech[10]_ & \(-\) & \(-\) & \(107.5\) & \(4.4\times\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 3: mRTF is the average number of seconds of speech divided by the mel generation time for 128 samples from the test split. _LightSpeech_ is from published data on a Xeon 2.6GHz and was not tested on other processors. The benchmarks were done on an NVIDIA V100 32GB, an Intel Xeon CPU E5-2650 v4 @ 2.20GHz and a Raspberry Pi 4 Model B BCM2711 Quad Cortex A72 (ARMv8) 64-bit 1.5GHz.

\begin{table} \begin{tabular}{l|r} \hline Model & CMOS\(\uparrow\) \\ \hline _FastSpeech2[1]_ & \(0.0\) \\ _EfficientSpeech_ & \(-0.14\) \\ _LightSpeech[10]_ & \(0.04\) \\ \hline \hline \end{tabular} \end{table} Table 5: The CMOS between _FastSpeech2_ and _EfficientSpeech_. For reference, we include the published results of _LightSpeech_.
2310.19720
Quantum Oscillation Signatures of Fermi Arcs in Tunnel Magnetoconductance
Fermi-arc surface states of Weyl semimetals exhibit a unique combination of localization to a surface and connectivity to the bulk Weyl fermions that can move along the localization direction. We predict anomalous quantum-oscillation signatures of Fermi arcs in the tunnel magnetoconductance across an interface between two Weyl semimetals. These oscillations stem from a momentum-space analog of Aharonov-Bohm interference of electrons moving along the interface Fermi arcs, driven by an external magnetic field normal to the interface. The Fermi arcs' connectivity to the bulk enables their characterization via transport normal to the interface, while their localization manifests in a strong field-angle anisotropy of the oscillations. This combination distinguishes these anomalous oscillations from conventional Shubnikov-de Haas oscillations and makes them identifiable even in complex oscillation spectra of real materials.
Adam Yanis Chaou, Vatsal Dwivedi, Maxim Breitkreiz
2023-10-30T16:48:10Z
http://arxiv.org/abs/2310.19720v1
# Quantum Oscillation Signatures of Fermi Arcs in Tunnel Magnetoconductance

###### Abstract

Fermi-arc surface states of Weyl semimetals exhibit a unique combination of localization to a surface and connectivity to the bulk Weyl fermions that can move along the localization direction. We predict anomalous quantum-oscillation signatures of Fermi arcs in the tunnel magnetoconductance across an interface between two Weyl semimetals. These oscillations stem from a momentum-space analog of Aharonov-Bohm interference of electrons moving along the interface Fermi arcs, driven by an external magnetic field normal to the interface. The Fermi arcs' connectivity to the bulk enables their characterization via transport normal to the interface, while their localization manifests in a strong field-angle anisotropy of the oscillations. This combination distinguishes these anomalous oscillations from conventional Shubnikov-de Haas oscillations and makes them identifiable even in complex oscillation spectra of real materials.

_Introduction._-- Weyl semimetals (WSMs) are a class of three-dimensional topological semimetals that host pairs of topologically protected gapless points that can be described as Weyl fermions at low energies[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. A remarkable feature of Weyl fermions is the chiral anomaly [12; 13], which can be understood as a spectral flow along the chiral zeroth Landau level dispersing parallel to an applied magnetic field [14]. The boundary manifestations of the bulk topology of WSMs are Fermi arcs -- lines of zero-energy surface states that connect projections of opposite-chirality Weyl nodes within the surface Brillouin zone[1]. An interface between two WSMs also features Fermi arcs unless Weyl nodes of the same chirality from different WSMs project on top of each other [15; 16; 17; 18; 19; 20; 21]. Interface Fermi arcs either connect nodes of opposite chirality from the same WSM (as in the case of surface Fermi arcs), which we term _heterochiral_ connectivity, or nodes of identical chirality from different WSMs, which we call _homochiral_ connectivity. In the presence of a magnetic field normal to the interface (the "longitudinal" direction), the anomalous charge current carried by the chiral Landau levels in the longitudinal direction is redirected along the Fermi arcs by the Lorentz force. For homochiral Fermi arcs, this leads to perfect transmission of the anomalous charge current[21], while for heterochiral Fermi arcs, it leads to perfect reflection and hence the vanishing of the tunnel conductance. In both cases, the Fermi arcs bear a unique combination of local and nonlocal qualities in that they are localized to the interface but mediate transport normal to it, i.e., along their localization direction [21; 22; 23]. Quantum oscillations, such as the Shubnikov-de Haas (SdH) or de Haas-van Alphen effects, constitute standard experimental tools for mapping low-energy states of metals [24]. Identification of Fermi arcs using these well-established techniques has, however, been challenging. In principle, Fermi arcs are detectable via quantum oscillations stemming from the so-called Weyl orbit [25; 26] -- the cyclotron orbit in a thin WSM slab that involves the coherent motion along Fermi arcs on both surfaces of the slab, connected by chiral Landau levels across the slab width.
However, in this case the characteristic nonlocality manifests itself only in the slab-width dependence of the oscillation shift, which, alongside the requirement of a small slab width to ensure phase coherence, makes the experimental identification of Fermi arcs in this way very difficult [27; 28]. Furthermore, typical WSM materials exhibit additional Fermi pockets, whose trivial orbits also contribute to the full quantum oscillation spectrum. In this article, we predict a characteristic quantum-oscillation signature of Fermi arcs in the tunnel magnetoconductance. We consider interfaces between two WSMs where Fermi arcs exhibit two or more close encounters, as exemplified in Fig. 1. Such a Fermi-arc configuration can be experimentally realized, e.g., at an interface between two weakly coupled WSMs with curved Fermi arcs, which can be two different WSM materials or the same material rotated with respect to itself, such as the rotated TaAs (001) surfaces [29; 30] shown in Fig. 1 (c). We predict oscillations in the magnetoconductance as a function of the inverse longitudinal magnetic-field component, whose frequency depends on the momentum-space areas enclosed by the Fermi arcs. Due to magnetic breakdown at the encounters [31; 32; 33], there are multiple effective paths [34; 35; 36; 37] connecting interface projections of the Weyl nodes. The field-induced motion along the Fermi arcs thus becomes subject to Aharonov-Bohm-like interference in momentum space. The resulting quantum oscillations of the tunnel magnetoconductance are fundamentally different from the possibly coexisting SdH oscillations of trivial states, which stem from Landau-quantized levels passing the Fermi energy. Experimentally, they are identifiable by their characteristic field-angle anisotropy, which is a direct consequence of the unique local/nonlocal Fermi-arc character, and by the spectrum of higher harmonics. In the following, we describe the proposed setup, compute the tunnel magnetoconductance semiclassically, and compare the predictions to exact numerical simulations on a lattice model.

_Fermi-arc mediated tunnel magnetoconductance._-- We consider the tunnel magnetoconductance across an interface between two WSMs, for which at least some of the projections of Weyl nodes onto the interface Brillouin zone do not overlap, such that the interface features Fermi arcs connecting well-separated Weyl-node projections. More precisely, we require this separation to be larger than the inverse magnetic length \(\ell_{B}^{-1}\) (\(\sim 0.02\text{\AA}^{-1}\) for the maximum realistic magnetic field \(B=30\)T). The interface Fermi arcs result from the hybridization of the surface Fermi arcs of the two WSMs, so that their exact form depends on the specific Fermi-arc arrangements of the two coupled surfaces and the coupling strength. If the surface Fermi arcs from the two sides intersect, then a weak coupling generally leads to an avoided crossing in momentum space (see Fig. 1 (a) and (b)), which we term a "close encounter". A pair of interface Fermi arcs contributes the tunnel conductance [21]
\[G=\frac{e^{2}}{h}N_{B}T, \tag{1}\]
where \(e^{2}/h\) is the quantum of conductance, \(N_{B}\propto B\) is the Landau level degeneracy (the number of flux quanta through the interface), and \(0<T<1\) is the total transmission probability along the Fermi arcs. For a pair of homochiral (heterochiral) Fermi arcs that are well separated everywhere (i.e., for separations \(\gg\ell_{B}^{-1}\)), the transmission probability is \(T=1\) (\(T=0\)).
In the former case, this implies a universal conductance independent of band details such as Fermi-arc shape, Fermi velocity, and Fermi energy. However, if two Fermi arcs approach within \(\sim\ell_{B}^{-1}\), then magnetic breakdown leads to suppression (enhancement) of the transmission probability for the homochiral (heterochiral) Fermi arcs. In particular, for large fields, this results in a transmission probability proportional to \(1/B\), leading to the saturation of the conductance [21]. Unlike the contributions of trivial states to the magnetoconductance, the Fermi-arc contribution does not show SdH quantum oscillations, because the Fermi-arc mediated current is carried exclusively by the lowest Landau levels of Weyl fermions at all field strengths [38]. We now show that anomalous, _non-SdH_ quantum oscillations occur for Fermi-arc arrangements with more than one close encounter. In the case of two encounters, the two possible interface Fermi-arc configurations are depicted in Fig. 1 (a) and (b), which exhibit homochiral and heterochiral connectivity, respectively. In the absence of magnetic breakdown, these two configurations would yield \(G=(e^{2}/h)N_{B}\) and \(G=0\), respectively (in analogy with the single-encounter case [21]). We next compute the transmission probability in the presence of magnetic breakdown. For clarity, we focus on the two-encounter case, but our analysis can be straightforwardly extended to more nodes and/or close encounters.

_Semiclassical analysis._-- To compute the transmission amplitudes, we employ a semiclassical approach away from the close encounters, coupled with the full quantum problem near them. Semiclassically, the electron wavepackets incident on the interface are driven along the Fermi arc by the Lorentz force until they encounter another Weyl node of identical/opposite chirality, leading to transmission/reflection across the interface. The quantum effects are encoded in the path-dependent Aharonov-Bohm, de Broglie, and Maslov phases picked up by them [39]. Near a close encounter, the description of magnetic breakdown maps onto the Landau-Zener problem [33; 40]. Thus, the splitting of electron trajectories is described by the S-matrix [33]
\[S(B)=\begin{pmatrix}\sqrt{1-\text{e}^{-\gamma}}\text{e}^{\text{i}\alpha}&-\text{i}\sqrt{\text{e}^{-\gamma}}\\ -\text{i}\sqrt{\text{e}^{-\gamma}}&\sqrt{1-\text{e}^{-\gamma}}\text{e}^{-\text{i}\alpha}\end{pmatrix}, \tag{2}\]
where \(\gamma=B_{0}/B\), \(\text{e}^{-\gamma}\) is the tunneling probability and
\[\alpha=\frac{\pi}{4}+\frac{\gamma}{2\pi}\left[1-\ln\left(\frac{\gamma}{2\pi}\right)\right]+\arg\Gamma\left(\frac{\text{i}\gamma}{2\pi}\right) \tag{3}\]
is the additional phase acquired by a state when it does _not_ tunnel. The scattering process is governed by a single free parameter, the breakdown field \(B_{0}\), which is determined by the geometry of the close encounter as
\[B_{0}=\frac{\pi}{4}\Delta^{2}\tan\theta, \tag{4}\]
where \(\theta\) is the angle of intersection between the two Fermi arcs in the decoupled limit and \(\Delta\) is the minimum separation (see inset of Fig. 1 (a)).

Figure 1: Fermi-arc configurations (black lines) with (a) homochiral and (b) heterochiral connectivity, exhibiting two close encounters at a weakly-coupled interface between two WSMs. The projections of Weyl nodes from the two WSMs are depicted as yellow/blue circles (chirality indicated as \(+/-\)), the arrows indicate the direction of motion in a magnetic field out of the plane, and the (green) shaded region between the two close encounters determines the frequency of quantum oscillations. The inset shows a single close encounter with the minimum separation \(\Delta\) and the opening angle \(2\theta\). (c) Long crescent Fermi arcs of two TaAs (001) surfaces from _ab initio_ calculations [30] rotated by \(20^{\circ}\) with respect to each other.
The inset shows a single close encounter with the minimum separation \(\Delta\) and the opening angle \(2\theta\). (c) Long crescent Fermi arcs of two TaAs (001) surfaces from _ab initio_ calculations [30] rotated by \(20^{\circ}\) with respect to each other. where \(\theta\) is the angle of intersection between the two Fermi arcs in the decoupled limit and \(\Delta\) is the minimum separation (see inset of Fig. 1 (a)). The total transmission amplitude is the sum over all paths weighted with the scattering amplitudes of the encounters given in (2) and phase factors stemming from motion along the connecting Fermi-arc segments. For the heterochiral Fermi-arc configuration, the sum over two possible paths (see Supplemental Material (SM) for a detailed derivation) leads to the transmission probability \[T_{\rm het}=2{\rm e}^{-\gamma}(1-{\rm e}^{-\gamma})(1+\cos\phi), \tag{5}\] where \[\phi=2\alpha-\beta,\qquad\beta=\frac{\mathcal{A}}{B}+\pi. \tag{6}\] Here, \(\beta\) is the difference between the de Broglie phases acquired along the two paths between the close encounters, given by the momentum space area \(\mathcal{A}\) enclosed by the Fermi arcs between the two points of minimum separation at the Fermi level. The additional \(\pi\) in \(\beta\) is a Maslov phase [39] corresponding to the two classical turning points encountered in going around the loop. The magnetoconductance oscillates as a function of \(B^{-1}\), with a slowly varying envelope given by \(T_{\rm het}^{\rm max}=4{\rm e}^{-\gamma}(1-{\rm e}^{-\gamma})\). For \(B\to 0\), we get an exponentially suppressed \(T_{\rm het}\simeq 4{\rm e}^{-\gamma}\), vanishing as expected for interface Fermi arcs connecting the nodes from the same WSM. For \(B\to\infty\), \(T_{\rm het}^{\rm max}\approx 4B_{0}/B\), so that the conductance saturates at a value proportional to \(B_{0}\). For the homochiral configuration, there are infinitely many paths leading to transmission, corresponding to tunneling into the loop of zero modes, traversing it arbitrarily many times, and finally tunneling out of it on the same side of the loop. Summing over these possibilities, we obtain the transmission probability as \[T_{\rm hom}=1-\frac{{\rm e}^{-2\gamma}}{{\rm e}^{-2\gamma}+2(1-{\rm e}^{- \gamma})(1-\cos\phi)}, \tag{7}\] where \[\phi=2\alpha+\beta,\qquad\beta=\frac{\mathcal{A}}{B}+\pi. \tag{8}\] Here, \(\beta\) is the total semiclassical phase acquired on traversing the loop of zero modes (in the opposite direction compared to the phase difference of the heterochiral case), whereby the first term comes from the sum of Aharonov-Bohm and de Broglie phases given by the momentum space area \(\mathcal{A}\) enclosed by the loop, and the additional Maslov phase \(\pi\). The magnetoconductance also oscillates as a function of \(B^{-1}\), with the envelope given by \(T_{\rm hom}^{\rm max}=4(1-{\rm e}^{-\gamma})/(2-{\rm e}^{-\gamma})^{2}\). For \(B\to 0\), we get \(T_{\rm hom}=1\) as expected for homochiral Fermi arcs, leading to a linear-in-\(B\) magnetoconductance. The limit \(B\to\infty\) again leads to saturation of the magnetoconductance \(T_{\rm hom}^{\rm max}\approx 4B_{0}/B\). In Fig. 2, we plot the conductance (given by Eq. 1) as well as the tunnel probability and its Fourier transform for a specific set of parameters and compare them with numerical simulations on a lattice model (detailed below). 
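The closed forms (3) and (5)-(8) are straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, with arbitrary example values for the breakdown field \(B_{0}\) and enclosed area \(\mathcal{A}\) in mutually consistent units, not the parameters of Fig. 2):

```python
# Sketch of the semiclassical transmission probabilities, Eqs. (5)-(8),
# with the non-tunneling phase alpha of Eq. (3); B0 and A are illustrative.
import numpy as np
from scipy.special import loggamma

def alpha_phase(gamma):
    """Phase acquired when not tunneling, Eq. (3); arg Gamma via loggamma."""
    g = gamma / (2 * np.pi)
    return np.pi / 4 + g * (1 - np.log(g)) + loggamma(1j * g).imag

def transmission(B, B0, A, homochiral):
    gamma = B0 / B
    w = np.exp(-gamma)
    beta = A / B + np.pi                      # de Broglie + Maslov phase
    if homochiral:
        phi = 2 * alpha_phase(gamma) + beta   # Eq. (8)
        return 1 - w**2 / (w**2 + 2 * (1 - w) * (1 - np.cos(phi)))  # Eq. (7)
    phi = 2 * alpha_phase(gamma) - beta       # Eq. (6)
    return 2 * w * (1 - w) * (1 + np.cos(phi))                      # Eq. (5)

B = np.linspace(0.05, 3.0, 5000)              # field in units where B0, A are O(1)
T_het = transmission(B, B0=0.3, A=40.0, homochiral=False)
T_hom = transmission(B, B0=0.3, A=40.0, homochiral=True)
print(T_het.max(), T_hom.min())               # oscillations within the envelopes
```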
While the conductance shows qualitatively similar features for both homochiral and heterochiral connectivities, they can be easily distinguished by the Fourier transform. For the heterochiral connectivity, as the oscillations result from the interference of only two paths, the Fourier transform exhibits a single peak, with the frequency corresponding to the area \(\mathcal{A}\) enclosed between the two Fermi arcs. On the other hand, for homochiral connectivities, the trivial loop connecting the homochiral Fermi arcs can be traversed multiple times (similar to SdH oscillations), leading to harmonics at frequencies \(n\mathcal{A}\). In contrast to SdH oscillations, however, for each traversal of the loop, there is a nonzero probability of tunneling out of the loop to the opposite Fermi arc, leading to reflection from the interface. Thus, the higher harmonics are damped, with the height of the \(n^{\rm th}\) peak proportional to \([\Psi\left(n/2+1\right)-\Psi\left(n/2+1/2\right)]\), where \(\Psi(z)=\Gamma^{\prime}(z)/\Gamma(z)\) is the digamma function (see SM for the derivation). This damping profile exhibits a long tail and should be visible even for small \(\mathcal{A}\). _Lattice model and numerical results._-- We compare our analytical predictions with a numerical simulation on an explicit lattice model, for which we compute the model-specific parameters \(\Delta\), \(\theta\), and \(\mathcal{A}\) that enter the analytical formula, so that analytics and numerics can be compared without any fitting parameters. We consider a Bloch Figure 2: Left column: Conductance as a function of magnetic field for heterochiral (top row) and homochiral (bottom row) connectivity obtained from analytics (solid orange lines) and numerics (dashed blue lines). Right: Fourier transform of tunnel probability \(\widetilde{T}(\Omega)\) normalized to \(\widetilde{T}(\mathcal{A})=1\) (inset shows \(T(1/B)\)). The dashed black line shows the analytically obtained damping of higher harmonics. Model parameters are \(\kappa=0.07\) and \(\varepsilon_{F}=0.1\) (others in text). Hamiltonian of the form [41] \[\mathcal{H}(\mathbf{k})=\mathcal{H}_{x}(k_{x})+\eta_{y}(\mathbf{k}_{\perp})\tau^{y}+\eta_ {z}(\mathbf{k}_{\perp})\tau^{z}, \tag{9}\] where the Pauli matrices \(\tau^{a}\) represent a pseudospin degree of freedom and \[\mathcal{H}_{x}(k_{x})=\sin k_{x}\tau^{x}+(1-\cos k_{x})\tau^{z}. \tag{10}\] The lattice constant and the hopping strength along \(x\) are set to unity. This lattice model has Weyl nodes in the \(k_{x}=0\) plane at transverse momenta \(\mathbf{k}_{\perp}\) that satisfy \(\eta_{y}(\mathbf{k}_{\perp})=\eta_{z}(\mathbf{k}_{\perp})=0\). For a surface normal to \(x\), the Fermi arcs are given by [41] \(\eta_{y}(\mathbf{k}_{\perp})=0\), which exist only for lattice momenta satisfying \(\eta_{z}(\mathbf{k}_{\perp})<0\). We consider an interface between two WSMs that are described by the lattice model above with \[\eta_{y}^{A}(\mathbf{k}_{\perp}) =\xi_{A}\left(\cos k_{y}-\cos b_{y}+\zeta_{A}\sin k_{z}-\sin b_{z} \right),\] \[\eta_{z}^{A}(\mathbf{k}_{\perp}) =\cos b_{z}-\cos k_{z}, \tag{11}\] where \(A\in\{\mathrm{L},\mathrm{R}\}\), \(b_{y},b_{z}\in(0,\pi)\) and \(\xi_{A},\zeta_{A}\in\{\pm 1\}\). The two WSMs have Weyl nodes at \(\mathbf{k}_{A}=(0,\pm b_{y},\zeta_{A}b_{z})\) with chiralities \(\chi=\pm\xi_{A}\zeta_{A}\). We model the tunnel junction by modulating the hopping along \(x\) at the interface by a factor \(0\leq\kappa\leq 1\). 
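As a quick consistency check of this lattice model (a plain-numpy sketch, not the Kwant-based transport code used for the full simulations), one can verify that the Bloch Hamiltonian of Eqs. (9)-(11) is gapless precisely at the Weyl-node momenta \(\mathbf{k}_{A}=(0,\pm b_{y},\zeta_{A}b_{z})\); the values of \(b_{y}\) and \(b_{z}\) follow those quoted below:

```python
# Sketch: build H(k) of Eqs. (9)-(11) and confirm the gap closes at the
# Weyl nodes k = (0, +-b_y, zeta*b_z).
import numpy as np

tau = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def hamiltonian(k, xi, zeta, b_y=3 * np.pi / 4, b_z=np.pi / 2):
    kx, ky, kz = k
    eta_y = xi * (np.cos(ky) - np.cos(b_y) + zeta * np.sin(kz) - np.sin(b_z))
    eta_z = np.cos(b_z) - np.cos(kz)
    return (np.sin(kx) * tau[0] + eta_y * tau[1]
            + (1 - np.cos(kx) + eta_z) * tau[2])

b_y, b_z = 3 * np.pi / 4, np.pi / 2
for zeta in (+1, -1):
    for sign in (+1, -1):
        node = (0.0, sign * b_y, zeta * b_z)
        ev = np.linalg.eigvalsh(hamiltonian(node, xi=1, zeta=zeta))
        print(f"node {node}: gap = {ev[1] - ev[0]:.2e}")   # ~0 at each node
```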
To ensure that in the decoupled limit (\(\kappa=0\)), the Fermi arcs of the two sides intersect at two points, we set \(\zeta_{\mathrm{L}}=-\zeta_{\mathrm{R}}=-1\). The intersection points are then given by \(\mathbf{k}_{\perp}=(\pm b_{0},0)\) with \(b_{0}\equiv\cos^{-1}\left(\cos b_{y}+\sin b_{z}\right)\). For \(\kappa>0\), the Fermi arc connectivity is homochiral if \(\xi_{\mathrm{L}}=-\xi_{\mathrm{R}}=1\) and heterochiral if \(\xi_{\mathrm{L}}=\xi_{\mathrm{R}}=1\). We hereafter set \(b_{y}=3\pi/4\) and \(b_{z}=\pi/2\). We obtain \(\theta\) by linearizing the Fermi arc contours \(\eta_{y}^{\mathrm{L/R}}(\mathbf{k}_{\perp})=0\) about the intersection points \((\pm b_{0},0)\). This yields \(q_{z}\approx-\zeta_{\mathrm{L/R}}\sin b_{0}q_{y}\), so that \(\theta\) is given by \(\tan\theta=|q_{z}/q_{y}|=|\sin b_{0}|\) (\(\tan\theta=|q_{y}/q_{z}|=|\cos b_{0}|\)) for homochiral (heterochiral) connectivity. To compute \(\Delta\) and \(\mathcal{A}\), we employ the generalized transfer matrices [41; 42], as detailed in the SM. This yields an implicit expression for the interface Fermi arcs in terms of \(\varepsilon\), \(\mathbf{k}_{\perp}\), and \(\kappa\). Using the fact that the minimum separation \(\Delta(\kappa)\) occurs along the lines \(k_{z}=0\) and \(k_{y}=\pm b_{0}\) for the homochiral and heterochiral cases, respectively, we obtain \[\Delta_{\mathrm{hom}}(\kappa) =\cos^{-1}(\cos b_{0}-\kappa)-\cos^{-1}(\cos b_{0}+\kappa) \tag{12}\] \[\approx 2\kappa\csc b_{0}+\mathrm{O}(\kappa^{3}),\] and \[\Delta_{\mathrm{het}}(\kappa) =2\left[\cos^{-1}\left(\frac{\kappa}{1+\kappa^{2}}\right)-\tan^{ -1}\left(\frac{1-\kappa^{2}}{2\kappa}\right)\right] \tag{13}\] \[\approx 2\kappa+\mathrm{O}(\kappa^{3}).\] The computation of \(\mathcal{A}\) is analytically intractable, so that we obtain it by numerically integrating the implicit condition for the Fermi arcs at \(\varepsilon=\varepsilon_{F}\). Inserting the expressions for \(\Delta\) and \(\tan\theta\) into Eq. (4), we obtain the breakdown field which, together with \(\mathcal{A}\), determines the analytic magnetoconductance via Eqs. (5) and (7) inserted into (1). The exact numerical computation of the magnetoconductance is performed using the _Kwant_ package [43]. We find perfect agreement between the analytics and numerics, as exemplified in Fig. 2, for the relevant range of parameters \(\kappa\), \(\varepsilon_{F}\), and \(B\), set by the requirement \(\mathcal{A}\gg\ell_{B}^{-2}\sim\ell_{B_{0}}^{-2}\) to ensure observable oscillations (not too small oscillation frequencies) and uncoupled Weyl-node projections. _Discussion and conclusions._-- We have demonstrated anomalous quantum oscillations in the tunnel magnetoconductance across a Weyl semimetal interface, arising from Aharonov-Bohm like interference effects enabled by magnetic breakdown in Fermi-arc networks. As the oscillations appear in the electron transport normal to the interface and thus along the Fermi-arc localization direction, they manifest the unique combination of local and nonlocal qualities of Fermi arcs. The experimental fingerprint of the anomalous oscillations is an extreme field-angle anisotropy: The Fermi-arc contributions only depend on the longitudinal field component, which distinguishes them from other quantum oscillations that may arise from various two- or three-dimensional trivial Fermi pockets. The former lead to oscillations also in the transverse magnetic-field components, while the latter do not contribute to transport along their localization direction. 
This Fermi-arc signature appears to be better accessible than the width-dependence of SdH oscillation shifts of Weyl orbits. Further peculiarities lie in the behavior of higher harmonics of the anomalous oscillations, which, moreover, allow one to distinguish different Fermi-arc connectivities. For heterochiral connectivity, the spectrum does not feature higher harmonics (unlike SdH oscillations), while for homochiral connectivity, the higher harmonics feature unusual damping stemming from magnetic breakdown. As the Fermi arc contribution to quantum oscillations requires coherent transport along entire loops of the interface Fermi arcs, we expect them to be sensitive to temperature- and disorder-induced decoherence, similar to conventional SdH oscillations, which sets a lower bound on the magnetic field for observation of the oscillations [24]. However, this is less restrictive than the constraints for the observation of Weyl orbits, which require coherent motion across both the bulk and the surface of a WSM thin film [25]. _Acknowledgments._-- We thank P.W. Brouwer, L.I. Glazman, M. Mansouri, S.A. Parameswaran, and S. Wiedmann, for useful discussions. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through CRC-TR 183 "Entangled States of Matter" and the Emmy Noether program, Project No. 506208038. ## Supplemental Material ### Computation of the S-matrix In this section, we compute the total S-matrix for the interface. More explicitly, we compute \(S_{\text{full}}\) defined as \[\begin{pmatrix}f_{1}\\ f_{2}\end{pmatrix}=S_{\text{full}}\begin{pmatrix}i_{1}\\ i_{2}\end{pmatrix},\] (S1) where \(i_{1,2}\) and \(f_{1,2}\) are defined in Fig. S1. To this end, we use the S-matrix for a single encounter given by Eq. (2), as well as the semiclassical phase relating the internal (dashed) legs. Heterochiral connectivity.--In this case, the wavefunctions on the internal legs are related as \(i^{\prime}_{a}=\mathrm{e}^{\mathrm{i}\beta_{a}}f^{\prime}_{a}\) for \(a=1,2\). The full S-matrix is thus given by \[S_{\text{full}}=S_{0}\begin{pmatrix}\mathrm{e}^{\mathrm{i}\beta_{2}}&0\\ 0&\mathrm{e}^{\mathrm{i}\beta_{1}}\end{pmatrix}S_{0}=\mathrm{e}^{\mathrm{i} \beta_{0}}\begin{pmatrix}\mathrm{e}^{\mathrm{i}\alpha}\left(\mathrm{e}^{ \mathrm{i}\phi/2}-2W\cos\frac{\phi}{2}\right)&-2\mathrm{i}\sqrt{W(1-W)}\cos \frac{\phi}{2}\\ -2\mathrm{i}\sqrt{W(1-W)}\cos\frac{\phi}{2}&\mathrm{e}^{-\mathrm{i}\alpha} \left(\mathrm{e}^{-\mathrm{i}\phi/2}-2W\cos\frac{\phi}{2}\right)\end{pmatrix},\] (S2) where \(W=\mathrm{e}^{-B_{0}/B}\), \(\beta_{0}=\frac{1}{2}(\beta_{1}+\beta_{2})\) and \(\phi=2\alpha-\beta\), with \(\beta\equiv\beta_{2}-\beta_{1}\). The amplitude for transmission across the interface is given by the _off-diagonal_ term of this S-matrix since the transmission across the interface is associated with transmission to a Weyl-node projection of the same chirality (see Fig. S1), so that \[T=4W(1-W)\cos^{2}\left(\frac{\phi}{2}\right)=2W(1-W)(1+\cos\phi).\] (S3) The transmission probability exhibits an oscillatory behavior, with local maxima for \(\phi=2n\pi\) and zeros for \(\phi=(2n+1)\pi\). The global maximum occurs near \(W=1/2\), whereby \(B=B_{0}/\ln 2\). 
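The matrix product in (S2) is easy to verify numerically. A small sketch (our own check, with arbitrary test phases and with the internal legs ordered such that \(\phi=2\alpha-\beta\)) confirms that the off-diagonal transmission reproduces (S3):

```python
# Sketch: compose the single-encounter S-matrix, Eq. (2), with internal
# propagation phases and compare |S_full[0,1]|^2 against Eq. (S3).
import numpy as np
from scipy.special import loggamma

gamma, beta1, beta2 = 0.7, 0.4, 2.1          # arbitrary test values
W = np.exp(-gamma)
g = gamma / (2 * np.pi)
alpha = np.pi / 4 + g * (1 - np.log(g)) + loggamma(1j * g).imag   # Eq. (3)

S0 = np.array([[np.sqrt(1 - W) * np.exp(1j * alpha), -1j * np.sqrt(W)],
               [-1j * np.sqrt(W), np.sqrt(1 - W) * np.exp(-1j * alpha)]])
# Internal legs ordered so that phi = 2*alpha - (beta2 - beta1);
# swapping the two legs merely flips the sign of beta.
S_full = S0 @ np.diag([np.exp(1j * beta1), np.exp(1j * beta2)]) @ S0

phi = 2 * alpha - (beta2 - beta1)
print(abs(S_full[0, 1])**2)                  # transmission from the S-matrix
print(2 * W * (1 - W) * (1 + np.cos(phi)))   # closed form (S3): same number
```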
Homochiral connectivity.--In this case, we first compute the matrix \(M\) such that \[\begin{pmatrix}f_{a}\\ f^{\prime}_{a}\end{pmatrix}=S\begin{pmatrix}i_{a}\\ i^{\prime}_{a}\end{pmatrix}\iff\begin{pmatrix}i^{\prime}_{a}\\ f^{\prime}_{a}\end{pmatrix}=M\begin{pmatrix}i_{a}\\ f_{a}\end{pmatrix};\qquad a=1,2.\] (S4) Explicitly, rearranging Eq. (2), we get \[M=-\frac{\mathrm{i}}{\sqrt{W}}\begin{pmatrix}\sqrt{1-W}\mathrm{e}^{-\mathrm{ i}\alpha}&-1\\ 1&-\sqrt{1-W}\mathrm{e}^{\mathrm{i}\alpha}\end{pmatrix}.\] (S5) The wavefunctions on the internal legs are related as \(i^{\prime}_{2}=\mathrm{e}^{\mathrm{i}\beta_{1}}f^{\prime}_{1}\) and \(f^{\prime}_{2}=\mathrm{e}^{\mathrm{i}\beta_{2}}i^{\prime}_{1}\). Thus, we can compute the matrix \(M_{\text{full}}\) that relates \((i_{2},f_{2})^{T}\) to \((i_{1},f_{1})^{T}\) as \[M_{\text{full}}=M^{-1}\begin{pmatrix}0&\mathrm{e}^{\mathrm{i}\beta_{2}}\\ \mathrm{e}^{\mathrm{i}\beta_{1}}&0\end{pmatrix}M=-\frac{\mathrm{e}^{\mathrm{i }\beta_{0}}}{W}\begin{pmatrix}-2\mathrm{i}\sqrt{1-W}\sin\frac{\phi}{2}& \mathrm{e}^{-\mathrm{i}\alpha}\left(W\mathrm{e}^{-\mathrm{i}\phi/2}+2\mathrm{ i}\sin\frac{\phi}{2}\right)\\ \mathrm{e}^{\mathrm{i}\alpha}\left(W\mathrm{e}^{-\mathrm{i}\phi/2}-2\mathrm{ i}\sin\frac{\phi}{2}\right)&2\mathrm{i}\sqrt{1-W}\sin\frac{\phi}{2}\end{pmatrix},\] (S6) where we now have \(\phi=2\alpha-\beta_{1}+\beta_{2}=2\alpha+\beta\), with \(\beta\equiv\beta_{2}-\beta_{1}\) being the total semiclassical phase acquired by traversing the loop _counterclockwise_. The full S-matrix is thus given by \[S_{\text{full}}=-\frac{\mathrm{e}^{\mathrm{i}\alpha}}{W\mathrm{e}^{-\mathrm{ i}\phi/2}+2\mathrm{i}\sin\frac{\phi}{2}}\begin{pmatrix}-2\mathrm{i}\sqrt{1-W}\sin \frac{\phi}{2}&W\mathrm{e}^{-\mathrm{i}\beta_{0}}\\ W\mathrm{e}^{\mathrm{i}\beta_{0}}&-2\mathrm{i}\sqrt{1-W}\sin\frac{\phi}{2} \end{pmatrix}.\] (S7) The amplitude for transmission across the interface is given by the _diagonal_ term of this S-matrix (see Fig. S1), so that \[T=\frac{4(1-W)\sin^{2}\left(\frac{\phi}{2}\right)}{W^{2}+4(1-W)\sin^{2}\left( \frac{\phi}{2}\right)}=1-\frac{W^{2}}{W^{2}+2(1-W)(1-\cos\phi)}.\] (S8) Thus, the transmission probability exhibits an oscillating behavior with local maxima at \(\phi=(2n+1)\pi\) and zeros at \(\phi=2n\pi\), in contrast to the heterochiral case. This transmission probability can also be computed by summing over the possible paths for transmission/reflection. It is somewhat easier to compute the amplitude for reflection. Using the amplitude for tunneling \(-{\rm i}\sqrt{W}\) and that for reflection \(\sqrt{1-W}{\rm e}^{{\rm i}\alpha}\) at each close encounter, the reflection amplitude can be written as \[r=-W{\rm e}^{{\rm i}\beta_{1}}\sum_{n=0}^{\infty}(1-W)^{n}{\rm e}^{-{\rm i}n \phi},\] (S9) where the \(n^{\rm th}\) term in the series corresponds to tunneling into the loop from the left side, going around it \(n+\frac{1}{2}\) times, and tunneling out from the right. Performing the infinite sum, we get \[r=-\frac{W{\rm e}^{{\rm i}\beta_{1}}}{1-(1-W){\rm e}^{-{\rm i}\phi}}=-\frac{W{ \rm e}^{{\rm i}(\beta_{1}+\phi/2)}}{W{\rm e}^{-{\rm i}\phi/2}+2{\rm i}\sin \frac{\phi}{2}},\] (S10) which equals \([S_{\rm full}]_{21}\) derived in Eq. (S7). ### Fourier transform of the conductance We now compute the Fourier transform of the homochiral transmission probability as a function of \(u\equiv 1/B\). 
As this probability is invariant under flipping the direction of the magnetic field, \(T(u)=T(|u|)\), so that the Fourier transform is given by \(\mathcal{F}[T]=\widetilde{T}(\Omega)+\widetilde{T}(-\Omega)\), where \[\widetilde{T}(\Omega)\equiv\int_{0}^{\infty}{\rm d}u\,T(u){\rm e}^{-{\rm i} \Omega u}.\] (S11) As \(T(u)=1-R(u)\) and computing the Fourier transform of \(R(u)\) is somewhat simpler, we first compute \(\widetilde{R}(\Omega)\), in terms of which \[\widetilde{T}(\Omega)=\pi\delta(\Omega)-\widetilde{R}(\Omega).\] (S12) To compute \(\widetilde{R}(\Omega)\), we use Eq. (S9) to write the reflection probability as a sum \[R(u)=\left|r\right|^{2}=W^{2}\sum_{m,n=0}^{\infty}(1-W)^{m+n}{\rm e}^{{\rm i} (m-n)\phi},\] (S13) where \(W={\rm e}^{-B_{0}u}\) and \({\rm e}^{{\rm i}\phi}={\rm e}^{{\rm i}\mathcal{A}u}\), whereby we have ignored the \(2\alpha\) contribution to \(\phi\) since it is slowly varying as compared to \(\mathcal{A}u\) and thus behaves as a constant phase shift when \(\mathcal{A}\gg B_{0}\). Thus \[R(u)=\sum_{m,n=0}^{\infty}(1-{\rm e}^{-B_{0}u})^{m+n}{\rm e}^{-2B_{0}u+{\rm i} (m-n)\mathcal{A}u}=\sum_{r=-\infty}^{\infty}\sum_{n=0}^{\infty}(1-{\rm e}^{-B_ {0}u})^{r+2n}{\rm e}^{-2B_{0}u+{\rm i}r\mathcal{A}u}.\] (S14) We can now compute \[\widetilde{R}(\Omega) =\frac{1}{B_{0}}\sum_{r=-\infty}^{\infty}\sum_{n=0}^{\infty}\int _{0}^{1}{\rm d}WW^{1+{\rm i}(r\mathcal{A}-\Omega)/B_{0}}(1-W)^{r+2n}\] \[=\frac{1}{B_{0}}\sum_{r=-\infty}^{\infty}\sum_{n=0}^{\infty} \frac{(r+2n)!\,\Gamma(2+{\rm i}\frac{r\mathcal{A}-\Omega}{B_{0}})}{\Gamma(3+r+2n +{\rm i}\frac{r\mathcal{A}-\Omega}{B_{0}})},\] (S15) where we identify the integral as the Euler beta function and use its relation to the gamma function. The \(r=n=0\) term is \[\frac{\Gamma(2-\mathrm{i}\Omega/B_{0})}{\Gamma(3-\mathrm{i}\Omega/B_{0})}=\frac{ 1}{2-\mathrm{i}\Omega/B_{0}}.\] (S16) For \(r\neq 0\), the magnitude of the individual terms is \[\left|\frac{(r+2n)!\,\Gamma(2+\mathrm{i}\frac{r\mathcal{A}-\Omega}{B_{0}})}{ \Gamma(3+r+2n+\mathrm{i}\frac{r\mathcal{A}-\Omega}{B_{0}})}\right|^{2}=\frac{1 }{(r+2n+1)^{2}}\prod_{m=1}^{r+2n+1}\frac{m^{2}}{(m+1)^{2}+x^{2}},\] (S17) where \(x=(r\mathcal{A}-\Omega)/B_{0}\) and we have used \(\left|\Gamma(1+\mathrm{i}x)\right|^{2}=\pi x/\sinh(\pi x)\) to write \[\left|\Gamma(n+\mathrm{i}x)\right|^{2}=\frac{\pi x}{\sinh(\pi x)}\prod_{m=1}^ {n-1}(m^{2}+x^{2}).\] (S18) Thus, \(\widetilde{R}(\Omega)\) is sharply peaked when \(\Omega=\nu\mathcal{A}\) for \(\nu\in\mathbb{Z}\). The dominant contributions to the sum over \(r\) thus come from \(r=\nu\) as these terms are positive real numbers whereas terms with \(r\neq\nu\) are smaller in magnitude and have oscillating phases. The height of these peaks \(\widetilde{R}_{\nu}\equiv\widetilde{R}(\nu\mathcal{A})\) is given by \[\widetilde{R}_{\nu}=\frac{1}{B_{0}}\sum_{n=0}^{\infty}\frac{(\nu+2n)!}{(\nu+2 n+2)!}=\frac{1}{B_{0}}\sum_{n=0}^{\infty}\frac{1}{(\nu+2n+1)(\nu+2n+2)},\] (S19) where we have ignored the contribution from the \(r=n=0\) term \(\sim B_{0}/\mathcal{A}\) since \(B_{0}\ll\mathcal{A}\). 
The above sum can be evaluated by rewriting it as \[\widetilde{R}_{\nu}=\frac{1}{2B_{0}}\sum_{n=0}^{\infty}\left(\frac{1}{n+\frac{ \nu+1}{2}}-\frac{1}{n+\frac{\nu}{2}+1}\right)=\frac{1}{2B_{0}}\left[\Psi\left( \frac{\nu}{2}+1\right)-\Psi\left(\frac{\nu+1}{2}\right)\right],\] (S20) where \(\Psi(z)=\Gamma^{\prime}(z)/\Gamma(z)\) is the digamma function with the series representation \[\Psi(z)=-\gamma+\sum_{n=0}^{\infty}\left(\frac{1}{n+1}-\frac{1}{n+z}\right),\] (S21) with \(\gamma\) being the Euler-Mascheroni constant. Using the recurrence relation \(\Psi(z+1)=\Psi(z)+1/z\), we can relate the strengths of the consecutive Fourier peaks as \(\widetilde{R}_{\nu}=1/(\nu B_{0})-\widetilde{R}_{\nu-1}\). Finally, as \(\widetilde{T}(\Omega)=-\widetilde{R}(\Omega)\) for \(\Omega\neq 0\), it also exhibits peaks of strength \(\widetilde{T}_{\nu}=\widetilde{R}_{\nu}\) (in magnitude) at \(\Omega=\nu\mathcal{A}\). ### Transfer matrices for the model Hamiltonian In the main text, we consider Hamiltonians of the form \[\mathcal{H}(\mathbf{k})=\,\sin k_{x}\tau^{x}+\eta_{y}(\mathbf{k}_{\perp})\tau^{y}+ \left(1-\cos k_{x}+\eta_{z}(\mathbf{k}_{\perp})\right)\tau^{z}.\] (S22) The corresponding generalized transfer matrix along \(x\) is given by [41; 42] \[\mathcal{T}=\frac{1}{1+\eta_{z}}\begin{pmatrix}\varepsilon^{2}-\eta_{y}^{2}-(1 +\eta_{z})^{2}&-\varepsilon+\eta_{y}\\ \varepsilon+\eta_{y}&-1\end{pmatrix}.\] (S23) For an interface along \(x\) with tunneling strength \(\kappa\), a necessary condition for the existence of a mode localized at the interface is [15] \[\det\left[\mathcal{T}_{\mathrm{L}},K^{-1}\mathcal{T}_{\mathrm{R}}K\right]=0,\] (S24) where \(\mathcal{T}_{\mathrm{L,R}}\) denote the transfer matrices on the two sides of the interface and \(K=\mathrm{diag}\left\{(1/\kappa),\kappa\right\}\). To simplify this further, we set \[\mathcal{T}_{A}=\begin{pmatrix}a_{A}&b_{A}\\ c_{A}&d_{A}\end{pmatrix},\qquad\delta_{A}\equiv a_{A}-d_{A};\quad A\in\{ \mathrm{L,R}\}.\] (S25) Substituting in Eq. (S24), after some algebra we arrive at \[\beta^{2}u^{2}+\beta\delta_{\rm L}\delta_{\rm R}u-\left(b_{\rm L}c_{\rm L}\delta_ {\rm R}^{2}+\delta_{\rm L}^{2}b_{\rm R}c_{\rm R}+4\beta^{2}\right)=0,\] (S26) where \(u=\zeta+\zeta^{-1}\) and \(\beta\) and \(\zeta\) are chosen such that \[\kappa^{2}c_{\rm L}b_{\rm R}=\beta\zeta,\quad\kappa^{-2}b_{\rm L}c_{\rm R}= \beta\zeta^{-1}.\] (S27) To compute the area enclosed by the loop \({\cal A}\) for a fixed \(\varepsilon\) and \(\kappa\), we numerically solve Eq. (S26) to obtain \(k_{z}(k_{y})\) in the first quadrant, which we numerically integrate. The computation of the minimum separation \(\Delta\) is, however, analytically tractable, as we now discuss. ### Computing the minimum arc separation From symmetry, we anticipate that the minimum separation between various branches of the interface Fermi arc lies along \(k_{z}=0\) for the homochiral case, whereby \[\eta_{y}^{\rm L/R}(k_{y},0) = \pm(\cos k_{y}-\cos b_{0}),\] \[\eta_{z}^{\rm L/R}(k_{y},0) = \cos b_{z}-1.\] (S28) Similarly, for the heterochiral case, the minimum separation lies along \(k_{y}=b_{0}\), whereby \[\eta_{y}^{\rm L/R}(b_{0},k_{z}) = \pm\sin k_{z},\] \[\eta_{z}^{\rm L/R}(b_{0},k_{z}) = \cos b_{z}-\cos k_{z}.\] (S29) Thus, in both cases, \(\eta_{y}^{\rm L}=-\eta_{y}^{\rm R}\) and \(\eta_{z}^{\rm L}=\eta_{z}^{\rm R}\), and for both cases we can write the transfer matrices as \[{\cal T}_{\rm L}=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\quad{\cal T}_{\rm R}=\begin{pmatrix}a&-c\\ -b&d\end{pmatrix}.\] (S30) Eq. 
(S27) now reduces to \(\beta\zeta=-\kappa^{2}c^{2}\), \(\beta\zeta^{-1}=-\kappa^{-2}b^{2}\), so that we can choose \[\zeta=\frac{c}{b}\kappa^{2},\quad\beta=-bc.\] (S31) Eq. (S26) thus becomes \[bcu^{2}-\delta^{2}u-2\left(\delta^{2}+2bc\right)=0,\] (S32) in deriving which we have assumed that \(b\) and \(c\) are nonzero, since \(b=0\) or \(c=0\) yields the interface spectrum in the decoupled case, and we are interested in \(\kappa\neq 0\). We solve the quadratic in \(u\) to get \[u=2\left(1+\frac{\delta^{2}}{2bc}\right),-2.\] (S33) But \(u=-2\implies\zeta=-1\), which yields \(\eta_{y}=0\) for \(\varepsilon=0\). Since we expect the zero crossing of the interface spectrum to depend on \(\kappa\), we ignore this branch. Thus, we are left with \[u=2+\frac{\delta^{2}}{bc}\implies c^{2}\kappa^{2}+b^{2}\kappa^{-2}-2bc=\delta ^{2}\implies c\kappa-b\kappa^{-1}=\mu\delta,\] (S34) where \(\mu=\pm 1\). Substituting \[\{b,c,\delta\}=\left\{\frac{-\varepsilon+\eta_{y}}{1+\eta_{z}},\frac{ \varepsilon+\eta_{y}}{1+\eta_{z}},\frac{\varepsilon^{2}-\eta_{y}^{2}-\eta_{z} (2+\eta_{z})}{1+\eta_{z}}\right\},\] (S35) we get \[\varepsilon^{2}-2\mu\varepsilon\cosh\lambda-\eta_{y}^{2}+2\mu\eta_{y}\sinh \lambda-\eta_{z}(2+\eta_{z})=0,\] (S36) where we have set \(\kappa=\mathrm{e}^{-\lambda}\) with \(\lambda\in[0,\infty)\). The interface spectrum is thus given by \[\varepsilon=\mu\cosh\lambda+\mu^{\prime}\sqrt{(\eta_{y}-\mu\sinh\lambda)^{2}+(1+ \eta_{z})^{2}}.\] (S37) We get four branches of solutions since \(\mu,\mu^{\prime}=\pm 1\). For \(\kappa\to 0\), these become \(\varepsilon\approx(\mu+\mu^{\prime})/(2\kappa)+\mathrm{O}(1)\), and since the physically relevant solutions are regular in the decoupled limit, we choose \(\mu=-\mu^{\prime}=\mp 1\). Thus, the interface spectrum is given by \[\varepsilon=\pm\left[\sqrt{(\eta_{y}\pm\sinh\lambda)^{2}+(1+\eta_{z})^{2}}- \cosh\lambda\right].\] (S38) We can now use this expression to compute the nodal separation at a fixed energy. For homochiral connectivity, the interface spectrum becomes \[\varepsilon_{\mathrm{hom}}(k_{y})=\pm\left[\sqrt{(\cos k_{y}-\cos b_{0}\pm \sinh\lambda)^{2}+\cos^{2}b_{z}}-\cosh\lambda\right].\] For \(b_{z}=\pi/2\), this further simplifies to \[\varepsilon_{\mathrm{hom}}(k_{y})=\pm\left[\sinh\lambda\pm(\cos k_{y}-\cos b _{0})-\cosh\lambda\right]=\cos k_{y}-\cos b_{0}\mp\kappa,\] (S39) since \(\cosh\lambda\pm\sinh\lambda=\mathrm{e}^{\pm\lambda}=\kappa^{\mp 1}\). For \(\varepsilon=0\), we thus get \(k_{y,\pm}=\cos^{-1}(\cos b_{0}\pm\kappa)\), so that the minimum separation is given by \[\Delta(\kappa)=\cos^{-1}(\cos b_{0}-\kappa)-\cos^{-1}(\cos b_{0}+\kappa).\] (S40) For heterochiral connectivity, we get \[\varepsilon_{\mathrm{het}}(k_{z})=\pm\left[\sqrt{(\sin k_{z}\pm\sinh\lambda)^{ 2}+(1+\cos b_{z}-\cos k_{z})^{2}}-\cosh\lambda\right].\] For \(b_{z}=\pi/2\), this becomes \[\varepsilon_{\mathrm{het}}(k_{z})=\pm\left[\sqrt{\cosh^{2}\lambda-2\cosh \lambda\cos(k_{z}\pm\varphi)+1}-\cosh\lambda\right],\] (S41) where \(\varphi=\tan^{-1}(\sinh\lambda)\). For \(\varepsilon=0\), we get \(\cos(k_{z}\pm\varphi)=\frac{1}{2}\cos\varphi\). 
Since \(\varphi\rightarrow\pi/2\) as \(\kappa\to 0\), the two solutions closest to \(k_{z}=0\) are given by \(k_{z,\pm}=\pm\left[\cos^{-1}\left(\frac{1}{2}\cos\varphi\right)-\varphi\right]\), so that the minimum separation is given by \[\Delta(\kappa)=2\left[\cos^{-1}\left(\frac{1}{2}\cos\varphi\right)-\varphi \right]=2\left[\cos^{-1}\left(\frac{\kappa}{1+\kappa^{2}}\right)-\tan^{-1} \left(\frac{1-\kappa^{2}}{2\kappa}\right)\right].\] (S42)
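The closed form (S20) and the recurrence \(\widetilde{R}_{\nu}=1/(\nu B_{0})-\widetilde{R}_{\nu-1}\) are also easy to verify numerically; a minimal sketch (our own check, with \(B_{0}\) set to unity so heights are dimensionless):

```python
# Sketch: Fourier-peak heights from the partial sums of Eq. (S19) versus
# the digamma closed form (S20), in units where B0 = 1.
import numpy as np
from scipy.special import digamma

def height_sum(nu, terms=10**6):
    n = np.arange(terms)
    return np.sum(1.0 / ((nu + 2 * n + 1) * (nu + 2 * n + 2)))

def height_closed(nu):
    return 0.5 * (digamma(nu / 2 + 1) - digamma((nu + 1) / 2))

for nu in range(1, 6):
    print(nu, height_sum(nu), height_closed(nu),
          1 / nu - height_closed(nu - 1))   # recurrence check: same value
```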
2306.15824
Confidence-based Ensembles of End-to-End Speech Recognition Models
The number of end-to-end speech recognition models grows every year. These models are often adapted to new domains or languages resulting in a proliferation of expert systems that achieve great results on target data, while generally showing inferior performance outside of their domain of expertise. We explore the combination of such experts via confidence-based ensembles: ensembles of models where only the output of the most-confident model is used. We assume that models' target data is not available except for a small validation set. We demonstrate the effectiveness of our approach with two applications. First, we show that a confidence-based ensemble of 5 monolingual models outperforms a system where model selection is performed via a dedicated language identification block. Second, we demonstrate that it is possible to combine base and adapted models to achieve strong results on both original and target data. We validate all our results on multiple datasets and model architectures.
Igor Gitman, Vitaly Lavrukhin, Aleksandr Laptev, Boris Ginsburg
2023-06-27T23:13:43Z
http://arxiv.org/abs/2306.15824v1
# Confidence-based Ensembles of End-to-End Speech Recognition Models ###### Abstract The number of end-to-end speech recognition models grows every year. These models are often adapted to new domains or languages resulting in a proliferation of expert systems that achieve great results on target data, while generally showing inferior performance outside of their domain of expertise. We explore the combination of such experts via confidence-based ensembles: ensembles of models where only the output of the most-confident model is used. We assume that models' target data is not available except for a small validation set. We demonstrate the effectiveness of our approach with two applications. First, we show that a confidence-based ensemble of 5 monolingual models outperforms a system where model selection is performed via a dedicated language identification block. Second, we demonstrate that it is possible to combine base and adapted models to achieve strong results on both original and target data. We validate all our results on multiple datasets and model architectures. Igor Gitman\({}^{1}\), Vitaly Lavrukhin\({}^{1}\), Aleksandr Laptev\({}^{1,2}\), Boris Ginsburg\({}^{1}\)\({}^{1}\)NVIDIA, USA \({}^{2}\)ITMO University, Russia [email protected], [email protected], [email protected], [email protected] **Index Terms**: ensembles, confidence, end-to-end, speech recognition, language identification, accent adaptation ## 1 Introduction Deep neural network (DNN) models are commonly combined in an ensemble to improve predictions' accuracy. The most widely used types of ensembles include bagging [1], boosting [2] and stacking [3]. A less popular ensembling approach is to use only the output of a single model that is deemed best for the current input. A typical way to pick the "best" output is to select the model with the highest confidence score, which provides an estimate of how likely the output is to be correct. We refer to such systems as _confidence-based ensembles_ (see Figure 1). There exists a number of applications of confidence-based ensembles to automatic speech recognition (ASR) and classification. Metze et al. [4] used confidence scores of multiple monolingual Hidden Markov Model (HMM)-based ASR models to improve language identification (LID) performance. This idea has later been applied to build a hybrid DNN-HMM multilingual ASR model where a combination of a dedicated LID block and a confidence-based model selection was used [5, 6]. Wang et al. [7] extended this further to include additional acoustic and language model scores and trained a neural network to make the final model selection. Confidence-based ensembles have also been applied to dialectal and accented speech recognition. Soto et al. [8] showed that a confidence-based hypothesis selection improves recognition results on Arabic dialects. Kukk and Alumae [9] improved accented speech LID by directly building text classification models on top of the ASR output. While there are many applications of confidence-based ensembles of HMM-based ASR systems, there is no study on using confidence to combine multiple _end-to-end_ neural ASR models. We fill this gap by building ensembles of connectionist temporal classification (CTC) [10] and recurrent neural network transducer (RNNT) [11] models (using Conformer [12] as an encoder). There are different ways to define a confidence measure for such models. A traditional approach is to use the maximum probability of the output tokens [13]. Laptev et al. 
[14] observed that using entropy-based measures helps to reduce networks' overconfidence. It is possible to train separate models for confidence estimation [15, 16], but such methods are not directly applicable to pretrained models, so we do not consider them in this paper. Instead, we focus on the collection of entropy-based measures proposed in [14] and systematically study their effect on the quality of confidence-based ensembles. We use two applications to evaluate the effectiveness of our approach. First, we consider the task of multilingual ASR. The current trend in building multilingual end-to-end ASR models is to train a single network that can recognize multiple languages [17, 18, 19]. Another popular line of work is to build mixture-of-expert systems that use separate sub-networks specializing in different languages [20, 21]. However, such techniques require significant computational resources for training and might produce worse recognition than monolingual predictors [18]. An alternative approach is to combine monolingual Figure 1: An illustration of a confidence-based ensemble applied to ASR. The same input audio is passed to all models. They run in parallel to produce an output text as well as an estimation of the output’s correctness (confidence). The confidence of each model can generally consist of multiple values (e.g., acoustic and language model scores). The model selection block can be implemented as any mapping from confidence values to the index of the “most confident” model. models into a multilingual system using a dedicated language identification (LID) block [6]. We show that confidence-based ensembles outperform dedicated LID models on long audio segments (\(>5\) seconds) and can be combined with the LID model's scores to improve predictions on short segments. We then show that it is possible to use confidence-based ensembles for models sharing the same input language. We consider the task of accent and dialect adaptation. A typical solution to this problem is to finetune an ASR model on the target data. A common issue of this approach is that the model's performance on the original domain might significantly degrade, which is known as catastrophic forgetting [22]. While many techniques have been proposed to address this issue [23, 24], they often require access to the original training data, which might not be available. Majumdar et al. [25] propose using a limited training strategy and regularized adapter modules to reduce the degradation on the source domain without access to the original training data. In this paper we show that a simple confidence-based combination of original and finetuned models achieves significantly better results. Additionally, we demonstrate that the proposed ensemble can interpolate between better performance on the source or target domain via a _runtime_ change in the weights of the confidence scores. A clear limitation of most ensemble systems is that the required computation grows linearly with each added model. To address this problem, we show that it is possible to use the output of intermediate layers for confidence estimation. We demonstrate this technique through an ensemble of models finetuned with intermediate CTC loss [26] and show that using only 4 out of 18 layers is enough for accurate model selection. We apply this technique to both CTC and Transducer models and find that it generally improves the base models' accuracy. 
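Concretely, the inference pipeline of Figure 1 reduces to a few lines. The sketch below uses hypothetical `transcribe` and `confidence` methods as stand-ins for whatever toolkit API the models expose:

```python
# Sketch of confidence-based ensemble inference (Figure 1). The model
# objects and their transcribe()/confidence() methods are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    confidence: float   # scalar confidence of the transcription

def ensemble_transcribe(models, audio, selector=None):
    """Run all models; return the output picked by the selection block."""
    hyps = [Hypothesis(m.transcribe(audio), m.confidence(audio)) for m in models]
    scores = [h.confidence for h in hyps]
    if selector is None:                        # simplest block: argmax
        best = max(range(len(models)), key=lambda i: scores[i])
    else:                                       # e.g., a trained classifier
        best = int(selector.predict([scores])[0])
    return hyps[best].text
```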
Summing up, our findings are as follows: * Confidence-based ensembles of end-to-end ASR models have better recognition accuracy than a system with a dedicated LID block on long audio segments. Our method provides approximately \(10\%\) relative word-error-rate reduction (WERR) compared to the state-of-the-art LID systems on 3 public multilingual datasets. * Our method can be used to significantly limit catastrophic forgetting during model adaptation. Compared to the constraint adaptation of [25], confidence-based ensembles show \(10\)-\(50\%\) WERR on the target data, while being \(2\%\) relatively better on the source domain. * Confidence can be reliably estimated from the output of early layers (4 out of 18) if the model was trained with intermediate CTC loss [26]. This reduces the runtime cost of adding a new model to the ensemble by \(4.5\) times. ## 2 Method An illustration of confidence-based ensembles is presented in Figure 1. This general structure allows for any vector of confidence values to be produced by each model and any non-linear mapping of the confidence vectors to the model indices to be used as a "model selection block". In our experiments we adopt a simpler pipeline. We only use a single confidence score for each model and train a logistic regression (LR) on a few audio samples to predict matching model index from the generated confidence values. ### Model selection Formally, our model selection block is defined as follows. Let's denote the set of models in the ensemble as \(\mathcal{M}=\left\{m_{k}\right\}_{k=1}^{M}\) and the set of evaluation datasets as \(\mathcal{D}=\left\{d_{i}\right\}_{i=1}^{D}\). We assume that there is a surjective mapping \(L:\mathcal{D}\rightarrow\mathcal{M}\) that defines which model is considered "correct" for which dataset. We select a small set of \(N\) utterances1 of the _training_ subset of each dataset \(d_{i}\) denoted as \(\left\{t_{i}^{j}\right\}_{j=1}^{N}\). Finally, let's define \(c_{k}(t_{i}^{j})\) as the confidence score of the model \(k\) computed on the sample \(j\) from the dataset \(i\). Then we train \(\text{LR}:\mathcal{R}^{M}\rightarrow\mathcal{M}\) to map \(x_{i}^{j}=\left[c_{1}(t_{i}^{j}),c_{2}(t_{i}^{j}),...,c_{M}(t_{i}^{j}) \right]\to y_{i}^{j}=L(d_{i})\). Footnote 1: We use \(N=100\) in our experiments, but for datasets with a lot of variability it might be better to use larger training data. Recall that logistic regression is a linear classifier that only needs to train \(n+1\) coefficients for \(n\) models. E.g., in the case of a 2-model ensemble, LR produces the following model selection rule: \(ac_{1}+bc_{2}>c\). While a more powerful non-linear functions might achieve better results, using a simple LR allows us to train model selection block with only a few audio samples and helps to avoid overfitting. ### Evaluation metrics To evaluate the accuracy of the ASR models we use a traditional word-error-rate (WER) metric. To evaluate the accuracy of the model selection within ensemble we use the _average per-dataset accuracy (A\({}_{\text{avg}}\))_ metric: \[A_{\text{avg}}=\frac{1}{D}\sum_{i=1}^{D}\frac{1}{D_{i}}\sum_{j=1}^{D_{i}} \left[\text{LR}\left(c_{1}(v_{i}^{j}),...,c_{m}(v_{i}^{j})\right)=L\left(d_{i} \right)\right]\] where \(v_{i}^{j},j=1..D_{i}\) are the _validation/test_ utterances from dataset \(d_{i}\) and \(D_{i}\) is the size of the validation/test set. \(\left[X=Y\right]\) is an indicator function that equals \(1\) if \(X=Y\) and \(0\) otherwise. 
Put simply, an average per-dataset accuracy is a mean of the model prediction accuracies across all datasets. We use \begin{table} \begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Conformer-Transducer**} & \multicolumn{3}{c}{**Conformer-CTC**} \\ **Confidence** & **LID 5 languages** & **CORAAL** & **SLR83** & **LID 5 languages** & **CORAAL** & **SLR83** \\ \hline untuned max-prob & 98.18 \(\pm\) 0.77 & 88.00 & 75.32 \(\pm\) 4.35 & 98.76 \(\pm\) 0.82 & 84.89 & 57.84 \(\pm\) 4.53 \\ default confidence & 99.13 \(\pm\) 0.44 & 94.13 & 81.85 \(\pm\) 4.73 & 99.37 \(\pm\) 0.39 & 91.39 & 77.08 \(\pm\) 5.23 \\ tuned confidence & 99.39 \(\pm\) 0.32 & 95.31 & 93.60 \(\pm\) 1.88 & 99.48 \(\pm\) 0.37 & 93.31 & 90.37 \(\pm\) 4.35 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different confidence estimation methods for ensembles of Conformer models. All the numbers in the table represent an average per-dataset accuracy on the validation subsets. “LID 5 languages” is a combination of MCV, MLS and VoxPopuli data. For the “LID 5 languages” and “SLR83” columns we report mean \(\pm\) standard deviation of results across all datasets. “untuned max-prob” confidence is defined as a product of probabilities of emitted tokens (including blanks). “default confidence” corresponds to Renyi entropy with linear normalization, mean aggregation (excluding blanks), \(T=1.0,\alpha=0.25\). “tuned confidence” is the best confidence measure, which is generally different for each dataset. this metric instead of a regular multi-class prediction accuracy in order to account for label imbalances present in some dataset combinations. E.g., multilingual datasets can be very imbalanced, but we assume that the performance on each language is of equal importance. ## 3 Experiments For all experiments we followed the same evaluation setup. We fit LR on 100 training utterances and tuned LR's hyperparameters (regularization strength and class weights in case of imbalanced datasets) maximizing the \(A_{\text{avg}}\) metric on the validation subsets of all datasets. In the following tables we report the results on the test subsets unless noted otherwise. All experiments were performed using NeMo toolkit2[27]. Footnote 2: [https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo) ### Datasets #### 3.1.1 Multilingual ASR. To evaluate the performance of our method we used 3 public multilingual ASR datasets: Mozilla Common Voice (MCV) [28], Multilingual LibriSpeech (MLS) [29] and VoxPopuli [30]. For all datasets we used standard train, validation and test splits. We refer the reader to the original papers for additional information about the used datasets. We built ensembles that supports five languages: English, Italian, Spanish, German and French. For all languages we used "Conformer Large" models from NVIDIA NGC catalog. #### 3.1.2 Accent and dialect adaptation. We tested our method on 2 public datasets: Corpus of Regional African American Language (CORAAL) [31] and Open-source Multi-speaker Corpora of the English Accents in the British Isles (SLR83) [32]. We manually split CORAAL into train, validation and test splits containing 18, 3 and 22 hours respectively. The training, validation and test sets contained audio from different speakers. For SLR83 we re-used data splits from [25]. To be able to have direct comparison with [25] we used Conformer Large models trained on LibriSpeech dataset [33] as the base for model adaptation on SLR83 datasets. 
For CORAAL, we started finetuning from Conformer Large trained on 24325 hours of English speech from NVIDIA NGC catalog. ### Impact of the confidence measure In this section we study the effect of choosing a confidence estimation method on the ensemble's quality. We evaluated Gibbs, Tsallis [34] and Renyi [35] entropies (with both linear and exponential normalization) proposed in [14] as well as the maximum probability as confidence measures. To aggregate confidence information across time steps we tested minimum, maximum, mean and product aggregation functions. Both RNNT and CTC models use a special _blank_ symbol to align input to the output. We test both including and ignoring blanks in the confidence aggregation. For all measures we tuned the softmax's temperature \(T\in[0.01,0.05,0.1,0.25,0.5,0.75,1.0,2.0,5.0,10.0]\). For the entropy-based measures we also ran a grid-search over the parameter \(\alpha\in[0.1,0.2,0.25,0.33,0.5,1.0]\). Overall, the search space spans 2960 combinations of confidence measures. While the best performing measure for each dataset is typically different, we found that using _Renyi entropy with linear normalization, mean aggregation, \(T=1.0,\alpha=0.25\) and blank symbols excluded_ tends to perform well across all evaluated tasks3. Table 1 shows the comparison of using the product of probabilities of the output tokens (untuned max-prob), Renyi entropy with the aforementioned parameters (default confidence) and the best confidence measure for each dataset (tuned confidence). The proposed default confidence always outperforms maximum probability and can be further improved if confidence hyperparameters are tuned for each dataset. Footnote 3: This measure might not be well-calibrated as it is tuned to optimize the model-selection accuracy. Due to the space constraints all subsequent results are reported with default confidence and only for Transducer models. ### Multilingual ASR In this section we evaluate the performance of the confidence-based ensembles for the multilingual ASR task. We compared our method with several state-of-the-art LID models [36, 37]. Since these models were trained to recognize much larger set of languages, we constrained them to only select the highest probability index out of the 5 evaluated languages. Table 2 shows LID accuracy and WER of the resulting systems on VoxPopuli, MLS and MCV datasets. Table 3 shows how the LID accuracy changes for different audio duration. Our experiments demonstrate that while confidence-based ensembles outperform dedicated LID models on the long audio segments (\(>5\) seconds), they underperform the best baseline for shorter durations. Last column of Table 3 shows that using a combination of confidence and LID scores as an input to logistic regression obtains significantly better predictions for the short segments. \begin{table} \begin{tabular}{l r r r} \hline \hline **LID model** & **VoxPopuli** & **MLS** & **MCV** \\ \hline _LID accuracy_ & & & \\ ECAPA-TDNN-CE [36] & 96.29 & 98.41 & 94.42 \\ XLS-R [37] & 97.63 & 98.69 & 97.47 \\ confidence-based & **98.76** & **99.69** & **98.82** \\ \hline _WER_ & & & \\ oracle & 9.44 & 5.89 & 6.05 \\ ECAPA-TDNN-CE [36] & 11.26 & 6.90 & 9.44 \\ XLS-R [37] & 10.53 & 6.78 & 7.23 \\ confidence-based & **9.57** & **6.04** & **6.24** \\ \hline \hline \end{tabular} \end{table} Table 2: LID accuracy and WER on VoxPopuli, MLS and MCV datasets. All numbers show an average of the results across all 5 languages. Confidence-based ensembles outperform all baseline LID systems. 
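For reference, one plausible implementation of this default confidence looks as follows; it assumes the linear normalization \(1-H/H_{\max}\) of [14], and treating "blank symbols excluded" as dropping frames whose top-scoring token is the blank is our reading rather than the toolkit's exact logic:

```python
# Sketch of the default confidence: Renyi entropy (alpha = 0.25, T = 1)
# with linear normalization, mean-aggregated over non-blank frames.
import numpy as np

def renyi_confidence(logits, blank_id, alpha=0.25, temperature=1.0):
    """logits: array of shape (time, vocab) from a CTC/Transducer decoder."""
    z = logits / temperature
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)                        # softmax
    h = np.log(np.sum(p ** alpha, axis=-1)) / (1.0 - alpha)   # Renyi entropy
    conf = 1.0 - h / np.log(p.shape[-1])                      # linear normalization
    keep = p.argmax(axis=-1) != blank_id                      # exclude blanks
    return float(conf[keep].mean() if keep.any() else conf.mean())

frames = np.random.randn(50, 129)     # toy input: 50 frames, 128 tokens + blank
print(renyi_confidence(frames, blank_id=128))
```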
\begin{table} \begin{tabular}{l r r r} \hline \hline **Audio duration** & **Confidence** & **XLS-R** & **Combination** \\ \hline 3 sec & 90.63 & 93.45 & 96.00 \\ 5 sec & 95.50 & 95.45 & 97.56 \\ 10 sec & 98.05 & 97.07 & 98.86 \\ 15 sec & 99.02 & 97.56 & 99.46 \\ full audio & 99.57 & 97.67 & 99.57 \\ \hline \hline \end{tabular} \end{table} Table 3: Model selection accuracy with different audio duration on VoxPopuli dataset. Only utterances with more than 15 seconds of audio were used to have the same evaluation set for all durations. “Combination” column shows that using a combination of confidence and LID scores can improve the results for short segments. ### Accent and dialect adaptation For all finetuning experiments in this section we ran training for 100 epochs on a single 16GB NVIDIA V100 GPU re-using training configuration of the base model, except for the batch size and learning rate parameters. Batch size was selected to fully utilize GPU memory and learning rate was tuned over 8 log-uniform grid points in \([10^{-7},10^{-3}]\). Table 4 shows the comparison of the confidence-based ensembles with the constraint adaptation approach of [25] on the SLR83 dataset. Our method achieves significantly better accuracy on both original and target domains. Note that for one dialect confidence-based ensemble outperforms the finetuned model. This is possible because the base model can be "incorrectly" selected on utterances where it has lower WER than the finetuned model. Table 5 shows the results of the confidence-based ensemble after adaptation on the CORAAL dataset. The last two rows of the table demonstrate an ability of the ensemble to trade-off between performance on the target and the original domain via a runtime change of the logistic regression threshold. Based on the specific requirements users can adjust the performance of the model with a simple configuration change. ### Decreasing runtime cost A clear limitation of model ensembles is that runtime cost increases linearly with ensemble size. To partially overcome this, we propose to use outputs of the intermediate layers for model selection4. Table 6 shows that it is possible to get a high-quality confidence estimate using intermediate encoder layers. We fine-tuned all models for 100 epochs on the combination of MCV, MLS and VoxPopuli datasets using intermediate CTC loss [26] applied to different layers and used the output of those layers to compute confidence scores. Following the original paper, we added the new loss with coefficient of \(0.3\). We re-used training configuration of the base models except the learning rate which was 10 times smaller. Unlike the original paper we applied this technique to both CTC and Transducer models and it generally increased models' accuracy because of the extra regularization. Surprisingly, even though WER of intermediate layers is significantly worse, using intermediate confidence has only a minor reduction in the model-selection accuracy. Footnote 4: This approach requires finetuning models and thus can only be used if the training data is available. ## 4 Conclusion In this paper we showed that confidence-based ensembles of end-to-end ASR models can improve state-of-the-art results for several speech recognition problems. Our method achieves close to \(10\%\) WERR on 3 multilingual ASR datasets compared to systems using a dedicated LID block. It also improves results on accent and dialect adaptation tasks by \(10\)-\(50\%\) WERR. 
However, there are several limitations that need to be acknowledged. Confidence-based ensembles are not well suited for latency-critical applications as they require a few seconds of audio to select the most confident model. The runtime cost grows linearly with each added model, which limits the practically useful ensemble size. Finally, given enough compute and data, it is likely possible to build specialized models that would outperform confidence-based ensembles on most tasks. Taking these limitations into account, we think that confidence-based ensembles can be useful in a wide range of applications beyond what is covered in our experiments. We consider building confidence-based ensembles as a general technique to combine multiple black-box expert models into a single system that achieves competitive results on all target domains. The models can be combined with almost no additional training and without the need to share either data or model weights. We think that these properties can enable new applications of speech technology for users who don't have access to large compute clusters or big training datasets. \begin{table} \begin{tabular}{l r r} \hline \hline **Confidence from** & **CORAAL** & **VoxPopuli** & **MLS** \\ \hline Layer 4 & 91.58 & 98.67 & 99.80 \\ Layer 9 & 93.32 & 98.08 & 99.75 \\ Layer 18 (final) & 94.13 & 98.76 & 99.69 \\ \hline \hline \end{tabular} \end{table} Table 6: Model selection accuracy using confidence estimation from different layers. All models have been finetuned with intermediate CTC loss [26]. \begin{table} \begin{tabular}{l r r r r r r r r r r r} \hline \hline **Model** & **LS other** & **Ir-M** & **Mi-F** & **Mi-M** & **No-F** & **No-M** & **Sc-F** & **Sc-M** & **So-F** & **So-M** & **We-F** & **We-M** \\ \hline base & 5.12 & 20.69 & 9.61 & 11.25 & 11.11 & 10.18 & 12.26 & 11.94 & 9.70 & 10.22 & 8.51 & 11.46 \\ finetuned & 7.85 \(\pm\) 2.28 & 9.17 & 7.51 & 8.36 & 7.60 & 6.87 & 8.32 & 6.35 & 4.14 & 3.59 & 4.67 & 6.49 \\ \hline constrained [25] & 5.40 \(\pm\) 0.05 & 15.86 & 8.40 & 9.43 & 9.33 & 8.54 & 10.00 & 8.68 & 7.73 & 7.90 & 6.64 & 9.70 \\ ensemble & **5.26 \(\pm\) 0.11** & **9.17** & **7.51** & **8.36** & **7.63** & **6.78** & **8.40** & **6.82** & **4.64** & **3.98** & **4.79** & **6.70** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of the WER for different model adaptation techniques on the SLR83 dataset. The LibriSpeech test-other WER is averaged across all speakers. We abbreviate each speaker id using the first two letters of speaker’s dialect and the first letter of gender (e.g., “Ir-M” stands for “Irish-English Male”). Our “base” model is the same as in [25], but the “finetuned” model is generally better on the target data as we do not constraint training. \begin{table} \begin{tabular}{l r r} \hline \hline **Model** & **LS other** & **VoxPopuli** & **CORAAL** \\ \hline base & 3.73 & 6.37 & 31.23 \\ finetuned & 5.25 & 10.51 & 8.29 \\ \hline ensemble default & 3.82 & 6.57 & 8.72 \\ tuned for base & 3.76 & 6.39 & 9.77 \\ tuned for target & 4.31 & 8.85 & 8.33 \\ \hline \hline \end{tabular} \end{table} Table 5: WER after model adaptation on CORAAL dataset. We trained ensemble to select “base” model for both LibriSpeech and VoxPopuli datasets and “finetuned” model for CORAAL dataset. The last two rows demonstrate capability of ensemble to prioritize original vs target domain via a runtime adjustment of logistic regression probability threshold.
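The runtime trade-off referenced in Table 5 requires no retraining. A sketch (with synthetic confidence features and an illustrative threshold value, not our experimental data) of the thresholded selection rule:

```python
# Sketch: a logistic-regression selection block whose decision threshold is
# adjusted at runtime to prioritize the base or the finetuned (target) model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# One feature vector per utterance: [conf_base, conf_finetuned];
# label 1 means "use the finetuned model" (target-domain utterances).
X = np.vstack([rng.normal([0.9, 0.6], 0.1, (100, 2)),    # base-domain data
               rng.normal([0.5, 0.8], 0.1, (100, 2))])   # target-domain data
y = np.repeat([0, 1], 100)
lr = LogisticRegression().fit(X, y)

def select_model(confidences, threshold=0.5):
    """Pick the finetuned model only if P(target) exceeds the threshold."""
    return int(lr.predict_proba([confidences])[0, 1] > threshold)

print(select_model([0.85, 0.65], threshold=0.5))   # balanced operating point
print(select_model([0.85, 0.65], threshold=0.2))   # favors the finetuned model
```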
2310.16413
Chiral forces in longitudinally invariant dielectric photonic waveguides
Optical forces can be chiral when they exhibit opposite signs for the two enantiomeric versions of a chiral molecule or particle. Such forces could eventually be used to separate enantiomers, which could find application in numerous disciplines. Here, we analyze numerically the optical chiral forces arising in the basic element of photonic integrated circuitry: a dielectric waveguide with rectangular cross-section. Such waveguides are inherently lossless, thus generating chiral forces that are invariant in the longitudinal direction and therefore enable enantiomeric separation over long (cm-scale) distances. Assuming Brownian motion in a liquid environment, we first calculate the force strength and time span needed to perform the separation of chiral nanoparticles as a function of the radii. Then we analyze the chiral forces produced by the fundamental quasi-TE guided mode in a silicon nitride waveguide and show that it can lead to enantiomeric separation via the transverse spin at short wavelengths (405 nm). At longer wavelengths (1310 nm), the proper combination of degenerate quasi-TE and quasi-TM modes would result in a quasi-circularly polarized mode with intrinsic chirality (helicity), leading to chiral gradient forces that also enable the enantiomeric separation of smaller nanoparticles. We report particle-tracking simulations where the optical force field produced by a quasi-TE and a quasi-circular mode proved to separate enantiomers within a time span of two seconds. Our results suggest the viability of enantiomeric separation using simple photonic integrated circuits, though different wavelength windows should be selected according to the nanoparticle size.
Josep Martínez-Romeu, Iago Díez, Sebastian Golat, Francisco J. Rodríguez-Fortuño, Alejandro Martínez
2023-10-25T06:58:25Z
http://arxiv.org/abs/2310.16413v1
# Chiral forces in longitudinally invariant dielectric photonic waveguides ###### Abstract Optical forces can be chiral when they exhibit opposite signs for the two enantiomeric versions of a chiral molecule or particle. Such forces could be eventually used to separate enantiomers, which could find application in numerous disciplines. Here, we analyze numerically the optical chiral forces arising in the basic element of photonic integrated circuitry: a dielectric waveguide with rectangular cross-section. Such waveguides are inherently lossless thus generating chiral forces that are invariant in the longitudinal direction and therefore enable enantiomeric separation over long (cm-scale) distances. Assuming Brownian motion in a liquid environment, we calculate first the force strength and time span needed to perform the separation of chiral nanoparticles as a function of the radii. Then we analyze the chiral forces produced by the fundamental quasi-TE guided mode in a silicon nitride waveguide and show that it can lead to enantiomeric separation via the transverse spin at short wavelengths (405 nm). At longer wavelengths (1310 nm), the proper combination of degenerate quasi-TE and quasi-TM modes would result in a quasi-circularly polarized mode with intrinsic chirality (helicity), leading to chiral gradient forces that also enable the enantiomeric separation of smaller nanoparticles. We report particle tracking simulations where the optical force field produced by a quasi-TE and a quasi-circular mode proved to separate enantiomers under a time span of two seconds. Our results suggest the viability of enantiomeric separation using simple photonic integrated circuits, though different wavelength windows should be selected according to the nanoparticle size. ## 1 Introduction Separation of enantiomers from racemic mixtures is essential in fields such as chemistry and pharmaceutics since the required performance is only exhibited by an enantiomer with a certain handedness (or chirality). Usually, methods based on chemical interactions, such as chiral High-Performance Liquid Chromatography (HPLC), are employed to separate enantiomers starting from racemic mixtures. However, such chemical methods are usually slow, expensive, and molecule-dependent [1]. An interesting alternative would be the use of optical chiral forces [2]: since the chiral force exerted by light changes its sign when acting on enantiomers of different handedness [3], it could ultimately lead to light-driven enantiomeric separation overcoming many of the limitations of chemical methods. There have been many recent theoretical and simulation works addressing the separation of enantiomers using light [4, 5, 6, 7, 8, 9, 10]. Indeed, there have been several experiments demonstrating optically-induced separation, though for relatively large nanoparticles and nanostructures [11, 12, 13]. In all cases, free-space propagating beams are used, either being reflected at dielectric interfaces [5] or interfering with other beams [7] to produce the transverse optical spin that generates the required forces. Alternatively, one may think of using guided light for separation purposes, taking advantage of the enhancement of chiral interaction because the light is confined in subwavelength cross-sections over relatively long distances (ideally infinite for a lossless waveguide). One possibility is the use of optical nanofibers with cylindrical cross-section, as recently noticed by Golat _et al._[14]. 
Another possibility would be the use of dielectric waveguides that can be created by lithography in photonic integrated circuits (PICs) and can exhibit either transverse [15] or longitudinal spin [16] for guided modes. Remarkably, such waveguides can be massively integrated into PICs and, in the case of silicon-related materials, fabricated in large volumes using low-cost processes. Recently, several approaches to separating enantiomers using integrated waveguides have been presented [17, 18, 19]. However, in all of them, the separating chiral forces are not maintained over long propagation distances, thereby missing this clear advantage of PICs over free-space approaches. In this work, we analyze the chiral separation properties of the simplest photonic integrated structure: a lossless dielectric waveguide with rectangular cross-section on a lower-index substrate. We consider silicon nitride (SiN) as the material to build the waveguide core since it is transparent from telecom wavelengths down to the ultraviolet. The refractive index of SiN is large enough to ensure tight guiding when the core lies on a silicon dioxide substrate and is surrounded by water. Remarkably, waveguides with very low propagation loss (\(<\) 1 dB/cm) can be fabricated using mature tools and processes [20]. We first calculate the order of magnitude of the chiral forces required to perform the enantiomeric separation of particles within a reasonable time, assuming Brownian diffusion of the target particles in a liquid environment. Then, we calculate numerically the electric and magnetic fields at different wavelengths from 405 to 1310 nm to obtain the optical forces using well-established equations [14]. We show that at short wavelengths (\(\approx\) 405 nm) lateral chiral forces arising from the transversal spin of the guided quasi-TE mode [15] can overcome the achiral forces and be used for the separation of chiral particles of 80 nm radius. At longer wavelengths (\(\approx\) 1310 nm), this lateral force becomes much smaller, but we can combine the quasi-TE and quasi-TM modes of the waveguide with a proper \(90^{\circ}\) phase shift between them to generate a quasi-circularly polarized (quasi-CP) mode [16]. This mode produces a large transversal chiral gradient force that could separate nanoparticles of 52 nm radius and potentially molecules with a size of the order of 1 nm. Our results suggest that long SiN waveguides are simple but realistic structures towards achieving enantiomeric separation within seconds using guided light in PICs. ## 2 Optical forces exerted on small chiral particles The electromagnetic field of light carries momentum that can be transferred to a particle through the action of an optical force and can consequently cause its motion. In this work, we restrict our study to the motion of small chiral particles (whose size is smaller than the wavelength of light) subjected to optical forces. A small particle is fully characterized by its electric dipole moment, \(\mathbf{p}\), which can be thought of as the separation of positive and negative charges, and its magnetic dipole moment, \(\mathbf{m}\), which represents the overall current loop within the particle. The time-averaged force \(\mathbf{F}\) that the electromagnetic field exerts on a small particle is [5, 14, 21, 22] \[\mathbf{F}=\frac{1}{2}\Re\Big[(\boldsymbol{\nabla}\otimes\mathbf{E})\,\mathbf{p}^{*}+\mu(\boldsymbol{\nabla}\otimes\mathbf{H})\,\mathbf{m}^{*}-\frac{ck^{4}\mu}{6\pi}\left(\mathbf{p}^{*}\times\mathbf{m}\right)\Big] \tag{1}\] where \(k\) is the wavenumber, \(c\) the speed of light, and \(\varepsilon\) and \(\mu\) the permittivity and permeability of the surrounding medium. The dipole moments induced in the particle by the local fields are \[\begin{pmatrix}\mathbf{p}\\ \mathbf{m}\end{pmatrix}=\begin{pmatrix}\varepsilon\,\alpha_{\mathrm{e}}&\mathrm{i}\sqrt{\varepsilon\mu}\,\alpha_{\mathrm{c}}\\ -\mathrm{i}\sqrt{\varepsilon/\mu}\,\alpha_{\mathrm{c}}&\alpha_{\mathrm{m}}\end{pmatrix}\begin{pmatrix}\mathbf{E}\\ \mathbf{H}\end{pmatrix} \tag{2}\] where \(\alpha_{\mathrm{e}}\), \(\alpha_{\mathrm{m}}\), and \(\alpha_{\mathrm{c}}\) are the electric, magnetic, and chiral polarizabilities of the particle. In the quasistatic limit, the polarizabilities of a chiral sphere of radius \(r\) read \[\alpha_{0\mathrm{e}}=4\pi r^{3}\,\frac{(\varepsilon_{r}-1)(\mu_{r}+2)-\kappa^{2}}{(\varepsilon_{r}+2)(\mu_{r}+2)-\kappa^{2}},\quad\alpha_{0\mathrm{m}}=4\pi r^{3}\,\frac{(\varepsilon_{r}+2)(\mu_{r}-1)-\kappa^{2}}{(\varepsilon_{r}+2)(\mu_{r}+2)-\kappa^{2}},\quad\alpha_{0\mathrm{c}}=12\pi r^{3}\,\frac{\kappa}{(\varepsilon_{r}+2)(\mu_{r}+2)-\kappa^{2}} \tag{3}\] with \(\varepsilon_{r}=\varepsilon_{p}/\varepsilon_{m}\) and \(\mu_{r}=\mu_{p}/\mu_{m}\), where \((\varepsilon_{p},\mu_{p},\kappa)\) refer to the relative permittivity, relative permeability, and chirality parameter of the particle, and \((\varepsilon_{m},\mu_{m})\) refer to the relative permittivity and permeability of the non-chiral background medium. A radiation damping term has to be added to the expressions in Eq. (3) to satisfy the conservation of energy [23]. This so-called radiative correction is often applied incorrectly in the literature for chiral particles, because it is only applied to the electric and magnetic polarizabilities, thus neglecting the correction for the chiral polarizability. However, as shown by Sersic _et al._[23], the tensor radiative correction has to be applied to the full \(6\times 6\) polarizability matrix, yielding the following expressions for the dynamic polarizabilities [14], which are used for calculating the dipole moments in Eq. (2): \[\begin{split}\alpha_{\mathrm{e}}&=\frac{\alpha_{0\mathrm{e}}-\mathrm{i}\frac{k^{3}}{6\pi}(\alpha_{0\mathrm{c}}^{2}-\alpha_{0\mathrm{e}}\alpha_{0\mathrm{m}})}{1+\left(\frac{k^{3}}{6\pi}\right)^{2}(\alpha_{0\mathrm{c}}^{2}-\alpha_{0\mathrm{e}}\alpha_{0\mathrm{m}})-\mathrm{i}\frac{k^{3}}{6\pi}(\alpha_{0\mathrm{e}}+\alpha_{0\mathrm{m}})},\\ \alpha_{\mathrm{m}}&=\frac{\alpha_{0\mathrm{m}}-\mathrm{i}\frac{k^{3}}{6\pi}(\alpha_{0\mathrm{c}}^{2}-\alpha_{0\mathrm{e}}\alpha_{0\mathrm{m}})}{1+\left(\frac{k^{3}}{6\pi}\right)^{2}(\alpha_{0\mathrm{c}}^{2}-\alpha_{0\mathrm{e}}\alpha_{0\mathrm{m}})-\mathrm{i}\frac{k^{3}}{6\pi}(\alpha_{0\mathrm{e}}+\alpha_{0\mathrm{m}})},\\ \alpha_{\mathrm{c}}&=\frac{\alpha_{0\mathrm{c}}}{1+\left(\frac{k^{3}}{6\pi}\right)^{2}(\alpha_{0\mathrm{c}}^{2}-\alpha_{0\mathrm{e}}\alpha_{0\mathrm{m}})-\mathrm{i}\frac{k^{3}}{6\pi}(\alpha_{0\mathrm{e}}+\alpha_{0\mathrm{m}})}.\end{split} \tag{4}\] With Eqs. (1)-(4) we are ready to compute the time-averaged optical force that a monochromatic electromagnetic field exerts on a small particle. 
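These polarizability expressions are straightforward to evaluate numerically. The sketch below (ours, not part of the original paper) implements Eqs. (3)-(4) in Python; the particle and medium parameters are the illustrative values used later in Section 4, and the medium-wavenumber convention \(k=2\pi\sqrt{\varepsilon_{m}\mu_{m}}/\lambda\) is our assumption.

```python
import numpy as np

def quasistatic_polarizabilities(r, eps_p, mu_p, kappa, eps_m=1.77, mu_m=1.0):
    """Quasistatic polarizabilities of a chiral sphere, Eq. (3) (units: m^3)."""
    eps_r, mu_r = eps_p / eps_m, mu_p / mu_m
    den = (eps_r + 2.0) * (mu_r + 2.0) - kappa**2
    a0e = 4.0 * np.pi * r**3 * ((eps_r - 1.0) * (mu_r + 2.0) - kappa**2) / den
    a0m = 4.0 * np.pi * r**3 * ((eps_r + 2.0) * (mu_r - 1.0) - kappa**2) / den
    a0c = 12.0 * np.pi * r**3 * kappa / den
    return a0e, a0m, a0c

def dynamic_polarizabilities(r, eps_p, mu_p, kappa, wavelength, eps_m=1.77, mu_m=1.0):
    """Radiation-corrected dynamic polarizabilities, Eq. (4)."""
    a0e, a0m, a0c = quasistatic_polarizabilities(r, eps_p, mu_p, kappa, eps_m, mu_m)
    k = 2.0 * np.pi * np.sqrt(eps_m * mu_m) / wavelength  # wavenumber in the medium
    f = k**3 / (6.0 * np.pi)
    delta = a0c**2 - a0e * a0m                            # common combination
    den = 1.0 + f**2 * delta - 1j * f * (a0e + a0m)       # shared denominator
    return (a0e - 1j * f * delta) / den, (a0m - 1j * f * delta) / den, a0c / den

# Example: the particle used throughout the paper (eps_p = 2, kappa = +/- 0.5)
for kappa in (+0.5, -0.5):
    ae, am, ac = dynamic_polarizabilities(80e-9, 2.0, 1.0, kappa, 405e-9)
    print(f"kappa = {kappa:+.1f}:  alpha_c = {ac:.3e} m^3")
```

Note that flipping the sign of \(\kappa\) flips only the sign of \(\alpha_{\mathrm{c}}\): this is ultimately why the chiral contributions to the force reverse for opposite enantiomers while the achiral contributions do not.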
More insight is gained, however, by developing the expression of the force from Eq. (1). The force can be split into several terms which depend on the following six time-averaged local field properties: electric energy density \(W_{e}\), magnetic energy density \(W_{m}\), helicity density \(\mathfrak{G}\), electric spin density \(\mathbf{S}_{e}\), magnetic spin density \(\mathbf{S}_{m}\) and complex Poynting vector \(\mathbf{\Pi}\): \[\begin{split} W_{e}&=\frac{1}{4}\varepsilon|\mathbf{E}|^{2}\ \ \left[\frac{J}{m^{3}}\right]\\ W_{m}&=\frac{1}{4}\mu|\mathbf{H}|^{2}\ \ \left[\frac{J}{m^{3}}\right]\\ \mathfrak{G}&=\frac{1}{2\omega c}\Im\left(\mathbf{E}\cdot\mathbf{H}^{*}\right)\ \ \left[\frac{J\cdot s}{m^{3}}\right]\\ \mathbf{S}_{e}&=\frac{1}{4\omega}\Im\left(\varepsilon\,\mathbf{E}^{*}\times\mathbf{E}\right)\ \ \left[\frac{J\cdot s}{m^{3}}\right]\\ \mathbf{S}_{m}&=\frac{1}{4\omega}\Im\left(\mu\,\mathbf{H}^{*}\times\mathbf{H}\right)\ \ \left[\frac{J\cdot s}{m^{3}}\right]\\ \mathbf{\Pi}&=\frac{1}{2}\mathbf{E}\times\mathbf{H}^{*}\ \ \left[\frac{W}{m^{2}}\right]\end{split} \tag{5}\] The fully developed expression of the optical force acting on a particle is shown below, split into chiral and achiral terms depending on whether or not the terms are a function of the chiral polarizability [14]: \[\begin{split}\mathbf{F}_{\text{chiral}}&=\underbrace{\omega\ \Re(\alpha_{c})\nabla\mathfrak{G}}_{\text{helicity gradient}}-\underbrace{\frac{1}{c}\Im(\alpha_{c})\nabla\times\Re\mathbf{\Pi}}_{\text{vortex}}\\ &+\underbrace{\left(\ 2k\Im(\alpha_{c})-\frac{k^{4}}{3\pi}\Re(\alpha_{e}^{*}\alpha_{c})\right)\omega\mathbf{S}_{e}}_{\text{electric spin}}\ +\underbrace{\left(\ 2k\Im(\alpha_{c})-\frac{k^{4}}{3\pi}\Re(\alpha_{c}^{*}\alpha_{m})\right)\omega\mathbf{S}_{m}}_{\text{magnetic spin}}\\ \mathbf{F}_{\text{achiral}}&=\underbrace{\Re(\alpha_{e})\nabla W_{e}}_{\text{electric gradient}}+\underbrace{\Re(\alpha_{m})\nabla W_{m}}_{\text{magnetic gradient}}\ -\ \underbrace{\omega\nabla\times(\Im(\alpha_{e})\mathbf{S}_{e}+\Im(\alpha_{m})\mathbf{S}_{m})}_{\text{spin-curl}}\\ &+\underbrace{\left(\frac{k}{c}\Im(\alpha_{e}+\alpha_{m})-\frac{k^{4}}{6\pi}\frac{1}{c}\left(\Re(\alpha_{e}^{*}\alpha_{m})+|\alpha_{c}|^{2}\right)\right)\Re\mathbf{\Pi}}_{\text{radiation pressure}}\\ &-\underbrace{\frac{k^{4}}{6\pi}\frac{1}{c}\Im(\alpha_{e}^{*}\alpha_{m})\Im\mathbf{\Pi}}_{\text{flow}}\end{split} \tag{6}\] All the forces exhibit an inherent dependency on the volume of the particle because the polarizabilities are proportional to \(r^{3}\). In addition, some of the forces, such as the electric and magnetic spin forces and the radiation pressure force, show an explicit dependency on the wavelength (or wavenumber) too, being proportional to \(r^{3}/\lambda\) or \(r^{6}/\lambda^{4}\), whereas others, such as the gradient forces, do not. Therefore, the dominance of chiral forces over achiral forces varies with particle size and wavelength. A more detailed analysis of the chiral and achiral optical forces exerted on small chiral particles is given by Golat _et al._[14]. ## 3 Optically driven motion of small chiral particles in a fluid To study under what circumstances optical enantioseparation is possible, we first need to examine the motion of particles in a fluid under the influence of an external chiral optical force field. To this end, we make the following assumptions for this system: there is no net fluid flow, and the mass of the particles is negligible so that the viscous forces dominate over the inertial forces. 
Under these considerations, the motion of the particle is driven by a combination of the external optical force, \(\mathbf{F}\), the friction or drag force due to the viscosity of the fluid (which opposes the movement of the particle), and the force arising from the stochastic collisions of the smaller fluid molecules with the particle (Brownian motion). The variation of the particle position, \(\mathbf{x}(t)\), in time is governed by the overdamped Langevin equation [24, 25] \[\underbrace{\gamma\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}}_{\text{friction}}=\underbrace{\mathbf{F}}_{\text{optical}}+\underbrace{\gamma\ \sqrt{2D}\ \mathbf{\xi}(t)}_{\text{stochastic}} \tag{7}\] where \(D=k_{B}T/\gamma\) is the particle's diffusion coefficient within the bulk of the fluid, \(k_{B}\) is the Boltzmann constant and \(T\) is the absolute temperature. Under the assumption that the particles have a spherical shape of radius \(r\), the friction coefficient can be expressed as \(\gamma=6\pi\eta r\) (Stokes' law of friction), where \(\eta\) is the dynamic viscosity of the fluid. Eq. (7) is a stochastic differential equation that can be solved numerically with the Euler-Maruyama integration scheme to track the location of the particle over time increments \(\Delta t\)[26, 27]. The solution can be expressed as: \[\begin{split}x^{(m+1)}&=x^{(m)}+\frac{F_{x}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{x,\perp}^{(m)}\Delta t}\;N_{x}(0,1)\\ y^{(m+1)}&=y^{(m)}+\frac{F_{y}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{y,\perp}^{(m)}\Delta t}\;N_{y}(0,1)\\ z^{(m+1)}&=z^{(m)}+\frac{F_{z}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{z,||}^{(m)}\Delta t}\;N_{z}(0,1)\end{split} \tag{8}\] where the upper index (\(m\)) represents the \(m-\)th instant of time, and \(N_{x/y/z}(0,1)\) represent three independent standard normal distributions with \(0-\)mean and \(1-\)variance. The force field \((F_{x}^{(m)},F_{y}^{(m)},F_{z}^{(m)})\) and the diffusion coefficients are evaluated at the location of the particle \((x^{(m)},y^{(m)},z^{(m)})\). The diffusion coefficients are modified with respect to their bulk value (\(D\)) as the particle moves near a boundary, such as the interface of the fluid with the waveguide or the walls of a microfluidic channel. This modification depends on whether the movement of the particle is perpendicular or parallel to the boundary [28]. More details are given in Appendix 7.1. Eq. (8) is used in Section 5 to follow the trajectories of individual particles throughout a microfluidic channel subjected to the force field exerted by the waveguide mode. The magnitude of the chiral optical forces required for enantiomeric separation and the needed sorting time can be estimated upon further assumptions. For that, we examine the movement of a cloud of particles under the influence of an external optical force field in the bulk, i.e. in an infinite system with no boundaries. We follow the derivation from Kravets _et al._[24]. The enantiomers are modelled as non-interacting spherical particles and are assumed to be initially mixed and distributed within a spherical cloud of diameter \(L_{0}\), as shown in Fig. 1(a). Let us assume the optical force field is uniform in space, constant in time, oriented along the \(x-\)direction, and dominantly chiral, i.e. the achiral part of the force is negligible compared to the chiral part (\(F=|F_{\mathrm{chiral}}|\gg|F_{\mathrm{achiral}}|\)). 
The chiral optical force \(F\) exerted on the particles moves each enantiomer in opposite directions, thus effectively separating the initial racemic mixture into two separate clouds of particles. Upon an interval of time \(t\), each enantiomer cloud is displaced a distance \(d_{\mathrm{opt}}=Ft/\gamma\) due to the optical force. In addition to this displacement, the radius of each enantiomer cloud is expected to increase on average by \(d_{\mathrm{B}}=\sqrt{2Dt}\) due to Brownian motion. These two simultaneous processes of motion are schematized in Fig. 1(a). The condition to achieve a separation \(\Delta x\) between the two clouds within a sorting time \(t_{\mathrm{sort}}\) gives rise to the following equation, whose derivation is shown in detail in Appendix 7.2: \[t_{\rm sort}=3\pi\eta k_{\rm B}T\frac{r}{F^{2}}\left(1+\sqrt{1+\frac{F(L_{0}+\Delta x)}{k_{\rm B}T}}\right)^{2} \tag{9}\] In the integrated waveguide system, the particles flow along a microfluidic channel and the force field is generated by a waveguide. Thus, the actual force field is not spatially uniform, and there are boundaries that enclose the region of space where the particles can move and that modify the diffusion coefficient value. However, Eq. (9) provides a quite accurate estimation of the sorting time for a longitudinally invariant dielectric waveguide-microchannel system. We assume the initial extension of the racemic mixture cloud is \(L_{0}=1\)\(\mu\)m. This is achievable with the well-known microfluidics technique called hydrodynamic flow focusing [29], where two lateral flows control the width of the middle fluid channel where the particles are suspended. The fluid is composed mainly of water at temperature \(T\)=293 K, so the dynamic viscosity is \(\eta=10^{-3}\) Pa \(\cdot\) s [24]. Fig. 1(b) shows the sorting time for particles of different radii, calculated with Eq. (9) for \(\Delta x=0\), which marks the starting point of separation. Stronger forces are needed to sort larger particles within the same sorting time. Moreover, larger particles take longer to be sorted for the same value of the force. We emphasize that this sorting time is obtained for a force field that is constant in time and uniform in space, which is not the actual situation of the force field generated by a waveguide system. In the latter case, the forces are stronger at distances closer to the waveguide and decay with the distance due to the evanescent field of the mode in the fluid. Figure 1: (a) Schematic showing the combined action of the translation of the cloud of enantiomers due to the optical force (\(d_{\rm opt}\)) and the increase in the cloud size due to Brownian motion (\(d_{\rm B}\)). After a sorting time (\(t_{\rm sort}\)), the clouds are separated by a distance of \(\Delta x\). Schematic adapted from [24]. (b) Sorting time of the enantiomer clouds as a function of the modulus of the separating optical force. Particles of different radii, from 1 nm to 1000 nm, display different sorting time curves. The usefulness of this graph is to know what range of optical force magnitude the waveguide system needs to generate to sort particles within a reasonable time: 1 ms \(-\) a few hours. We must therefore look for integrated waveguides that generate optical chiral forces within the range of \(10^{-3}\) fN\(-10^{3}\) fN for sorting particles of radii between 1 nm and 1000 nm. Once we have designed those waveguide systems, we use the particle tracking algorithm (Eq. 
(8)) to test the actual enantiomeric separation that our waveguides can reach. ## 4 Description of the integrated photonic waveguide We consider a photonic strip waveguide made with a SiN core (refractive index \(n\approx 2\)) on top of a SiO\({}_{2}\) (\(n=1.4468\)) substrate and surrounded by water (\(n=1.33\)) as the system that produces the force field responsible for the enantiomeric sorting, as shown in Fig. 2(a). The use of SiN has several practical advantages, such as transparency at visible and near-infrared wavelengths, a relatively large refractive index that ensures tight localization of the fields in the waveguide core, and its processing with a mature silicon technology to produce low-loss waveguides [20]. Moreover, SiN is particularly appropriate for applications requiring immersion in fluid, such as photonic biosensing [30]. In order to obtain the electric and magnetic fields of the guided modes, Maxwell's equations are solved in the waveguide using the finite element method implemented by the FemSIM solver in the commercial software RSoft (Synopsys). The software computes the eigenmodes of the cross-section of the waveguide system, which is assumed to be invariant along the longitudinal direction (translational symmetry along the optical axis, i.e. the \(z-\)axis). The resulting electric and magnetic fields are plugged into Eq. (6) to obtain the optical forces per amount of power guided by the mode. We assume 20 mW of power in our simulations which, despite being a high power level for integrated optics, can be attained using a standard semiconductor continuous-wave laser without causing material damage. Noticeably, the forces depend on the characteristics of the particle, which are modelled by the polarizabilities (Eqs. (3) and (4)). The particle's material is modelled with a relative permittivity of \(\varepsilon_{p}=2\), relative permeability \(\mu_{p}=1\), and chirality parameter \(\kappa=\pm 0.5\). These values have been widely used to characterize chiral nanoparticles in the literature [5, 31]. The medium (water) is modelled with the values \(\varepsilon_{m}=1.77\) and \(\mu_{m}=1\). We take into account the following two conditions for the design of the waveguide towards enantiomeric separation: (I) \(|F_{\mathrm{chiral}}|>|F_{\mathrm{achiral}}|\), to ensure that chiral forces dominate over achiral ones (although it is not a necessary condition for achieving separation); and (II) \(|F_{\mathrm{chiral}}|\sim 10^{-3}\) fN\(-10^{3}\) fN, according to our calculations from Fig. 1. After the total optical force field is computed, we use the particle tracking algorithm to track the position of an individual particle of each enantiomer for either 1 or 2 seconds throughout a hypothetical microfluidic channel surrounding the waveguide. Due to the stochastic nature of the Brownian motion, we repeat this tracking 500 times from the same initial position to perform a statistical analysis of the enantioseparation process. The initial position of all particles is at \(x=0\), at a middle height between the top surface of the waveguide and the ceiling of the microchannel. From the final positions of the particles, we calculate the enantiomer fraction (EF) for each enantiomeric cloud. More details about the particle tracking algorithm and statistical analysis can be found in Appendix 7.1. 
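As a concrete illustration of this procedure, the following Python sketch implements one tracking run per Eqs. (8) and (11). It is not the authors' code: `force` and `wall_distance` are hypothetical callbacks standing in for the interpolated optical force field and the microchannel geometry.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def wall_corrected_D(D, r, h, parallel=False):
    """Near-wall diffusion coefficients of Eq. (11); h >= r is enforced."""
    x = r / max(h, r)
    if parallel:
        return D * (1 - 9/16 * x + 1/8 * x**3 - 1/16 * x**5)
    return D * (1 - 9/8 * x + 1/2 * x**3 - 1/8 * x**5)

def track_particle(force, wall_distance, r, T=293.0, eta=1e-3,
                   x0=(0.0, 0.0, 0.0), t_end=1.0, dt=1e-5, rng=None):
    """Euler-Maruyama integration of the overdamped Langevin equation, Eq. (8).

    force(x) returns the optical force [N] at position x; wall_distance(x)
    returns the distances (hx, hy, hz) to the nearest boundary along each axis.
    """
    rng = rng if rng is not None else np.random.default_rng()
    gamma = 6.0 * np.pi * eta * r        # Stokes friction coefficient
    D = KB * T / gamma                   # bulk diffusion coefficient
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(int(t_end / dt)):
        hx, hy, hz = wall_distance(x)
        Dxyz = np.array([wall_corrected_D(D, r, hx),          # x: perpendicular
                         wall_corrected_D(D, r, hy),          # y: perpendicular
                         wall_corrected_D(D, r, hz, True)])   # z: parallel
        x += force(x) / gamma * dt + np.sqrt(2.0 * Dxyz * dt) * rng.standard_normal(3)
    return x
```

Repeating `track_particle` 500 times per enantiomer (the sign of the chiral force terms flips through \(\kappa\)) and binning the final positions yields the statistics from which the enantiomer fractions are computed (Appendix 7.1).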
## 5 Results We consider two different approaches to sorting chiral nanoparticles across the transversal plane (\(xy-\)plane) around the waveguide: the fundamental quasi-transverse electric mode (quasi-TE mode) for horizontal sorting and a quasi-circularly polarized mode (quasi-CP mode) for attractive-repulsive sorting. The electric field intensity and polarization of the quasi-TE mode and the quasi-CP mode are plotted in Fig. 2(b). A more detailed decomposition of the electric and magnetic fields for the quasi-TE, quasi-TM, and quasi-CP modes can be found in Appendix 7.3 (Fig. 6). ### Quasi-TE mode The fundamental quasi-TE mode (or TE\({}_{0}\)) in a strip waveguide is characterized by a transverse electric field mainly pointing along the horizontal direction (\(x-\)axis) and a transverse magnetic field mainly polarized along the vertical direction (\(y-\)axis). This guided mode exhibits non-zero longitudinal components of the electric and magnetic fields as a consequence of confining the wave inside a waveguide. The longitudinal component oscillates out of phase with respect to the transversal components. This results in a transverse spin [15], which can be interpreted as the quantum spin Hall effect of light [32] and gives rise to transversal chiral forces. The optical forces that a quasi-TE mode exerts on a particle are computed for a wavelength of 405 nm and a particle radius of 80 nm. The field properties that are responsible for the dominant transversal forces in this system are the electric energy density \(W_{e}\), the magnetic energy density \(W_{m}\), and the magnetic spin \(\mathbf{S}_{m}\). These field properties, together with their respective forces, are plotted in Fig. 3. The electric gradient force attracts any particle toward the sidewalls of the waveguide, whereas the magnetic gradient force repels any particle from the top of the waveguide. The magnetic spin force moves the (\(+\))-particles towards the left and the (\(-\))-particles towards the right. Over the top of the waveguide, the achiral electric and magnetic gradient forces have opposite directions, thus reducing the strength of the total achiral force. That is why the chiral force magnitude (\(\sim 4.29\) fN/mW) is of the same order as the total achiral force (\(\sim 9.19-6.02=3.17\) fN/mW), resulting in the total force along the \(x-\)axis changing sign for opposite enantiomers over the top of the waveguide, thus pushing enantiomers to opposite sides along the \(x-\)direction. The motion of 500 particles per enantiomer is tracked individually for 1 s given the total optical force field shown in Fig. 3. The microchannel dimensions (width \(\times\) height) are 3 \(\mu\)m \(\times\) 0.35 \(\mu\)m. The resulting final positions of the particles are plotted in Fig. 3 for both enantiomers. Among the 500 (\(+\))-particles, 64% end up on the left side (\(x<0\)), and 59% of the (\(-\))-particles end up on the right side (\(x>0\)), yielding enantiomer fractions (EF) of 61.0% and 62.1%, respectively. The (\(+\))-EF is calculated at \(x<0\) and the (\(-\))-EF is calculated at \(x>0\). Particle tracking simulations assuming powers stronger than 20 mW, and therefore stronger forces, suggest that the EF increases with the power injected into the mode, yielding 76% for 50 mW and 85% for 100 mW. A way to increase the strength of the chiral forces over the achiral forces is by exploiting the wavelength dependency of the forces. Some of the forces depend on the ratio \(r/\lambda\), as previously discussed for Eq. 
(6). This dependency is studied in Fig. 2(c) for the quasi-TE mode, where the total chiral and achiral forces along the \(x-\)direction, which is the sorting direction, are represented with respect to the particle radius for three wavelengths (405 nm, 780 nm, and 1310 nm). In all cases, the forces are evaluated at a point positioned at a vertical distance equal to the particle's radius over the top of the waveguide, and at a horizontal distance equal to 1/4 of the waveguide's width from the center, as shown in the inset in Fig. 2(e). That vertical distance is the minimum distance at which the particle can be located due to its size. 
Figure 2: (a) Schematic of the waveguide cross-section representing the opposite action of the total optical force exerted by the guided mode (in red) onto particles with opposed chirality. (b) Transversal electric field intensity (in color map) and polarization (in arrow or ellipse map) of the quasi-TE mode and of the quasi-CP mode. The handedness of the polarization ellipses does not change its sign throughout the cross-section. (c) Net chiral and achiral force along the \(x-\)direction that a quasi-TE mode in a SiN strip waveguide exerts on a particle, depending on its size. This is calculated for three different SiN waveguides, each one operating at a different wavelength: 405 nm, 780 nm, or 1310 nm. The SiN cross-section sizes (width \(\times\) thickness) of the waveguides are: 0.151 \(\mu\)m \(\times\) 0.139 \(\mu\)m (for \(\lambda=405\) nm), 0.292 \(\mu\)m \(\times\) 0.268 \(\mu\)m (for \(\lambda=780\) nm), and 0.495 \(\mu\)m \(\times\) 0.45 \(\mu\)m (for \(\lambda=1310\) nm). (d) Net transversal chiral and achiral forces that a quasi-CP mode in a strip waveguide exerts on a particle depending on the particle size. (e) Cross-section of the strip waveguide showing the position where the forces are evaluated for (c) and (d): at a vertical distance over the top of the waveguide equal to the radius of the particle, and at a horizontal distance equal to one fourth of the waveguide width from the center of the waveguide. 
Figure 3: Field properties (\(W_{e}\), \(W_{m}\) and \(\mathbf{S}_{m}\)) that generate the dominant forces of the quasi-TE mode in a strip waveguide (0.151 \(\mu\)m wide \(\times\) 0.139 \(\mu\)m thick) at \(\lambda\) = 405 nm. The force stemming from \(\mathbf{S}_{m}\) as well as the total optical force are represented for both chiralities of the particle (\(\kappa\) = \(\pm\) 0.5). All forces are calculated for a particle of 80 nm radius. The axes of all graphs refer to the \(x-\) and \(y-\)coordinates measured in \(\mu\)m units. The arrow map represents the transversal components of the vectorial quantities being plotted, and the colormap represents the scalar quantity or the \(z-\)component of the vectorial quantity being plotted. The particle tracking graph shows the last position of 500 particles per enantiomer inside the microchannel (3 \(\mu\)m wide \(\times\) 0.5 \(\mu\)m thick) after 1 s of motion given the total optical force field produced by the waveguide. The initial position of all particles is marked by the \(\times\) symbol. 
Fig. 2(c) shows that there is a particle size range over which chiral forces become larger than achiral forces along the \(x-\)direction: 75\(-\)138 nm for \(\lambda=405\) nm, 148\(-\)288 nm for \(\lambda=780\) nm, and 235\(-\)460 nm for \(\lambda=1310\) nm. In fact, there is a specific radius that maximizes the ratio of chiral force over achiral force within those intervals. 
In these ranges, the magnetic spin force is stronger than the achiral forces along the \(x-\)direction. However, for smaller radii, the achiral magnetic gradient becomes dominant, and for larger radii, the sum of the achiral flow force due to the imaginary part of the Poynting vector and the achiral magnetic gradient becomes dominant instead. In addition, these intervals suggest that operating at shorter wavelengths is more suitable for sorting smaller chiral particles. Another way to facilitate the horizontal chiral separation is to lower the strength of the achiral gradient forces along the \(x-\)axis. This can be achieved by making the waveguide wider. The power is thus spread over a larger area, thereby reducing the gradient of the fields and their respective achiral gradient forces along the \(x-\)axis. In addition, the smallest particle size for which chiral and achiral forces have the same magnitude reduces down to 40 nm for \(\lambda=405\) nm. This comes at the expense of reducing the strength of the chiral optical force too (\(\sim 0.484\) fN/mW), as shown in Fig. 4 for a waveguide 2.27 \(\mu\)m wide \(\times\) 0.5 \(\mu\)m thick, since the power of the mode has been distributed over a larger cross-section. 
Figure 4: The total optical force (transversal in arrow map, and longitudinal in colormap) that a quasi-TE mode in a wide strip waveguide (2.270 \(\mu\)m wide \(\times\) 0.139 \(\mu\)m thick) at \(\lambda=405\) nm exerts on an 80 nm-radius particle is shown for both chiralities of particle \(\kappa=\pm 0.5\). The particle tracking graph shows the last position of 500 particles per enantiomer inside the microchannel (3 \(\mu\)m wide \(\times\) 0.5 \(\mu\)m thick) after 2 s of motion given the total optical force field shown in the same figure. The initial position of all particles is marked by the \(\times\) symbol. 
The particle tracking simulation inside a microchannel of the same dimensions as before (3 \(\mu\)m wide \(\times\) 0.5 \(\mu\)m thick) yielded practically the same values of EF (\(\sim 61\%/\sim 62\%\)) for the (\(+\))/(\(-\))-enantiomers as with the narrower waveguide, despite the magnitude of the forces being one order of magnitude smaller. This might be because the wider waveguide allows the particle to interact with the optical force for a longer time. In spite of yielding the same enantiomeric separation capability, this wider configuration should be easier to implement experimentally due to the larger area of interaction between the mode and the particles. ### Quasi-CP mode We refer to the quasi-CP mode as a guided mode that is obtained by the superposition of the TE\({}_{0}\) mode and the TM\({}_{0}\) mode delayed by a phase shift of 90\({}^{\circ}\). As the electric field is predominantly horizontally polarized in the TE\({}_{0}\) mode and predominantly vertically polarized in the TM\({}_{0}\) mode, the combination originates a guided mode with an effective circular polarization and, therefore, local helicity [16]. The waveguide width and thickness are chosen so that the TE\({}_{0}\) mode and the TM\({}_{0}\) mode are degenerate, i.e. both modes exhibit the same effective refractive index (\(\Delta n=n_{\rm TE}-n_{\rm TM}=0\)) at the target wavelength. This degeneracy allows the circular polarization of the mode to be maintained along the waveguide. We computed the dominant achiral and chiral forces of this system for a particle of 52 nm radius and a wavelength of 1310 nm (see Fig. 5). 
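Numerically, this construction and its helicity reduce to a few lines. In the sketch below (ours), the field arrays are assumed to come from the eigenmode solver as complex arrays of shape `(3, Ny, Nx)` (one entry per Cartesian component), normalized so that both modes carry equal power.

```python
import numpy as np

def quasi_cp_fields(E_te, H_te, E_tm, H_tm):
    """Superpose the degenerate TE0 and TM0 mode fields with a 90-degree
    phase shift (the factor 1j) to form the quasi-CP mode."""
    E = (E_te + 1j * E_tm) / np.sqrt(2.0)
    H = (H_te + 1j * H_tm) / np.sqrt(2.0)
    return E, H

def helicity_density(E, H, omega, c):
    """Local helicity density G = Im(E . H*) / (2 w c) from Eq. (5)."""
    return np.imag(np.sum(E * np.conj(H), axis=0)) / (2.0 * omega * c)
```

The helicity-gradient contribution to the chiral force in Eq. (6) is then \(\omega\,\Re(\alpha_{c})\nabla\mathfrak{G}\), which can be evaluated on the same grid with `np.gradient`.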
The field of the quasi-CP mode exhibits an intrinsic local helicity \(\mathfrak{G}\), which naturally diminishes along the evanescent field beyond the waveguide core. This is a robust way of achieving a helicity gradient, which generates the predominant chiral force in this system. This force is particularly interesting because it depends only on the chiral polarizability, which means that it grows with the radius of the particle as \(r^{3}\) instead of \(r^{6}\), and does not depend explicitly on the wavelength. The total optical force exhibits opposite directions for opposite chiralities, being attractive towards the waveguide for the (\(+\))-particles (with value \(\sim 3.8\) fN/mW) and repulsive for the (\(-\))-particles (with value \(\sim 2.07\) fN/mW). In order to identify the range of sizes that might be easier to sort with the quasi-CP compound mode, we studied the dependence of the chiral forces on the particle size at the wavelength of 1310 nm. As shown in Fig. 2(d), the net transversal chiral force (along the \(x-\) and \(y-\)axes) dominates over the net transversal achiral force up to a maximum size (409 nm), where the combination of the forces due to the real and imaginary parts of the Poynting vector and the curl of the spin overtakes the helicity gradient force. Crucially, this means that the size range has no minimum radius below which the chiral force is smaller than the achiral force, as occurs for the quasi-TE mode. This means that we can expect to reach enantioseparation for smaller particles when compared to the quasi-TE mode, even down to radii around 1 nm (molecular size), making this force the most promising for sorting molecules. 
Figure 5: Field properties (\(W_{e},W_{m}\) and \(\mathfrak{G}\)) that generate the dominant forces of the quasi-CP mode in a strip waveguide (0.495 \(\mu\)m wide \(\times\) 0.450 \(\mu\)m thick). The force stemming from the gradient of \(\mathfrak{G}\) as well as the total force are represented for both chiralities of the particle (\(\kappa=\pm 0.5\)). All forces are calculated for a particle of 52 nm radius. The arrow map represents the transversal components of the vectorial quantities being plotted, and the colormap represents the scalar quantity or the \(z-\)component of the vectorial quantity being plotted. The particle tracking graph shows the last position of 500 particles per enantiomer inside the microchannel (1.5 \(\mu\)m wide \(\times\) 1 \(\mu\)m thick) after 1 s of motion given the total optical force field produced by the waveguide. The initial position of all particles is marked by the \(\times\) symbol. 
The particle tracking results inside a microchannel (1.5 \(\mu\)m wide \(\times\) 1 \(\mu\)m thick) for the system in Fig. 5 show that 61.4% of the 500 (+)-particles become 'attached' to the waveguide within a radius of 425 nm measured from the center of the waveguide, and that 90.2% of the 500 (\(-\))-particles are repelled. The (+)-EF within a radius of 425 nm is 86.4% and the (\(-\))-EF outside the radius of 425 nm is 70.0%. ## 6 Conclusions In conclusion, we have identified an opportunity for transversal enantioseparation via optical forces in photonic integrated waveguides utilizing distinct mechanisms depending on particle size and operating wavelength. To this end, we have used the most straightforward waveguide structure: a strip SiN waveguide placed on a silica substrate. 
At short wavelengths, such as around 405 nm, the magnetic spin force arising from a quasi-TE mode is in principle strong enough to sort particles larger than 80 nm radius in less than 1 s. At longer wavelengths, such as 1310 nm, the helicity gradient force stemming from a quasi-CP mode can sort particles of size down to 52 nm radius within time spans below 1 s. In contrast to other approaches using optical waveguides [17, 18, 19], our waveguides are longitudinally invariant, meaning that the optical chiral forces could be exerted over long distances (cm-scale), thus facilitating practical enantioseparation with realistic optical powers in the chip (20 mW). These findings underscore the potential of optical forces generated in integrated waveguides for facilitating enantioseparation within the specified parameters of particle size and wavelength. ## Acknowledgements The authors acknowledge financial support from the European Commission under contract EIC Pathfinder CHIRALFORCE 101046961. A. M. acknowledges partial funding from the Generalitat Valenciana under the NIRVANA grant (PROMETEO Program, CIPROM/2022/14) and F. J. R-F. acknowledges partial funding from Innovate UK Horizon Europe Guarantee (UKRI project 10045438). ## 7 Appendix ### 7.1 Particle tracking algorithm As shown in the main text, the motion of a Brownian particle immersed in a fluid and subjected to a force field \(\mathbf{F}=(F_{x},F_{y},F_{z})\) is governed by the overdamped Langevin equation. Its solution can be obtained with the Euler-Maruyama integration scheme [26, 27] to track the location of the particle in time: \[\begin{split}x^{(m+1)}&=x^{(m)}+\frac{F_{x}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{x,\perp}^{(m)}\Delta t}\;N(0,1)\\ y^{(m+1)}&=y^{(m)}+\frac{F_{y}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{y,\perp}^{(m)}\Delta t}\;N(0,1)\\ z^{(m+1)}&=z^{(m)}+\frac{F_{z}^{(m)}}{\gamma}\Delta t+\sqrt{2D_{z,||}^{(m)}\Delta t}\;N(0,1)\end{split} \tag{10}\] where \(N(0,1)\) represents a standard normal distribution with \(0-\)mean and \(1-\)variance, and the force field \(\mathbf{F}\) is evaluated at the location of the particle \((x^{(m)},y^{(m)},z^{(m)})\) at each instant of time. The time step used in all the particle tracking simulations is \(\Delta t=10\;\mu s\). The diffusion coefficients are modified from the bulk value \(D\) to account for the hydrodynamic interaction between the particle and the enclosing boundaries. The modifications are different depending on whether the particle moves along the directions parallel (\(||\)) or perpendicular (\(\perp\)) to a non-slip planar boundary [28]: \[\begin{split} D_{\perp}(h)&=D\left[1-\frac{9}{8}\left(\frac{r}{h}\right)+\frac{1}{2}\left(\frac{r}{h}\right)^{3}-\frac{1}{8}\left(\frac{r}{h}\right)^{5}\right]\\ D_{||}(h)&=D\left[1-\frac{9}{16}\left(\frac{r}{h}\right)+\frac{1}{8}\left(\frac{r}{h}\right)^{3}-\frac{1}{16}\left(\frac{r}{h}\right)^{5}\right]\end{split} \tag{11}\] where \(h\) is the distance from the center of the particle to the wall, and \(r\) is the radius of the spherical particle. We used the \(\perp-\)correction for the \(x-\) and \(y-\)directions and the \(||-\)correction for the \(z-\)direction. We do not allow the particles to come closer than a radius-distance from the wall, i.e. \(h\geq r\). This bounds the values of \(r/h\) between 0 and 1, and therefore: \(D_{\perp}\in[D/4,D]\), \(D_{||}\in[D/2,D]\). These modified coefficients are obtained for the case of a particle moving near a single planar boundary. 
However, in our waveguide system we have multiple boundaries, defined by the ceiling and sidewalls of the microchannel as well as the top and sidewalls of the waveguide core and the top of the substrate. We assume the modified expressions are still valid. We use the \(D_{\perp}\) expression for calculating the diffusion coefficient along the \(x-\) and \(y-\)directions, and the \(D_{||}\) expression along the \(z-\)direction. We take \(h\) as the distance to the nearest boundary in the direction of motion. For instance, for the calculation of \(D_{y,\perp}\) we compute \(h\) by measuring the distance from the particle position to the ceiling of the microchannel and comparing it to the distance to the substrate or to the top of the waveguide core (depending on whether the particle's position is above the substrate or above the waveguide core). Whichever distance is shorter, that is the value of \(h\). The force field used in the algorithm is obtained for a system that is not enclosed by a microchannel. However, we assume the presence of the microchannel does not modify the guided modes, and thus the force field produced by the waveguide throughout the surrounding medium (water, \(n=1.33\)). This approximation is valid due to the higher refractive index of the waveguide core (SiN, \(n\sim\)2) with respect to that of the microchannel material (SiO\({}_{2}\), \(n\sim\)1.45), which ensures the guidance of the mode along the core, and because the microchannel walls (boundaries) are separated by a distance \(\geq\lambda/1.33\) from the waveguide walls. Once the particle tracking simulation is finished, we perform a statistical analysis of the final positions of the 500 particles of each enantiomer to compute the enantiomer fraction. Given the number of (+)-particles, \(N_{+}\), and the number of (\(-\))-particles, \(N_{-}\), inside a region of space, we define the enantiomer fraction (EF) as [33]: \[\begin{split}(+)\text{-EF}&=\frac{N_{+}}{N_{+}+N_{-}}\\ (-)\text{-EF}&=\frac{N_{-}}{N_{+}+N_{-}}\end{split} \tag{12}\] Since the chiral forces separate opposite enantiomers in opposite directions, the (+)-EF and the (\(-\))-EF are calculated in different regions of space. For instance, for the quasi-TE mode the (+)-EF is calculated for the region \(x<0\) whereas the (\(-\))-EF is obtained for the region \(x>0\). For the quasi-CP mode, the (+)-EF is calculated for the region \(x^{2}+y^{2}<R^{2}\) whereas the (\(-\))-EF is obtained for \(x^{2}+y^{2}>R^{2}\), where \(R\) is an arbitrary radius that defines a circular region from the center of the waveguide. ### 7.2 Sorting time for a cloud of enantiomers in a fluid We derive the expression for calculating the time needed for a chiral optical force, \(F\), to separate two clouds of opposite enantiomers by a distance \(\Delta x\). This has been previously done by Kravets _et al._[24], but we include it here for completeness. The chiral force (uniform in space and constant in time) moves each enantiomer cloud in opposite directions a distance \(d_{\text{opt}}\) from their initial position \[d_{\text{opt}}=\frac{F}{\gamma}t \tag{13}\] where \(\gamma\) is the friction coefficient of the particle motion in the fluid. Let us assume the chiral force moves the (+)-enantiomer cloud towards the left (\(-x\) direction) and the (\(-\))-enantiomer cloud towards the right (\(+x\) direction), following the schematic in Fig. 1(a). Therefore, the separation between the centers of mass of both clouds has increased by \(2d_{\text{opt}}\). 
In addition to this displacement, the radius of each enantiomer cloud is expected to increase on average by an extra distance \(d_{\text{B}}\) from its initial extension \(L_{0}/2\) due to Brownian motion \[d_{\text{B}}=\sqrt{2Dt} \tag{14}\] where \(D=k_{\text{B}}T/\gamma\) is the diffusion coefficient, \(k_{\text{B}}\) is the Boltzmann constant and \(T\) is the absolute temperature of the fluid. These two simultaneous processes of motion are schematized in Fig. 1(a), where by comparing the defined lengths one can see that the condition to achieve a separation of \(\Delta x\) in a sorting time \(t\) is given by: \[2d_{\text{opt}}=L_{0}+2d_{\text{B}}+\Delta x \tag{15}\] \[2\frac{F}{\gamma}t=L_{0}+2\sqrt{2D}\sqrt{t}+\Delta x \tag{16}\] This equation can be solved as a second-degree equation in the variable \(\sqrt{t}\), whose solution is \[\sqrt{t_{\rm sort}}=\frac{\gamma}{F}\sqrt{\frac{D}{2}}\left(1+\sqrt{1+\frac{F(L_{0}+\Delta x)}{D\gamma}}\right) \tag{17}\] where we have taken the positive square root solution to avoid an unphysical negative time. By squaring this last expression and substituting \(D=k_{B}T/\gamma\) and \(\gamma=6\pi\eta r\), the sorting time is obtained: \[t_{\rm sort}=3\pi\eta k_{\rm B}T\frac{r}{F^{2}}\left(1+\sqrt{1+\frac{F(L_{0}+\Delta x)}{k_{\rm B}T}}\right)^{2} \tag{18}\] ### 7.3 Electric and magnetic fields of the guided modes
2304.05296
EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous Visual Hulls
3D reconstruction from multiple views is a successful computer vision field with multiple deployments in applications. The state of the art is based on traditional RGB frames that enable optimization of photo-consistency across views. In this paper, we study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency, as well as by the biological evidence that eyes in nature capture the same data and still perceive 3D shape well. The foundation of our hypothesis that 3D reconstruction is feasible using events lies in the information contained in the occluding contours and in the continuous scene acquisition with events. We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of the apparent contour of an object. We represent ACE by a spatially and temporally continuous implicit function defined in the event x-y-t space. Furthermore, we design a novel continuous Voxel Carving algorithm enabled by the high temporal resolution of the Apparent Contour Events. To evaluate the performance of the method, we collect MOEC-3D, a 3D event dataset of a set of common real-world objects. We demonstrate the ability of EvAC3D to reconstruct high-fidelity mesh surfaces from real event sequences while allowing the refinement of the 3D reconstruction for each individual event.
Ziyun Wang, Kenneth Chaney, Kostas Daniilidis
2023-04-11T15:46:16Z
http://arxiv.org/abs/2304.05296v1
# EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous Visual Hulls ###### Abstract 3D reconstruction from multiple views is a successful computer vision field with multiple deployments in applications. The state of the art is based on traditional RGB frames that enable optimization of photo-consistency across views. In this paper, we study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency, as well as by the biological evidence that eyes in nature capture the same data and still perceive 3D shape well. The foundation of our hypothesis that 3D reconstruction is feasible using events lies in the information contained in the occluding contours and in the continuous scene acquisition with events. We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of the apparent contour of an object. We represent ACE by a spatially and temporally continuous implicit function defined in the event x-y-t space. Furthermore, we design a novel continuous Voxel Carving algorithm enabled by the high temporal resolution of the Apparent Contour Events. To evaluate the performance of the method, we collect MOEC-3D, a 3D event dataset of a set of common real-world objects. We demonstrate EvAC3D's ability to reconstruct high-fidelity mesh surfaces from real event sequences while allowing the refinement of the 3D reconstruction for each individual event. The code, data and supplementary material for this work can be accessed through the project page: [https://www.cis.upenn.edu/~ziyunw/evac3d/](https://www.cis.upenn.edu/~ziyunw/evac3d/). ## 1 Introduction Traditional 3D reconstruction algorithms are frame-based because common camera sensors output images at a fixed frame rate. The fixed frame rate assumption challenges researchers to develop complex techniques to handle undesirable situations arising from the discontinuity between frames, such as occlusions. Therefore, recovering the association between views of the same object has been an essential problem in 3D reconstruction with a single camera. Such challenges fundamentally arise from the discrete time sampling of visual signals, which forces vision algorithms to recover the missing information between views. However, these problems do not exist naturally in biological systems because visual signals are encoded as a stream of temporally continuous spikes. Continuous encoding tremendously benefits humans and animals in many tasks, including estimating the 3D geometry of an object. The question is: _can a computer vision algorithm do better if it sees the same continuous world as humans do?_ In this work, we seek the answer to this question by developing a novel algorithm for bio-inspired event-based cameras. Event-based cameras are novel visual sensors that generate a continuous stream of events triggered by the change of the logarithm of the light intensity. The events are asynchronously generated without temporal binning; therefore, the high-resolution temporal information can be completely recovered from each event with minimal discontinuity. Additionally, the individual pixels of the camera do not share a global shutter speed, which gives the camera an extremely high dynamic range. Due to their high dynamic range and high temporal resolution, event cameras have become an ideal choice for understanding fast motions. For 3D reconstruction, traditional cameras operate on a fixed frame rate. 
For image-based visual hull methods, the limited number of views means the smooth surfaces of the object cannot be properly reconstructed, which can be seen from the sphere reconstruction example in Figure 2. 
Figure 2: Reconstruction of a sphere with visual hull (6 and 12 frames) and with EvAC3D reconstruction on simulated events. The 12-view visual hull method uses roughly the same number of operations as EvAC3D. 
Ideally, one would expect to directly perform incremental updates to the geometry of an object at the same high temporal resolution as events. To this end, we propose a 3D reconstruction pipeline that directly predicts a mesh from a continuous stream of events, assuming a known camera trajectory from a calibrated camera. We introduce a novel concept of **Apparent Contour Events** to define the boundary of an object in the continuous x-y-t space. Through Apparent Contour Events, we incrementally construct the function of a 3D object surface at the same high temporal resolution as events. 
Figure 1: **EvAC3D Pipeline.** We use the previous N events as conditional information to predict the label for the current event. A hyper network is used to inject the conditional information into the decoding classifier. The predicted label is then passed into a geometry-based volume event carving algorithm. 
Here is a list of our main contributions: 
* We introduce a novel event concept of **Apparent Contour Events** that relates high-speed events to the rays tangent to the 3D surface from the viewpoint. 
* We propose a learning pipeline to predict which events are Apparent Contour Events without manual annotation, using 3D models of known objects. We propose a novel event-based neural network with point-based decoding to classify the Apparent Contour Events. 
* We present a continuous algorithm to reconstruct an object directly from a stream of events. The algorithm can accurately reconstruct objects with complex geometry in both synthetic and real environments. 
* We collect MOEC-3D, a high-quality real 3D event dataset for evaluating the performance of 3D reconstruction techniques from events that provides events, ground-truth 3D models, and ground-truth camera trajectories. 
## 2 Related Work **3D Reconstruction with Event Cameras** Due to the asynchronous and sparse nature of the event sensors, 3D reconstruction algorithms cannot be directly applied. Most current work in event-based 3D reconstruction uses a stereo pair of cameras [13, 4, 28, 29]. The time coincidence of the events observed from a synchronized pair of cameras is used for stereo matching. These methods work in situations where multiple calibrated cameras are used synchronously. Zhu et al. [29] construct a cost volume based on warping using multiple disparity hypotheses. Carneiro et al. [4] use time incidence between two synchronized event streams to perform stereo matching. Chaney et al. [5] use a single event camera in motion to learn the 3D structure of the world. E3D [2] attempts to directly predict meshes from multi-view event images. This method is trained and mainly 
#### 3.0.1 3D Reconstruction from Occluding Contours. Reconstruction from the perspective projection of a geometric surface has been extensively studied in classical computer vision. Among different geometric representations used in such problems, apparent contour representation is most relevant to our work. Apparent contours, or extreme boundaries of the object, contain rich information about a smooth object surface. Barrow et al. [1] argue that surface orientations can be directly computed from a set of apparent contours in line drawings. Cipolla et al. [7] propose the theoretical framework from reconstructing a surface from the deformation of apparent contours. Based on the idea of contour generator [16], the projection of the apparent contours onto the image plane are used as tangent planes to the object. Furthermore, structure and motion can be simultaneously recovered from apparent contours. Wong et al. [26] propose to solve the camera poses and 3D coordinates of "frontier points", the intersection of the apparent contours in two camera views. A circular motion with a minimum of 3 image frames is assumed to solve the optimization problem. #### 3.0.2 Visual Hull. Visual hull is used to reconstruct 3D objects through Shape-From-Silhouette (SFS) techniques [3, 14]. Information from multiple views are aggregated into a single volume through intersection of the projective cones described by the silhouette at each view. Voxel grid and octrees [12, 23] are commonly used as discretized volumetric representations. SFS methods are particularly susceptible to false-negative labels (labeling an interior point as an exterior point). Figure 3: Qualitative comparisons between EvAC3Dand mask based carving of 12 and 24 views respectively. Cheetah, hippo, and elephant were chosen as a subset of the animal scans. ## 3 Method In this section, we explain how a continuous stream of events can be used to reconstruct the object surface. We divide the pipeline into two stages: **Apparent Contour Event Extraction** and **Continuous Volume Carving**. ### Apparent Contour Event (ACE) The main challenge in reconstructing objects from events is finding the appropriate geometric quantities that can be used for surface reconstruction. In frame-based reconstruction algorithms, silhouettes are used to encode the rays from the camera center to the object. However, computing silhouettes requires integrating frame-based representations, which limits the temporal resolution of the reconstruction updates. Additionally, since events represent the change in log of light intensity, events are only observed where the image gradients are nonzero. Therefore, one would not observe enough events on a smooth object surface. These two facts combined make traditional silhouettes non-ideal for events. To address these two shortcomings, we introduce **Apparent Contour Events (ACE)**, a novel representation that encodes the object geometry while preserving the high temporal resolution of the events. Geometrically, the generator of occluding contours on image planes is constrained by a ray-surface intersection and tangency [9]. A smooth surface \(\mathcal{S}\) with well defined surface normals at each point has an occluding contour generator for each camera center \(\mathbf{{}^{w}p_{e}}\). The contour generator is composed of image rays Figure 4: Qualitative evaluations from ShapeNet using EvAC3D on three categories of objects. that intersect the \(S\) at exactly one point \(\mathbf{{}^{w}X}\). 
A surface point \(\mathbf{{}^{w}X}\) with normal \(\mathbf{{}^{w}n}\) is included in the contour generator for the camera center \(\mathbf{{}^{w}p_{c}}\) if, for an image ray \(\mathbf{{}^{w}v}\), the ray-surface intersection and tangency constraints hold [9]: \[\lambda\,\mathbf{{}^{w}v}+\mathbf{{}^{w}p_{c}}=\mathbf{{}^{w}X} \tag{1}\] \[\mathbf{{}^{w}n^{\intercal}}(\mathbf{{}^{w}X}-\mathbf{{}^{w}p_{c}})=0 \tag{2}\] We now define Apparent Contour Events formally. ACEs are events that meet the ray-surface intersection and tangency constraints [9]. Since each event can carry a potentially unique timestamp, the constraints must be expressed in continuous time, as opposed to being indexed on a per-frame basis. An event \(e_{i}\) generates an image ray \(\mathbf{{}^{c}x}(t)\) at a camera center \(\mathbf{{}^{w}p_{c}}(t)\). \(e_{i}\) is an ACE if for some point \(\mathbf{{}^{w}X}(t)\) on the surface \(\mathcal{S}\): \[\mathbf{{}^{w}n}(t)^{\intercal}(\mathbf{{}^{w}X}(t)-\mathbf{{}^{w}p_{c}}(t))=0 \tag{3}\] \[\lambda(t)\,\mathbf{{}^{w}R_{c}}(t)\,\mathbf{{}^{c}x}(t)+\mathbf{{}^{w}p_{c}}(t)=\mathbf{{}^{w}X}(t) \tag{4}\] Intuitively, the ACEs can be seen as the set of events \(e_{i}=\{x_{i},y_{i},t_{i},p_{i}\}\) that belong to the active contour of the object at time \(t_{i}\). Due to the contrast between the active contour of an object and the background, a significant number of events are generated around the contour. Unlike silhouettes, which require filling in holes on the "eventless" areas of an integrated image, an ACE is defined purely on events. Traditional algorithms are limited by the frame rate of the input images: projecting rays only through the contours of a sparse set of frames produces far fewer ray intersections. With events, we can shoot a ray for each event, which continuously refines the geometry around the active contour, as shown in Figure 1. To fully take advantage of the continuous nature of the events, we design a novel continuous volume carving algorithm based on single events, as described in Section 3.3. 
Figure 5: Comparison of different trajectories in simulation with ShapeNet. The circular and octahedral trajectories only move around major axes, missing some contours that would improve the carving results. In comparison, the random trajectory samples evenly across the sphere, providing more unique viewpoints. 
### Learning Apparent Contour Events We formulate the identification of Apparent Contour Events (ACEs) as a classification problem. In other words, the network learns a function \(F_{E_{t_{i}}}\), which maps an event to whether it is an ACE, conditioned on the history of events \(E_{t_{i}}\). For an event \(e_{i}=\{x_{i},y_{i},t_{i},p_{i}\}\), we encode the past \(N\) events using a function \(g_{\phi}\) as a \(K\)-dimensional latent vector \(C_{i}\in\mathbf{R}^{K}\), where \(K\) is a hyperparameter: \[C_{i}=g_{\phi}\left(\{e_{j}\coloneqq(x_{j},y_{j},t_{j},p_{j}):\max(i-N,0)<j\leq i\}\right) \tag{5}\] \(N\) is a hyperparameter that specifies the history of events used as the conditional input to the classification problem. The ACE classification problem is modeled as a function that maps the latent code and an event to the probability that it is an ACE: \[q_{i}=f_{\theta}(e_{i},C_{i}) \tag{6}\] \[e_{i}\coloneqq(x_{i},y_{i},t_{i},p_{i}) \tag{7}\] \[q_{i}\in[0,1] \tag{8}\] We use a neural network to parameterize the functions \(g_{\phi}\) and \(f_{\theta}\). Note that \(g_{\phi}\) takes a list of \(N\) events. 
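Training such a classifier requires ground-truth ACE labels which, as stated in the contributions, come from known 3D models rather than manual annotation. A toy version of the tangency test of Eqs. (3)-(4) against an analytic sphere (our simplification; the actual labels are generated from scanned meshes) can be written as:

```python
import numpy as np

def label_ace_sphere(origins, dirs, center, radius, tol=1e-3):
    """Label back-projected event rays as ACEs against a sphere.

    A ray (origin o, unit direction v) satisfies the tangency constraint when
    its perpendicular distance to the sphere center equals the radius, i.e.,
    it grazes the surface at exactly one point. origins/dirs: (N, 3) arrays.
    """
    oc = center[None, :] - origins               # vectors from origins to center
    t = np.einsum('ij,ij->i', oc, dirs)          # closest-approach parameter
    foot = origins + t[:, None] * dirs           # closest point on each ray
    dist = np.linalg.norm(center[None, :] - foot, axis=1)
    return (t > 0) & (np.abs(dist - radius) < tol)  # graze in front of the camera
```

For a general mesh, the test is analogous: the event's back-projected ray must touch the surface at a point whose normal is orthogonal to the ray, which produces the labels \(\hat{q}_{i}\) used below.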
To supervise the ACE network, we jointly optimize the encoder and the event decoder using a binary cross-entropy loss directly on the predicted event labels. \[\mathcal{L}_{c}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{bce}(f_{\theta}(e_{i},C_{i}),\hat{q}_{i}) \tag{10}\] Here \(\hat{q}_{i}\) is the ground truth event label for \(e_{i}\). In practice, the labels are extremely imbalanced, especially in low-light conditions (high noise-to-signal ratio) or in scenes where other objects are moving as well. For training, we sample equal numbers of positive and negative events to help overcome the label imbalance. **Architecture.** To enable classification of individual events, we adopt an encoder-decoder architecture where the decoder maps event coordinates to probabilities. These types of architectures are widely used in learning-based single-view 3D reconstruction methods. Mapping approaches such as AtlasNet [10] and implicit approaches (Occupancy Networks [17], DeepSDF [19]) all use variants of this architecture. In our experiments, we find that the decoder part of the network has a larger influence on the overall mapping performance. Rather than taking a fixed-size latent code, we inject the conditional information directly into the weights of the decoder, following [11, 17, 25, 24, 18]. We use Conditional Batch Normalization to inject the encoding of the prior events into the Batch Normalization layers of the decoder network. The architecture of the network is illustrated in Figure 1. The training details and hyperparameters of the network can be found in the Supplementary Material. ### Event Based Visual Hull In frame-based shape-from-silhouette and space carving approaches, the goal is to recover the visual hull, defined as the intersection of the visual cones formed by the apparent contour in each frame. A better definition, though, following the original definition by Laurentini [14], would be the largest possible volume consistent with the tangent rays arising from apparent contour events. The visual hull is always a superset of the object and a subset of the convex hull of the object. Due to the continuity of the camera trajectory and the high temporal sampling of events, we expect the obtained visual hull to be tighter to the object than the visual hull obtained from a sparse set of viewpoints, which might be closer to the convex hull of the object. #### 3.3.1 Continuous Volume Carving. Our carving scheme provides smooth, continuous, incremental changes to the carving volume. This is accomplished by performing carving updates only at ACEs, which makes the method more computationally efficient, as shown in Table 3. ACEs are defined by the tangent rays to the surface at any given camera position. The temporal resolution of event-based cameras provides ACEs from continuous viewpoints along the trajectory of the camera. These continuous viewpoints, \(C(t)\), move around the object in the world frame \(W\). Projecting an individual ACE into the voxel grid coordinate system gives us a ray with origin \({}^{V}R_{W}\,^{w}\mathbf{p}_{c}(t)+{}^{V}\mathbf{t}_{W}\) and direction \({}^{V}R_{W}\,^{w}\mathbf{R}_{c}(t)\,^{c}\mathbf{x}(t)\). The ray in the voxel grid coordinate system allows us to project through the volume, as illustrated in Figure 6(a). To efficiently traverse this volume, a 3D Bresenham algorithm is used to produce a set of voxel coordinates, \(\mathcal{V}_{i}\in\mathbb{Z}^{3}\), along the ray. All voxels in \(\mathcal{V}_{i}\) are incremented. The interior of the object is left empty, as the rays trace along the continuous surface of the object. This can be seen at the bottom of Figure 1. Algorithm 1 covers the process of generating updates to the voxel grid for each individual event. The object's mesh is then extracted from the volume (algorithm in the Supplementary Material) and optimized.
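As a sketch of this per-event update (our own simplified illustration; we substitute a uniform-step walk for the paper's 3D Bresenham traversal, and all names are ours):

```python
import numpy as np

def carve_event(vol, origin, direction, step=0.5):
    """Increment every voxel along a single ACE ray.
    vol: (D, D, D) counter volume; origin, direction: ray in voxel coords."""
    d = direction / np.linalg.norm(direction)
    n_steps = int(np.ceil(3 * vol.shape[0] / step))     # enough to cross the grid
    pts = origin[None, :] + np.arange(n_steps)[:, None] * step * d[None, :]
    idx = np.unique(np.round(pts).astype(int), axis=0)  # deduplicate voxels
    ok = np.all((idx >= 0) & (idx < np.array(vol.shape)), axis=1)
    ix, iy, iz = idx[ok].T
    vol[ix, iy, iz] += 1  # rays accumulate at the surface, leaving the interior empty
    return vol
```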
Figure 6: (a) Illustration of carving based on Apparent Contour Events. (b) ROC curve of the ACE classification network. ### Global Mesh Optimization The mesh reconstructed from volume carving can be affected by noise from either pose estimation or the sensor. Specifically, the object will look "smaller" if some rays erroneously carve into the object due to noise. Consequently, we optimize the consistency between the proposed mesh and the high-confidence cells in the vacancy volume, which we call "high-confidence" surface points. We propose a global optimization to further refine the mesh based on these points. Recall that most rays intersect at the surface of the object. Define the point set \(\hat{Y}\) as the set of all high-confidence surface points of \(V(x,y,z)\): \[\hat{Y}=\{(x,y,z):V(x,y,z)>\epsilon_{V}\} \tag{11}\] where \(\epsilon_{V}\) is a threshold based on the carving statistics of volume \(V\). A mesh reconstructed by running Marching Cubes is represented as a graph \(G=(P,E)\), where \(P\) is the set of vertices and \(E\) is the set of edges that form the faces. A deformation function \(f\) maps the original vertex set \(P\) to a deformed set \(P^{\prime}=f(P)\). We first optimize a one-sided Chamfer distance from the high-confidence surface points (less than \(\epsilon_{d}\) away) to the mesh vertices. In addition, we regularize the mesh by a graph Laplacian loss. The final objective can be written as: \[L_{rf}=\lambda_{1}\frac{1}{|P^{\prime}|}\sum_{\begin{subarray}{c}p^{\prime}_{i}\in P^{\prime}\\ ||p^{\prime}_{i}-\hat{y}||_{2}<\epsilon_{d}\end{subarray}}\min_{\hat{y}\in\hat{Y}}||p^{\prime}_{i}-\hat{y}||_{2}^{2}+\lambda_{2}\frac{1}{|P^{\prime}|}\sum_{p^{\prime}_{i}\in P^{\prime}}\frac{1}{|\mathcal{N}(p^{\prime}_{i})|}\sum_{p^{\prime}_{j}\in\mathcal{N}(p^{\prime}_{i})}||p^{\prime}_{j}-p^{\prime}_{i}||_{2}\] where \(\mathcal{N}(p^{\prime}_{i})\) represents all neighbors of a node \(p^{\prime}_{i}\), and \(\lambda_{1}\) and \(\lambda_{2}\) are the weights between the two losses. We find a function \(f\) that minimizes the loss; \(f\) can be treated as a point-wise translation of the vertices. All values used in this optimization come from our predictions without using the ground truth. We use the Adam optimizer to optimize the warping function \(f\).
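A minimal PyTorch sketch of this refinement (our own variable names and hyperparameter values; we compute the gated one-sided term from mesh vertices to the point set, and the paper's exact settings are in its supplementary material):

```python
import torch

def refine_mesh(verts, neighbors, y_hat, eps_d=0.05, lam1=1.0, lam2=0.1, iters=200):
    """Optimize a per-vertex translation f so the mesh agrees with the
    high-confidence surface points Y_hat (one-sided Chamfer term) while
    staying smooth (graph Laplacian term).
    verts: (V, 3) Marching Cubes vertices; neighbors: list of V index tensors;
    y_hat: (M, 3) high-confidence points from the carved volume."""
    offset = torch.zeros_like(verts, requires_grad=True)  # the warp f
    opt = torch.optim.Adam([offset], lr=1e-3)
    for _ in range(iters):
        p = verts + offset
        d = torch.cdist(p, y_hat).min(dim=1).values       # distance to nearest point
        mask = d < eps_d                                  # gate by eps_d
        chamfer = (d[mask] ** 2).mean() if mask.any() else d.sum() * 0.0
        lap = torch.stack([(p[nb] - p[i]).norm(dim=1).mean()
                           for i, nb in enumerate(neighbors)]).mean()
        loss = lam1 * chamfer + lam2 * lap
        opt.zero_grad(); loss.backward(); opt.step()
    return (verts + offset).detach()
```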
## 4 Experiments In this section, we present the data collection details, the evaluation of the carving algorithm, and the reconstruction of real objects. To better evaluate the performance of event-based 3D reconstruction algorithms, we collect the Multi Object Event Camera Dataset in 3D (MOEC-3D), a 3D event dataset of real objects. Please refer to the Supplementary Material for details about the dataset. For ground truth models, an industrial-level Artec Spider scanner is used to provide the ground truth 3D models with high accuracy. The detailed steps of data collection can be found in the Supplementary Material. Figure 7: (Left) Comparison between options available within the optimizer. (Right) Test performance of event-based carving using predicted ACEs from our network. ### Evaluation Metrics We report both the Chamfer distance and the cosine similarity of the mesh compared to the ground truth model. The Chamfer distance measures the average distance between two point clouds, which reflects the positional accuracy of the reconstruction. It is defined as: \[CD(X,\hat{X})=\frac{1}{|X|}\sum_{x\in X}\min_{\hat{x}\in\hat{X}}||x-\hat{x}||_{2}+\frac{1}{|\hat{X}|}\sum_{\hat{x}\in\hat{X}}\min_{x\in X}||\hat{x}-x||_{2} \tag{12}\] where \(X\) and \(\hat{X}\) represent the points sampled from the reconstruction and the ground truth model. The surface normal is also a commonly used metric for comparing the geometry of two meshes. We report the average cosine similarity between the corresponding surface samples of the two meshes, which is defined as: \[Cos.Sim(X_{gt},X_{pred})=\frac{1}{|X_{gt}|}\sum_{i\in|X_{gt}|}|\vec{n_{i}}\cdot\vec{m}_{\theta(x_{i},X_{pred})}| \tag{13}\] \[\theta(x,X\coloneqq\{(\vec{y_{j}},\vec{m_{j}})\})=\operatorname{arg\,min}_{j\in|X|}||x-y_{j}||_{2}^{2} \tag{14}\] We use the closest point to approximate the correspondence between two sets of oriented samples, similar to the min used in Equation 12. We use a k-nearest-neighbor search to estimate the normals of sampled points on the mesh, where \(k\) is 300.
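Both metrics reduce to nearest-neighbor queries; below is a short sketch (our own code, assuming unit-length normals have already been estimated):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_cosine(x_pred, m_pred, x_gt, n_gt):
    """Eq. 12-14 sketch: symmetric Chamfer distance plus cosine similarity
    between oriented samples; nearest neighbors provide the correspondence
    theta. Normals m_pred / n_gt are assumed to be unit length."""
    t_pred, t_gt = cKDTree(x_pred), cKDTree(x_gt)
    d_pg, _ = t_gt.query(x_pred)        # prediction -> ground truth
    d_gp, j = t_pred.query(x_gt)        # ground truth -> prediction
    chamfer = d_pg.mean() + d_gp.mean()
    cos_sim = np.abs(np.sum(n_gt * m_pred[j], axis=1)).mean()
    return chamfer, cos_sim
```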
### Evaluating Carving Algorithm To test the effectiveness of our continuous carving algorithm, we utilize the meshes collected as part of this dataset within a simulation environment for fair comparisons. Note that the evaluation is done with real objects, and we assume the ACEs are known at every point during the camera motion. This is different from the completely synthetic environment employed in [2], because events generated with an event simulator are not guaranteed to have the same distribution as real events. We observe a significant amount of noise in the real event data. The quantitative results are provided in Table 1. The mask-based carving is done with ground truth masks as well for fair comparison. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{4}{c|}{**Chamfer Distance\(\downarrow\)**} & \multicolumn{3}{c}{**Normal Consistency\(\uparrow\)**} \\ Category & EvAC3D & Mask-24 & Mask-12 & EvAC3D & Mask-24 & Mask-12 \\ \hline Mustard & **3.0164** & 4.6210 & 5.5161 & **0.9619** & 0.9034 & 0.9035 \\ Coffee & **2.1439** & 2.2926 & 3.3019 & 0.9826 & **0.9893** & 0.9877 \\ Soda (b) & 1.5231 & **1.4601** & 2.0635 & **0.9834** & 0.9717 & 0.9735 \\ Jello (s) & **0.9657** & 2.1973 & 4.3524 & **0.9801** & 0.9766 & 0.9234 \\ Jello (b) & 5.9083 & **4.4409** & 7.7409 & 0.8843 & **0.9541** & 0.8952 \\ Tuna & **3.2633** & 3.7045 & 4.3070 & 0.9598 & **0.9665** & 0.9644 \\ Soup & **1.6513** & 2.0130 & 2.8556 & 0.9653 & **0.9705** & 0.9681 \\ Sugar & **0.8651** & 2.4071 & 4.9491 & **0.9935** & 0.9862 & 0.9405 \\ Vitamin & 2.6190 & **1.4478** & 2.4836 & 0.9683 & **0.9947** & 0.9896 \\ Spam & **2.0398** & 3.1969 & 4.8615 & 0.9739 & **0.9760** & 0.9479 \\ \hline Mean & **2.4267** & 3.2652 & 4.3856 & **0.9487** & 0.9377 & 0.9159 \\ \hline \hline \end{tabular} \end{table} Table 1: **Event Carving Evaluation** This table contains the results using ground truth Apparent Contour Events (ACEs). Chamfer distance (lower is better) is reported in \(10^{-3}m\) (millimeters). Normal consistency (higher is better) is reported as the cosine similarity between the ground truth and predicted surface normals. We sample 10,000 points uniformly both on the reconstructed mesh and the object mesh. In addition to the real-data simulation above, we show ShapeNet examples (Figure 4) of our algorithm on objects with more complicated geometry to provide context for our reconstruction quality. To use these models, we use Open3D [27] to capture high-frame-rate images and ground truth masks. These images were then processed through ESIM [8] to generate a set of simulated events. To generate a close approximation of the real-world dataset, a trajectory similar to the real-world dataset was chosen. ### Reconstructing Real Objects Many network-based methods only work on simulated datasets because they require a large number of labeled object-level 3D models. In addition, such networks cannot easily be adapted to work on real data. In comparison, the EvAC3D network can be trained on a small set of data because the labels can be obtained geometrically for events. We report the per-class performance evaluation in Table 2. For each object, we evaluate on an unseen sequence withheld from the training set. EvAC3D uses apparent contour events from the network output to perform carving. For baseline comparisons, we train two separate U-Net [22] style networks to output object masks from previous events and from reconstructed images. While they differ in input, they emulate the common situation where a fixed number of frames are used for reconstruction. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{**Chamfer Distance\(\downarrow\) / Surface Normal Consistency\(\uparrow\)**} \\ & EvAC3D & Mask 24 & Mask 12 & Image 24 & Image 12 & E3D [2] \\ \hline Mus & **1.537/0.983** & 3.034/0.968 & 7.061/0.926 & 4.192/0.868 & 6.947/0.926 & 7.986/0.713 \\ Cof & **2.286**/0.957 & 2.653/**0.971** & 7.771/0.915 & 5.733/0.840 & 5.930/0.821 & 8.354/0.756 \\ Sod & 2.239/**0.965** & **1.953**/0.957 & 4.611/0.929 & 2.380/0.914 & 3.865/0.884 & 6.762/0.703 \\ Jel(s) & **2.889/0.928** & 3.860/0.925 & 14.248/0.783 & 3.967/0.862 & 7.188/0.757 & 8.255/0.744 \\ Jel(b) & **3.899/0.930** & 4.818/0.926 & 13.405/0.750 & 3.929/0.863 & 6.657/0.736 & 14.910/0.767 \\ Tun & **3.624**/**0.938** & 3.518/0.937 & 5.552/0.753 & 4.254/0.863 & 8.545/0.734 & 10.850/0.704 \\ Sou & **2.111**/**0.959** & 2.392/0.954 & 5.200/0.887 & 2.294/0.922 & 5.276/0.854 & 6.133/0.783 \\ Sug & **1.953**/**0.970** & 7.904/0.854 & 9.929/0.833 & 4.000/0.939 & 9.724/0.775 & 5.924/0.691 \\ Vit & **2.191**/0.957 & 2.338/**0.966** & 5.772/0.949 & 2.226/0.958 & 5.710/0.915 & 8.462/0.715 \\ Spa & **2.784**/**0.953** & 3.667/0.945 & 9.635/0.738 & 3.295/0.911 & 6.798/0.849 & 10.730/0.747 \\ \hline Mean & **2.551**/**0.954** & 3.614/0.940 & 8.312/0.846 & 3.602/0.900 & 6.664/0.825 & 8.837/0.732 \\ \hline \hline \end{tabular} \end{table} Table 2: **Real Object Reconstruction** This table contains the results of using the trained network to predict Apparent Contour Events from real data. Chamfer distance (lower is better) is reported in \(10^{-3}m\) (millimeters). Surface normal consistency (higher is better) is reported as the cosine similarity between the ground truth and predicted surface normals. "Mask" means using masks from the event mask network. "Image" means using masks predicted from reconstructed images. The number in each column name represents the number of views used for reconstruction.
We follow the multi-view settings in 3D-R2N2 [6], where views are taken around the object. We choose 12 views as the baseline because we can reconstruct reasonable objects while keeping the computational cost close to EvAC3D. To further show the computational efficiency of EvAC3D, we also compare with 24-view carving, whose computational cost is much higher. We compare with E3D [2], the only event-based method that attempts to achieve multi-view 3D reconstruction. For fair comparison, we directly feed the ground truth poses to E3D. E3D uses multi-view silhouette optimization over the objects, similar to PMO [15]: it directly uses the photometric optimization module of PMO [15] on silhouettes and removes the mesh prior from AtlasNet [10]. In our experiments, we find silhouette-based optimization methods to be sensitive to the position and size of the mesh. To study the various components of EvAC3D, we report the performance of the ACE classification network and the overall object reconstruction. For ACE classification, we provide the ROC curve of the classifier in Figure 6(b). The overall classification accuracy is 0.9563 (threshold=0.5). In Table 3, we show the mean number of carving operations, the mean Chamfer distance, and the mean normal consistency. We define the number of operations as the number of rays that we shoot out of the camera. EvAC3D uses significantly fewer operations than both 12 and 24 views. We notice that for both mask prediction networks, the quality of the reconstruction degrades quickly when the number of views decreases. In practice, the sensor frame rate is not the only limiting factor - the computational power required to carve based on masks is also significantly higher. The reconstruction quality of a frame-based algorithm thus largely depends on the motion speed, assuming the camera sensor has a fixed frame rate. We overcome this limitation of motion speed by directly operating on a continuous stream of events. We directly compare the qualitative results of the discussed methods in Figure 3. \begin{table} \begin{tabular}{c|c|c|c} \hline Method & Num of Ops\(\downarrow\) & Chamfer\(\downarrow\) & Normal \(\uparrow\) \\ \hline GT-Mask-24 & 6,661,111 & 3.614 & 0.940 \\ GT-Mask-12 & 3,331,536 & 8.312 & 0.846 \\ Image-24 & 6,674,148 & 3.602 & 0.900 \\ Image-12 & 3,345,580 & 6.664 & 0.825 \\ E3D [2] & – & 8.837 & 0.732 \\ EvAC3D & **1,921,976** & **2.551** & **0.954** \\ \hline \end{tabular} \end{table} Table 3: Mean number of carving operations, mean Chamfer distance, and mean cosine similarity. With ACEs, our continuous carving method outperforms the other frame-based methods while using significantly fewer operations. ### Real Objects with Handheld Camera Trajectory In the previous section, we presented the experimental results for circular trajectories.
However, camera trajectories can have more degrees of freedom in real life. In this section, we put EvAC3D to the test with more general handheld motions. The additional complexity comes not only from significant background events, but also from the noisy camera pose estimation of handheld camera motion. We show a reconstructed hippo in Figure 8. Our reconstruction on this handheld sequence succeeds on the main body of the hippo, with an average reconstruction error of 1.5mm. The legs do not appear fully formed, likely due to small errors in the calibration and pose, both of which rely upon the reconstructed image to detect the AprilTags. ## 5 Conclusions In this work, we present a novel method for continuous 3D reconstruction using event cameras. At the core of the method is the representation of occluding contours by Apparent Contour Events (ACE), a novel event quantity that can be used to continuously carve out high-fidelity meshes. EvAC3D is able to update the occupancy grid of the object on an event-by-event basis, which achieves better performance than mask-based visual hull approaches while using significantly fewer carving operations. We evaluate the performance of the method on both real and synthetic data. In addition, we contribute MOEC-3D, the first high-quality event-based 3D object dataset. With these contributions, we believe EvAC3D can provide important insights into how we can understand the 3D world through events. **Acknowledgement** We thank the support from the following grants: NSF TRIPODS 1934960, NSF CPS 2038873, ARL DCIST CRA W911NF-17-2-0181, ARO MURI W911NF-20-1-0080, ONR N00014-17-1-2093, DARPA-SRC C-BRIC, and IARPA ME4AI. We also thank William Sturgeon from the Fisher Fine Arts Materials Library for providing the Artec Spider scanner and assistance. Figure 8: Results from a handheld trajectory. Left to right: raw events input, raw events overlaid with our reconstruction, image reconstruction using E2Vid [21], image reconstruction overlaid with our reconstruction, and the subsampled 3D camera trajectory with the computed mesh.
2303.00349
Derivations and Hochschild cohomology of zigzag algebras
Let $\Gamma$ be a connected graph without loops, cycles or multiple edges and $Z(\Gamma)$ the corresponding zigzag algebra. Then every Jordan derivation of $Z(\Gamma)$ is a derivation. Moreover, we will prove that the dimension of the first Hochschild cohomology group of $Z(\Gamma)$ is one by computing the dimensions of the linear spaces spanned by derivations and inner derivations. This implies that the dimension of the first Hochschild cohomology group of each algebra derived equivalent to a zigzag algebra is 1.
Yanbo Li, Zeren Zheng
2023-03-01T09:24:02Z
http://arxiv.org/abs/2303.00349v1
# Derivations and Hochschild cohomology of zigzag algebras ###### Abstract. Let \(\Gamma\) be a connected graph without loops, cycles or multiple edges and \(Z(\Gamma)\) the corresponding zigzag algebra. Then every Jordan derivation of \(Z(\Gamma)\) is a derivation. Moreover, we will prove that the dimension of the first Hochschild cohomology group of \(Z(\Gamma)\) is one by computing the dimensions of the linear spaces spanned by derivations and inner derivations. This implies that the dimension of the first Hochschild cohomology group of each algebra derived equivalent to a zigzag algebra is 1. Key words and phrases:Hochschild cohomology; derivation; zigzag algebra 2010 Mathematics Subject Classification: 16E40, 16W25, 16T99 Corresponding Author. Zeren Zheng The work is supported by the Natural Science Foundation of Hebei Province, China (A2021501002); China Scholarship Council (202008130184) and NSFC 11871107. Hochschild cohomology is an invariant of derived equivalence. The Hochschild cohomology groups are also closely related to the center and deformation theory of the given algebra. For the theory of Hochschild cohomology, it is important to study the actual structure of the Hochschild cohomology groups for particular classes of algebras, and many papers are devoted to this, such as [2, 11, 13, 23, 27] and so on. The main result of this note is to prove that the dimension of the first Hochschild cohomology group of a zigzag algebra is 1 by computing the dimensions of the linear spaces spanned by derivations and inner derivations; this will be given after determining the form of derivations of a zigzag algebra in Section 3. ## 2. Zigzag algebras In this section, we recall the definition of zigzag algebras and fix all notations that we need in the next section. The main references are [1, 14]. A _finite quiver_ \(Q\) is an oriented graph with the set of vertices \(Q_{0}\) and the set of arrows between vertices \(Q_{1}\) both finite. For an arrow \(\alpha^{i,j}\) from vertex \(i\) to vertex \(j\), write \(s(\alpha^{i,j})=i\) and \(e(\alpha^{i,j})=j\). We often write an arrow as \(\alpha\) for simplicity if there is no danger of confusion. A length \(n\) _nontrivial path_ \(p=(a|\alpha_{1}\cdots\alpha_{n}|b)\) in \(Q\) is an ordered sequence of arrows such that \(s(\alpha_{1})=a\), \(e(\alpha_{n})=b\) and \(e(\alpha_{l})=s(\alpha_{l+1})\) for each \(l<n\). A _trivial path_ is the symbol \(e^{i}\) for each \(i\in Q_{0}\). In this case, we set \(s(e^{i})=e(e^{i})=i\). Let \(K\) be a field and \(Q\) be a quiver. Then the path algebra \(KQ\) is the \(K\)-algebra generated by the paths in \(Q\), and the product of two paths \(x=(a|\alpha_{1}\cdots\alpha_{m}|b)\) and \(y=(c|\beta_{1}\cdots\beta_{n}|d)\) is defined by \[xy=\delta_{bc}(a|\alpha_{1}\cdots\alpha_{m}\beta_{1}\cdots\beta_{n}|d).\] Clearly, \(KQ\) is an associative algebra with the identity \(1=\sum_{i\in Q_{0}}e^{i}\), where the \(e^{i}\) (\(i\in Q_{0}\)) are pairwise orthogonal primitive idempotents of \(KQ\). A _relation_ \(\sigma\) on a quiver \(Q\) is a \(K\)-linear combination of paths \(\sigma=\sum_{i=1}^{n}k_{i}p_{i}\), where \(k_{i}\in K\) and \(e(p_{1})=\cdots=e(p_{n}),\;\;s(p_{1})=\cdots=s(p_{n}).\) Moreover, the number of arrows in each path is assumed to be at least 2. Let \(\rho\) be a set of relations on \(Q\). The pair \((Q,\rho)\) is called a _quiver with relations_. Denote by \(K(Q,\rho)\) the algebra \(KQ/<\rho>\), where \(<\rho>\) is the ideal of \(KQ\) generated by the set of relations \(\rho\).
For an arbitrary element \(x\in KQ\), write \(\overline{x}\) for the corresponding element in \(K(Q,\rho)\). We often write \(\overline{x}\) as \(x\) if this is not misleading or confusing. Given a connected graph \(\Gamma\), denote the set of vertices by \(\Gamma_{0}\). Define a quiver \(Q\) with \(Q_{0}=\Gamma_{0}\) and \(Q_{1}=\{\alpha^{ij},\alpha^{ji}\mid i-j\text{ in }\Gamma\}\), where \(i-j\) means that there is a line in \(\Gamma\) connecting \(i\) with \(j\). Then the zigzag algebra \(Z(\Gamma)\) of type \(\Gamma\) is the path algebra of the quiver \(Q\) with relations as follows: (1) All paths of length three or greater are zero. (2) All paths of length two that are not cycles are zero. (3) All length-two cycles based at the same vertex are equal. An example of a zigzag algebra is illustrated below. **Example 2.1**.: (Zigzag algebra of type A) Let \(K\) be a field and \(Q\) the following quiver with the relations \(\rho\) given above. Note that in [9], Ehrig and Tubbenhauer gave a slightly different definition of a zigzag algebra. However, the two definitions are equivalent. Denote the cycle \(\alpha^{i,j}\alpha^{j,i}\) by \(c^{i}\). Then the following lemma is easily verified. **Lemma 2.2**.: _The zigzag algebra \(Z(\Gamma)\) is a finite-dimensional algebra with basis \(\{e_{i}\mid i\in Q_{0}\}\cup\{c^{i}\mid i\in Q_{0}\}\cup Q_{1}\), and the center \(C(Z(\Gamma))\) has a basis \(\{c^{i}\mid i\in Q_{0}\}\cup\{1\}\). Moreover, \(\dim Z(\Gamma)=2|Q_{0}|+|Q_{1}|\) and \(\dim C(Z(\Gamma))=|Q_{0}|+1\)._
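As a quick sanity check of these formulas (our own worked example, not part of the original text), consider the type A graph with two vertices \(1-2\), so that \(Q_{1}=\{\alpha^{12},\alpha^{21}\}\). Then \(Z(\Gamma)\) has basis \(\{e^{1},e^{2},\alpha^{12},\alpha^{21},c^{1},c^{2}\}\) and \[\dim Z(\Gamma)=2|Q_{0}|+|Q_{1}|=2\cdot 2+2=6,\qquad\dim C(Z(\Gamma))=|Q_{0}|+1=2+1=3,\] the center being spanned by \(\{1,c^{1},c^{2}\}\). This is exactly the algebra that reappears in the base case of the induction in Lemma 4.1 below.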
## 3. Derivations and Jordan derivations In this section, we describe the form of a derivation of the zigzag algebra \(Z(\Gamma)\). So let us begin with the definition of a derivation. Let \(K\) be a field and \(\mathcal{A}\) a \(K\)-algebra. Recall that a linear mapping \(\Theta\) from \(\mathcal{A}\) into itself is called a _derivation_ if \[\Theta(ab)=\Theta(a)b+a\Theta(b)\] for all \(a,b\in\mathcal{A}\). For simplicity, we use the Einstein summation convention from now on. The places where repeated indices do not indicate sums are easy to recognize, and we will not point them out explicitly. **Lemma 3.1**.: _A linear mapping \(\Theta\) is a derivation of \(Z(\Gamma)\) if and only if_ 1. \(\Theta(e^{i})=t^{i}_{mi}\alpha^{mi}+t^{i}_{in}\alpha^{in}\)_;_ 2. \(\Theta(\alpha^{ij})=t^{ij}_{ij}\alpha^{ij}+t^{j}_{ji}c^{i}+t^{i}_{ji}c^{j}\)_;_ 3. \(\Theta(c^{i})=(t^{ij}_{ij}+t^{ji}_{ji})c^{i}\)_,_ _where all coefficients are in \(K\) and \(t^{j}_{ji}=-t^{i}_{ji}\)._ Proof.: Let \(\Theta\) be a derivation of \(Z(\Gamma)\) and assume that \[\Theta(e^{i})=k^{i}_{r}e^{r}+t^{i}_{mn}\alpha^{mn}+l^{i}_{r}c^{r}. \tag{3.1}\] Note that \(e^{i}\) is an idempotent. This implies that \[\Theta(e^{i})=\Theta(e^{i})e^{i}+e^{i}\Theta(e^{i}) \tag{3.2}\] and consequently, \[e^{i}\Theta(e^{i})e^{i}=0. \tag{3.3}\] Combining (3.1) with (3.3) shows that \[k^{i}_{i}=l^{i}_{i}=0. \tag{3.4}\] Moreover, substituting (3.1) into (3.2) and using (3.4) gives that (1) holds. Note that \(e^{i}e^{j}=0\) and hence \[\Theta(e^{i})e^{j}+e^{i}\Theta(e^{j})=0. \tag{3.5}\] Now substituting (1) into (3.5) yields \(t^{j}_{ji}=-t^{i}_{ji}\). In order to prove (2), suppose that \[\Theta(\alpha^{ij})=k^{ij}_{r}e^{r}+t^{ij}_{mn}\alpha^{mn}+l^{ij}_{r}c^{r}.\] Note that \(\alpha^{ij}=e^{i}\alpha^{ij}\) if \(i\neq j\) and thus \[\Theta(\alpha^{ij})=\Theta(e^{i}\alpha^{ij})=\Theta(e^{i})\alpha^{ij}+e^{i}\Theta(\alpha^{ij}). \tag{3.6}\] By substituting (1) into (3.6) we get \[\Theta(\alpha^{ij})=t^{i}_{ji}c^{i}+k^{ij}_{i}e^{i}+t^{ij}_{in}\alpha^{in}+l^{ij}_{i}c^{i}. \tag{3.7}\] Similarly, applying the fact \(\alpha^{ij}=\alpha^{ij}e^{j}\) leads to \[\Theta(\alpha^{ij})=t^{i}_{ji}c^{i}+k^{ij}_{j}e^{j}+t^{ij}_{mj}\alpha^{mj}+l^{ij}_{j}c^{j}. \tag{3.8}\] Then we complete the proof of (2) by comparing (3.7) with (3.8). Recall that \(c^{i}=\alpha^{ij}\alpha^{ji}\). Then (3) can be obtained from (2) by an easy computation. Conversely, if \(\Theta\) is a linear map on \(Z(\Gamma)\) satisfying the conditions (1)-(3), then it is easy to check that \(\Theta\) is a derivation. We omit the details here. **Remarks 3.2**.: (1) Guo and Li [11] studied the form of a derivation of a path algebra of a quiver without relations, and thus their result cannot be used in this note. (2) For all \(a,b\in\mathcal{A}\), denote the _Jordan product_ by \(a\circ b=ab+ba\). Then a linear mapping from \(\mathcal{A}\) into itself is called a _Jordan derivation_ if \[\Theta(a\circ b)=\Theta(a)\circ b+a\circ\Theta(b).\] Every derivation is obviously a Jordan derivation. The converse statement is not true in general. Moreover, an _antiderivation_ is a linear mapping \(\Theta\) of \(\mathcal{A}\) such that \[\Theta(ab)=\Theta(b)a+b\Theta(a)\] for all \(a,b\in\mathcal{A}\). Note that there has been an increasing interest in the study of Jordan derivations of various algebras. The standard problem is to find out whether a Jordan derivation degenerates to a derivation. We refer the reader to [3, 6, 12, 26, 28] and the references therein for relevant results on this topic. As in Lemma 3.1, we can determine the forms of anti-derivations and Jordan derivations of \(Z(\Gamma)\). Then the following results are easily verified. We omit the details and leave them to the reader. 1. Every anti-derivation of \(Z(\Gamma)\) is \(0\). 2. Every Jordan derivation of \(Z(\Gamma)\) is a derivation. ## 4. Hochschild cohomology of zigzag algebras In this section, we will compute the first Hochschild cohomology group of a zigzag algebra. Denote the linear space spanned by all the derivations of \(Z(\Gamma)\) by \(Der(Z(\Gamma))\). We need to compute the dimension of \(Der(Z(\Gamma))\). **Lemma 4.1**.: _Let \(\Gamma\) be a connected finite graph without loops, cycles or multiple edges. If \(|\Gamma_{0}|>1\), then \(\dim Der(Z(\Gamma))=3|\Gamma_{0}|-2\)._ Proof.: We prove this lemma by induction on the number of vertices of \(\Gamma\). Let \(|\Gamma_{0}|=2\). Then \(Z(\Gamma)\) has a basis \(\{e^{1},e^{2},\alpha^{12},\alpha^{21},c^{1},c^{2}\}\). For an arbitrary derivation \(\Theta\) of \(Z(\Gamma)\), we have from Lemma 3.1 that \[\Theta(e^{1},e^{2},\alpha^{12},\alpha^{21},c^{1},c^{2})=(e^{1},e^{2},\alpha^{12},\alpha^{21},c^{1},c^{2})A,\] where \(A\) is of the form \[\begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ t^{1}_{12}&-t^{1}_{12}&t^{12}_{12}&0&0&0\\ t^{1}_{21}&-t^{1}_{21}&0&t^{21}_{21}&0&0\\ 0&0&t^{2}_{21}&-t^{1}_{12}&t^{12}_{12}+t^{21}_{21}&0\\ 0&0&t^{1}_{21}&t^{1}_{12}&0&t^{12}_{12}+t^{21}_{21}\end{bmatrix}\] Clearly, the space spanned by the derivations of \(Z(\Gamma)\) is isomorphic to the linear space spanned by the matrices \(A\). Then it is easy to check that \[\dim Der(Z(\Gamma))=4=3\times 2-2,\] that is, the lemma holds when \(|\Gamma_{0}|=2\). Suppose that the lemma holds for \(|\Gamma_{0}|=n-1\). Let \(\Gamma\) be a connected finite graph without loops, cycles or multiple edges and suppose that \(|\Gamma_{0}|=n\). Then there must exist a vertex that is connected with only one other vertex. Without loss of generality, we assume this vertex is labeled by \(n\) and is connected with vertex \(n-1\).
Then take a vertex connected with vertex \(n-1\) that is different from vertex \(n\), and assume it is labeled by \(n-2\). Denote by \(\Gamma\setminus\{n\}\) the graph obtained from \(\Gamma\) by deleting vertex \(n\) and the corresponding line. Clearly, we can get a basis of \(Z(\Gamma)\) by adding the elements \(\{e^{n},c^{n},\alpha^{n-1,n},\alpha^{n,n-1}\}\) to a basis of \(Z(\Gamma\setminus\{n\})\). Furthermore, every derivation of \(Z(\Gamma)\) can be obtained from one of \(Z(\Gamma\setminus\{n\})\) by determining the images of the above additional basis elements according to Lemma 3.1, 1. \(\Theta(e^{n})=-t^{n}_{n,n-1}\alpha^{n,n-1}-t^{n}_{n-1,n}\alpha^{n-1,n}\); 2. \(\Theta(c^{n})=(t^{n-1,n}_{n-1,n}+t^{n,n-1}_{n,n-1})c^{n-1}\); 3. \(\Theta(\alpha^{n-1,n})=t^{n-1,n}_{n-1,n}\alpha^{n-1,n}+t^{n}_{n,n-1}c^{n-1}+t^{n-1}_{n,n-1}c^{n}\); 4. \(\Theta(\alpha^{n,n-1})=t^{n,n-1}_{n,n-1}\alpha^{n,n-1}+t^{n-1}_{n-1,n}c^{n-1}-t^{n-1}_{n-1,n}c^{n}\), where \[t^{n}_{n,n-1}=-t^{n-1}_{n,n-1},\,\,t^{n}_{n-1,n}=-t^{n-1}_{n-1,n}, \tag{4.1}\] and changing the image of \(e^{n-1}\) as follows: \[\Theta(e^{n-1})=t^{n-1}_{n-2,n-1}\alpha^{n-2,n-1}+t^{n-1}_{n,n-1}\alpha^{n,n-1}+t^{n-1}_{n-1,n-2}\alpha^{n-1,n-2}+t^{n-1}_{n-1,n}\alpha^{n-1,n}.\] Still by Lemma 3.1, we have \[\Theta(c^{n-1})=(t^{n-2,n-1}_{n-2,n-1}+t^{n-1,n-2}_{n-1,n-2})c^{n-1}=(t^{n-1,n}_{n-1,n}+t^{n,n-1}_{n,n-1})c^{n-1},\] that is, \[(t^{n-2,n-1}_{n-2,n-1}+t^{n-1,n-2}_{n-1,n-2})=(t^{n-1,n}_{n-1,n}+t^{n,n-1}_{n,n-1}). \tag{4.2}\] Consider \((t^{n-2,n-1}_{n-2,n-1}+t^{n-1,n-2}_{n-1,n-2})\) as a constant and combine (4.1) with (4.2) into a system of linear equations. Then it is easy to check that the solution set is a 3-dimensional space. Consequently, \[\dim Der(Z(\Gamma))-\dim Der(Z(\Gamma\setminus\{n\}))=3.\] That is, the lemma holds for \(|\Gamma_{0}|=n\). This completes the proof. Given \(x\in\mathcal{A}\), define a linear mapping \(\Theta(a)=[x,a]\) for all \(a\in\mathcal{A}\), where \([x,a]=xa-ax\). Then \(\Theta\) is a derivation of \(\mathcal{A}\), which is called an inner derivation. Clearly, if \(x\in C(\mathcal{A})\), then the inner derivation defined by \(x\) is zero. Denote the linear space spanned by all inner derivations by \(IDer(\mathcal{A})\). Then, if \(\mathcal{A}\) is a finite-dimensional algebra, we have \[\dim IDer(\mathcal{A})=\dim\mathcal{A}-\dim C(\mathcal{A}). \tag{4.3}\] Combining Lemma 2.2 with (4.3) leads to the following result. **Lemma 4.2**.: \(\dim IDer(Z(\Gamma))=|Q_{0}|+|Q_{1}|-1\)_._ Now we are in a position to give the main result of this note. **Theorem 4.3**.: _Let \(\Gamma\) be a finite connected graph without loops, cycles or multiple edges and \(Z(\Gamma)\) the associated zigzag algebra. If \(|\Gamma_{0}|>1\), then the dimension of the first Hochschild cohomology group of \(Z(\Gamma)\) is 1._ Proof.: It is clear that \(|Q_{1}|=2(|Q_{0}|-1)\) by the properties of \(\Gamma\). Then the theorem is a direct corollary of Lemma 4.1 and Lemma 4.2: indeed, \(\dim Der(Z(\Gamma))-\dim IDer(Z(\Gamma))=(3|Q_{0}|-2)-(|Q_{0}|+2(|Q_{0}|-1)-1)=(3|Q_{0}|-2)-(3|Q_{0}|-3)=1\). Since an arbitrary multiplicity-one Brauer tree algebra is derived equivalent to a zigzag algebra, we have the following obvious corollary, which cannot be obtained from [13, Theorem 4.4]. For the definition of a Brauer tree algebra, we refer the reader to [1]. **Corollary 4.4**.: _The dimension of the first Hochschild cohomology of a multiplicity-one Brauer tree algebra is 1._ **Remarks 4.5**.: (1) Since Hochschild cohomology is invariant under derived equivalence, the dimension of the first Hochschild cohomology group of each algebra derived equivalent to a zigzag algebra is 1.
(2) For a zigzag algebra of type A, Mazorchuk and Stroppel [21] computed the \(n\)-th Hochschild cohomology groups for all \(n\geq 0\).
2304.04188
HyperINR: A Fast and Predictive Hypernetwork for Implicit Neural Representations via Knowledge Distillation
Implicit Neural Representations (INRs) have recently exhibited immense potential in the field of scientific visualization for both data generation and visualization tasks. However, these representations often consist of large multi-layer perceptrons (MLPs), necessitating millions of operations for a single forward pass, consequently hindering interactive visual exploration. While reducing the size of the MLPs and employing efficient parametric encoding schemes can alleviate this issue, it compromises generalizability for unseen parameters, rendering it unsuitable for tasks such as temporal super-resolution. In this paper, we introduce HyperINR, a novel hypernetwork architecture capable of directly predicting the weights for a compact INR. By harnessing an ensemble of multiresolution hash encoding units in unison, the resulting INR attains state-of-the-art inference performance (up to 100x higher inference bandwidth) and can support interactive photo-realistic volume visualization. Additionally, by incorporating knowledge distillation, exceptional data and visualization generation quality is achieved, making our method valuable for real-time parameter exploration. We validate the effectiveness of the HyperINR architecture through a comprehensive ablation study. We showcase the versatility of HyperINR across three distinct scientific domains: novel view synthesis, temporal super-resolution of volume data, and volume rendering with dynamic global shadows. By simultaneously achieving efficiency and generalizability, HyperINR paves the way for applying INR in a wider array of scientific visualization applications.
Qi Wu, David Bauer, Yuyang Chen, Kwan-Liu Ma
2023-04-09T08:10:10Z
http://arxiv.org/abs/2304.04188v1
HyperINR: A Fast and Predictive Hypernetwork for Implicit Neural Representations via Knowledge Distillation ###### Abstract Implicit Neural Representations (INRs) have recently exhibited immense potential in the field of scientific visualization for both data generation and visualization tasks. However, these representations often consist of large multi-layer perceptrons (MLPs), necessitating millions of operations for a single forward pass, consequently hindering interactive visual exploration. While reducing the size of the MLPs and employing efficient parametric encoding schemes can alleviate this issue, it compromises generalizability for unseen parameters, rendering it unsuitable for tasks such as temporal super-resolution. In this paper, we introduce HyperINR, a novel hypernetwork architecture capable of directly predicting the weights for a compact INR. By harnessing an ensemble of multiresolution hash encoding units in unison, the resulting INR attains state-of-the-art inference performance (up to 100\(\times\) higher inference bandwidth) and supports interactive photo-realistic volume visualization. Additionally, by incorporating knowledge distillation, exceptional data and visualization generation quality is achieved, making our method valuable for real-time parameter exploration. We validate the effectiveness of our HyperINR architecture through a comprehensive ablation study. We showcase the versatility of HyperINR across three distinct visualization tasks: novel view synthesis, temporal super-resolution of volume data, and volume rendering with dynamic global shadows. By simultaneously achieving efficiency and generalizability, HyperINR paves the way for applying INR in a wider array of scientific visualization applications. Implicit neural network, hypernetwork, knowledge distillation, interactive volume rendering, parameter exploration. ## 1 Introduction In the field of scientific visualization, continuous fields are often represented using discrete data structures such as grids, unstructured meshes, or point clouds. These structures are limited by resolution and can be cumbersome to handle due to their complexity. To address this, Lu et al. [23] introduced an alternative approach, employing a continuous function modeled using a fully connected multi-layer perceptron (MLP) to implicitly represent data fields. Such an implicit neural representation (INR) offers several key advantages, including substantial data size reductions while preserving high-frequency details, and direct access to spatial locations at arbitrary resolutions without decompression or interpolation. A recent advancement in INR is CoordNet, introduced by Han et al. [14], which generalizes INR to incorporate simulation parameters (\(\theta_{sim}\)) and visualization parameters (\(\theta_{vis}\)) for predictive tasks such as temporal super-resolution or visualization synthesis. With a well-designed fully connected INR and a suitable activation function, CoordNet generates more meaningful results than direct data interpolation for previously unseen parameters. Fig. 1: Comparisons between HyperINR and data interpolation for the temporal super-resolution task using the **vortices** dataset. Listed timesteps are midpoints of different interpolation intervals. HyperINR can directly predict the weights of a regular implicit neural representation (INR) for unseen parameters. The predicted INR is in general more accurate than data interpolation results and can support interactive volumetric path tracing.
However, a single pass through such a large fully connected INR may require millions of operations, making it slow for neural network inference and thus unsuitable for interactive visualizations. Wu et al. [48] and Weiss et al. [47] addressed the challenge of long inference times of large fully connected INRs by reducing the neural network size to approximately 200 neurons and incorporating additional trainable parameters through auxiliary data structures to compensate for the reduction in network capacity. Such auxiliary data structures are typically constructed in the form of hash tables [28] or octrees [40], and are responsible for transforming input coordinates into high-dimensional vectors, a process known as parametric positional encoding. Compared with parameters stored in the neural network, querying, interpolating, and optimizing encoding parameters based on input coordinates can be performed more rapidly. Consequently, interactive visualization of INR models can be achieved. However, the majority of data features are strongly embedded into the trainable encoding parameters rather than being generalized by the MLP. In turn, the generalizability of INR with regard to unseen data is significantly reduced. Therefore, a method that can offer strong generalizability while still maintaining exceptional inference performance for enabling interactive visualization and real-time parameter exploration is highly desirable. In this paper, we present HyperINR, a hypernetwork designed to conditionally predict the weights of an Implicit Neural Representation (INR) using multiple compact multiresolution hash encoders [28] and incorporating a deeply embedded weight interpolation operation. These hash encoders correspond to points in the parameter space and are organized within a spatial data structure. Given an input, the data structure is traversed to gather a set of nearest hash encoders, whose weights are subsequently interpolated based on the input and combined with the weights of a shared MLP common to all encoders. This combined weight is then suitable for use as an INR, enabling interactive volumetric path tracing through the state-of-the-art INR rendering algorithm proposed by Wu et al. [48], with the INR generation process taking less than 1 ms to complete (up to 100\(\times\) faster than CoordNet). Furthermore, we optimize HyperINR using knowledge distillation, leveraging CoordNet [14] as the teacher model, to achieve state-of-the-art generalizability for unseen parameters, rendering HyperINR suitable for real-time parameter exploration. We conduct a comprehensive evaluation of HyperINR's architecture through an extensive ablation study and assess its performance in three distinct scientific visualization tasks: Novel View Synthesis (NVS), Temporal Super-Resolution of Volume Data (TSR), and Volume Rendering with Dynamic Global Shadows (DGS). Our contributions can be summarized as follows. * We design HyperINR: A hypernetwork that efficiently generates the weights of a regular INR for given parameters, achieving state-of-the-art inference performance and enabling high-quality interactive volumetric path tracing. * We introduce a framework for optimizing HyperINR through knowledge distillation, attaining state-of-the-art data generalization quality for unseen parameters and supporting real-time parameter exploration.
* We demonstrate HyperINR's exceptional inference performance and its ability to generate meaningful data and visualizations across a diverse range of scientific visualization tasks. ## 2 Related Work In this related work section, we delve into the relevant research areas related to our presented work. We begin by providing an overview of recent advancements in generation models for scientific visualization, followed by a review of implicit neural representation. Because our technique involves hypernetworks and knowledge distillation, we also review the latest advancements in these areas. ### _Generation Models for Scientific Visualization_ Using neural networks to generate new data or visualizations that are similar to the training data has been studied by many in recent years. We overview related work in this area, and we refer to Wang et al. [45] for a more comprehensive survey of this topic. For visualization generation, Berger et al. [3] developed a generative adversarial network (GAN) for synthesizing volume rendering images with different transfer functions and view parameters. GAN was also used by He et al. [17] to create a simulation and visualization surrogate model called InSituNet for exploring ensemble simulation parameters. Engel and Ropinski [7] built a 3D U-Net that can predict local ambient occlusion data for given transfer functions. Weiss et al. [46] proposed a convolutional neural network (CNN) to coherently upscale isosurface images by training the network using depth and normal information. Han and Wang [15] developed a GAN-based volume completion network for visualizing data with missing subregions. Bauer et al. [2] introduced a CNN-based screen space method that enables faster volume rendering through sparse sampling and neural reconstruction. For data generation, Zhou et al. [51] and Guo [9] designed CNNs for upscaling scalar and vector field volume data, respectively. Han and Wang employed GANs for generating time-varying volume data at higher temporal [12] or spatial [13] resolutions. Han et al. [16] later also presented an end-to-end solution for achieving both goals at the same time. Shi et al. [36] improved InSituNet through view-dependent latent-space generative models, and their method can directly predict simulation data rather than being bound to the visualization strategy used for generating the training data. Data generation models are also applicable to volume compression. Jain et al. [19] presented an encoder-decoder network to compress a high-resolution volume. Wurster et al. [49] developed a hierarchical GAN for the same task. Very recently, Han et al. [14] explored the use of implicit neural representation and proposed CoordNet for both data and visualization generation tasks. In our work, we use CoordNet as the teacher model for knowledge distillation. ### _Implicit Neural Representation (INR)_ In the field of scientific visualization, Lu et al. [23] first explored the use of INR for volume compression. Their network utilizes multiple ResNet blocks and the sine activation function to achieve high reconstruction quality. The previously mentioned CoordNet by Han et al. [14] for time-varying data can be considered a conditional INR. However, their method requires a time-consuming training process for every volume dataset, and, more crucially, the network is slow at inference. Wu et al. [48] and Weiss et al.
[47] addressed these issues using parametric positional encoding [28, 40] and GPU-accelerated inference routines. Wu et al. also proposed an auxiliary data structure to enable interactive volume rendering and an optimized algorithm for computing global illumination. In this work, we adopt the INR architecture proposed by Wu et al. as the base. We develop our interactive volume visualization algorithms based on their proposals. INR can also be used in hybrid with other hierarchical data structures. Doyub et al. [20] recently demonstrated this for handling high-resolution sparse volumes. Positional encoding converts an input coordinate to a higher-dimensional vector before being passed to subsequent layers. It allows the network to better capture high-frequency local details. Positional encoding was proven to be helpful in the attention components of recurrent networks [8] and transformers [43], and later adopted by NeRF [26] and many INR-based works [1, 29, 41] in computer graphics. To further optimize training time and improve accuracy, _parametric positional encoding_ was introduced. It introduces auxiliary data structures such as dense grids [24], sparse grids [11], octrees [40], or multiresolution hash tables [28] to store training parameters. Thus, the neural network size can be reduced, and neural networks with such encoding methods can typically converge much faster. In this work, we adopt the multiresolution hash grid method proposed by Müller et al. [28] due to its excellent performance. ### _Hypernetwork for INR_ Hypernetworks or meta-models are networks that generate weights for other neural networks [10]. They have a wide range of applications, including few-shot learning [4], continual learning [44], architecture search [50], and generative modeling [31, 32, 33], among others. Hypernetworks can also be combined with implicit neural representations (INRs). For instance, Klocek et al. [21] developed an INR-based hypernetwork for image super-resolution. DeepMeta [22] builds a hypernetwork that takes a single-view image and outputs an INR. Skorokhodov et al. [39] developed a GAN-based hypernetwork for continuous image generation. Sitzmann et al. [38] proposed an MLP-based hypernetwork to parameterize INRs for 3D scenes consisting of only opaque surfaces. In this work, we utilize hypernetworks to build a large neural network that has the potential to learn an ensemble of data while maintaining the capability of interactive volume visualization. ## 3 Formulation for HyperINR INRs can be regarded as functions mapping 2D or 3D coordinates \(\vec{x}\) to their corresponding field values \(\vec{v}\): \[\Phi:\vec{x}\mapsto\Phi(\vec{x})=\vec{v},\;\vec{x}\in\mathbb{R}^{2}\text{ or }\mathbb{R}^{3}. \tag{1}\] We employ \(\Phi\) for approximating image or volume data \(V\) that are intrinsically parameterized by a scene parameter (e.g., timestep, lighting direction) from a high-dimensional parameter space \(\Theta\). Our objective is to construct a neural network capable of continuously generating such an INR based on parameters in \(\Theta\), using a sparsely sampled training set \(\mathcal{C}=\{V(\theta_{j}),\;\theta_{j}\}\). The resulting INR is expected to be suitable for interactive visualization and real-time parameter exploration.
Formally, such a neural network can be defined as a higher-order function, \(H\), which accepts scene parameters \(\theta\) as inputs and yields \(\Phi\) conditioned on these parameters: \[H(\theta)(\vec{x})=\Phi(\vec{x}\mid\theta)\in V(\theta),\text{ for }\theta\in\Theta,\;\forall\;\vec{x}\in\mathbb{R}^{2}\text{ or }\mathbb{R}^{3}. \tag{2}\] To incorporate the state-of-the-art rendering algorithm by Wu et al. [48] for interactive 3D visualization of INRs, we further decompose \(\Phi\) into two distinct functions: an encoding function \(E\), mapping input coordinates to high-dimensional vectors via a multiresolution hash encoder; and a synthesis function \(S\), converting the high-dimensional vectors into data values, parameterized by an MLP: \[\Phi(\vec{x}\mid\theta)=S\circ E(\theta)(\vec{x}). \tag{3}\] In our work, we utilize a shared synthesis function \(S\) for all scene parameters, intentionally designed to significantly enhance training robustness, as demonstrated through our experimental results in Section 6.2. Thus, the primary objective of HyperINR becomes predicting the encoding function \(E\) for a given set of parameters \(\theta\). As illustrated in Figure 2, we first sample \(N\) scene parameters \(\mathcal{E}=\{\theta_{j}\mid j=a,b,c,...\}\) and construct a multiresolution hash encoder \(E_{j}=E(\theta_{j})\) for each \(\theta_{j}\). We refer to \(\mathcal{E}\) as encoder positions and organize them using a KD-tree based on these positions. Given a set of parameters \(\theta\), we traverse the KD-tree to gather the \(K\) nearest encoders around \(\theta\) and interpolate their weights based on \(\theta\) using inverse distance weighting (IDW), also known as Shepard's algorithm [35]: \[E(\theta)=\begin{cases}\dfrac{\sum_{j=1}^{K}w_{j}E_{j}}{\sum_{j=1}^{K}w_{j}}&\text{if }d(\theta,\theta_{j})\neq 0,\;\forall\;j\\ E_{j}&\text{if }d(\theta,\theta_{j})=0,\end{cases} \tag{4}\] where \(w_{j}=d(\theta,\theta_{j})^{-p}\) with \(p=1\). If the scene parameter space \(\Theta\) is one-dimensional, a fast path is provided by replacing the KD-tree with a linear array and performing linear interpolation instead of IDW. Importantly, the encoder positions \(\mathcal{E}\) can be distinct from the training set \(\mathcal{C}\). We combine Bridson's fast Poisson disk sampling algorithm [5] and Gaussian kernel sampling to generate \(\mathcal{E}\). Poisson disk sampling ensures an even distribution of encoders in the parameter space, maintaining a minimum distance between encoders while preventing regular grid-like patterns. Gaussian kernel sampling enables the integration of application-specific knowledge into the network. In Section 6.4, we perform an ablation study to analyze the impact of \(\mathcal{E}\) on the network's generalization capabilities.
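To illustrate the interpolation step, the following is a minimal sketch of Equation (4) (our own Python code; the function and variable names are ours, and SciPy's KD-tree stands in for the paper's data structure):

```python
import numpy as np
from scipy.spatial import cKDTree

def predict_encoder_weights(theta, positions, weights, k=4, p=1.0):
    """Inverse-distance-weighted (Shepard) interpolation of the K nearest
    hash-encoder weight vectors around the query parameter theta.
    positions: (N, D) encoder positions; weights: (N, M) flattened encoder
    weights, one row per encoder; theta: (D,) query scene parameter."""
    tree = cKDTree(positions)
    d, idx = tree.query(theta, k=k)
    d, idx = np.atleast_1d(d), np.atleast_1d(idx)
    if np.any(d == 0):                 # exact hit: reuse that encoder directly
        return weights[idx[np.argmin(d)]]
    w = d ** -p                        # w_j = d(theta, theta_j)^{-p}
    return (w[:, None] * weights[idx]).sum(axis=0) / w.sum()
```

The interpolated encoder weights are then combined with the weights of the shared synthesis MLP \(S\) to form a complete, renderable INR.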
### _Organize Weight Space via Knowledge Distillation_ Knowledge distillation is a powerful method for knowledge transfer between models, with early demonstrations by Bucilua et al. [6] and subsequent formalizations by Hinton et al. [18]. In this study, we leverage knowledge distillation to enhance the performance of HyperINR by employing a fully-connected conditional INR with strong generalizability as the teacher model. The process begins with training the teacher model, denoted as \(\Omega\), on the training set. Then, a distillation set \(\mathcal{D}\) is created by strategically sampling a set of scene parameters \(\{\theta_{k}\}\) and computing the corresponding data \(\{\Omega(\theta_{k})\}\) using the trained teacher model. Finally, HyperINR is optimized using \(\mathcal{D}\). A visual illustration of this process can be found in Figure 3. Fig. 3: A) Visualization of the knowledge distillation process, wherein a user provides a training set \(\mathcal{C}\) and a set of encoder positions \(\mathcal{E}\). We utilize \(\mathcal{C}\) to pretrain CoordNet and construct HyperINR using \(\mathcal{E}\). Following pretraining, we create a distillation set \(\mathcal{D}\) and optimize HyperINR using it. B) We introduce a stateless INR evaluator for end-to-end training of the hypernetwork, leading to improved training quality. Fig. 2: The architecture of HyperINR. HyperINR is composed of a shared MLP denoted as \(S\) and a collection of multiresolution hash encoding parameters \(E\), with each parameter associated with a set of values \(\{a,b,c,...\}\). A) In high dimensions, a KD-tree is utilized, B) while in 1D, the encoding parameters are arranged in a linear array. C) The input parameter \(\theta\) is utilized to query the data structure, and the queried encoding parameters are interpolated and combined with the shared MLP to construct an INR. The resulting INR enables interactive visualization and real-time parameter exploration. The distillation set \(\mathcal{D}\) can be pre-computed or generated on-demand. Pre-computing \(\mathcal{D}\) avoids the training process being bottlenecked by the inference bandwidth of the teacher model, while generating \(\mathcal{D}\) on-demand can greatly reduce memory usage. In this work, we pre-compute \(\mathcal{D}\). Furthermore, the selection of a high-quality teacher model and the construction of the distillation set \(\mathcal{D}\) can be critical for producing a good HyperINR. In our work, we choose CoordNet as the teacher model due to its remarkable data generation capability. ### _Multiresolution Hash Encoding_ Multiresolution hash encoding [28] is a key technique that enables interactive visualization in our approach. This method models the encoding function \(E\) using \(L\) levels of independent hash tables, each containing up to \(T\) feature vectors of length \(F\). Each level conceptually corresponds to a virtual grid with feature vectors stored at its vertices. Figure 4A illustrates the steps performed in the encoding process. The grid resolution \(R_{l}\) starts at a base value \(R_{1}\) and increases progressively (\(\times 2\) in this paper) as the level \(l\) increases. Input coordinates are expected to be normalized to \([0,1]^{d}\), and then scaled to the grid's resolution: \(\vec{x}_{l}=\vec{x}\cdot R_{l}+0.5\). The offset of 0.5 causes different scales to be staggered with respect to each other, thus preventing spurious alignment of fractional coordinates upon integer scales. The output encoding at this level is calculated by interpolating grid vertices based on \(\vec{x}_{l}\). In this paper, we use linear interpolation. Figure 4B highlights how trainable parameters are stored in the data structure. The grid resolution \(R_{l}\) starts from a relatively small base value, and the number of vertices at this level might be smaller than \(T\). In this case, encoding parameters are directly organized as a linear array. When the number of vertices becomes greater than \(T\), a spatial hash function is used to condense encoding parameters. The hash function used is given by: \[\text{hash}(\vec{x})=\left(\bigoplus_{i=1}^{d}x_{i}\pi_{i}\right)\bmod T, \tag{5}\] where \(\oplus\) is a bitwise XOR operation and the \(\pi_{i}\) are unique, large prime numbers.
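A minimal NumPy sketch of this per-level lookup (our own illustration; it uses the primes recommended in [28], stated next, and a power-of-two table size so the modulo becomes a bit mask):

```python
import numpy as np

# Primes recommended by Mueller et al. [28]; pi_1 = 1 leaves x unmixed.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_index(coords, log2_T=15):
    """Spatial hash of Eq. (5): XOR the coordinate-prime products, then
    reduce modulo the table size T = 2^log2_T.
    coords: (..., d) integer grid-vertex coordinates with d <= 3."""
    coords = coords.astype(np.uint64)
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for i in range(coords.shape[-1]):
        h ^= coords[..., i] * PRIMES[i]          # bitwise XOR accumulate
    return h & np.uint64((1 << log2_T) - 1)      # mod T, with T a power of two
```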
Based on the recommendation of Müller et al. [28], we use \(\pi_{1}=1\), \(\pi_{2}=2654435761\), and \(\pi_{3}=805459861\). Since multiresolution hash encoding is designed to encode spatial coordinates, only 3 prime numbers are used. ## 4 Applications In this paper, we apply HyperINR to three common generative tasks in scientific visualization, namely novel view synthesis (NVS), temporal super resolution (TSR), and dynamic global shadows for volume rendering (DGS). Detailed descriptions of the setups and datasets used for each task are provided below. In addition, the usages of the datasets are summarized in Table 1. ### Novel View Synthesis The objective of NVS is to generate meaningful and visually coherent images of a scene from previously unobserved viewpoints or perspectives, utilizing a collection of pre-existing images. NVS holds significant potential in the realm of scientific visualization, enabling the creation of explorable images [42]. To perform this task, an INR should accept two spatial inputs \((x,y)\) and a viewing direction, subsequently producing an RGB color \((r,g,b)\). In this study, we parameterize the viewing direction using a spherical coordinate system, characterized by a polar angle \(\theta\) and an azimuthal angle \(\phi\). Our experiment employs a dataset of 200 isosurface visualizations, generated by He et al. [17] utilizing the MPAS-Ocean simulation from the Los Alamos National Laboratory, referred to as the **MPAS** dataset. 100 of the visualizations were allocated for training, and the remainder were used for testing. Furthermore, the quality of the synthesized images was assessed through the Peak Signal-to-Noise Ratio (PSNR) metric. The model's inference performance was evaluated in terms of the data bandwidth. ### Temporal Super Resolution The goal of TSR is to train a neural network on a sequence of sparsely sampled time-varying volume data and enable the generation of the same sequence at a higher temporal resolution. A more complete review of related TSR techniques can be found in Section 2.1. We focus on scalar field volume data, where an INR receives a 4D input \((x,y,z,t)\) and outputs a scalar value \(v\). Although all of our experiments assume \(v\) to be a scalar value, our method can be extended to multivariate volume data without any alterations to the design. We assess our approach using three datasets: 1) a time-varying simulation of **vortices** provided by Deborah Silver at Rutgers University, consisting of 100 timesteps with 20 equally spaced steps used for training; 2) the **pressure** field of a Taylor-Green Vortex simulation generated by the NekRS framework, containing 105 timesteps with 21 selected for training; and 3) the **temperature** field from a 14nm flame simulation produced by S3D [34], which includes 90 timesteps with 10 employed for training. To evaluate the quality of the generated data, we employ metrics such as PSNR and the Structural Similarity Index Measure (SSIM). For assessing the inference performance, we measure the average inference bandwidth as well as the interactive volume visualization framerates. ### Dynamic Global Shadows Direct volume rendering with global shadows is a non-physically-based shading technique widely employed in scientific volume rendering. It enhances realism and helps distinguish features in the data. However, the technique also imposes a significant runtime cost over simple ray casting, as it generates at least one secondary ray towards the light source at each sample point to estimate shadow coefficients. This results in an \(O(n^{2})\) computation for each primary ray. \begin{table} \begin{tabular}{c c c c c} \hline \hline Data & Count & Dimensions & Input/Output & Task \\ \hline Vortices & 100 & \(128,128,128\) & \((t,x,y,z)\mapsto v\) & TSR \\ Pressure & 105 & \(128,128,128\) & \((t,x,y,z)\mapsto v\) & TSR \\ Temp & 100 & \(864,240,640\) & \((t,x,y,z)\mapsto v\) & TSR \\ MPAS & 200 & \(256,256\) & \((\theta,\phi,x,y)\mapsto(r,g,b)\) & NVS \\ MechHand & 150 & \(256,256,256\) & \((\theta,\phi,x,y,z)\mapsto v\) & DGS \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets Used in All Tasks Figure 4: Multiresolution Hash Encoding: A) The encoding process and the virtual grids represented by the hash encoding. B) The actual data arrangement within the hash encoding. Figure 5: A) The visualization of encoder positions \(\mathcal{E}\) and the distillation set \(\mathcal{D}\) for the novel view synthesis task. We generated the encoder positions using Poisson disk sampling (2). We generated the distillation set also using Poisson disk sampling, combined with Gaussian kernel sampling (1). We centered the Gaussian kernels on the training set. B) The same visualization for the dynamic global shadow task. In this task, we also included some distillation parameters sampled uniformly in the parameter space.
However, the technique also imposes a significant runtime cost over simple ray casting, as it generates at least one secondary ray towards the light source at each sample point to estimate shadow coefficients. This results in an \(O(n^{2})\) computation for each primary ray. An alternative approach involves recomputing all secondary rays at voxel centers, generating a volume data containing shadow coefficients. This "shadow volume" is subsequently utilized to estimate shadow coefficients at sample positions. Although this method reduces the computational complexity to \(O(n)\) per ray, it substantially increases the memory footprint for rendering. Furthermore, the shadow volume must be regenerated whenever the transfer function or the light changes, presenting challenges for interactive exploration. In this work, as a preliminary study, we examine the potential of achieving dynamic global shadows using HyperINR. Specifically, we propose substituting shadow volumes with regular INRs and estimating shadow coefficients through network inferences. We term these INRs shadow INRs. This optimization significantly reduces memory footprints. Then, we optimize a HyperINR to generate such INRs. The resulting HyperINR can achieve dynamism for global shadows as the generation process can be done in real-time. We validate this method by generating a set of 150 shadow volumes sampled with varying light positions. For this preliminary study, we fix the transfer function and incorporate only one light source. Then, we utilize 35 evenly distributed shadow volumes to optimize our network. To assess the shadow generation quality, we computed the PSNR and SSIM against the ground truth data. To evaluate the inference performance, we measured the rendering framerate and INR generation latency.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Data & Count & Dimensions & Input/Output & Task \\ \hline Vortices & 100 & \(128,128,128\) & \((t,x,y,z)\mapsto v\) & TSR \\ Pressure & 105 & \(128,128,128\) & \((t,x,y,z)\mapsto v\) & TSR \\ Temp & 100 & \(864,240,640\) & \((t,x,y,z)\mapsto v\) & TSR \\ MPAS & 200 & \(256,256\) & \((\theta,\phi,x,y)\mapsto(r,g,b)\) & NVS \\ MechHand & 150 & \(256,256,256\) & \((\theta,\phi,x,y,z)\mapsto v\) & DGS \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets Used in All Tasks

Figure 4: Multiresolution hash encoding: A) The encoding process and the virtual grids represented by the hash encoding. B) The actual data arrangement within the hash encoding.

Figure 5: A) The visualization of encoder positions \(\mathcal{E}\) and the distillation set \(\mathcal{D}\) for the novel view synthesis task. Encoder positions were generated using Poisson disk sampling (2). The distillation set was also generated using Poisson disk sampling, combined with Gaussian kernel sampling centered on the training set (1). B) The same visualization for the dynamic global shadow task. In this task, we also included some distillation parameters sampled uniformly in the parameter space.

## 5 Implementation

Our network is implemented in PyTorch, with GPU-accelerated training using the Tiny-CUDA-NN machine learning framework [27]. We leverage multiresolution hash encoding for training the INR. **Architecture** HyperINR's base hash encoder consists of \(L=8\) encoding levels, with each level containing up to \(T=2^{15}\) feature vectors of size \(F=4\). We selected these hyperparameters based on the results presented in Section 6.4.
The base grid resolution is set to \(R_{1}=8\) for the NVS task and \(R_{1}=4\) for the TSR and DGS tasks. The MLP unit adopts the configuration proposed by Wu et al. [48], which uses four hidden layers and a width of 64 neurons. This configuration is suitable for the volume rendering algorithm. As for encoder positions \(\mathcal{E}\), 177 encoder positions were randomly generated using Poisson disk sampling within a \([0,1]^{2}\) space for the NVS task, 24 encoding units were evenly distributed across the temporal domain for the TSR task, and 206 encoder positions were created using both Poisson disk sampling and Gaussian distribution sampling for the DGS task. The impact of these hyperparameters is further explored in Section 6.4. **Distillation** We select CoordNet as the teacher model for knowledge distillation. However, since a reference implementation of CoordNet is not publicly available, we implement it based on SIREN [37] and NeurComp [23]. Our implementation closely matches the architecture described in the CoordNet paper [14]: we use 3 resblocks as the encoder to process inputs, 10 hidden resblocks of size 256, and 1 resblock as the decoder to produce final outputs. It is worth noting that CoordNet expects input and output values to be within the range of \([-1,1]\), whereas hash-encoding-based INRs use a range of \([0,1]\). Therefore, in our implementation, we use \([0,1]\) as the value range and only convert it to the \([-1,1]\) range when interacting with CoordNet. **Training** We develop an end-to-end training framework to optimize our hypernetwork instead of relying on pre-trained network weights. To achieve this, we implement a stateless INR evaluator (as shown in Figure 3) that takes a coordinate matrix \(C\) and a network weight matrix \(W\) predicted by the hypernetwork as inputs, and generates output data \(V_{\text{pred}}\). A loss is thus calculated with respect to the ground truth \(V_{\text{true}}\). Then the evaluator can compute gradients with respect to \(W\) and backpropagate them to the hypernetwork. For NVS, we use the \(\mathcal{L}^{2}\) loss between pixel colors. For TSR and DGS, we use the \(\mathcal{L}^{1}\) loss following the recommendations of Han et al. [14]. To optimize HyperINR, we use the Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\varepsilon=10^{-10}\). A good convergence speed was observed with a learning rate of \(10^{-3}\). The teacher model, CoordNet, was also trained using the Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\varepsilon=10^{-8}\), and a weight decay of \(10^{-6}\). We find that CoordNet can be sensitive to the learning rate, and a learning rate of \(10^{-5}\) enables stable convergence. For NVS, we optimize CoordNet for 300 epochs and then create a distillation set \(\mathcal{D}\) using a combination of Poisson disk sampling and Gaussian kernel sampling, similar to the generation of encoder positions. The resulting \(\mathcal{D}\) is visualized in Figure 5A. For TSR tasks, we train CoordNet for 30k epochs to ensure good performance for the teacher model. Then, we create distillation sets by uniformly sampling the time axis. For the DGS task, we also train CoordNet for 30k epochs but generate a \(\mathcal{D}\) containing 400 samples, which is visualized in Figure 5B. Table 2 contains the summary of the sizes of the distillation set \(\mathcal{D}\). We utilize automatic mixed precision [25] to accelerate training and reduce the memory footprint.
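For illustration, the following condensed PyTorch sketch shows one end-to-end distillation step through the stateless evaluator; `hypernet`, `evaluator`, and the batch layout are schematic stand-ins for our actual modules rather than the exact implementation.

```python
# A condensed sketch of one end-to-end training step with the stateless INR
# evaluator: `hypernet` maps a scene parameter theta to INR weights W, and
# `evaluator` applies those weights to coordinates without holding any state.
import torch

def distillation_step(hypernet, evaluator, optimizer, scaler, batch):
    theta, coords, v_true = batch            # parameters, coordinates, teacher values
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda"): # automatic mixed precision
        w = hypernet(theta)                  # interpolated INR weight matrix W
        v_pred = evaluator(coords, w)        # stateless evaluation of the INR
        loss = (v_pred - v_true).abs().mean()  # L1 loss (TSR/DGS); L2 for NVS
    scaler.scale(loss).backward()            # gradients flow through W into hypernet
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# optimizer = torch.optim.Adam(hypernet.parameters(), lr=1e-3,
#                              betas=(0.9, 0.999), eps=1e-10)
# scaler = torch.cuda.amp.GradScaler()
```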
To avoid data underflow in the backpropagation process, gradient scaling was also employed. Some of the experiments were conducted on the Polaris supercomputer at the Argonne Leadership Computing Facility. **Initialization** Properly initializing network weights is crucial for both HyperINR and CoordNet. For HyperINR, we follow Müller et al.'s suggestions and initialize the hash table entries using the uniform distribution \(\mathcal{U}(-10^{-4},10^{-4})\) [28]. This approach provides a small amount of randomness while encouraging initial predictions close to zero, allowing hash encoding units to converge properly. CoordNet heavily utilizes SIREN layers, so we apply SIREN's initialization scheme and use the uniform distribution \(\mathcal{U}(-\frac{\sqrt{6/n}}{30},\frac{\sqrt{6/n}}{30})\) to initialize CoordNet weights, with \(n\) being the number of neurons in the layer [37].

## 6 Ablation Study

In this section, we present results and findings that motivate the design of HyperINR, and conduct a hyperparameter study to determine the optimal configuration for our tasks.

### _Understanding Hash Encoding Weights_

Designing an accurate hypernetwork for predicting multiresolution hash encoding weights requires a thorough understanding of these weights. To gain insight, we mapped the weights of each encoding level back to the corresponding grid space (\(g_{x,y,z}=W_{\text{hash}(x,y,z)}\)), and visualized them using histograms and volume renderings, as shown in Figure 6.

#### Parametric Encoding is an Embedding of Local Features

Our analysis reveals that when the number of grid vertices is smaller than the hash table size, a strong correlation exists between the encoder parameter values and the actual data values. Furthermore, different features tend to capture different details, resulting in distinct weight distributions. These observations suggest that INRs with parametric encoding operate differently from those parameterized solely by MLPs. With parametric encoding, local details in the data are simply projected into a high-dimensional weight space, rather than being approximated indirectly through MLPs. These findings agree with our intuition about parametric encoding and explain why employing multiresolution hash encoding can result in a loss of generalizability.

Fig. 6: A) The volume visualization of the **vortices** data as well as the histogram of data values. B), C) Volume visualizations and histograms of the corresponding hash encoding weights.

#### Hash Function Breaks Local Similarities

When the total number of grid vertices exceeds the hash table size, a spatial hash function is utilized to condense encoding parameters. Our results indicate that this process breaks the aforementioned correlation between parameter values and data values. These local similarities are broken in the hashed space and cannot be restored. This suggests that when designing a hypernetwork for multiresolution hash encoding, levels processed by the spatial hash function must be treated differently from those without it. Operations such as CNNs that take advantage of local spatial coherence may not work even after remapping encoding parameters back to the grid space. Our findings motivate us to focus on weight interpolation methods that rely solely on correlations between parameters stored at the same location in the hash table.

### _Shared MLP Unit_

The fundamental principle behind HyperINR is the effective organization of the INR weight space to facilitate interpolation.
Thus, it is crucial to first identify the appropriate interpolation method. In this section, we examine two weight interpolation designs. These two designs are visually illustrated in Figure 8. The straightforward design involves optimizing a standalone INR for each training data frame, and then interpolating all the network weights for parameters outside the training set. However, this design demonstrates a lack of robustness to perturbations (i.e., different random seeds), resulting in totally different network weights if repeatedly trained on the same dataset. While these INRs can yield similar inference outcomes, interpolating among their network weights is generally meaningless because network parameters in the same relative location may hold entirely different meanings across INR instances. The lower part of Figure 8B presents a volume rendering result of the INR produced by this weight interpolation design. In contrast, we construct our HyperINR by incorporating a shared MLP among all positional encoders. This MLP serves as a common projector, ensuring consistent meanings for network parameters across different INR instances and thus rendering weight interpolation feasible. Our finding, depicted in the upper part of Figure 8B, substantiates the effectiveness of a shared MLP unit in enabling meaningful weight interpolation for HyperINR.

### _Comparison between INRs_

With a good interpolation strategy, we now investigate the necessity of employing a hypernetwork to achieve both good generalizability and inference speed. In this section, we present an experimental comparison between HyperINR and three distinct INR architectures. We conducted our experiments on the TSR task, utilizing the **vortices** dataset. The three neural networks we employed in our experiment are CoordNet [14], NeurComp [23], and a hash encoding based INR extended to 4 dimensions (HashINR). We included CoordNet and NeurComp in our experiment as CoordNet has recently demonstrated strong performance on TSR tasks, and there is no report of NeurComp on TSR tasks to the best of our knowledge. HashINR processes spatial coordinates using hash encoding and time using OneBlob encoding [29]. Such a HashINR has been used by several computer graphics works [30] and demonstrated good learning capabilities while offering very good inference performance. To ensure fair comparisons, we used the standard CoordNet configuration described in Section 5 and equalized the number of trainable parameters used in all the other networks (to around 2.0\(\sim\)2.1M). Specifically, for NeurComp, we used 100 resblocks of size 327; for HyperINR, we constructed 6 small hash encoders (\(T=2^{14}\)); and for HashINR, we used medium-sized hash tables (\(T=2^{17}\)). We optimized all the networks sufficiently and report their performances in Figure 7A-B. Notably, we observed that HashINR did not perform well. To rule out the possibility that the HashINR was limited by its network capacity, we performed another comparison against a HashINR with very large hash tables (\(T=2^{21}\)) and a HyperINR with sufficient hash encoders (\(N=24\)), with results highlighted in Figure 7C. Our experimental results lead us to three conclusions. Firstly, under equal conditions, CoordNet performed significantly better than NeurComp in terms of generalizability. This is likely due to the differences in network design, as their network capacities were equalized.
Secondly, compared with the two pure MLP-based INRs, HashINR was unable to achieve good performance using the same amount of trainable parameters. Even after increasing the hash table size, HashINR still struggled to perform well on unseen parameters. Finally, we found that HyperINR outperformed HashINR in terms of generalizability by splitting a larger hash table into multiple smaller ones. However, there was still a performance gap compared to CoordNet. To bridge this gap, we utilize knowledge distillation.

Fig. 8: Comparisons of different weight interpolation methods. A) The method that uses a shared MLP. B) Volume rendering results. C) The naive design that employs different MLPs for different hash encoders.

Fig. 7: Comparisons of different INR architectures on the temporal super-resolution task. A), B) Comparisons of networks with an equalized number of trainable parameters. C) To rule out the possibility that HashINR and HyperINR were limited by network capacity, another comparison was performed against a HashINR with a very large hash table size (\(T=2^{21}\)) and a HyperINR with sufficient hash encoders (\(N=24\)).

### _Hyperparameter Study_

With a strong network architecture, we now determine the optimal hyperparameters for HyperINR. Specifically, we examined parameters related to individual hash encoders, as well as the impact of encoder positions in the parameter space on performance.

#### Hash Encoder Parameters

To evaluate parameters related to individual hash encoders, we constructed various HyperINRs and tasked them with performing TSR on the **vortices** dataset. We maintained a fixed number of 24 encoders, distributed uniformly throughout the temporal domain, and assessed reconstruction quality using SSIM. Results are reported in Figure 10. First, we set \(T=2^{15}\), \(F=4\), and varied the number of encoding levels \(L\) from 1 to 16. Our findings suggest that \(L>2\) is sufficient for this particular TSR problem. Increasing \(L\) does not yield significant improvements in generalizability. Next, we held \(L\) constant at 8 and varied the number of features per encoding level \(F\) from 1 to 8. Results showed that good performance can be achieved with \(F>1\), but increasing \(F\) yields diminishing returns. Finally, with \(F=4\) held constant, we adjusted the hash table size \(T\) from \(2^{14}\) to \(2^{19}\). In general, we observed that a larger hash table size leads to better reconstruction quality, although the differences were minimal for our particular problem. We conclude that selecting appropriate hyperparameters depends on the data complexity, and for more complicated data, larger hyperparameter values are likely necessary. We repeated these experiments using other datasets and determined that \(T=2^{15}\), \(F=4\), and \(L=8\) generally produce good performance across all cases considered in this study.

#### Hash Encoder Locations

In this section, we present our investigation of the impact of hash encoder positions \(\mathcal{E}\) on data prediction quality. We began by studying the problem in 1D using the TSR task and subsequently moved to higher dimensions using the DGS task. For the TSR task, we utilized the **vortices** dataset and constructed 6 HyperINR networks with varying numbers of hash encoders, as depicted in Figure 9A-B. These hash encoders were uniformly distributed along the time axis and configured equally based on findings obtained from the experiment described in Section 6.4.
We then followed our knowledge distillation process, utilizing a pre-trained CoordNet to distill all the HyperINRs equally and sufficiently (i.e., for more than 30k epochs). Network performances were measured using PSNR and SSIM. Our results show that the knowledge of a CoordNet can be fully distilled into a HyperINR with 16 or more hash encoders. Notably, this number is close to the number of training samples in the TSR task. Further experimentation on the **pressure** and **temperature** datasets confirms that this is not a coincidence. We conclude that, for TSR tasks, the number of hash encoders needs to be comparable to the training set size to ensure good knowledge distillation performance. Moving to higher dimensions, we conducted experiments using the DGS task with 4 different HyperINRs. For the first three networks, we used Poisson disk sampling to generate encoder positions \(\mathcal{E}\) with different disk radii, resulting in approximately 100, 150, and 200 hash encoders, as shown in Figure 9C. For the fourth model, we used approximately 200 hash encoders, but with 3/4 of them positioned using Gaussian kernels centered around training set parameters. This resulted in a stronger bias towards the training data. Our results, as shown in Figure 9C, demonstrate that our model can significantly outperform direct data interpolation with only 100 hash encoders. However, this number of hash encoders cannot fully capture CoordNet's behavior. Nevertheless, gradually adding more hash encoders can shrink the gap between CoordNet and HyperINR, indicating that Poisson disk sampling in general can provide a good heuristic for hash encoder positions. Moreover, it is possible to further improve the model's performance and allow it to be biased towards known information. This operation can be useful if there is prior knowledge about which areas of the parameter space will be explored more.

Fig. 10: Hyperparameter study of hash encoder parameters.

Fig. 9: The impact of hash encoder positions \(\mathcal{E}\) on the data reconstruction quality was examined in two tasks. The first task, A-B), conducted temporal super-resolution using the **vortices** dataset. The second task, C), used the **mechhand** dataset to address dynamic global shadows. In C), each line represents the average SSIM across 5 different \(\theta\) angles, with filled areas highlighting the \(\pm 1\) standard deviation regions.

## 7 Application Performance

This section presents the performance of HyperINR in terms of prediction quality and inference speed for each application. We provide a summary of the training-related information in Table 2.

### Novel View Synthesis

To evaluate the image generation quality of HyperINR, we predicted images for all parameters in the evaluation set using HyperINR and inverse distance weighting (LERP), and compared them with the actual images in the evaluation set. We then calculated the PSNR scores and highlighted the differences in Figure 13. Our visualizations show that HyperINR-generated images generally have higher PSNR scores compared to LERP-generated images. We also conducted a statistical hypothesis test on whether the PSNR differences were statistically greater than zero. The p-value for the test was \(1.2\times 10^{-12}<0.05\), indicating a significant difference. The predicted images are displayed in Figure 11, which also includes results generated by CoordNet for comparison.
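For concreteness, a minimal SciPy sketch of this comparison is given below. The text does not name the exact test used, so a one-sided paired t-test on per-image PSNR differences is assumed here; images are assumed to lie in \([0,1]\).

```python
# A sketch of the significance test on per-image PSNR differences between
# HyperINR and LERP predictions. The specific test is an assumption.
import numpy as np
from scipy import stats

def psnr(pred, ref, peak=1.0):
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def test_psnr_gain(hyper_imgs, lerp_imgs, ref_imgs):
    diffs = np.array([psnr(h, r) - psnr(l, r)
                      for h, l, r in zip(hyper_imgs, lerp_imgs, ref_imgs)])
    # H0: mean PSNR difference <= 0; H1: HyperINR scores are greater
    t, p = stats.ttest_1samp(diffs, 0.0, alternative="greater")
    return diffs.mean(), p
```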
We found that CoordNet could generally predict images at novel views well, while LERP images would produce distracting artifacts. HyperINR could avoid these artifacts but introduced some high-frequency noises. Despite these advantages, we observed that when CoordNet's performance was not satisfactory, HyperINR's performance also deteriorated, highlighting one of the limitations of our approach. Finally, we compared the network inference throughput between CoordNet and HyperINR in Figure 12 and found that our method was around \(100\times\) faster. For the dynamic global shadow task, we adjusted the azimuthal angle \(\phi\) from 132\({}^{\circ}\) to 180\({}^{\circ}\) while fixing the polar angle \(\theta\). The first and last data frames were presented in the training set. Our visualizations show that LERP could not correctly predict the movement of the shadow and gradually faded shadows in and out. Conversely, HyperINR was able to plausibly predict the shadow movement. However, the generated INR seemed to also make the volume brighter in some areas, perhaps due to some incorrectly predicted shadow coefficient values. Additionally, we highlight the rendering performance differences in Figure 16. We found that our method significantly improved the rendering performance by avoiding secondary rays. For the tested configuration, we observed a roughly 2\(\times\) speedup. Furthermore, we present the INR generation latency in Table 2, which indicates that our method can support real-time shadow INR generation.

## 8 Discussion and Future Work

In this section, we present four key insights derived from our study. Firstly, our results generally highlight HyperINR's exceptional performance for data and visualization generation. This can be attributed to the incorporation of CoordNet, which is a highly effective teacher network. In addition, our experiments also demonstrate the possibility of constructing a hypernetwork with a weight space meaningful for interpolation. Such a hypernetwork can continuously approximate a high-dimensional space utilizing a finite number of encoder instances. Although we employ knowledge distillation for optimization in this study, directly optimizing a generative hypernetwork to achieve comparable data generation performance should be feasible and represents a promising research direction. Secondly, our HyperINR leverages numerous small multiresolution hash encoders to approximate data modeled by high-dimensional parameters, providing a more flexible and effective approach compared to using a single large encoder. This observation is corroborated by our ablation study, as detailed in Section 6.3. This attribute makes HyperINR well-suited for learning ensembles of data comprising diverse data frames. Nonetheless, as highlighted in Section 6.4, efficiently constructing encoder positions can be challenging and often necessitates extensive experimentation. In higher dimensions, the addition of encoders does not always result in a linear improvement in HyperINR's performance. Therefore, a more general and effective method for computing encoder positions is highly desirable. Thirdly, as demonstrated in Table 2, the knowledge distillation employed to train HyperINR can be time-consuming due to two factors. For one, achieving high distillation quality may require a large distillation set, prolonging the training process. For another, when the distillation set is generated on-demand, the training speed can be significantly constrained by the inference bandwidth of CoordNet.
We alleviated this bottleneck by pre-computing the distillation set, resulting in up to an 8\(\times\) speedup on an NVIDIA A100 for certain cases. However, this optimization remains inefficient when the distillation set size surpasses the GPU memory capacity, pushing training data to the CPU and causing the training process to be constrained by the CPU/GPU bandwidth. Exploring more effective distillation set generation methods warrants further investigation. Lastly, while knowledge distillation can substantially enhance HyperINR's data generation capabilities, it also imposes an upper limit on its performance. In cases where the teacher network underperforms, HyperINR's performance may also be considerably hindered, as illustrated in Figure 15. Investigating more flexible training strategies represents a promising avenue for future research.

Fig. 16: Rendering results of the dynamic global shadow task. Renderings were performed on a single NVIDIA RTX8000 GPU. As highlighted by white boxes, HyperINR could more accurately predict the shadow movements, while LERP gradually faded shadows in and out.

Fig. 15: Rendering results of temporal super-resolution experiments using (Left) the **pressure** dataset and (Right) the **temperature** dataset. Renderings were performed on a single NVIDIA RTX8000 GPU.

## 9 Conclusion

We introduce HyperINR, an innovative hypernetwork facilitating conditional generation of INRs for unseen scene parameters. Enabled by the employment of numerous small multiresolution hash encoders, a shared MLP, and a deeply embedded weight interpolation operation, HyperINR achieves an impressive 100\(\times\) higher inference bandwidth and interactive volume rendering with exceptional realism. Moreover, our method attains state-of-the-art data and visualization generation performance through knowledge distillation. Our results underscore the potential of HyperINR in various visualization tasks, showcasing its effectiveness and efficiency. We believe that HyperINR represents a step forward in the development of implicit neural representation based approaches for the field of scientific visualization and beyond.

## Acknowledgments

This research was supported in part by the Department of Energy through grant DE-SC0019486 and an Intel oneAPI Centers of Excellence grant. The authors also express sincere gratitude to Weishen Liu (UC Davis alumnus) and Daniel Zavorotny (UC Davis) for their assistance with data preparation.
2303.17635
Beyond AdS$_2$/dCFT$_1$: Insertions in Two Wilson Loops
We consider two-point correlators of local operator insertions in a system of two Wilson-Maldacena loops in ${\cal N}=4$ supersymmetric Yang-Mills theory on both sides of the AdS/CFT correspondence. On the holographic side the correlator of two Wilson-Maldacena loops is given by a classical string world-sheet which in one phase connects two asymptotically AdS$_2$ regions and in the other phase is given by two disconnected AdS$_2$ caps; this configuration breaks supersymmetry as well as conformal invariance. We present a complete systematic account of the string world-sheet fluctuations, including the fermionic sector, and study the behavior of the holographic two-point correlators. On the field theory side we compute certain two-point correlators of local operator insertions by resumming sets of ladder diagrams. Our results demonstrate the efficacy of previously developed methods in tackling this non-conformal, non-susy regime.
Diego H. Correa, Alberto Faraggi, Wolfgang Mück, Leopoldo A. Pando Zayas, Guillermo A. Silva
2023-03-30T18:01:24Z
http://arxiv.org/abs/2303.17635v2
# Beyond AdS\({}_{2}\)/dCFT\({}_{1}\): Insertions in Two Wilson Loops

###### Abstract

We consider two-point correlators of local operator insertions in a system of two Wilson-Maldacena loops in \(\mathcal{N}=4\) supersymmetric Yang-Mills theory on both sides of the AdS/CFT correspondence. On the holographic side the correlator of two Wilson-Maldacena loops is given by a classical string world-sheet which in one phase connects two asymptotically AdS\({}_{2}\) regions and in the other phase is given by two disconnected AdS\({}_{2}\) caps; this configuration breaks supersymmetry as well as conformal invariance. We present a complete systematic account of the string world-sheet fluctuations, including the fermionic sector, and study the behavior of the holographic two-point correlators. On the field theory side we compute certain two-point correlators of local operator insertions by resumming sets of ladder diagrams. Our results demonstrate the efficacy of previously developed methods in tackling this non-conformal, non-susy regime.

## 1 Introduction

The duality between \(\mathcal{N}=4\) supersymmetric
Yang-Mills and strings in AdS\({}_{5}\times\) S\({}^{5}\) is the paradigmatic example of the AdS/CFT correspondence [1], and the Wilson-Maldacena half-BPS operators have played a central role in it since the very inception of the conjecture [2; 3; 4; 5; 6]. The drive for increasingly precise computations of vacuum expectation values on each side of the correspondence has proven to be quite a fruitful direction. Pestun's localization result [7], proving the conjecture that expectation values of such half-BPS Wilson loops are given by the Gaussian matrix model, opened the door to various precision tests. In particular, given that the holographic side describes the Wilson loop as a semi-classical string, precision studies beyond the leading order helped to understand aspects of string perturbation theory in this background [8; 9; 10; 11; 12; 13] and paved the way for successful holographic tests based on string theory on other backgrounds [14; 15]. Since the classical string world-sheet has an AdS\({}_{2}\) geometry and the Wilson-Maldacena loop can be interpreted as a defect CFT\({}_{1}\) inside \(\mathcal{N}=4\) SYM, this setup furnishes a rigorous instance of AdS\({}_{2}\)/CFT\({}_{1}\) descending directly from string theory on AdS\({}_{5}\times S^{5}\). In this context, some quite impressive results for two-point [16] and four-point [17] correlators of certain protected operators have been obtained. The results are in complete agreement with the expectations of CFT\({}_{1}\), and highly non-trivial information about anomalous dimensions as a function of the coupling can be read off from them. Moreover, these results are corroborated by other independent methods including the integrability-based quantum spectral curve method [18] and analytic bootstrap methods [19]. In this manuscript we build on these interesting developments and initiate the study of correlators between insertions of operators in a system of two Wilson-Maldacena loops [20]. A system of two Wilson-Maldacena loops provides a setup where both conformal symmetry as well as supersymmetry are broken. The holographic dual of such a system was first discussed in [21; 22]. It is encouraging that even in this context we are able to make progress and compute, in various approximations, the two-point correlators on either side of the correspondence following relatively standard techniques. We are also motivated by the fact that the expectation value of two Wilson-Maldacena loops is described, in one phase, by a string world-sheet that connects two asymptotically AdS\({}_{2}\) regions where the loops are placed, a Euclidean wormhole. In the other phase, after the Gross-Ooguri transition [23], the configuration dominating the partition function is given by two disconnected AdS\({}_{2}\) world-sheets. This setup comes, in some aspects, tantalizingly close to a framework in which important developments in nAdS\({}_{2}\)/nCFT\({}_{1}\) have recently taken place to clarify aspects of Hawking radiation (see [24] for a review). We hope to eventually connect to such a framework; some encouraging new evidence has recently been reported in [25]. The rest of the manuscript is organized as follows. In section 2 we review the classical string world-sheet that is holographically dual to a configuration of two Wilson-Maldacena loops. Section 3 presents a detailed account of the string fluctuations; we obtain the equations of motion for the bosonic and fermionic fields. In section 4 we discuss the holographic computation of the correlators.
In section 5 we describe the field theoretic problem, introduce a number of technical resources needed to sum certain sets of diagrams, and present explicit expressions for the correlators of insertions in a system of two Wilson-Maldacena loops. We conclude in section 6, where we also point to some interesting open questions that our work motivates. We relegate some technical details to a couple of appendices.

## 2 Two holographic Wilson-Maldacena loops

A locally supersymmetric Wilson-Maldacena loop in \(\mathcal{N}=4\) SYM theory is given by \[W(C;n^{I})=\operatorname{tr}P\exp\oint_{C}\,\mathrm{d}t\,\left(iA_{\mu}\dot{x}^{\mu}+\Phi_{I}n^{I}|\dot{x}|\right). \tag{1}\] Here, both the gauge field \(A_{\mu}\) and the scalars \(\Phi_{I}\) (\(I=1,2,\ldots,6\)) are assumed to be in the fundamental representation of the gauge group \(U(N)\). The coupling to \(\Phi_{I}\), with \(n^{I}\) being a 6-d unit vector, was introduced by Maldacena [2] and is crucial for supersymmetry. In this manuscript, we consider two loops, with coaxial circular contours, \(C_{1}\) and \(C_{2}\), of opposite orientations and scalar couplings \(n^{I}_{1}\) and \(n^{I}_{2}\) [20], \[\begin{split} C_{1}:&\quad x^{\mu}_{1}(\phi)=(R_{1}\cos\phi,R_{1}\sin\phi,0,0)\,\qquad\quad n^{I}_{1}=(0,0,0,0,0,1)\,\\ C_{2}:&\quad x^{\mu}_{2}(\phi)=(R_{2}\cos\phi,-R_{2}\sin\phi,h,0)\,\qquad n^{I}_{2}=(0,0,0,0,\sin\gamma,\cos\gamma)\.\end{split} \tag{2}\] This configuration, with the contours placed on two parallel planes separated by a distance \(h\), can be shown to be equivalent, by a conformal transformation, to a configuration of two concentric loops on the same plane. In appendix A we review, for the benefit of the reader, the conformal transformations relating them and show that they are all characterized by the invariant parameter \[\alpha=\frac{2R_{1}R_{2}}{h^{2}+R_{1}^{2}+R_{2}^{2}}. \tag{3}\] Thereby, the approach to be discussed below provides a unified description for various configurations considered in the literature, for example those in the original papers [21; 22], as well as configurations obtained using integrability [26]. We remind the reader that the setup of separated parallel loops was instrumental in describing the Gross-Ooguri phase transition [23]. In this section, we first re-derive the background solution. Second, we characterize its geometry, which we will need for the field equations of the fluctuations in section 3. Last, we repeat the calculation of the on-shell effective action with a renormalization different from [20] and show that the results remain unchanged. This implies that the phase transition between the connected and disconnected configurations is a robust phenomenon.

### Background solution

Let us start by considering the AdS\({}_{5}\times S^{5}\) bulk solution. Its metric is \[\mathrm{d}s^{2}=\frac{L^{2}}{z^{2}}\left(\,\mathrm{d}z^{2}+\,\mathrm{d}r^{2}+r^{2}\,\mathrm{d}\phi^{2}+\,\mathrm{d}x^{2}-\,\mathrm{d}t^{2}\right)+L^{2}\left(\,\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\Omega_{4}^{2}\right). \tag{4}\] \[F_{5}=\frac{4}{L}\left(1+*\right)\epsilon_{S^{5}}\, \tag{5}\] where \(\epsilon_{S^{5}}\) is the volume form of the \(S^{5}\) part of the bulk (with radius \(L\), not the unit \(S^{5}\)). \(F_{5}\) will be relevant only for the fermion fields. Exploiting the conformal invariance, we consider the string configuration in the form in which the two boundaries lie on concentric circles on the same plane, \(x=t=0\), and have radii \(R_{1}=R_{-}\) and \(R_{2}=R_{+}\).
For the time being, we may take \(R_{+}\geq R_{-}\) without loss of generality. Analyzing the world-sheet using concentric rings has the advantage that there is one less variable to consider, namely \(x\), compared to the general set-up. Moreover, the \(x\)-direction becomes a normal direction and is manifestly on the same footing as the \(t\)-direction. This is helpful when parameterizing the fluctuations. Our ansatz for the world-sheet is \(z=z(\tau)\), \(r=r(\tau)\), \(\theta=\theta(\tau)\), \(\phi=\sigma\), while \(x\), \(t\) and the position on \(\Omega_{4}\) are fixed. Thus, the induced metric reads (\({}^{\prime}=\frac{\mathrm{d}}{\mathrm{d}\tau}\)) \[g_{\alpha\beta}=\begin{pmatrix}\frac{L^{2}}{z^{2}}\left(z^{\prime 2}+r^{\prime 2}+z^{2}\theta^{\prime 2}\right)&0\\ 0&\frac{L^{2}r^{2}}{z^{2}}\end{pmatrix}. \tag{6}\] With our ansatz, the Euclidean Nambu-Goto action takes the form \[S_{NG}=\sqrt{\lambda}\int\,\mathrm{d}\tau\frac{r}{z^{2}}\sqrt{z^{\prime 2}+r^{\prime 2}+z^{2}\theta^{\prime 2}}\, \tag{7}\] with \(\lambda=L^{4}/\alpha^{\prime 2}\). Notice we still have the freedom to fix \(\tau\)-reparameterizations (see [27] for a similar approach). The solution we are after can be characterized in terms of two conserved charges. The rigid symmetries of (7) are \(\theta\to\theta+\delta\theta\) and the scaling symmetry \(r\to cr\), \(z\to cz\). The associated Noether charges are \[K=\frac{r\theta^{\prime}}{\sqrt{z^{\prime 2}+r^{\prime 2}+z^{2}\theta^{\prime 2}}}\, \tag{8}\] \[C=\frac{r(zz^{\prime}+rr^{\prime})}{z^{2}\sqrt{z^{\prime 2}+r^{\prime 2}+z^{2}\theta^{\prime 2}}}\, \tag{9}\] respectively.

Figure 1: Two concentric Wilson loops with their respective orientations.

To proceed, we solve (8) for \(\theta^{\prime}\) (without loss of generality we can assume \(\theta^{\prime}\geq 0\)) and substitute the solution into (9). It is, however, useful to change variables by setting \[z=\rho\sin\psi\,\qquad r=\rho\cos\psi\, \tag{10}\] so that the two world-sheet boundaries are given by \(\psi=0\) and \(\rho=R_{\pm}\). The entire procedure gives rise to the differential equation \[\rho^{\prime 2}\left(\cos^{2}\psi-K^{2}\sin^{2}\psi-C^{2}\sin^{4}\psi\right)=C^{2}\rho^{2}\psi^{\prime 2}\sin^{4}\psi. \tag{11}\] For \(C\neq 0\), we can take \(\psi\) to be a function of \(\tau\) which increases from zero to a certain maximum at \(\tau=\tau_{0}\) and then decreases back to zero at the other boundary.1 Footnote 1: The solution describing a single Wilson loop (with only one boundary) has the parameters \(C=K=0\). Then, \(\rho=R\) and \(\psi=\tau\in[0,\frac{\pi}{2}]\). Other solutions with \(C=0\) can be obtained by setting \(C\to 0\) in the equations below. From (11), the maximum \(\psi_{0}=\psi(\tau_{0})\) satisfies \[K^{2}=\cot^{2}\psi_{0}-C^{2}\sin^{2}\psi_{0}\, \tag{12}\] which we may use to eliminate \(K\) or \(C\) in favour of \(\psi_{0}\). After eliminating \(K\), (11) gives rise to \[\frac{1}{\rho}\rho^{\prime}=\frac{C\sin\psi_{0}\sin^{2}\psi|\psi^{\prime}|}{\sqrt{(\sin^{2}\psi_{0}-\sin^{2}\psi)(1+C^{2}\sin^{2}\psi_{0}\sin^{2}\psi)}}. \tag{13}\] Integrating this along the entire string yields \[\ln\frac{R_{+}}{R_{-}}=2J\, \tag{14}\] where we have abbreviated2 \[J=\int\limits_{0}^{\psi_{0}}\frac{C\sin\psi_{0}\sin^{2}\psi\ \mathrm{d}\psi}{\sqrt{(\sin^{2}\psi_{0}-\sin^{2}\psi)(1+C^{2}\sin^{2}\psi_{0}\sin^{2}\psi)}}. \tag{15}\] Footnote 2: The integral \(J\) was called \(F(s,t)\) in [20]. Evidently, \(R_{+}>R_{-}\) for \(C>0\). Replacing \(C\) by \(-C\) exchanges \(R_{+}\) and \(R_{-}\).
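As a quick numerical cross-check of these relations, the short Python sketch below evaluates (15) by quadrature for given \(\psi_{0}\) and \(C\) (with \(K\) fixed by (12)) and recovers the radius ratio from (14); the sample values are illustrative and not taken from the text.

```python
# A small numerical sketch of (12), (14) and (15): given psi_0 and C, compute
# K and the integral J, hence the radius ratio R_+/R_- = exp(2J).
import numpy as np
from scipy.integrate import quad

def radius_ratio(psi0, C):
    K2 = 1.0 / np.tan(psi0) ** 2 - C**2 * np.sin(psi0) ** 2  # Eq. (12)
    assert K2 >= 0.0, "psi_0 exceeds the maximal opening angle for this C"
    s0 = np.sin(psi0)
    def integrand(psi):
        den = np.sqrt((s0**2 - np.sin(psi) ** 2)
                      * (1.0 + C**2 * s0**2 * np.sin(psi) ** 2))
        return C * s0 * np.sin(psi) ** 2 / den               # Eq. (15)
    J, _ = quad(integrand, 0.0, psi0)  # endpoint singularity is integrable
    return np.exp(2.0 * J)             # Eq. (14)

print(radius_ratio(psi0=0.6, C=1.0))   # R_+/R_- > 1 for C > 0
```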
The result (14) can be generalized to all conformally equivalent configurations by expressing the left hand side in terms of the invariant combination (3). Using a combination that is independent of the sign of \(C\), one has \[\frac{1}{2}\left(\frac{R_{+}}{R_{-}}+\frac{R_{-}}{R_{+}}\right)=\frac{1}{\alpha}\, \tag{16}\] so that (14) yields \[\frac{1}{\alpha}=\cosh(2J). \tag{17}\] The displacement \(\gamma\) along \(S^{5}\) is obtained after substituting (10), (12) and (13) into (8) and solving it for \(\theta^{\prime}\), \[\theta^{\prime}=\frac{|\psi^{\prime}|K\sin\psi_{0}}{\sqrt{(\sin^{2}\psi_{0}-\sin^{2}\psi)(1+C^{2}\sin^{2}\psi_{0}\sin^{2}\psi)}}. \tag{18}\] Integrating along the entire string gives \[\gamma=K\hat{\gamma}=\int\limits_{0}^{\psi_{0}}\frac{2K\sin\psi_{0}\ \mathrm{d}\psi}{\sqrt{(\sin^{2}\psi_{0}-\sin^{2}\psi)(1+C^{2}\sin^{2}\psi_{0}\sin^{2}\psi)}}. \tag{19}\] We shall return later to the integrals in (17) and (19).

### Geometry

Let us schematically denote the AdS\({}_{5}\times S^{5}\) coordinates by \[X^{\mu}=(z,r,\phi,\vec{x};\theta,\vec{\varphi})\,\] where \(\vec{x}=(x_{1},x_{2})\) are two Euclidean coordinates, and the \(\varphi\)'s are coordinates on \(S^{4}\). The semicolon separates the AdS\({}_{5}\) from the \(S^{5}\) part. The tangent vectors on the background string world-sheet are \[X^{\mu}_{\tau}=\left(z^{\prime},r^{\prime},0,\vec{0};\theta^{\prime},\vec{0}\right)\, \tag{20}\] \[X^{\mu}_{\sigma}=\left(0,0,1,\vec{0};0,\vec{0}\right). \tag{21}\] Using the equations of the previous subsection, the induced metric (6) reduces to \[g_{\alpha\beta}=L^{2}\cot^{2}\psi\begin{pmatrix}\frac{\theta^{\prime 2}}{K^{2}}&0\\ 0&1\end{pmatrix}. \tag{22}\] In the light of this result, we find it henceforth useful to adopt the gauge3 \[\theta=K\tau\,\qquad\tau\in(0,2\tau_{0})\, \tag{23}\] so that the induced metric is conformal to the Euclidean metric on the cylinder. Footnote 3: This is fine in the limit \(K\to 0\), as can be verified from (8). For massless scalars on the world-sheet, which are conformally invariant, the only possible parameter would be the height of the cylinder, \(2\tau_{0}=\hat{\gamma}\), where \(\hat{\gamma}\) was defined in (19). This will have implications for some of the correlators in subsequent sections. For completeness, in the gauge (23), (18) and (13) are simply \[\theta^{\prime}=K\,,\qquad\rho^{\prime}=C\rho\sin^{2}\psi\,,\qquad\psi^{\prime}=\pm\sqrt{\cos^{2}\psi-K^{2}\sin^{2}\psi-C^{2}\sin^{4}\psi}. \tag{24}\] As discussed before, the \(+\) and \(-\) signs apply for \(\tau\in(0,\tau_{0})\) and \(\tau\in(\tau_{0},2\tau_{0})\), respectively. One can proceed to calculate the other geometric quantities characterizing the embedding of the background world-sheet. We follow the prescriptions and notation summarized in appendix B. First, one needs to specify an orthonormal set of normal vectors. We take \[N_{2}^{\mu}=\frac{z}{L\sqrt{z^{\prime 2}+r^{\prime 2}}}\left(-r^{\prime},z^{\prime},0,\vec{0};0,\vec{0}\right)\, \tag{25}\] \[N_{3}^{\mu}=\frac{1}{rL\sqrt{z^{\prime 2}+r^{\prime 2}}}\left(Kz^{2}z^{\prime},Kz^{2}r^{\prime},0,\vec{0};K^{2}z^{2}-r^{2},\vec{0}\right)\, \tag{26}\] \[N_{i}^{\mu}=\frac{z}{L}\left(0,0,0,\vec{n}_{i};0,\vec{0}\right)\,\qquad(i=4,5) \tag{27}\] \[N_{i}^{\mu}=\frac{1}{L\sin\theta}\left(0,0,0,\vec{0};0,\vec{e}_{i}\right)\,\qquad(i=6,7,8,9) \tag{28}\] where \(\vec{n}_{i}\) and \(\vec{e}_{i}\) denote orthonormal bases on the (Lorentzian) 2-plane and on a unit \(S^{4}\), respectively.
For convenience, we shall use the indices \(i=(2,3,\dots,9)\) for the normal vectors and reserve \((0,1)=(\underline{\tau},\sigma)\) for the flat world-sheet indices. The second fundamental forms, \(H^{i}{}_{\alpha\beta}\), are determined by the equation of Gauss. The result is \[H^{2}{}_{\alpha\beta} =\frac{LC}{\sqrt{1-K^{2}\tan^{2}\psi}}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\, \tag{29}\] \[H^{3}{}_{\alpha\beta} =\frac{KL\psi^{\prime}}{\sin\psi\cos\psi\sqrt{1-K^{2}\tan^{2}\psi}}\begin{pmatrix}-1&0\\ 0&1\end{pmatrix}\, \tag{30}\] and all others vanish. They are, of course, traceless. The equation of Weingarten determines the connections in the normal bundle, \(A^{i}{}_{j\alpha}\). It turns out that the only non-vanishing connection is \[A^{2}{}_{3\tau}=-A^{3}{}_{2\tau}=\frac{KC\tan^{2}\psi}{1-K^{2}\tan^{2}\psi}. \tag{31}\] ### Renormalized on-shell action The on-shell action must be regularized and renormalized to obtain a sensible value. In [20] this was done by a simple subtraction of the leading divergent term. Here, we prefer to use a covariant prescription adding to (7), at the two boundaries, the counter term \[S_{\text{c.t.}}=-\operatorname{sgn}z^{\prime}\,z\,\frac{\delta S_{NG}}{\delta z^{\prime}}\, \tag{32}\] with \(S_{NG}\) given by (7). Using the background relations and the gauge (23), the regularized action becomes \[S_{reg}=\sqrt{\lambda}\left[\,\int\limits_{\epsilon}^{2\tau_{0}-\epsilon}\mathrm{d}\tau\cot^{2}\psi-\left[\operatorname{sgn}z^{\prime}\left(\cot\psi\,\psi^{\prime}+C\sin^{2}\psi\right)\right]_{\epsilon}^{2\tau_{0}-\epsilon}\right]. \tag{33}\] In particular, we have \(\operatorname{sgn}z^{\prime}=+1\) at \(\tau=0\) and \(\operatorname{sgn}z^{\prime}=-1\) at \(\tau=2\tau_{0}\). Therefore, the second part of the counter term cancels between the two boundaries (it also would vanish, because \(\sin\psi\to 0\) for \(\epsilon\to 0\)). Moreover, one can show that \[\cot^{2}\psi=-\partial_{\tau}\left(\cot\psi\,\psi^{\prime}\right)-(\psi^{\prime})^{2}-C^{2}\sin^{2}\psi\cos^{2}\psi\, \tag{34}\] so that the total derivative just cancels the counter term. The integral of the remainder is finite when the cut-off is removed. Thus, the renormalized on-shell action is \[S_{ren} =-\sqrt{\lambda}\int\limits_{0}^{2\tau_{0}}\mathrm{d}\tau\left[(\psi^{\prime})^{2}+C^{2}\sin^{2}\psi\cos^{2}\psi\right]\] \[=-2\sqrt{\lambda}\int\limits_{0}^{\psi_{0}}\mathrm{d}\psi\Bigg{[}\sqrt{1-(K^{2}+1)\sin^{2}\psi-C^{2}\sin^{4}\psi} \tag{35}\] \[\quad+\frac{C^{2}\sin^{2}\psi\cos^{2}\psi}{\sqrt{1-(K^{2}+1)\sin^{2}\psi-C^{2}\sin^{4}\psi}}\Bigg{]}\.\] For completeness of presentation, we wish to repeat the study [20] of the on-shell action as a function of the macroscopic parameters \(\Delta\theta\) and \(J\). In [20], the parameters \[s=\sin^{2}\psi_{0}\,\qquad t=C^{2}\sin^{4}\psi_{0}\, \tag{36}\] were introduced, with the parameter space limited by a triangle, \(0\leq t\leq 1-s\leq 1\). The relation (12) implies \[K^{2}=\frac{1-s-t}{s}. \tag{37}\] After changing the integration variable to \[x=\frac{\sin^{2}\psi}{\sin^{2}\psi_{0}}\, \tag{38}\] equations (19) and (15) become complete elliptic integrals with modulus4 Footnote 4: Throughout, \(\mathbf{K}\), \(\mathbf{E}\) and \(\mathbf{\Pi}\) denote the complete elliptic integrals of the first, second and third kind with modulus \(k\), whose dependence we suppress. \[k=\sqrt{\frac{s+t}{1+t}}.
\tag{39}\] Specifically, one finds \[\gamma=\frac{K}{\sqrt{t}}\int\limits_{0}^{1}\frac{\mathrm{d}x}{\sqrt{x(1-x)(\frac{1}{s}-x)(x+\frac{1}{t})}}=2\sqrt{\frac{1-s-t}{1+t}}\mathbf{K}\, \tag{40}\] and \[J=\int\limits_{0}^{1}\frac{x\ \mathrm{d}x}{\sqrt{x(1-x)(\frac{1}{s}-x)(x+\frac{1}{t})}}=\sqrt{\frac{t}{s(1+t)}}\left[\mathbf{K}-(1-s)\mathbf{\Pi}(s)\right]. \tag{41}\] Similarly, the renormalized on-shell action (35) becomes \[S_{ren} =-\sqrt{\lambda}\sqrt{t}\int\limits_{0}^{1}\mathrm{d}x\left[\sqrt{\frac{(1-x)(\frac{1}{t}+x)}{x(\frac{1}{s}-x)}}+\sqrt{\frac{x(\frac{1}{s}-x)}{(1-x)(\frac{1}{t}+x)}}\right]\] \[=-2\sqrt{\lambda}\sqrt{\frac{1+t}{s}}\left[\mathbf{E}-(1-k^{2})\mathbf{K}\right]. \tag{42}\] These results agree completely with those of [20], despite the fact that here we have used a different regularization of the action. The resulting phase diagram is illustrated in Figure 2 (see also [20]). We add some comments on it here. First, the upper limit of the parameter domain, \(s+t=1\), corresponds to \(K=\gamma=0\), which is the original Wilson loop correlator [21; 22]. On both axes we have \(J=0\), which implies that the boundaries of the two Wilson loops coincide and the string stretches only along \(S^{5}\). The axis \(s=0\) is the stable branch, while \(t=0\), which implies \(C=0\), is the unstable branch. The point \(s=t=0\) is the only point with \(\gamma=\pi\), but it is a singular point, because \(S_{ren}(0,t)=-\infty\), whereas \(\lim_{s\to 0}S_{ren}(s,0)=0\). For \((s,t)=(1,0)\), the connected solution degenerates into two disconnected world-sheets, which barely touch each other, with \(S_{ren}=-2\sqrt{\lambda}\). There is a first-order Gross-Ooguri [23] phase transition between the connected and the disconnected configurations. This phase transition resonates with some of the arguments involved in the recent replica analysis of Hawking radiation (see [24] for a review). We hope that our computationally transparent framework might be used to obtain a quantitative understanding. Figure 2: Phase diagram of the Wilson loop correlators in the \(s\)-\(t\)-plane. The parameter space for connected solutions is the triangle \(s,t\geq 0\), \(s+t\leq 1\). Connected solutions within the yellow shaded area have \(S_{ren}<-2\sqrt{\lambda}\) and are stable. In the blue shaded area, the disconnected solution with \(S_{ren}=-2\sqrt{\lambda}\) is the preferred configuration. The disconnected solution is represented by the point \((s,t)=(1,0)\). We remark that the curves with constant \(\gamma\) are not straight lines. ## 3 String world-sheet fluctuations ### Scalars With the scalars \(\chi^{i}\) parameterizing the fluctuations in the normal directions, expanding the Nambu-Goto action to quadratic order around the background gives the action \[S_{B}=\frac{1}{4\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\sigma\sqrt{g}\left[(\hat{\nabla}_{\alpha}\chi_{i})(\hat{\nabla}^{\alpha}\chi^{i})-\left(H_{i\alpha\beta}H_{j}{}^{\alpha\beta}+R_{mpnq}x^{\alpha m}x_{\alpha}^{n}N_{i}^{p}N_{j}^{q}\right)\chi^{i}\chi^{j}\right]. \tag{3.1}\] Hence, the field equations read \[\left[\delta^{i}_{j}\hat{\nabla}^{\alpha}\hat{\nabla}_{\alpha}+H^{i}{}_{\alpha\beta}H_{j}{}^{\alpha\beta}-M^{i}{}_{j}\right]\chi^{j}=0\, \tag{3.2}\] with \[M^{i}{}_{j}=-R_{\lambda\mu\nu\rho}g^{\alpha\beta}X^{\lambda}_{\alpha}X^{\nu}_{\beta}N^{i\mu}N_{j}{}^{\rho}. \tag{3.3}\] Note that \(\hat{\nabla}_{\alpha}\) is the generalized covariant derivative defined in appendix B.
Using the background relations, we have explicitly \[M^{2}{}_{2}=M^{i}{}_{i}=\frac{2}{L^{2}}-\frac{K^{2}}{L^{2}}\tan^{2}\psi\qquad(i=4,5)\, \tag{3.4}\] \[M^{3}{}_{3}=-M^{i}{}_{i}=\frac{K^{2}}{L^{2}}\tan^{2}\psi\qquad(i=6,7,8,9)\, \tag{3.5}\] and all the off-diagonal elements vanish. The field equations (3.2) can be written down straightforwardly. After some manipulations, they read explicitly \[\left[\frac{\partial^{2}}{\partial\tau^{2}}+\frac{\partial^{2}}{\partial\phi^{2}}+\frac{4}{\sin^{2}\psi}-\frac{\tan^{2}\psi}{\sqrt{1-K^{2}\tan^{2}\psi}}\left(\frac{\sqrt{1-K^{2}\tan^{2}\psi}}{\tan^{2}\psi}\right)^{\prime\prime}\right]\chi^{2}=\\ -\frac{2KC\tan^{2}\psi}{\sqrt{1-K^{2}\tan^{2}\psi}}\frac{\partial}{\partial\tau}\left(\frac{\chi^{3}}{\sqrt{1-K^{2}\tan^{2}\psi}}\right)\, \tag{3.6}\] \[\left[\frac{\partial^{2}}{\partial\tau^{2}}+\frac{\partial^{2}}{\partial\phi^{2}}-\frac{1}{\sqrt{1-K^{2}\tan^{2}\psi}}\left(\sqrt{1-K^{2}\tan^{2}\psi}\right)^{\prime\prime}\right]\chi^{3}=\\ \frac{2KC}{\sqrt{1-K^{2}\tan^{2}\psi}}\frac{\partial}{\partial\tau}\left(\frac{\chi^{2}\tan^{2}\psi}{\sqrt{1-K^{2}\tan^{2}\psi}}\right)\, \tag{3.7}\] and \[\left(\frac{\partial^{2}}{\partial\tau^{2}}+\frac{\partial^{2}}{\partial\phi^{2}}-2\cot^{2}\psi+K^{2}\right)\chi^{i}=0\qquad(i=4,5)\, \tag{3.8}\] \[\left(\frac{\partial^{2}}{\partial\tau^{2}}+\frac{\partial^{2}}{\partial\phi^{2}}+K^{2}\right)\chi^{i}=0\qquad(i=6,7,8,9). \tag{3.9}\] The above equations are consistent with the background world-sheet symmetry \(\tau\to 2\tau_{0}-\tau\), because \(K\) and \(C\) change sign under this symmetry. This implies that \(N_{2}^{\mu}\) flips its direction while \(N_{3}^{\mu}\) remains invariant. As a consequence, we also have \(\chi^{2}\rightarrow-\chi^{2}\). Let us make a few remarks that somewhat clarify the structure of the fluctuations in various limits and will provide a blueprint for field theory expectations. * An important point of reference are the fluctuations of the half-BPS Wilson-Maldacena loop. In that case the fluctuations have a natural 5+3 split which is interpreted as corresponding to the protected operators: \(\Phi^{a=1,2,3,4,5}\) with \(\Delta=1\) and \(F_{ti}+iD_{i}\Phi^{6}\), \(i=1,2,3\) with \(\Delta=2\). We now recognize that the last four fluctuations presented in equation (3.9) are the string theoretic scalar modes dual to the four operators constructed from \(\Phi^{a=1,2,3,4}\). Analogously, the fluctuations in equation (3.8) correspond to two operators with \(\Delta=2\). The simplicity of these two sets of equations of motion suggests that the field theory treatment might be manageable. * In the special case \(K=0\) (with arbitrary \(C\)), the bosonic modes organize themselves according to a 5+2+1 split. First, the mode \(\chi^{3}\) joins the four modes \(\chi^{i=6,7,8,9}\), forming a quintet of massless scalars. This is intuitively clear, because \(K=0\) implies no displacement of the classical world-sheet along the \(S^{5}\). For the mode \(\chi^{2}\) one can check that \[\frac{4}{\sin^{2}\psi}-\tan^{2}\psi(\cot^{2}\psi)^{\prime\prime}\stackrel{K\to 0}{\longrightarrow}-2\cot^{2}\psi+2C^{2}\tan^{2}\psi. \tag{3.10}\] The last term above, \(2C^{2}\tan^{2}\psi\), is sub-leading near the boundary and we see how the mode \(\chi^{2}\) almost pairs with the modes \(\chi^{i=4,5}\). * Let us now consider the \(C\to 0\) limit. The fluctuation \(\chi^{3}\) does not obey the same equations as \(\chi^{i=6,7,8,9}\) as there is motion on \(S^{5}\).
However, we do expect, given that \(C=0\) corresponds to a classical world-sheet that extends only along \(S^{5}\), the fluctuation \(\chi^{2}\) to satisfy the same equation as the fluctuations \(\chi^{i=4,5}\). Indeed, \[\frac{4}{\sin^{2}\psi}-\frac{\tan^{2}\psi}{\sqrt{1-K^{2}\tan^{2}\psi}}\left(\frac{\sqrt{1-K^{2}\tan^{2}\psi}}{\tan^{2}\psi}\right)^{\prime\prime}\stackrel{C\to 0}{\longrightarrow}-2\cot^{2}\psi+K^{2}. \tag{3.11}\] * Equations (3.6) and (3.7) decouple for either \(C=0\) or \(K=0\), which geometrically correspond to world-sheets that stay strictly within \(S^{5}\) or AdS\({}_{5}\), respectively. In the general configuration, the coupling between \(\chi^{2}\) and \(\chi^{3}\) suggests, on the field theory side, an unexpected mixing between operators which in the half-BPS limit had conformal dimensions \(\Delta=1\) and \(\Delta=2\). This mixing is generated by the breaking of conformal invariance. ### Fermions The part of the type IIB superstring action, which is quadratic in the fermions, is given by [6; 30] \[S_{F}=\frac{1}{2\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\xi\,\sqrt{g}\,\bar{\Theta}\left(g^{\alpha\beta}-i\epsilon^{\alpha\beta}\sigma_{3}\right)\Gamma_{\alpha}\mathcal{D}_{\beta}\Theta. \tag{3.12}\] This is the action for a Euclidean world sheet, which is appropriate in our case. Notice that this is not in contradiction with the fact that the bulk is Lorentzian AdS\({}_{5}\times S^{5}\). We use the double spinor notation [31], so that \(\Theta\) is a 64-component spinor consisting of two positive-chirality 10-d Majorana-Weyl spinors, with the explicit Pauli matrices acting on the spinor doublet. In contrast to [6], our \(\epsilon^{\alpha\beta}\) is the epsilon tensor, not a density. Moreover, in (3.12), the generalized covariant spinor derivative \(\mathcal{D}_{\alpha}\) is [31] \[\mathcal{D}_{\alpha}=\hat{D}_{\alpha}+\frac{1}{16}\not{F}\Gamma_{\alpha}(i\sigma_{2})\, \tag{3.13}\] with \[\not{F}=\frac{1}{5!}F_{pqrst}\Gamma^{pqrst}\, \tag{3.14}\] and \(\hat{D}_{\alpha}\) is the pull-back of the bulk covariant derivative on the world sheet given by (B.16). Our first aim is to rewrite the action (3.12) in terms of eight genuine Euclidean 2-d spinors, which would be the super partners of the eight scalars, if the background were supersymmetric. A number of steps are necessary to achieve this aim, first of all \(\kappa\) symmetry gauge fixing, which reduces the \(16+16\) components of \(\Theta\) (although the double spinor has 64 components, half of them vanish by the chirality condition) to \(16=8\times 2\). Because, in two dimensions, \(\epsilon^{\alpha\beta}\Gamma_{\alpha}=\Gamma^{\underline{01}}\Gamma^{\beta}\), the action in (3.12) simplifies to \[S_{F}=\frac{1}{2\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\xi\,\sqrt{g}\,\bar{\Theta}\left(1-i\sigma_{3}\Gamma^{\underline{01}}\right)\Gamma^{\alpha}\mathcal{D}_{\alpha}\Theta\, \tag{3.15}\] which enjoys the \(\kappa\) symmetry \[\delta\Theta=\frac{1}{2}\left(1-i\sigma_{3}\Gamma^{\underline{01}}\right)\kappa. \tag{3.16}\] This allows us to fix \[\Theta^{1}=\Theta^{2}\equiv\Theta. \tag{3.17}\] Thus, henceforth, \(\Theta\) is a single positive-chirality 10-d spinor, and the action (3.15) reduces to \[S_{F}=\frac{1}{\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\xi\,\sqrt{g}\,\bar{\Theta}\left(\Gamma^{\alpha}\hat{D}_{\alpha}-\frac{i}{16}\Gamma^{\underline{01}}\Gamma^{\alpha}\not{F}\Gamma_{\alpha}\right)\Theta. \tag{3.18}\] Let us consider the terms in the parentheses separately.
First, using the fact that the second fundamental forms \(H^{i}_{\alpha\beta}\) are traceless on the background world sheet, the corresponding term in \(\hat{D}_{\alpha}\) (B.16) vanishes, so that \[\Gamma^{\alpha}\hat{D}_{\alpha}=\Gamma^{\alpha}D_{\alpha}+\frac{1}{4}A_{ij\alpha}\Gamma^{\alpha}\Gamma^{ij}\, \tag{3.19}\] where \(D_{\alpha}\) is the standard 2-d (covariant) spinor derivative and the normal bundle connection is given by (31). To obtain the contraction \(\Gamma^{\alpha}\not{F}\Gamma_{\alpha}\) in (3.18), we first decompose \(\not{F}\) into tangential and normal components using the completeness relation (B.3) by writing \[\not{F}=\frac{1}{5!}\left(F_{i_{1}\cdots i_{5}}\Gamma^{i_{1}\cdots i_{5}}+5F_{\beta i_{1}\cdots i_{4}}\Gamma^{\beta i_{1}\cdots i_{4}}+10F_{\beta_{1}\beta_{2}i_{1}i_{2}i_{3}}\Gamma^{\beta_{1}\beta_{2}i_{1}i_{2}i_{3}}\right)\.\] The terms with more than two tangential components vanish by antisymmetry. After carrying out the contractions and using \(\Gamma^{\alpha\beta}=\epsilon^{\alpha\beta}\Gamma^{\underline{01}}\), we get \[\Gamma^{\alpha}\not{F}\Gamma_{\alpha}=-\frac{2}{5!}F_{i_{1}\cdots i_{5}}\Gamma^{i_{1}\cdots i_{5}}+\frac{1}{3!}\epsilon^{\alpha\beta}F_{\alpha\beta i_{1}i_{2}i_{3}}\Gamma^{\underline{01}}\Gamma^{i_{1}i_{2}i_{3}}. \tag{3.20}\] To make progress, we need to substitute the five-form field strength (5). In the calculation of the Hodge dual, there is a subtlety related to the frame orientation, which is, in turn, related to the chirality matrix. In order to make this explicit, we take the volume form of the bulk to be \[\epsilon=\pm\epsilon_{AdS}\wedge\epsilon_{S^{5}}\, \tag{3.21}\] so that (5) is \[F_{5}=\frac{4}{L}\left(\mp\epsilon_{AdS}+\epsilon_{S^{5}}\right). \tag{3.22}\] We can see that the first term on the right hand side of (3.20) can receive a contribution only from the \(S^{5}\) part of \(F_{5}\), because only on \(S^{5}\) are there non-zero components of five normal vectors (\(i=3,6,7,8,9\)). Similarly, the second term arises from the AdS part of \(F_{5}\), because it needs two non-zero components of the tangents, while the normals involved are \(i=2,4,5\), because the AdS part of \(N_{3}^{\mu}\) is proportional to the AdS part of the tangent \(X_{\tau}^{\mu}\). Explicitly, one finds (recall that the normal indices are flat) \[\Gamma^{\alpha}\not{F}\Gamma_{\alpha}=\frac{8\sqrt{1-K^{2}\tan^{2}\psi}}{L}\left(\Gamma^{36789}\pm\Gamma^{\underline{01245}}\right)\, \tag{3.23}\] so that \[\Gamma^{\underline{01}}\Gamma^{\alpha}\not{F}\Gamma_{\alpha}=\mp\frac{8\sqrt{1-K^{2}\tan^{2}\psi}}{L}\Gamma^{245}\left(1\mp\Gamma^{\underline{0123456789}}\right). \tag{3.24}\] In these calculations, recall that the world sheet is Euclidean, whereas the normal vector \(N_{5}^{\mu}\) is time-like. This implies \((\Gamma^{5})^{2}=(\Gamma^{\underline{01}})^{2}=-1\). By convention, the right hand side of (3.24) must project onto positive-chirality spinors, so that \(\Theta\) in (3.18) is projected onto itself. Taking the 10-d chirality matrix to be (5 is the bulk time direction) \[\Gamma_{(10)}=\Gamma^{5\underline{01}2346789}=-\Gamma^{\underline{0123456789}} \tag{3.25}\] implies the upper sign in the previous equations. Of course, the opposite choice implying the lower sign is equivalent. Moreover, it is convenient to write the action in a form that is more appropriate to the Euclidean world sheet and explicitly remove the 16 negative-chirality components of \(\Theta\).
Recalling that \(i=5\) corresponds to the bulk time direction, the conjugate spinor is \(\bar{\Theta}=\Theta^{\dagger}\Gamma^{5}\). In the action (3.18), let us move that \(\Gamma^{5}\) from \(\bar{\Theta}\) into the operator to the right of it and define \(\gamma^{\alpha}=\Gamma^{5}\Gamma^{\alpha}\) and \(\gamma^{i}=\Gamma^{5}\Gamma^{i}\) (\(i\neq 5\)). These matrices span a Euclidean 9-d Clifford algebra and act freely on the 16-component spinor that is equivalent to the 10-d positive-chirality spinor \(\Theta\).5 Therefore, after substituting (3.19), together with (31), and (3.24) with the upper sign into (3.18) and transforming the gamma matrices, we obtain Footnote 5: In particular, the representation is such that \(\gamma^{\underline{012346789}}=1\). \[S_{F}=\frac{1}{\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\xi\,\sqrt{g}\,\Theta^{\dagger}\left(\gamma^{\alpha}D_{\alpha}+\frac{KC\tan^{3}\psi}{2L(1-K^{2}\tan^{2}\psi)}\gamma^{0}\gamma^{23}-\frac{i}{L}\sqrt{1-K^{2}\tan^{2}\psi}\gamma^{24}\right)\Theta\, \tag{3.26}\] where \(\Theta\) is now a 16-component spinor. It remains to decompose \(\Theta\) into eight 2-d spinors. This can be achieved by introducing three quantum numbers, \(a,b,c=\pm 1\), and taking \(\Theta^{abc}\) such that \[\gamma^{24}\Theta^{abc}=ia\Theta^{abc}\,\qquad\gamma^{67}\Theta^{abc}=ib\Theta^{abc}\,\qquad\gamma^{89}\Theta^{abc}=ic\Theta^{abc}. \tag{3.27}\] The action of \(\gamma^{23}\) and \(\gamma^{34}\) can be obtained by noting that \([\gamma^{23},\gamma^{34}]=2\gamma^{24}\). A suitable, albeit not unique, choice is \[\gamma^{23}\Theta^{abc}=\Theta^{-abc}\,\qquad\gamma^{34}\Theta^{abc}=ia\Theta^{-abc}\.\] After this decomposition, (3.26) finally reduces to \[S_{F} =\frac{1}{\pi\alpha^{\prime}}\int\,\mathrm{d}^{2}\xi\,\sqrt{g}\,\Theta^{abc\dagger}\Bigg{[}\left(\gamma^{\alpha}D_{\alpha}+\frac{a}{L}\sqrt{1-K^{2}\tan^{2}\psi}\right)\Theta^{abc} \tag{3.28}\] \[\quad+\frac{KC\tan^{3}\psi}{2L(1-K^{2}\tan^{2}\psi)}\gamma^{0}\Theta^{-abc}\Bigg{]}\,\] where the spinors can be treated as 2-d spinors, and the sum over \(a,b\), and \(c\) is implicit. In what follows, we shall drop the labels \(b\) and \(c\), because they do not appear as parameters in the field equations. The derivative operator is \[\gamma^{\alpha}D_{\alpha}=\gamma^{\alpha}\partial_{\alpha}-\frac{\psi^{\prime}}{2L\cos^{2}\psi}\gamma^{0}\, \tag{3.29}\] so that the field equations are \[\left(\partial_{\tau}+\gamma^{01}\partial_{\sigma}-\frac{\psi^{\prime}}{2\sin\psi\cos\psi}+\frac{a\sqrt{1-K^{2}\tan^{2}\psi}}{\tan\psi}\gamma^{0}\right)\Theta^{a}+\frac{KC\tan^{2}\psi}{2(1-K^{2}\tan^{2}\psi)}\Theta^{-a}=0. \tag{3.30}\] Our intuition of the fermionic fluctuations is not as clear as that for the bosonic fluctuations. Notice that the above expression for the fluctuations is independent of the \(b\) and \(c\) indices and points to four pairs of inter-related 2-d fermions. In the limit of either vanishing \(K\) or vanishing \(C\), the fermion field equations decouple into \(4+4\) equations, somewhat mirroring the situation for the bosonic fields. ## 4 Correlators In this section we employ the AdS/CFT correspondence to study the two-point correlators of operators dual to the string theoretic fluctuations discussed in the previous section. Our results in this section describe the strongly coupled limit of those correlators. We start by briefly reviewing the generic prescription.
Then we discuss the massless scalars, the massive scalars and finish the section with an estimate for the correlators of heavy operators, which can be computed in a geodesic approximation. ### Calculating two-point functions We start this section with a very concise review of the calculation of two-point functions via the AdS/CFT correspondence. In the asymptotic regions, the metric (22) approaches the AdS\({}_{2}\) metric \[\mathrm{d}s^{2}=\frac{L^{2}}{\tau^{2}}\left(\,\mathrm{d}\tau^{2}+\,\mathrm{d}\phi^{2}\right). \tag{4.1}\] A massive scalar satisfies the field equation \[(\nabla^{2}-m^{2})\chi=0\quad\Rightarrow\quad\left(\partial_{\tau}^{2}+\partial_{\phi}^{2}-\frac{m^{2}L^{2}}{\tau^{2}}\right)\chi=0. \tag{4.2}\] Using a Frobenius series, a general solution has the form \[\chi=\tau^{1-\Delta}\left(1+\cdots\right)\hat{\chi}+\tau^{\Delta}\left(1+\cdots\right)\tilde{\chi}\, \tag{4.3}\] where \[\Delta=\frac{1}{2}+\sqrt{\frac{1}{4}+m^{2}L^{2}}\] denotes the conformal dimension of the dual operator. We are going to focus on two types of fluctuations: \(\chi^{i=6,7,8,9}\) with \(m^{2}L^{2}=0\) corresponding to \(\Delta=1\), and \(\chi^{i=4,5}\) with \(m^{2}L^{2}=2\), \(\Delta=2\). The (\(\phi\)-dependent) coefficients \(\hat{\chi}\) and \(\tilde{\chi}\) are the source and response coefficients, respectively. Assuming that the bulk scalars are canonically normalized, the _exact_ one-point function of the dual operator is given by [32] \[\langle\mathcal{O}(\phi)\rangle_{exact}=(2\Delta-1)\tilde{\chi}(\phi)\, \tag{4.4}\] from which the two-point functions are found by differentiation with respect to the source, \[\langle\mathcal{O}(\phi_{1})\mathcal{O}(\phi_{2})\rangle=(2\Delta-1)\frac{\delta\tilde{\chi}(\phi_{1})}{\delta\hat{\chi}(\phi_{2})}. \tag{4.5}\] In our case, in which \(\phi\) has periodicity \(2\pi\), the field as well as the source and response coefficients are expanded in Fourier modes, \[\chi(\tau,\phi)=\sum_{n=-\infty}^{\infty}\mathrm{e}^{in\phi}\,\chi_{n}(\tau)\,\qquad\hat{\chi}(\phi)=\sum_{n=-\infty}^{\infty}\mathrm{e}^{in\phi}\,\hat{\chi}_{n}\,\qquad\tilde{\chi}(\phi)=\sum_{n=-\infty}^{\infty}\mathrm{e}^{in\phi}\,\tilde{\chi}_{n}. \tag{4.6}\] For free fields, the modes do not mix with each other. Therefore, (4.5) reduces to \[\langle\mathcal{O}(\phi_{1})\mathcal{O}(\phi_{2})\rangle=(2\Delta-1)\sum_{n=-\infty}^{\infty}\mathrm{e}^{in(\phi_{1}-\phi_{2})}\,\frac{\tilde{\chi}_{n}}{\hat{\chi}_{n}}. \tag{4.7}\] The procedure applies straightforwardly to our case with two asymptotically AdS regions. Without loss of generality, one may construct a bulk field, which has a source only on one boundary. This condition replaces the usual "regularity condition". Given such a solution, all one has to do to calculate the two-point functions is to expand the solution in the asymptotic regions, read off the source and the response coefficients and substitute them into (4.7). A subtlety arises if one uses a re-scaled bulk variable, e.g. \(\tau=cx\). If one expands in \(x\), such as \[\chi=x^{1-\Delta}(1+\cdots)\chi_{s}+x^{\Delta}(1+\cdots)\chi_{r}\,\] one must take into account the re-scaling of the source and response coefficients, _i.e._, \[\frac{\tilde{\chi}_{n}}{\hat{\chi}_{n}}=c^{-(2\Delta-1)}\frac{\chi_{r,n}}{\chi_{s,n}}\.\] ### Massless scalar For general \(K\), none of the scalar fields discussed in subsection 3.1 is truly massless. However, in the special case \(K=0\) all the scalars associated with fluctuations on \(S^{5}\) are massless.
These are the four scalars in (3.9) and \(\chi^{3}\). In this subsection, we shall consider these five scalars in the special case \(K=0\), because one can provide a full analytic expression for the correlator. Because the induced metric is conformally flat and massless fields are conformally invariant, it is evident that the field equation is simply \[\Box\chi(\tau,\sigma)=\left(\frac{\partial^{2}}{\partial\tau^{2}}+\frac{\partial^{2}}{\partial\sigma^{2}}\right)\chi(\tau,\sigma)=0\, \tag{4.8}\] the general solution of which reads \[\chi(\tau,\sigma)=\sum_{n=-\infty}^{\infty}\left[A_{n}\cosh\left(n\tau\right)+B_{n}\sinh\left(n\tau\right)\right]\mathrm{e}^{in\sigma}. \tag{4.9}\] As discussed in the previous subsection, we set the source at one of the boundaries to zero. Without loss of generality, we take this boundary to be the one at \(\tau=0\), which implies \(A_{n}=0\) in (4.9). It is straightforward to extract the source and response coefficients from the near-boundary behaviour close to \(\tau=0\) and \(\tau=2\tau_{0}\). Formula (4.7) then yields the correlation functions \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{L,R}(0)\rangle =-\sum_{n=-\infty}^{\infty}n\coth(2\tau_{0}\,n)\,\mathrm{e}^{in\phi}\] \[=-\frac{1}{2\tau_{0}}-2\sum_{n=1}^{\infty}n\coth(2\tau_{0}\,n)\cos\left(n\phi\right)\,, \tag{4.10}\] \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{R,L}(0)\rangle =\sum_{n=-\infty}^{\infty}\frac{n}{\sinh(2\tau_{0}\,n)}\,\mathrm{e}^{in\phi}\] \[=\frac{1}{2\tau_{0}}+2\sum_{n=1}^{\infty}\frac{n}{\sinh(2\tau_{0}\,n)}\cos\left(n\phi\right)\,. \tag{4.11}\] To compute these sums we recall the following properties of the \(\theta\) functions: \[\frac{\partial}{\partial z}\ln\theta_{1}(z,q) =\cot z+4\sum_{n=1}^{\infty}\frac{q^{n}}{q^{-n}-q^{n}}\sin(2nz)\,, \tag{4.12}\] \[\frac{\partial}{\partial z}\ln\theta_{4}(z,q) =4\sum_{n=1}^{\infty}\frac{1}{q^{-n}-q^{n}}\sin(2nz)\,. \tag{4.13}\] It follows that \[\frac{\partial^{2}}{\partial z^{2}}\ln\theta_{1}\left(z,q\right) =-\frac{1}{\sin^{2}z}+8\sum_{n=1}^{\infty}\frac{nq^{n}}{q^{-n}-q^{n}}\cos(2nz)\,, \tag{4.14}\] \[\frac{\partial^{2}}{\partial z^{2}}\ln\theta_{4}\left(z,q\right) =8\sum_{n=1}^{\infty}\frac{n}{q^{-n}-q^{n}}\cos(2nz)\,. \tag{4.15}\] Using the representation \[-\frac{1}{\sin^{2}z}=4\sum_{n=1}^{\infty}n\cos(2nz)\,, \tag{4.16}\] (4.14) becomes \[\frac{\partial^{2}}{\partial z^{2}}\ln\theta_{1}\left(z,q\right) =4\sum_{n=1}^{\infty}\frac{q^{-n}+q^{n}}{q^{-n}-q^{n}}n\cos(2nz)\,. \tag{4.17}\] Thus, we can re-express the two-point functions (4.10) and (4.11) as \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{L,R}(0)\rangle =-\frac{1}{2\tau_{0}}-2\frac{\partial^{2}}{\partial\phi^{2}}\ln\theta_{1}\left(\frac{\phi}{2},\mathrm{e}^{-2\tau_{0}}\right)\,, \tag{4.18}\] \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{R,L}(0)\rangle =\frac{1}{2\tau_{0}}+2\frac{\partial^{2}}{\partial\phi^{2}}\ln\theta_{4}\left(\frac{\phi}{2},\mathrm{e}^{-2\tau_{0}}\right)\,. \tag{4.19}\] Let us finish this subsection by addressing some of the key properties of the two-point functions. A convenient way to visualize the information is provided in Figures 3 and 4, where we have plotted the correlators for different values of \(\tau_{0}\). The chosen values include points near the Gross-Ooguri phase transition. Recall that the correlators were obtained in the \(K=0\) case which, according to Eq. (37), implies \(s+t=1\), corresponding to the dotted blue line in the phase diagram Figure 2. To determine the precise values of \(\tau_{0}\) we translate to the parameters \((s,t)\).
The above situation corresponds to \(K=0\), for which \(C=\cos\psi_{0}/\sin^{2}\psi_{0}=\sqrt{t}/s\). It remains to invert \(\psi_{0}=\psi(\tau_{0})\), which yields \(\tau_{0}=\sqrt{\frac{s}{2-s}}\mathbf{K}\). There are two interesting critical values of \(\tau_{0}\) around which we will explore the behavior of the correlator: \(\tau_{0}^{\mathrm{crit}(1)}\), which corresponds to the value where the connected contribution becomes less important than the disconnected contribution, and \(\tau_{0}^{\mathrm{crit}(2)}\), which corresponds to the value of \(\tau_{0}\) beyond which the connected solution ceases to exist. The concrete values are: \[\tau_{0}^{\mathrm{crit}(1)}=1.218,\quad\tau_{0}^{\mathrm{crit}(2)}=1.875\quad\Leftrightarrow\quad s^{\mathrm{crit}(1)}=0.524,\quad s^{\mathrm{crit}(2)}=0.790. \tag{4.20}\] We verified that these values agree, in the appropriate limits, with those given originally in [21] and more recently in [20]. The first sanity check is that the correlators of insertions in the same loop approximate the conformal limit when the separation distance vanishes. For small \(z\), \(\theta_{1}(z,q)\) is linear in \(z\); taking two derivatives of \(\ln z\), we obtain the expected \(\frac{1}{z^{2}}\) behavior for the correlators in the conformal limit with \(\Delta=1\). This is seen in Fig. 3, where the dotted line depicts the conformal propagator. One can clearly see that for \(\phi\to 0,2\pi\), all correlators approximate the conformal one. Figure 4 represents the two-point correlator of insertions on different loops. Its most salient feature is that it goes to a constant for small values of the separation \(\phi\). To the naked eye, the two-point functions are completely unremarkable: they appear as smooth continuous deformations of the conformal correlators. To demonstrate that such a point of view is too simplistic, we plot in Figures 5 and 6 the two-point correlators for a fixed value of the separation \(\phi\) but as a function of \(s\). The most striking feature is the gap between the solutions and the conformal correlator. In both cases this is quite pronounced and should clarify that the behavior is certainly different from that of the conformal correlator. Note that we increase the value of the parameter \(s\) to its critical value \(s^{\text{crit}(1)}=0.524\), after which the disconnected world-sheet is dominant. Another salient feature is the vanishing of the correlators for finite \(\phi\) as we take \(s\to 0\). Essentially, this is the result of the bulk effectively disappearing, which prevents a perturbation on the boundary from propagating through the bulk to another point at some distance. The correlators simply become delta functions. There is one extra feature in the Left-Right correlator as a function of \(s\) that we now address: the appearance of a peak for small values of \(\phi\) as we increase \(s\), see Fig. 6. We saw that the correlator vanishes for \(s\to 0\); it then rises as we increase \(s\). However, the Left-Right correlator must also become weaker again for longer world-sheets, because the distance between the loops increases. The position of the peak depends on \(\phi\). As can be seen in Fig. 6, smaller values of \(\phi\) peak at smaller values of \(s\). As we consider larger values of \(\phi\), eventually the peak disappears from the plot (or rather it moves beyond the critical value \(s^{\rm crit(1)}\)).
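The numbers in (4.20) and the behaviour shown in Figures 3–6 are straightforward to reproduce numerically. The following sketch is ours and relies only on the formulas quoted above; it recovers \(s^{\mathrm{crit}(1)}\) and \(\tau_{0}^{\mathrm{crit}(1)}\) from (42) on the line \(s+t=1\), and checks the \(\theta\)-function form (4.19) of the Left-Right correlator against its (rapidly convergent) mode sum (4.11):

```python
# Illustrative numerics (ours) for the K=0 line s+t=1.
import mpmath as mp

def S_ren(s):                        # eq. (42) over sqrt(lambda), with t = 1-s
    t = 1 - s
    k2 = (s + t) / (1 + t)           # modulus squared, eq. (39)
    return -2*mp.sqrt((1+t)/s)*(mp.ellipe(k2) - (1-k2)*mp.ellipk(k2))

# Gross-Ooguri point: connected action equals the disconnected -2*sqrt(lambda)
s_crit = mp.findroot(lambda s: S_ren(s) + 2, mp.mpf('0.5'))
tau0_crit = mp.sqrt(s_crit/(2-s_crit)) * mp.ellipk(1/(2-s_crit))
print(s_crit, tau0_crit)             # ~0.524 and ~1.218, as in eq. (4.20)

# Left-Right correlator: theta-function form (4.19) versus mode sum (4.11)
tau0, phi = mp.mpf('1.0'), mp.mpf('2.0')
q, z = mp.exp(-2*tau0), phi/2
t4, t4p, t4pp = [mp.jtheta(4, z, q, d) for d in range(3)]
theta_form = 1/(2*tau0) + (t4pp/t4 - (t4p/t4)**2)/2   # d^2/dphi^2 = (1/4) d^2/dz^2
mode_sum = 1/(2*tau0) + 2*mp.nsum(lambda n: n*mp.cos(n*phi)/mp.sinh(2*tau0*n), [1, mp.inf])
print(theta_form, mode_sum)          # the two evaluations should coincide
```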
Figure 5: \(\langle\mathcal{O}^{L}(\phi)\mathcal{O}^{L}(0)\rangle\) correlator for fixed values of \(\phi\), as a function of \(s\). For \(s>s^{\rm crit(1)}=0.524\) we depict the conformal correlator. #### 4.2.1 Generic massless scalar In this subsection, we consider the scalars \(\chi^{6,7,8,9}\) for general \(K\). They satisfy (3.9), which has the general solution \[\chi(\tau,\sigma)=\sum_{n=-\infty}^{\infty}\left[A_{n}\cosh\left(\omega_{n}\tau\right)+B_{n}\sinh\left(\omega_{n}\tau\right)\right]\mathrm{e}^{in\sigma}\ \, \tag{4.21}\] where \[\omega_{n}=\sqrt{n^{2}-K^{2}}. \tag{4.22}\] As before, we set \(A_{n}=0\) to switch off the source at one of the boundaries. The resulting correlators are \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{L,R}(0)\rangle= -\sum_{n=-\infty}^{\infty}\omega_{n}\coth(2\tau_{0}\omega_{n})\,\mathrm{e}^{in\phi}\ \, \tag{4.23}\] \[\langle\mathcal{O}^{L,R}(\phi)\mathcal{O}^{R,L}(0)\rangle= \sum_{n=-\infty}^{\infty}\frac{\omega_{n}}{\sinh(2\tau_{0}\omega_{n})}\,\mathrm{e}^{in\phi}\ . \tag{4.24}\] The above expressions need to be slightly modified depending on the precise value of \(K\), because the frequencies \(\omega_{n}\) for \(n^{2}<K^{2}\) are imaginary. ### Massive scalar In this subsection, we shall consider the scalar fields \(\chi^{4,5}\) satisfying the field equation (3.8), \[\left(\partial_{\tau}^{2}+\partial_{\phi}^{2}-2\cot^{2}\psi+K^{2}\right)\chi=0\, \tag{4.25}\] where we recall the relations \[K^{2}=\frac{1-s-t}{s}\,\qquad s=\sin^{2}\psi_{0}\,\qquad t=C^{2}\sin^{4}\psi_{0}\,\] from subsection 2.3. In the nomenclature of subsection 4.1, these fields have masses \(m^{2}L^{2}=2\), which implies \(\Delta=2\) for the dual operators. First, introduce the Fourier modes \(\chi_{n}\), for which (4.25) becomes \[\left(\partial_{\tau}^{2}-2\cot^{2}\psi+K^{2}-n^{2}\right)\chi_{n}=0. \tag{4.26}\] We may view (4.26) as a Schrödinger equation with a periodic potential. With hindsight, let us proceed as follows. We introduce a lattice \(\mathbb{L}=\{m+n\tilde{\tau};m,n\in\mathbb{Z}\}\) with \(\tilde{\tau}\in\mathbb{C},\Im\tilde{\tau}>0\). The complex parameter \(\tilde{\tau}\) will be determined later. Then, with \[\omega_{1}=\frac{1}{2},\qquad\omega_{3}=\frac{1}{2}\tilde{\tau}\,\qquad\omega_{2}=-\frac{1+\tilde{\tau}}{2}\, \tag{4.27}\] let \(\wp(x)\) be the Weierstrass elliptic function with periodicity on \(\mathbb{L}\). The roots are defined as usual by [33] \[e_{i}=\wp(\omega_{i})\.\] Now, let us perform the following variable transformation, \[\frac{\tan^{2}\psi(\tau)}{\tan^{2}\psi_{0}}=\frac{e_{1}-e_{2}}{\wp(x)-e_{2}}. \tag{4.28}\] The correspondence between particular coordinate values is illustrated in the table below. \[\begin{array}{c|ccc}\tau&0&\tau_{0}&2\tau_{0}\\ \hline x&0&\frac{1}{2}&1\end{array}\] With (4.28), (4.26) transforms into the Schrödinger equation6 Footnote 6: Originally, our path to (4.29) began with the change of variables \(y=\frac{\tan^{2}\psi(\tau)}{\tan^{2}\psi_{0}}\) together with \(\tilde{\chi}_{n}=y^{\frac{1}{2}}\chi_{n}\), which gives rise to a Heun equation. Then, this Heun equation is transformed into (4.29) using the techniques of [34]. It is, however, very easy to apply the transformation (4.28) directly, which shows that \(\tau=\sqrt{e_{1}-e_{2}}\tan\psi_{0}\,x\). This implies, in particular, \(\tau_{0}=\sqrt{e_{1}-e_{2}}\frac{\sqrt{s}}{2\sqrt{1-s}}\).
\[\left[\partial_{x}^{2}-2\wp(x)+E_{n}\right]\chi_{n}=0\, \tag{4.29}\] where the roots satisfy the relation \[\frac{e_{2}-e_{3}}{e_{1}-e_{2}}=\frac{s+t}{1-s} \tag{4.30}\] and the energies \(E_{n}\) are given by \[E_{n}=\frac{s(e_{1}-e_{2})(1-n^{2})}{(1-s)}-e_{2}. \tag{4.31}\] The relation (4.30) determines the roots and the complex parameter \(\tilde{\tau}\) via the standard relations \[k^{2}=\frac{e_{2}-e_{3}}{e_{1}-e_{3}}\,\qquad\omega_{1}=\frac{\mathbf{K}}{\sqrt{e_{1}-e_{3}}}\,\qquad\omega_{3}=i\frac{\mathbf{K}^{\prime}}{\sqrt{e_{1}-e_{3}}}. \tag{4.32}\] One easily finds that the modulus \(k\) defined in (4.32) is just the modulus (39) defined for the integrals in the background solution. Therefore, given the background parameters \(s\) and \(t\), from which one gets the modulus \(k\) from (39), one calculates the lattice parameter \(\tilde{\tau}\) and the roots from (4.32) and (4.33). Specifically, one gets \[e_{3}=-\frac{4}{3}\mathbf{K}^{2}(1+k^{2})\,\qquad e_{1}=\frac{4}{3}\mathbf{K}^{2}(2-k^{2})\,\qquad e_{2}=\frac{4}{3}\mathbf{K}^{2}(2k^{2}-1). \tag{4.33}\] Moreover, from (40) (or from the relation at the end of footnote 6) one finds the relation \[\tau_{0}=\sqrt{\frac{s}{1+t}}\mathbf{K}. \tag{4.34}\] Equation (4.29) represents the case of example 2.2 of [34]. Taking the solutions from [34], one can easily construct the solution that has no source on the boundary at \(x=0\). It is, up to an insignificant constant, \[\chi_{n}(x)=\sqrt{\wp(x)+E_{n}}\sinh\left[\sqrt{-Q(E_{n})}\int\limits_{0}^{x}\frac{\mathrm{d}x}{\wp(x)+E_{n}}\right]\, \tag{4.35}\] where \(Q(E)\) is the cubic polynomial \[Q(E)=(E+e_{1})(E+e_{2})(E+e_{3}). \tag{4.36}\] The solution (4.35) is valid for \(n^{2}\neq 1\), because \(E_{\pm 1}=-e_{2}\), so that \(Q(E_{\pm 1})=0\). We shall consider the special case \(n^{2}=1\) further below. In the general case, it remains to expand \(\chi_{n}\) around \(x=0\) and \(x=1\) to read off the source and response coefficients, from which we will get the two-point functions. Close to \(x=0\), we have \[\chi_{n}(x)\approx\frac{1}{3}\sqrt{-Q(E_{n})}x^{2}\, \tag{4.37}\] confirming that there is no source. Close to \(x=1\), one gets \[\chi_{n}(x)\approx\frac{\sinh\zeta_{n}}{1-x}-\frac{1}{3}\sqrt{-Q(E_{n})}\cosh\zeta_{n}(1-x)^{2}\, \tag{4.38}\] where we have introduced the constants \[\zeta_{n}=\sqrt{-Q(E_{n})}\int\limits_{0}^{1}\frac{\mathrm{d}x}{\wp(x)+E_{n}}. \tag{4.39}\] The integral can be converted to complete elliptic integrals. First, by symmetry, \[\zeta_{n}=2\sqrt{-Q(E_{n})}\int\limits_{0}^{\frac{1}{2}}\frac{\mathrm{d}x}{\wp(x)+E_{n}}\,\] after which we apply the change of integration variable \[1-y=\frac{e_{1}-e_{2}}{\wp(x)-e_{2}}\.\] This yields \[\zeta_{n} =\frac{2(1-s)\sqrt{-Q(E_{n})}}{\sqrt{e_{1}-e_{3}}(e_{1}-e_{2})}\int\limits_{0}^{1}\frac{\mathrm{d}y\,\sqrt{1-y}}{\sqrt{y(1-k^{2}y)}[1-s+(1-n^{2}s)(1-y)]}\] \[=\frac{2(1-s)\sqrt{-Q(E_{n})}}{\sqrt{e_{1}-e_{3}}(e_{1}-e_{2})s(1-n^{2})}\left[\mathbf{K}-\frac{1-s}{1-n^{2}s}\mathbf{\Pi}\left(\frac{(1-n^{2})s}{1-n^{2}s}\right)\right]\.\] Let us also write out the polynomial \(Q(E_{n})\) using (4.36), (4.31), and (39), \[Q(E_{n})=-\frac{s(1-n^{2})(1-n^{2}s)(t+n^{2}s)}{(1+t)(1-s)^{2}}(e_{1}-e_{3})(e_{1}-e_{2})^{2}. \tag{4.40}\] Thus, one obtains (recall \(n\neq\pm 1\)) \[\zeta_{n}=2\sqrt{\frac{(1-n^{2}s)(t+n^{2}s)}{s(1+t)(1-n^{2})}}\left[\mathbf{K}-\frac{1-s}{1-n^{2}s}\mathbf{\Pi}\left(\frac{(1-n^{2})s}{1-n^{2}s}\right)\right].
\tag{4.41}\] It is an amazing coincidence that \(\zeta_{0}\) is \[\zeta_{0}=2\sqrt{\frac{t}{s(1+t)}}\left[{\bf K}-(1-s){\bf\Pi}(s)\right]=2J\, \tag{4.42}\] with \(J\) being the background geometry parameter (15), as can be seen from (41) and (39). In the special case \(n^{2}=1\), the solution of (4.29) without source at the boundary at \(x=0\) is, up to an irrelevant constant, \[\chi_{\pm 1}(x)=\sqrt{\wp(x)-e_{2}}\int\limits_{0}^{x}\frac{{\rm d}x}{\wp(x)-e_{2}}. \tag{4.43}\] Expanding it close to the boundaries at \(x=0\) and \(x=1\) yields \[\chi_{\pm 1}(x)\approx\frac{1}{3}x^{2}\,\qquad\chi_{\pm 1}(x)\approx\frac{\tilde{\zeta}}{1-x}-\frac{1}{3}(1-x)^{2}\, \tag{4.44}\] respectively, where the constant \(\tilde{\zeta}\) is \[\tilde{\zeta}=\int\limits_{0}^{1}\frac{{\rm d}x}{\wp(x)-e_{2}}=\frac{{\bf E}-(1-k^{2}){\bf K}}{4k^{2}(1-k^{2}){\bf K}^{3}}=-\frac{1}{8k}\,\frac{{\rm d}}{{\rm d}k}{\bf K}^{-2}. \tag{4.45}\] The integral can be done using the same transformation as in the general case. Again, it is an amazing coincidence that \(\tilde{\zeta}\) is proportional to the on-shell action (42), \[\tilde{\zeta}=-\frac{S_{ren}}{\sqrt{\lambda}}\frac{s^{2}}{(s+t)(1-s)(2\tau_{0})^{3}}. \tag{4.46}\] Finally, we can write down the two-point functions for the operators that are dual to the scalars \(\chi^{4,5}\). We separate the modes \(n=-1,0,1\), for which we have the special relations (4.42) and (4.46), as well as \[\sqrt{-Q(E_{0})}=\frac{e_{1}-e_{2}}{1-s}\sqrt{\frac{st}{1+t}(e_{1}-e_{3})}=8{\bf K}^{3}\sqrt{\frac{st}{(1+t)^{3}}}=\frac{(2\tau_{0})^{3}\sqrt{t}}{s}. \tag{4.47}\] Similarly, we write from (4.40) \[\sqrt{-Q(E_{n})}=\frac{(2\tau_{0})^{3}}{s}\sqrt{(1-n^{2})(1-n^{2}s)(t+n^{2}s)}. \tag{4.48}\] The factor \((2\tau_{0})^{3}\) compensates the variable re-scaling, as discussed at the end of subsection 4.1. Therefore, we obtain the two-point functions \[\langle{\cal O}^{L,R}(\phi){\cal O}^{L,R}(0)\rangle =-\frac{\sqrt{t}}{s\sqrt{1-\alpha^{2}}}+\cos\phi\frac{2\sqrt{\lambda}(s+t)(1-s)}{s^{2}S_{ren}} \tag{4.49}\] \[\quad-\frac{2}{s}\sum_{n=2}^{\infty}\cos(n\phi)\sqrt{(1-n^{2})(1-n^{2}s)(t+n^{2}s)}\coth\zeta_{n}\,\] \[\langle{\cal O}^{L,R}(\phi){\cal O}^{R,L}(0)\rangle =\frac{\sqrt{t}\alpha}{s\sqrt{1-\alpha^{2}}}-\cos\phi\frac{2\sqrt{\lambda}(s+t)(1-s)}{s^{2}S_{ren}}\] (4.50) \[\quad+\frac{2}{s}\sum_{n=2}^{\infty}\cos(n\phi)\frac{\sqrt{(1-n^{2})(1-n^{2}s)(t+n^{2}s)}}{\sinh\zeta_{n}}\.\] Here, \(\alpha\) is the geometric invariant parameter (3), as follows from (17). Let us briefly make two comments about the above results. For \(n^{2}>1\), we would need \(n^{2}s>1\) in order for the square root to be real. But \(\zeta_{n}\) also contains such a square root, so the imaginary units cancel out, after the hyperbolic functions have been converted to trigonometric ones. Thus, there is no problem with the reality of the two-point functions. Note also that the \(\sinh\) in the denominators has zeros at \(\zeta_{n}=im\pi\), which indicate resonances. At the moment we are not quite sure how to interpret this possibility. ### Geodesic approximation for heavy scalars Thus far, we have rigorously described the light operators as fluctuations of the string world-sheet. In this section we take a more phenomenological approach and consider the correlators of very heavy operators. These operators might arise, for example, as very massive string states.
One expects that for such operators the correlator can be well approximated by \(\langle{\cal O}(\phi_{1}){\cal O}(\phi_{2})\rangle\sim\exp\left(-m_{\cal O}\,\ell(\phi_{1},\phi_{2})\right)\), where \(\ell(\phi_{1},\phi_{2})\) is the (renormalized) length of the _shortest_ geodesic connecting the two insertion points of the operators, which might be located on the same or on different boundaries of the world-sheet. Examples of the operators we have in mind were dubbed "two-particle operators" in [17]. Consider an operator of the form \(\Phi^{a}\partial_{\phi}^{2n}\Phi^{b}\) where one usually takes the singlet, symmetric traceless, or antisymmetric representations in the \(SO(5)\) indices \(a,b\). Such operators appear in the OPE expansion computing the four-point function of simple operator insertions such as those corresponding to fields discussed in the previous subsections. An impressive result of [17] was to determine corrections to the conformal dimension: \(\Delta=2+2n-(2n^{2}+3n)/\sqrt{\lambda}\). Our world-sheet configuration is more general than the dual of the \(\frac{1}{2}\)-BPS Wilson-Maldacena loop considered in [17]. In particular, we do not have a natural organization in terms of irreducible representations of \(SO(5)\), although \(SO(4)\) seems to survive. We use some of the supersymmetric results as guidance for our current non-supersymmetric, non-conformal situation. The geodesic analysis relevant for such heavy operators is as follows. We start with the expression of the geodesic length, \[\ell=L\int\,\mathrm{d}u\,\cot\psi\sqrt{\dot{\tau}^{2}+\dot{\sigma}^{2}}\, \tag{4.51}\] where a dot represents a derivative with respect to \(u\). The equation of motion for \(\sigma\) is solved by7 Footnote 7: The equation of motion for \(\tau\) is then implied by (4.52) because of diffeomorphism invariance. \[\dot{\sigma}=P\tan\psi\sqrt{\dot{\tau}^{2}+\dot{\sigma}^{2}}\, \tag{4.52}\] where \(P\) is an integration constant. Without loss of generality, we can set \(u=\sigma\) and obtain \[\mathrm{d}\sigma=\pm\frac{\tan\psi\ \mathrm{d}\tau}{\sqrt{\tan^{2}\psi_{P}-\tan^{2}\psi}}\, \tag{4.53}\] where we have defined \(\tan^{2}\psi_{P}=P^{-2}\). To continue, we write \(\,\mathrm{d}\tau=\,\mathrm{d}\psi/\psi^{\prime}\) using (24) and change variable by defining \(y=\frac{\tan^{2}\psi}{\tan^{2}\psi_{0}}\), with \(\psi_{0}\) defined in (12). Using also (36), this results in \[\mathrm{d}\sigma=\pm\frac{\frac{1}{2}\sqrt{\frac{s}{1-s}}\,\mathrm{d}y}{\sqrt{(y_{P}-y)(1-y)\left(1+\frac{s+t}{1-s}y\right)}}\, \tag{4.54}\] where we have defined the new constant \(y_{P}=\frac{\tan^{2}\psi_{P}}{\tan^{2}\psi_{0}}\). We must distinguish two cases. For \(y_{P}<1\), the geodesic reaches a certain \(\tau=\tau_{P}<\tau_{0}\) and then turns back to the boundary it started at. These geodesics represent the Left-Left (or Right-Right) correlators. For \(y_{P}>1\), the geodesic reaches the middle of the world-sheet at \(\tau=\tau_{0}\), and then continues to the other boundary. These represent the Left-Right correlators. In either case, the difference \(\phi=\phi_{2}-\phi_{1}\) is given by \[\phi=\sqrt{\frac{s}{1-s}}\int\limits_{0}^{\min(y_{P},1)}\frac{\mathrm{d}y}{\sqrt{(y_{P}-y)(1-y)\left(1+\frac{s+t}{1-s}y\right)}}. \tag{4.55}\] Clearly, the integral on the right hand side diverges in the limiting case \(y_{P}=1\). Because \(\phi\) is counted only modulo \(2\pi\), the fact that \(\phi\) can be arbitrarily large implies that there are infinitely many geodesics connecting any two boundary points.
They can be distinguished by their "winding number" \(w\), which measures how many times and in which sense they wind around the world-sheet. We are interested only in the shortest of these, because it dominates the amplitude \(\mathrm{e}^{-m_{\mathcal{O}}\ell}\).8 Footnote 8: In principle, one should write \(\sum_{w}\mathrm{e}^{-m_{\mathcal{O}}\ell_{w}}\) with \(\phi_{w}=\phi+2\pi w\). The geodesic length can be found using the same manipulations as above. One finds \[\ell=L\int\limits_{\epsilon}^{\min(y_{P},1)}\frac{\sqrt{y_{P}}\,\mathrm{d}y}{y\sqrt{(y_{P}-y)(1-y)\left(1+\frac{s+t}{1-s}y\right)}}\, \tag{4.56}\] where we have introduced the cut-off \(\epsilon\) to regulate the logarithmic divergence at the world-sheet boundary. One must renormalize this by subtracting the universal divergence as described in [35]. For example, one may subtract the value in the limit \(y_{P}\to\infty\), which is independent of \(y_{P}\) and represents the length of a geodesic stretching between the two boundaries with \(\phi=0\). Alternatively, one can add a boundary term that implements the appropriate boundary conditions and regulates the divergence [36]. The integrals themselves can be done and result in elliptic integrals, but they are not very illuminating. For small \(y_{P}\), when the geodesic stays close to the boundary, after changing variable \(y=y_{P}z\), \(\ell\) diverges as \(\ln\frac{y_{P}}{\epsilon}\), which becomes \(\ln\frac{y_{P}}{\mu}\) with some renormalization scale \(\mu\) after subtracting the universal divergence. On the other hand, we get from (4.55) \(\phi\sim\sqrt{y_{P}}\), so that the leading amplitude becomes \(\mathrm{e}^{-m_{\mathcal{O}}\ell}\sim\phi^{-2m_{\mathcal{O}}L}\), reproducing the conformal behaviour. A short numerical illustration of (4.55) and (4.56) is given at the end of this passage. ## 5 Field theory correlators in the ladder approximation Field theory computations in the context of the half-BPS Maldacena-Wilson loop have a rich history with some important recurrent themes that we briefly review to motivate the structure we follow in this section. The fact that the combined propagator of the gluon and the scalar field in a circular Wilson loop is constant motivated the Gaussian matrix model conjecture [4; 5]. The latter states that expectation values are computed using a Gaussian matrix model which in practice resums ladder diagrams; this conjecture was later proven by Pestun [7]. The combined propagator for insertions in two different loops is not constant, as will be shown in (5.4), and the ladder diagrams do not provide the complete perturbative description of the problem. Computing the resummation of ladder diagrams can nevertheless be very instructive. Summing ladders is a venerated tradition in this context: not only is it at the heart of the original conjecture stating that the vacuum expectation value of the half-supersymmetric Wilson loop was determined by a Gaussian matrix model [4; 5], but it can also be used to extract a qualitatively correct picture of the strong coupling description. For instance, the resummation of ladder diagrams for the correlator of two circular Wilson loops exhibits, in the strong coupling limit, a phase transition quite similar to the Gross-Ooguri one [37; 38]. Additionally, the ladder truncation can also be justified when a certain analytic continuation of the coupling of the Wilson loops to the scalar fields is considered. This has allowed some explicit connection between the resummation of ladders and string theory results [20; 39].
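Before turning to the ladder computation, here is the promised numerical illustration of the geodesic formulas. The sketch is ours, not part of the original analysis; the parameter values are arbitrary, and we choose \(y_{P}>1\) so that the geodesic is of the Left-Right type:

```python
# Illustrative sketch (ours) of the geodesic quantities: the angular separation,
# eq. (4.55), and the cut-off-regulated length, eq. (4.56), for y_P > 1.
import mpmath as mp

s, t, yP, eps = mp.mpf('0.4'), mp.mpf('0.3'), mp.mpf('2.0'), mp.mpf('1e-6')
a = (s + t) / (1 - s)
w = lambda y: mp.sqrt((yP - y) * (1 - y) * (1 + a * y))

phi = mp.sqrt(s / (1 - s)) * mp.quad(lambda y: 1 / w(y), [0, 1])   # eq. (4.55)
ell = mp.quad(lambda y: mp.sqrt(yP) / (y * w(y)), [eps, 1])        # eq. (4.56), in units of L
print(phi, ell)   # ell still contains the ln(1/eps) divergence to be subtracted
```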
With these motivations in mind, in this section we concentrate on the ladder diagram contribution to the correlator of insertions in two Wilson loops. Our analysis will closely follow that of [37; 38], where the ladder contribution to the expectation value of connected correlators of Wilson-Maldacena loops is given in terms of certain Green's functions satisfying a set of Dyson equations. We consider inserting both operators in the same loop and also one operator per loop. _Our main finding is the fact that the very same Green's functions that describe the expectation value of the correlator of two Wilson-Maldacena loops also describe the correlation between insertions._ ### The correlator of two Wilson loops We study correlations between excitations inserted along Wilson loops in the ladder approximation, _i.e._, adding up only those Feynman diagrams with no vertices. Admittedly, this description turns out to be incomplete in the general case. However, as we shall see, this procedure becomes a sound approximation to the leading contribution in a certain parametric limit. Consider the two Wilson-Maldacena loops defined in (1) with the contours (2). Let us first review the evaluation of the vacuum expectation value of the above two Wilson-Maldacena loops as we introduce some convenient technical ingredients; more details can be found in [37; 38]. To compute ladder diagrams, it is convenient to define the following Gaussian effective fields \[\varphi_{a}(\phi)\equiv iA_{\mu}\dot{x}_{a}{}^{\mu}+\Phi_{I}\,n_{a}^{I}\,|\dot{x}_{a}|\, \tag{5.1}\] where \(a=1,2\) indicates on which loop they live. The propagators of these effective fields are [4; 5]: \[\left\langle\varphi_{a\,j}^{i}(\phi)\varphi_{b\,l}^{k}(\phi^{\prime})\right\rangle=\frac{1}{N}\,\delta_{l}^{i}\delta_{j}^{k}G_{ab}(\phi-\phi^{\prime})\, \tag{5.2}\] where \[G_{11}=G_{22}=\frac{\lambda}{16\pi^{2}}\equiv g\, \tag{5.3}\] while \[G_{12}(\phi)=G_{21}(\phi)=g\,\frac{\cos\gamma+\cos\phi}{\alpha^{-1}-\cos\phi}\equiv G(\phi)\, \tag{5.4}\] with \(\alpha\) denoting the conformally invariant geometric parameter defined in (3), see also (17). In a very precise sense, restricting the summation of diagrams to ladder ones is the same as treating the theory as if it were Gaussian. It is convenient to introduce the following auxiliary operators describing two path-ordered exponentials: \[\overleftarrow{U}_{a}(\phi_{1},\phi_{2})=\overleftarrow{P}\exp\int_{\phi_{1}}^{\phi_{2}}d\phi\,\varphi_{a}(\phi),\qquad\overrightarrow{U}_{a}(\phi_{1},\phi_{2})=\overrightarrow{P}\exp\int_{\phi_{1}}^{\phi_{2}}d\phi\,\varphi_{a}(\phi), \tag{5.5}\] where \(a=1,2\) indicates the circular loop that is being considered. The symbols \(\overleftarrow{P}\) and \(\overrightarrow{P}\) denote path and anti-path ordering: the rightmost field in the expansion of the exponential has the largest, respectively, the smallest argument. The Wilson loops can be defined in terms of these operators. Writing the ordered exponentials as \[\overleftarrow{U}_{a}(\phi_{1},\phi_{2})=\prod_{\phi\in(\phi_{1},\phi_{2})}\left(\mathbb{1}+\varphi_{a}(\phi)d\phi\right), \tag{5.6}\] it is straightforward to verify that they satisfy the following recursive relations \[\overleftarrow{U}_{a}(\phi_{1},\phi_{2})=\mathbb{1}+\!\int_{\phi_{1}}^{\phi_{2}}\!\!\!d\phi\,\overleftarrow{U}_{a}(\phi_{1},\phi)\varphi_{a}(\phi),\quad\overrightarrow{U}_{a}(\phi_{1},\phi_{2})=\mathbb{1}+\!\int_{\phi_{1}}^{\phi_{2}}\!\!\!d\phi\,\varphi_{a}(\phi)\overrightarrow{U}_{a}(\phi,\phi_{2}).
\tag{5.7}\] Consider for instance \[W(\phi)=\frac{1}{N}\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi)\right\rangle. \tag{5.8}\] Using (5.7), Wick's theorem and large-\(N\) factorization we get a simple Dyson equation \[W(\phi)=1+g\int_{0}^{\phi}d\phi^{\prime}\int_{0}^{\phi^{\prime}}d\phi^{\prime\prime}\,W(\phi^{\prime}-\phi^{\prime\prime})W(\phi^{\prime\prime})\,. \tag{5.9}\] This equation for \(W(\phi)\), which involves only the constant propagator (5.3), can be easily solved in terms of the Laplace transform, \(W(z)\), defined according to \[W(z)=\int\limits_{0}^{\infty}d\phi\,e^{-z\phi}\,W(\phi). \tag{5.10}\] Then, \[W(z)=\frac{1}{z}+g\frac{W(z)^{2}}{z}\quad\Rightarrow\quad W(z)=\frac{z-\sqrt{z^{2}-4g}}{2g}, \tag{5.11}\] and anti-transforming \[W(\phi)=\frac{1}{\sqrt{g}\phi}I_{1}(2\sqrt{g}\phi), \tag{5.12}\] where \(I_{1}\) is a modified Bessel function of the first kind. This function, evaluated at \(2\pi\), gives the well-known result for the expectation value of the circular Wilson-Maldacena loop in the large \(N\) limit [4; 5]. It is worth remarking the important role that a constant propagator plays on the road to certain results. We will highlight this thread in various computations and thus motivate certain limits in the space of parameters. In fact, in a similar fashion, the connected correlator of \(k\) traces of a single Wilson loop \[W(\phi_{1},\cdots,\phi_{k})=N^{k-2}\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi_{1})\cdots\operatorname{tr}\overleftarrow{U}_{1}(0,\phi_{k})\right\rangle_{\text{conn}}, \tag{5.13}\] can also be explicitly computed in terms of constant propagators. For example, for \(k=2\) one has [40] \[W(\phi_{1},\phi_{2})=\frac{\sqrt{g}\phi_{1}\phi_{2}}{\phi_{1}+\phi_{2}}\left[I_{0}(2\sqrt{g}\phi_{1})I_{1}(2\sqrt{g}\phi_{2})+I_{1}(2\sqrt{g}\phi_{1})I_{0}(2\sqrt{g}\phi_{2})\right]\,. \tag{5.14}\] Let us now turn to the correlator of the two concentric coaxial Wilson-Maldacena loops given in (2). This configuration is non-supersymmetric and its ladder contribution can be tackled with similar methods. It is convenient to introduce two Green's functions, \[K(\phi)=\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi)\operatorname{tr}\overleftarrow{U}_{2}(0,2\pi)\right\rangle_{\text{conn}}, \tag{5.15}\] and \[\Gamma(\phi_{1},\phi_{2}|\varphi)=\left\langle\frac{1}{N}\,\operatorname{tr}\overleftarrow{U}_{1}(0,\phi_{1})\overleftarrow{U}_{2}(\varphi,\varphi+\phi_{2})\right\rangle. \tag{5.16}\] Both are quadratic in the ordered exponentials \(\overleftarrow{U}_{1}\) and \(\overleftarrow{U}_{2}\), but differ in the number of traces. The ladder contribution to the connected correlator of the two Wilson loops is simply given by \[\left\langle W(C_{1})W(C_{2})\right\rangle_{\text{ladders}}=K(2\pi). \tag{5.17}\] The derivation of a Dyson equation that relates the Green's functions \(K(\phi)\) and \(\Gamma(\phi_{1},\phi_{2}|\varphi)\) was given in [38] and we review it here. Using (5.7), it follows that \[\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi)\operatorname{tr}\overleftarrow{U}_{2}(0,2\pi)\right\rangle=N^{2}W(2\pi)+\int_{0}^{\phi}d\phi^{\prime}\,\left\langle\operatorname{tr}\varphi_{1}(\phi^{\prime})\overleftarrow{U}_{1}(\phi^{\prime},\phi)\operatorname{tr}\overrightarrow{U}_{2}(0,2\pi)\right\rangle. \tag{5.18}\] The first term on the right hand side can be re-written using (5.9).
There are two possible Wick contractions for the second term, giving rise to propagators \(g\) and \(G\) respectively, \[\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi)\operatorname{tr}\overleftarrow{U}_{2}(0,2\pi)\right\rangle= N^{2}W(2\pi)\left(W(\phi)-g\int_{0}^{\phi}\!\!d\phi^{\prime}\!\int_{0}^{\phi^{\prime}}\!\!d\phi^{\prime\prime}\,W(\phi^{\prime}-\phi^{\prime\prime})W(\phi^{\prime\prime})\right) \tag{5.19}\] \[+\frac{g}{N}\!\int_{0}^{\phi}\!\!d\phi^{\prime}\!\int_{0}^{\phi^{\prime}}\!\!d\phi^{\prime\prime}\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi^{\prime\prime})\operatorname{tr}\overleftarrow{U}_{1}(\phi^{\prime\prime},\phi^{\prime})\operatorname{tr}\overleftarrow{U}_{2}(0,2\pi)\right\rangle\] \[+\frac{1}{N}\!\int_{0}^{\phi}\!\!d\phi^{\prime}\!\int_{0}^{2\pi}\!\!d\varphi\,G(\varphi-\phi^{\prime})\left\langle\operatorname{tr}\overleftarrow{U}_{1}(0,\phi^{\prime})\overleftarrow{U}_{2}(\varphi,\varphi+2\pi)\right\rangle.\] Applying large-\(N\) factorization and removing the disconnected part of the correlators we obtain an equation that relates \(K\) to \(\Gamma\), \[K(\phi)=2g\int_{0}^{\phi}d\phi^{\prime}\int_{0}^{\phi^{\prime}}d\phi^{\prime\prime}\,W(\phi^{\prime}\!-\!\phi^{\prime\prime})K(\phi^{\prime\prime})+\int_{0}^{\phi}d\phi^{\prime}\int_{0}^{2\pi}d\varphi\,G(\varphi\!-\!\phi^{\prime})\Gamma(\phi^{\prime},2\pi|\varphi). \tag{5.20}\] The Green's function \(\Gamma(\phi_{1},\phi_{2}|\varphi)\) satisfies a closed Dyson equation, which can be derived following similar arguments \[\Gamma(\phi_{1},\phi_{2}|\varphi) = W(\phi_{2})+g\int_{0}^{\phi_{1}}\!d\phi^{\prime}\int_{0}^{\phi^{\prime}}\!\!d\phi^{\prime\prime}\,W(\phi^{\prime}-\phi^{\prime\prime})\Gamma(\phi^{\prime\prime},\phi_{2}|\varphi) \tag{5.21}\] \[+\int_{0}^{\phi_{1}}\!d\phi^{\prime}\int_{0}^{\phi_{2}}\!\!d\phi^{\prime\prime}\,G(\varphi+\phi^{\prime\prime}-\phi^{\prime})W(\phi_{2}-\phi^{\prime\prime})\Gamma(\phi^{\prime},\phi^{\prime\prime}|\varphi).\] This equation can be brought to a more symmetric form. Eq. (5.21) is an integral equation of the type \[f(\phi)=g\int_{0}^{\phi}\!d\phi^{\prime}\int_{0}^{\phi^{\prime}}\!\!d\phi^{\prime\prime}\,W(\phi^{\prime}-\phi^{\prime\prime})f(\phi^{\prime\prime})+\int_{0}^{\phi}\!\!d\phi^{\prime}\,j(\phi^{\prime}). \tag{5.22}\] Using the Dyson equation (5.9) for \(W(\phi)\) it is easy to see that this can be solved by \[f(\phi)=\int_{0}^{\phi}\!\!d\phi^{\prime}\,W(\phi-\phi^{\prime})j(\phi^{\prime}). \tag{5.23}\] Applying this result to the Dyson equation (5.21) brings the latter to a symmetric form \[\Gamma(\phi_{1},\phi_{2}|\varphi)= W(\phi_{1})W(\phi_{2}) \tag{5.24}\] \[+\int_{0}^{\phi_{1}}d\phi^{\prime}\int_{0}^{\phi_{2}}d\phi^{\prime\prime}\,W(\phi_{1}-\phi^{\prime})W(\phi_{2}-\phi^{\prime\prime})G(\varphi+\phi^{\prime\prime}-\phi^{\prime})\Gamma(\phi^{\prime},\phi^{\prime\prime}|\varphi).\] These Dyson equations are in general difficult to solve. There exist, however, certain limits in which they simplify considerably. One limit of interest is \(\alpha\to 1\) and \(\cos\gamma\to-1\). In this limit the two loops with opposite orientations become coincident and supersymmetric. The effective propagator \(G(\phi)\) becomes constant and the Dyson equations (5.20) and (5.24) can be simply solved via a Laplace transform, as done with the Green's function \(W(\phi)\). Another limit of interest is the analytic continuation \(\cos\gamma\to\infty\).
In the first place, since \(G(\phi)\gg g\), one can set \(W(\phi)\) to \(1\) in the Dyson equations (5.20) and (5.24) to capture the leading order contribution in this parametric limit. Additionally, the non-ladder contributions are expected to be suppressed in this limit. ### Inserting local operators With the preliminaries covered, we can now tackle the insertion of operators in the correlator of two concentric circular loops. We might either consider insertions in the same Wilson loop or insertions in different loops. It is possible to compute the correlation function between these insertions through the following expectation values \[\langle\!\langle\mathcal{O}_{1}^{L}(\phi_{1})\mathcal{O}_{2}^{L}(\phi_{2})\rangle\!\rangle=\frac{\langle\mathrm{tr}[Pe^{i\oint_{C_{1}}d\phi\varphi_{1}(\phi)}\mathcal{O}_{1}(\phi_{1})\mathcal{O}_{2}(\phi_{2})]\mathrm{tr}[Pe^{i\oint_{C_{2}}d\phi^{\prime}\varphi_{2}(\phi^{\prime})}]\rangle_{\mathrm{conn}}}{\langle\mathrm{tr}[Pe^{i\oint_{C_{1}}d\phi\varphi_{1}(\phi)}]\mathrm{tr}[Pe^{i\oint_{C_{2}}d\phi^{\prime}\varphi_{2}(\phi^{\prime})}]\rangle_{\mathrm{conn}}}, \tag{5.25}\] \[\langle\!\langle\mathcal{O}_{1}^{L}(\phi_{1})\mathcal{O}_{2}^{R}(\phi_{2})\rangle\!\rangle=\frac{\langle\mathrm{tr}[Pe^{i\oint_{C_{1}}d\phi\varphi_{1}(\phi)}\mathcal{O}_{1}(\phi_{1})]\mathrm{tr}[Pe^{i\oint_{C_{2}}d\phi^{\prime}\varphi_{2}(\phi^{\prime})}\mathcal{O}_{2}(\phi_{2})]\rangle_{\mathrm{conn}}}{\langle\mathrm{tr}[Pe^{i\oint_{C_{1}}d\phi\varphi_{1}(\phi)}]\mathrm{tr}[Pe^{i\oint_{C_{2}}d\phi^{\prime}\varphi_{2}(\phi^{\prime})}]\rangle_{\mathrm{conn}}}, \tag{5.26}\] where, for the case of two traces, the connected part means \[\langle\mathrm{tr}(A)\mathrm{tr}(B)\rangle_{\mathrm{conn}}=\langle\mathrm{tr}(A)\mathrm{tr}(B)\rangle-\langle\mathrm{tr}(A)\rangle\langle\mathrm{tr}(B)\rangle. \tag{5.27}\] Armed with the intuition of the previous section, we would like to compute the ladder diagram contributions to those expectation values. Among all candidates for operator insertions, we will restrict our attention to the simplest possibility: we will consider insertions of scalar fields that do not appear in the Wilson loops \((\Phi_{1},\ldots,\Phi_{4})\). For them, the types of ladder diagrams are limited and their holographic duals are easily identifiable in the string theory description. #### 5.2.1 Insertions in the same loop Let us compute (5.25) in the ladder approximation. For the numerator of (5.25) we need \[\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(0,\phi_{1})\Phi_{1}(\phi_{1})\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\Phi_{1}(\phi_{2})\overleftarrow{U}_{1}(\phi_{2},2\pi)\right)\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle\] \[=\Delta_{11}(\phi_{2}-\phi_{1})\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle \tag{5.28}\] where the contraction between the two \(\Phi_{1}\) splits one single trace into two and introduces a propagator \[\Delta_{11}(\phi)=\frac{g}{1-\cos\phi}\,. \tag{5.29}\] To arrive at the expression (5.28) we used that \(\mathrm{tr}\left(\overleftarrow{U}_{1}(0,\phi_{1})\overleftarrow{U}_{1}(\phi_{2},2\pi)\right)=\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\).
In the large \(N\) approximation, the vev of three traces can be expanded as follows \[\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle\] \[=\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\rangle\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\rangle\langle\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle\] \[+\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\rangle\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle_{\mathrm{conn}}\] \[+\langle\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2},2\pi+\phi_{1})\right)\rangle_{\mathrm{conn}}\] \[+\langle\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{2}-\phi_{1},2\pi)\right)\mathrm{tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\right)\mathrm{tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle_{\mathrm{conn}}. \tag{5.30}\] The first line on the right-hand side of this equation, \(N^{3}W(2\pi-\phi_{2}+\phi_{1})W(\phi_{2}-\phi_{1})W(2\pi)\), is the leading large \(N\) contribution, but it cancels out when restricting to the connected part of the vev in the numerator of (5.25). The last line, of order \(1/N\), is suppressed in the large \(N\) limit. The remaining intermediate lines can be expressed in terms of the Green's functions studied in the previous section. After all these considerations one has \[\langle{\rm tr}\left(\overleftarrow{U}_{1}(0,\phi_{1})\Phi_{1}(\phi_{1})\overleftarrow{U}_{1}(\phi_{1},\phi_{2})\Phi_{1}(\phi_{2})\overleftarrow{U}_{1}(\phi_{2},2\pi)\right){\rm tr}\left(\overleftarrow{U}_{2}(0,2\pi)\right)\rangle_{\rm conn} \tag{5.31}\] \[=N\Delta_{11}(\phi_{2}-\phi_{1})\left[W(2\pi+\phi_{1}-\phi_{2})K(\phi_{2}-\phi_{1})+W(\phi_{2}-\phi_{1})K(2\pi+\phi_{1}-\phi_{2})\right.\] \[\left.+W(2\pi)W(\phi_{2}-\phi_{1},2\pi+\phi_{1}-\phi_{2})\right].\] Thus \[\langle\!\langle{\cal O}_{1}^{L}(\phi_{1}){\cal O}_{2}^{L}(\phi_{2})\rangle\!\rangle_{\rm ladder}= \frac{N\Delta_{11}(\phi_{2}-\phi_{1})W(\phi_{2}-\phi_{1})K(2\pi-\phi_{2}+\phi_{1})}{K(2\pi)}\] \[+ \frac{N\Delta_{11}(\phi_{2}-\phi_{1})W(2\pi-\phi_{2}+\phi_{1})K(\phi_{2}-\phi_{1})}{K(2\pi)}\] \[+ \frac{N\Delta_{11}(\phi_{2}-\phi_{1})W(2\pi)W(\phi_{2}-\phi_{1},2\pi+\phi_{1}-\phi_{2})}{K(2\pi)}. \tag{5.32}\] This is difficult to evaluate for general values of \(\phi_{2}-\phi_{1}\) and the coupling. In the limit \(\phi_{2}\to\phi_{1}\) the exact correlator approaches that of a conformal field with conformal dimension \(\Delta=1\). So does our ladder approximation (5.32), which approaches \(N\Delta_{11}(\phi_{2}-\phi_{1})\) in the limit \(\phi_{2}\to\phi_{1}\).
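Since the ladder answers above are assembled entirely from \(W\), \(K\) and \(\Gamma\), a quick numerical sanity check of the basic building block may be useful. The following sketch (ours, not from the paper; the value of \(g\) is chosen arbitrarily for the test) verifies with SciPy that the Bessel-function form (5.12) indeed solves the Dyson equation (5.9):

```python
# Numerical check that W(phi) = I_1(2 sqrt(g) phi) / (sqrt(g) phi), eq. (5.12),
# satisfies the ladder Dyson equation (5.9).
import numpy as np
from scipy.integrate import dblquad
from scipy.special import iv

g = 0.3  # arbitrary coupling, only for this test

def W(t):
    t = float(t)
    if t < 1e-12:
        return 1.0  # W(0) = 1, from the small-argument expansion of I_1
    return iv(1, 2.0 * np.sqrt(g) * t) / (np.sqrt(g) * t)

phi = 2.0 * np.pi
# right-hand side of (5.9): 1 + g * int_0^phi dphi' int_0^{phi'} dphi'' W(phi'-phi'') W(phi'')
integral, _ = dblquad(lambda t2, t1: W(t1 - t2) * W(t2),
                      0.0, phi, lambda t1: 0.0, lambda t1: t1)
print(W(phi), 1.0 + g * integral)  # the two values agree to quadrature accuracy
```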
#### 5.2.2 Insertions in different loops For the case of insertions in different loops (5.26), we need to evaluate the following expression \[\langle{\rm tr}\left(\overleftarrow{U}_{1}(0,\phi_{1})\Phi_{1}(\phi_{1})\overleftarrow{U}_{1}(\phi_{1},2\pi)\right){\rm tr}\left(\overleftarrow{U}_{2}(0,\phi_{2})\Phi_{1}(\phi_{2})\overleftarrow{U}_{2}(\phi_{2},2\pi)\right)\rangle\] \[=\Delta_{12}(\phi_{2}-\phi_{1})\langle{\rm tr}\left(\overleftarrow{U}_{1}(0,\phi_{1})\overleftarrow{U}_{2}(\phi_{2},2\pi)\overleftarrow{U}_{2}(0,\phi_{2})\overleftarrow{U}_{1}(\phi_{1},2\pi)\right)\rangle\] \[=\Delta_{12}(\phi_{2}-\phi_{1})\langle{\rm tr}\left(\overleftarrow{U}_{1}(\phi_{1},\phi_{1}+2\pi)\overleftarrow{U}_{2}(\phi_{2},\phi_{2}+2\pi)\right)\rangle\] \[=\Delta_{12}(\phi_{2}-\phi_{1})\langle{\rm tr}\left(\overleftarrow{U}_{1}(0,2\pi)\overleftarrow{U}_{2}(\phi_{2}-\phi_{1},\phi_{2}-\phi_{1}+2\pi)\right)\rangle\] \[=N\Delta_{12}(\phi_{2}-\phi_{1})\Gamma(2\pi,2\pi|\phi_{2}-\phi_{1}), \tag{5.33}\] with \(\Delta_{12}\) the scalar propagator \[\Delta_{12}(\phi)=\frac{g/N}{\alpha^{-1}-\cos\phi}. \tag{5.34}\] Thus \[\langle\!\langle\mathcal{O}_{1}^{L}(\phi_{1})\mathcal{O}_{2}^{R}(\phi_{2})\rangle\!\rangle_{\rm ladder}=\frac{N\Delta_{12}(\phi_{2}-\phi_{1})\Gamma(2\pi,2\pi|\phi_{2}-\phi_{1})}{K(2\pi)}. \tag{5.35}\] ### Large \(\cos\gamma\) limit The limit \(\cos\gamma\to\infty\) is a parametric limit for which both the Wilson loop configuration and the dual world-sheet have to be analytically continued. A key motivation to consider this limit is the fact that the ladder diagrams become the leading contribution. Moreover, as the large \(\lambda\) limit of the ladder resummation can be computed, this provides the opportunity to test our world-sheet computations by an explicit comparison. In the limit \(\cos\gamma\to\infty\) not all the ladder diagrams are on the same footing. Diagrams in which all the propagators connect different loops dominate over the rest. Therefore, we have to set \(W(\phi)=1\) in all our Dyson equations. In particular, (5.24) becomes \[\Gamma(\phi_{1},\phi_{2}|\varphi)=1+\int_{0}^{\phi_{1}}d\phi^{\prime}\int_{0}^{\phi_{2}}d\phi^{\prime\prime}\,G(\varphi+\phi^{\prime\prime}-\phi^{\prime})\Gamma(\phi^{\prime},\phi^{\prime\prime}|\varphi), \tag{5.36}\] where now \[G(\phi)\simeq g\,\frac{\cos\gamma}{\alpha^{-1}-\cos\phi}. \tag{5.37}\] At this point we apply a manipulation described in [41] that allows us to find an intuitively clear solution to the above equation. We can obtain a differential equation by differentiating (5.36) with respect to \(\phi_{1}\) and \(\phi_{2}\) \[\partial_{\phi_{1}}\partial_{\phi_{2}}\Gamma(\phi_{1},\phi_{2}|\varphi)=G(\varphi+\phi_{2}-\phi_{1})\Gamma(\phi_{1},\phi_{2}|\varphi)\,. \tag{5.38}\] Changing coordinates \[x=\phi_{1}-\phi_{2}\,,\quad y=\phi_{1}+\phi_{2}\,,\quad\Rightarrow\quad\partial_{\phi_{1}}=\partial_{x}+\partial_{y}\,,\quad\partial_{\phi_{2}}=\partial_{y}-\partial_{x}\,, \tag{5.39}\] so that (5.38) becomes \[(\partial_{y}^{2}-\partial_{x}^{2})\Gamma(x,y)=G(\varphi-x)\Gamma(x,y)\,. \tag{5.40}\] We can solve this equation with \[\Gamma(x,y)=\sum_{n}\psi_{n}(x)e^{y\Omega_{n}}\,, \tag{5.41}\] where \[(\Omega_{n}^{2}-\partial_{x}^{2})\psi_{n}(x)=G(\varphi-x)\psi_{n}(x)\,. \tag{5.42}\] This is a sort of Schrödinger problem \[-\psi_{n}^{\prime\prime}(x)-\frac{g\cos\gamma}{\alpha^{-1}-\cos(\varphi-x)}\psi_{n}(x)=-\Omega_{n}^{2}\psi_{n}(x)\,, \tag{5.43}\] which we do not need to solve exactly.
In the limit we are interested in, this is the equation for a particle trapped in a very deep well. Thus, the sum (5.41) is dominated by the ground state eigenvalue, which is approximately given by the depth of the well, \[\Omega_{0}^{2}\simeq\frac{g\cos\gamma}{\alpha^{-1}-1}\,. \tag{5.44}\] Therefore, \[\Gamma(\phi,\phi|\varphi)\simeq\psi_{0}(0)e^{\sqrt{\frac{\alpha g\cos\gamma}{1-\alpha}}2\phi}\,. \tag{5.45}\] The limit \(\phi_{2}=\phi_{1}\to 0\) sets a boundary condition for \(\Gamma(\phi_{1},\phi_{2}|\varphi)\), which requires that \(\psi_{0}(0)=1\). So, finally we have \[\Gamma(2\pi,2\pi|\varphi)\simeq e^{4\pi\sqrt{\frac{\alpha g\cos\gamma}{1-\alpha}}}\,. \tag{5.46}\] Using that in the large \(\cos\gamma\) limit [38] \[K(2\pi)\simeq e^{4\pi\sqrt{\frac{\alpha g\cos\gamma}{1-\alpha}}}\,, \tag{5.47}\] the ladder contribution to the correlator of insertions in different loops becomes \[\langle\!\langle\mathcal{O}_{1}^{L}(\phi_{1})\mathcal{O}_{2}^{R}(\phi_{2})\rangle\!\rangle_{\rm ladder}\simeq N\Delta_{12}(\phi_{2}-\phi_{1})\,. \tag{5.48}\] Since in the large \(\cos\gamma\) limit we expect that the ladder contribution is the leading one, it would be interesting to compare (5.48) with a dual string theory computation. The dual worldsheet configuration should also be analytically continued to be considered in a large \(\cos\gamma\) limit. From (104) and (105) it is possible to see that, for \(0\leq s\leq 1\) and \(t\gg 1\), the value of \(\gamma\) becomes imaginary and very large. More precisely, \(\gamma\simeq i\log\left(\frac{16t}{1-s}\right)\). Eq. (102) implies that, in this regime, the constant of motion \(K\) has to be taken imaginary and large. Unfortunately, we do not have at the moment an explicit expression of the correlator \(\langle\mathcal{O}^{L}(\phi)\mathcal{O}^{R}(0)\rangle\) for generic values of \(K\). In order to make the comparison with (5.48) possible, one would need to sum the expression (100) and then analytically continue the result for values of \(K\) large and imaginary. ## 6 Conclusions In this manuscript we have considered inserting local operators in the correlator of two Wilson-Maldacena loops. On the holographic side we have presented a complete account of the string world-sheet fluctuations, including the fermionic sector, in section 3. This is a first step for precision holography explorations of AdS\({}_{2}\)/dCFT\({}_{1}\) in non-supersymmetric, non-conformal setups. We found a structure of bosonic fluctuations of the form \(4+2+2\) and checked that it reduces, in the appropriate limits, to the familiar \(5+3\) structure dictated by the ultrashort representation of the \(OSp(4^{*}|4)\) supergroup governing the \(\frac{1}{2}\)-BPS configuration. It is worth remarking that despite being non-supersymmetric and non-conformal, the configuration considered in this manuscript displays a fairly constrained structure and, at every step, we are able to track various parallels with the supersymmetric and conformal limits. In particular, we found that of the original \(\frac{1}{2}\)-BPS spectrum, four modes remain massless and two modes remain with \(m^{2}L^{2}=2\), furnishing a deformation of the displacement multiplet. Following the holographic dictionary, we were able to analytically compute the two-point functions for these excitations corresponding to operator insertions in the same and in different loops at strong coupling.
In the case of massless fields we found a closed analytic form in terms of elliptic theta functions and verified that the correlators satisfy a number of expected properties. For the massive fields we also obtained closed forms for the holographic correlators. We have also started a direct field-theoretic exploration of operator insertions in a system of two Wilson-Maldacena loops. On the field theory side we have concentrated on insertions of the scalars \(\Phi_{I=1,2,3,4}\) that do not enter the definition of the Wilson-Maldacena loops, that is, not \(\Phi_{5,6}\). In section 5, we considered operators inserted in the same and in different Wilson-Maldacena loops, and obtained explicit expressions for the two-point correlators in the ladder approximation. Interestingly, the answer can be formulated using objects that were introduced in the context of the simpler case of correlators of two Wilson-Maldacena loops. There are a number of very interesting directions that our work stimulates. A natural one pertains to pushing our analysis to the four-point correlators, extending the impressive work reported in [17] for the supersymmetric \(\frac{1}{2}\)-BPS Wilson-Maldacena loop and in [42] for the non-supersymmetric Wilson loop. Note that in both those cases conformal invariance along the defect is preserved; our situation requires taking one extra step into a non-conformal situation. In this work we did not develop the fermionic sector beyond obtaining the action for its quadratic fluctuations. Considering fermionic insertions, however, is a very interesting direction given the prospects of this fermionic sector sharing some features with the Sachdev-Ye-Kitaev model; we hope to report on these directions in the future. Having described all the string fluctuations, we have the groundwork to tackle one-loop corrections to the effective action for the connected correlators of the two Wilson-Maldacena loops, plausibly setting up a precision holographic comparison not directly constrained by supersymmetry or conformal invariance. It should be emphasized that this precision comparison will require some important advances on the field theory side. Our field theory results here, however, provide some evidence that such results could be achieved, at least in certain limits. Another interesting direction would be to consider other representations for the Wilson loops. Indeed, the analysis of the totally symmetric and totally antisymmetric Wilson-Maldacena loops has produced powerful results in the context of the AdS/CFT correspondence [43; 44; 45; 46; 47; 48; 49]. Finally, it would be interesting to explore to what extent our setup connects with problems in condensed matter physics, such as the problem of two Kondo impurities. Beyond our technical progress in the treatment of Wilson-Maldacena loops in the AdS/CFT correspondence, there is a potential connection that would be interesting to explore further. As noted in the introduction, there is a striking similarity between the Gross-Ooguri phase transition of our setup and the transition in the Page curve for certain models of two-dimensional gravity [24]. Given the current interest and insights obtained from gravity in AdS\({}_{2}\) in the form of the Jackiw-Teitelboim model (see reviews [50; 51]), it is relevant to clarify precisely which parts of the techniques displayed here can be applied. We note that there are significant differences between the two setups.
The Wilson-Maldacena loop provides a context for an open string/Wilson loop correspondence of non-gravitational AdS\({}_{2}\)/CFT\({}_{1}\) type, since there is no dynamical gravity on the worldsheet. The Wilson-Maldacena loop has reparametrization invariance fixed by a static gauge that leaves invariant the symmetry \(SO(2,1)\subset SO(2,4)\). This is to be contrasted with the emergent nature of \(SO(2,1)\) in the JT context, with its corresponding pseudo-Goldstone mode on the boundary, which is related to spontaneously broken reparametrizations. This discussion was addressed, for example, in [52]. More recently, the authors of [25] have clarified that the out-of-time-order correlators in the AdS\({}_{2}\) open string/Wilson loop correspondence display, in the appropriate regime, a Lyapunov growth that saturates the chaos bound. Moreover, in the conformal gauge, there is a reparametrization mode which in some respects resembles the Schwarzian mode while leading to \(SO(2,1)\) invariant boundary correlators. We hope to address some of these fascinating topics in the future. ###### Acknowledgements. We thank Simone Giombi for some clarifications regarding [25] and Juan Maldacena for comments. DHC and GAS are partially supported by PICT 2020-03749, PICT 2020-03826, PIP 02229, UNLP X791, UNLP X910 and PUE084 "Busqueda de nueva fisica". The work of AF is supported by CONICYT FONDECYT Regular #1201145 and ANID/ACT210100 Anillo Grant "Holography and its applications to High Energy Physics, Quantum Gravity and Condensed Matter Systems." The work of WM is partially supported by the INFN, research initiative STEFI. LPZ is partially supported by the U.S. Department of Energy under grant DE-SC0007859; he also acknowledges support from an IBM Einstein Fellowship at the Institute for Advanced Study. The five of us are grateful to ICTP for bringing us to Trieste under various programs (associateships (DHC, AF, LPZ), Giornate Uomo (WM) and the visiting programme (GAS)) during the initial stages of this project. ## Appendix A Conformal transformation of a pair of loops The configuration of two Wilson-Maldacena loops is often depicted as two parallel loops of the same radius separated by some distance. One then proceeds to study the correlator as a function of this separation [23]. Here we will show that a pair of parallel loops with arbitrary radii, \(r_{1}\) and \(r_{2}\), separated by a distance \(h\), can be mapped by a conformal transformation to either a pair of parallel loops with equal radii or to a pair of concentric loops lying in the same plane. Along the way, we will emphasize the physical parameter that is to be varied. Consider \(\mathbb{R}^{3}\) and take two coaxial rings, i.e., rings that have their centers on the \(x\)-axis and lie on two parallel planes orthogonal to the \(x\)-axis. Without loss of generality, one can fix one of the radii to unity and place it at \(x=0\), so that the initial pair of loops is parameterized by \[x_{1}^{\mu}=(0,\cos\phi,\sin\phi)\,\qquad x_{2}^{\mu}=(x,r\cos\phi,r\sin\phi). \tag{A.1}\] Moreover, after adopting planar coordinates in the \(yz\)-plane, the angle can be dropped, so that it suffices to consider the two-dimensional vectors \[x_{1}^{\mu}=(0,1)\,\qquad x_{2}^{\mu}=(x,r).
\tag{A.2}\] The transformations needed to transform this pair into either a pair with \(x=0\) (concentric rings) or \(r=1\) (parallel rings) are a special conformal transformation (SCT), \[x^{\prime\mu}=\frac{x^{\mu}+b^{\mu}x^{2}}{1+2b^{\mu}x_{\mu}+b^{2}x^{2}}\,\] (A.3) a scale transformation, \[x^{\prime\mu}=cx^{\mu}\,\] (A.4) and a translation, \[x^{\prime\mu}=x^{\mu}+a^{\mu}\.\] (A.5) If executed in the above order, with \(b^{\mu}=(b,0)\), \(c=1+b^{2}\), and \(a^{\mu}=(-b,0)\), the vectors (A.2) transform into9 Footnote 9: All dimensions are expressed in units of the radius of the first loop. \[x^{\prime\mu}_{\ 1}=(0,1)\,\qquad x^{\prime\mu}_{\ 2}=(x^{\prime},r^{\prime})\,\] (A.6) with \[x^{\prime}=\frac{x(1-b^{2})+b(x^{2}+r^{2}-1)}{(1+bx)^{2}+b^{2}r^{2}}\,\qquad r^{\prime}=\frac{r(1+b^{2})}{(1+bx)^{2}+b^{2}r^{2}}\.\] (A.7) It is easy to check that the combination \[\alpha=\frac{2r}{x^{2}+1+r^{2}}\,\] (A.8) is an invariant of the transformation (A.7). More generally, if one lifts the assumption of unit radius for the first ring, then the invariant (A.8) becomes \[\alpha=\frac{2r_{1}r_{2}}{x^{2}+r_{1}^{2}+r_{2}^{2}}=\frac{2r_{1}r_{2}}{2r_{1}r_{2}+x^{2}+(r_{1}-r_{2})^{2}}\.\] (A.9) From the expression on the right it is clear that \(0<\alpha\leq 1\), with \(\alpha=1\) for the case of two coincident rings, \(x=0\), \(r_{1}=r_{2}\). Now, to obtain a concentric rings configuration, take \[b=\frac{1}{2x}\left[x^{2}+r^{2}-1\pm\sqrt{(x^{2}+r^{2}-1)^{2}+4x^{2}}\right],\] (A.10) for which \[x^{\prime}=0\,\qquad r_{\pm}^{\prime}=\frac{1}{\alpha}\left(1\mp\sqrt{1-\alpha^{2}}\right)\.\] (A.11) It is easy to check that the two signs give rise to equivalent configurations, because \(r_{+}^{\prime}r_{-}^{\prime}=1\). Similarly, for \[b=-\frac{x\pm\sqrt{r[(r-1)^{2}+x^{2}]}}{x^{2}+r(r-1)}\,\] (A.12) one finds two parallel equal radius contours separated by \[x^{\prime}=\mp\sqrt{\frac{2(1-\alpha)}{\alpha}}\,\qquad r^{\prime}=1\.\] (A.13) Again, the configurations corresponding to the two possible signs are equivalent. ## Appendix B Geometry of embeddings In this appendix, we review the geometry of embedded manifolds following [53] and provide the general expression for the pull-back of the spinor covariant derivative. Our notation will be as follows: Space-time coordinate indices are denoted by Latin letters \((m,n,\ldots)\), while Greek letters \((\alpha,\beta,\ldots)\) belong to the world-volume coordinates. The corresponding flat indices are underlined. Latin indices \(i,j\) are used to label the directions in the normal bundle. They are flat indices by convention \((i=\underline{i})\). A \(d\)-dimensional manifold \(\mathbb{M}\) embedded in a \(\tilde{d}\)-dimensional manifold \(\widetilde{\mathbb{M}}\) (\(d<\tilde{d}\)) is locally described by considering the space-time coordinates of the embedding, \(x^{m}\), as differentiable functions of the variables \(\xi^{\alpha}\) (\(\alpha=1\ldots d\)), which are identified as world-volume coordinates. This implies that the tangent vectors to the embedding are given by \[x^{m}_{\alpha}(\xi)\equiv\partial_{\alpha}x^{m}(\xi). \tag{B.1}\] They provide the pull-back of any bulk tensor onto the world-volume, foremost, the induced metric \[g_{\alpha\beta}=x^{m}_{\alpha}x^{n}_{\beta}\,g_{mn}. \tag{B.2}\] We shall assume that \(g_{\alpha\beta}\) is non-degenerate. For a complete local basis of space-time vectors one needs to introduce a basis that spans the vector space orthogonal to the embedding, which is also called the normal bundle.
There are \(d_{\perp}=\tilde{d}-d\) independent such vectors, \(N^{m}_{i}\) (\(i=1,\ldots,d_{\perp}\)), and we will apply the convention that these vectors satisfy, together with the tangents, the orthogonality and completeness relations \[N^{m}_{i}x^{n}_{\alpha}\,g_{mn}=0\,\qquad N^{m}_{i}N^{n}_{j}g_{mn}=\eta_{ij}\,\qquad g^{\alpha\beta}x^{m}_{\alpha}x^{n}_{\beta}+\eta^{ij}N^{m}_{i}N^{n}_{j}=g^{mn}. \tag{B.3}\] We allow the metric on the normal bundle, \(\eta_{ij}\), to have arbitrary signature \((d_{1},d_{2})\), with \(d_{1}+d_{2}=d_{\perp}\). In particular, because it is flat, the \(N^{m}_{i}\) are nothing but (\(d_{\perp}\) of \(\tilde{d}\)) vielbeins \(E^{m}_{\underline{i}}\) of a space-time frame that is locally adapted to the world-volume. The freedom of choice of the normal vectors gives rise to a local \(O(d_{1},d_{2})\) symmetry in the normal bundle. This makes it clear that there will be, in general, a gauge field related to this symmetry. The geometric structure of the embedding is characterized, in addition to the intrinsic world-volume curvature, by the second fundamental forms, \(H^{i}{}_{\alpha\beta}\), which describe the extrinsic curvature, and the gauge connection in the normal bundle, \(A^{ij}{}_{\alpha}=-A^{ji}{}_{\alpha}\). They are determined by the equations of Gauss and Weingarten, \[\hat{\nabla}_{\alpha}x^{m}_{\beta} \equiv\partial_{\alpha}x^{m}_{\beta}+\Gamma^{m}{}_{np}x^{n}_{\alpha}x^{p}_{\beta}-\Gamma^{\gamma}{}_{\alpha\beta}x^{m}_{\gamma}=H^{i}{}_{\alpha\beta}N^{m}_{i}\, \tag{B.4}\] \[\hat{\nabla}_{\alpha}N^{m}_{i} \equiv\partial_{\alpha}N^{m}_{i}+\Gamma^{m}{}_{np}x^{n}_{\alpha}N^{p}_{i}-A^{j}{}_{i\alpha}N^{m}_{j}=-H_{i\alpha}{}^{\beta}x^{m}_{\beta}. \tag{B.5}\] In practice, given the tangent vectors \(x^{m}_{\alpha}\) (as functions of the world-sheet coordinates), equations (B.4) and (B.5) are used to calculate the second fundamental forms, \(H^{i}{}_{\alpha\beta}\), and the connections in the normal bundle, \(A^{ij}{}_{\alpha}\), respectively. Moreover, by using the appropriate connections, we have introduced in (B.4) and (B.5) the generalized covariant derivative, \(\hat{\nabla}_{\alpha}\), which is covariant with respect to all indices. We use the hat to distinguish it from the ordinary world-sheet covariant derivative. For example, world-sheet fluctuations are parameterized by world-sheet scalars \(\chi^{i}\) that are charged under the normal bundle gauge field. For these, we have \[\hat{\nabla}_{\alpha}\chi^{i}=\nabla_{\alpha}\chi^{i}+A^{i}{}_{j\alpha}\chi^{j}. \tag{B.6}\] The integrability conditions of the differential equations (B.4) and (B.5) are the equations of Gauss, Codazzi and Ricci, which are, respectively, \[R_{mnpq}x_{\alpha}^{m}x_{\beta}^{n}x_{\gamma}^{p}x_{\delta}^{q} =R_{\alpha\beta\gamma\delta}+H^{i}{}_{\alpha\delta}H_{i\beta\gamma}-H^{i}{}_{\alpha\gamma}H_{i\beta\delta}\, \tag{B.7}\] \[R_{mnpq}x_{\alpha}^{m}x_{\beta}^{n}N_{i}^{p}x_{\gamma}^{q} =\hat{\nabla}_{\alpha}H_{i\beta\gamma}-\hat{\nabla}_{\beta}H_{i\alpha\gamma}\, \tag{B.8}\] \[R_{mnpq}x_{\alpha}^{m}x_{\beta}^{n}N_{i}^{p}N_{j}^{q} =F_{ij\alpha\beta}-H_{i\alpha}{}^{\gamma}H_{j\gamma\beta}+H_{i\beta}{}^{\gamma}H_{j\gamma\alpha}\, \tag{B.9}\] where \(F_{ij\alpha\beta}\) denotes the field strength in the normal bundle, \[F_{ij\alpha\beta}=\partial_{\alpha}A_{ij\beta}-\partial_{\beta}A_{ij\alpha}+A_{ik\alpha}A^{k}{}_{j\beta}-A_{ik\beta}A^{k}{}_{j\alpha}. \tag{B.10}\] Whereas the geometric relations above suffice for the treatment of tensors, some more work is needed for spinors.
In general, space-time spinors decompose into several families of world-sheet spinors, which implies the existence of new connections that implement the geometric relations between these families. In particular, we are interested in the pull-back of the bulk covariant derivative (for spinors) onto the world-volume, \[\hat{D}_{\alpha}\Psi=x_{\alpha}^{m}D_{m}\Psi=x_{\alpha}^{m}\left(\partial_{m}+\frac{1}{4}\omega_{m}{}^{\underline{np}}\Gamma_{\underline{np}}\right)\Psi. \tag{B.11}\] The space-time spin connections are defined in terms of a space-time frame \(e_{\underline{m}}^{m}\), \[\omega_{p}{}^{\underline{mn}}=-e_{\underline{q}}^{\underline{n}}\left(\partial_{p}e^{q\underline{m}}+\Gamma^{q}{}_{pn}e^{\underline{n}\underline{m}}\right)\, \tag{B.12}\] and an analogous relation holds for the world-volume spin connections, \(\omega_{\alpha}{}^{\underline{\beta}\underline{\gamma}}\). Let us pick a frame that is locally adapted to the embedding, \[e_{\underline{n}}^{m}=\begin{cases}x_{\alpha}^{m}e_{\underline{\alpha}}^{\alpha}&\text{for }\underline{n}=\underline{\alpha},\\ N_{i}^{m}&\text{for }\underline{n}=i.\end{cases} \tag{B.13}\] Then, using (B.4) and (B.5), it is straightforward to show that \[x_{\alpha}^{m}\left(\partial_{m}e_{\underline{n}}^{q}+\Gamma^{q}{}_{mp}e_{\underline{n}}^{p}\right)=\begin{cases}N_{i}^{q}H^{i}{}_{\alpha\beta}e_{\underline{\alpha}}^{\beta}+x_{\beta}^{q}e^{\beta\underline{\beta}}\omega_{\alpha\underline{\beta}\underline{\alpha}}&\text{for }\underline{n}=\underline{\alpha},\\ -x_{\beta}^{q}H_{i\alpha}{}^{\beta}+N_{j}^{q}A^{j}{}_{i\alpha}&\text{for }\underline{n}=i.\end{cases} \tag{B.14}\] Hence, the pull-backs of the space-time spin connections onto the world-volume are \[x_{\alpha}^{m}\omega_{m\underline{\alpha}\underline{\beta}}=\omega_{\alpha\underline{\alpha}\underline{\beta}}\,\qquad x_{\alpha}^{m}\omega_{mi\underline{\alpha}}=H_{i\alpha\beta}e_{\underline{\alpha}}^{\beta}\,\qquad x_{\alpha}^{m}\omega_{mij}=A_{ij\alpha}. \tag{B.15}\] Consequently, (B.11) becomes \[\hat{D}_{\alpha}\Psi=\left(\partial_{\alpha}+\frac{1}{4}\omega_{\alpha\underline{\beta}\underline{\gamma}}\Gamma^{\underline{\beta}\underline{\gamma}}+\frac{1}{2}H_{i\alpha\beta}\Gamma^{i}\Gamma^{\beta}+\frac{1}{4}A_{ij\alpha}\Gamma^{ij}\right)\Psi. \tag{B.16}\] Using an appropriate decomposition of the gamma matrices, the last two terms in the parentheses are interpreted as connections relating different families of world-sheet fermions.
2301.06141
Max-min Learning of Approximate Weight Matrices from Fuzzy Data
In this article, we study the approximate solutions set $\Lambda_b$ of an inconsistent system of $\max-\min$ fuzzy relational equations $(S): A \Box_{\min}^{\max}x =b$. Using the $L_\infty$ norm, we compute by an explicit analytical formula the Chebyshev distance $\Delta~=~\inf_{c \in \mathcal{C}} \Vert b -c \Vert$, where $\mathcal{C}$ is the set of second members of the consistent systems defined with the same matrix $A$. We study the set $\mathcal{C}_b$ of Chebyshev approximations of the second member $b$ i.e., vectors $c \in \mathcal{C}$ such that $\Vert b -c \Vert = \Delta$, which is associated to the approximate solutions set $\Lambda_b$ in the following sense: an element of the set $\Lambda_b$ is a solution vector $x^\ast$ of a system $A \Box_{\min}^{\max}x =c$ where $c \in \mathcal{C}_b$. As main results, we describe both the structure of the set $\Lambda_b$ and that of the set $\mathcal{C}_b$. We then introduce a paradigm for $\max-\min$ learning weight matrices that relates input and output data from training data. The learning error is expressed in terms of the $L_\infty$ norm. We compute by an explicit formula the minimal value of the learning error according to the training data. We give a method to construct weight matrices whose learning error is minimal, that we call approximate weight matrices. Finally, as an application of our results, we show how to learn approximately the rule parameters of a possibilistic rule-based system according to multiple training data.
Ismaïl Baaj
2023-01-15T16:48:30Z
http://arxiv.org/abs/2301.06141v2
# Max-min Learning of Approximate Weight Matrices From Fuzzy Data ###### Abstract In this article, we study the approximate solutions set \(\Lambda_{b}\) of an inconsistent system of \(\max-\min\) fuzzy relational equations \((S):A\square_{\min}^{\max}x=b\). Using the \(L_{\infty}\) norm, we compute by an explicit analytical formula the Chebyshev distance \(\Delta=\inf_{c\in\mathcal{C}}\|b-c\|\), where \(\mathcal{C}\) is the set of second members of the consistent systems defined with the same matrix \(A\). We study the set \(\mathcal{C}_{b}\) of Chebyshev approximations of the second member \(b\) i.e., vectors \(c\in\mathcal{C}\) such that \(\|b-c\|=\Delta\), which is associated to the approximate solutions set \(\Lambda_{b}\) in the following sense: an element of the set \(\Lambda_{b}\) is a solution vector \(x^{*}\) of a system \(A\square_{\min}^{\max}x=c\) where \(c\in\mathcal{C}_{b}\). As main results, we describe both the structure of the set \(\Lambda_{b}\) and that of the set \(\mathcal{C}_{b}\). We then introduce a paradigm for \(\max-\min\) learning weight matrices that relates input and output data from training data. The learning error is expressed in terms of the \(L_{\infty}\) norm. We compute by an explicit formula the _minimal value_ of the learning error according to the training data. We give a method to construct weight matrices whose learning error is minimal, that we call approximate weight matrices. Finally, as an application of our results, we show how to learn approximately the rule parameters of a possibilistic rule-based system according to multiple training data. Fuzzy set theory ; Systems of fuzzy relational equations ; Learning ; Possibility theory ## 1 Introduction Fuzzy relations were introduced in Zadeh's seminal paper on Fuzzy set theory [40]. The importance of fuzzy relations was stressed by Zadeh and Desoer in [41], where they highlighted that the study of relations is equivalent to the study of systems, since a system can be viewed as relations between an input space and an output space. This perspective is emphasized in the foreword of [10], where Zadeh wrote: "human knowledge may be viewed as a collection of facts and rules, each of which may be represented as the assignment of a fuzzy relation to the unconditional or conditional possibility distribution of a variable. What this implies is that knowledge may be viewed as a system of fuzzy relational equations. In this perspective, then, inference from a body of knowledge reduces to the solution of a system of fuzzy relational equations". Thanks to Sanchez's pioneering work on solving a system of fuzzy \(\max-\min\) relational equations [30], many Artificial Intelligence (AI) applications based on systems of fuzzy relational equations have emerged [2, 9, 10, 11, 24]. Sanchez gave necessary and sufficient conditions for a system of \(\max-\min\) fuzzy relational equations to be consistent, i.e., to have solutions. In [31], he also showed that, if the system is consistent, there is a greatest solution and many minimal solutions, which led him to describe the complete set of solutions. However, addressing the inconsistency of these systems remains a difficult problem, which has often been raised [2, 10, 20, 26]. Many authors have tackled the issue of finding approximate solutions [7, 10, 13, 18, 21, 26, 35, 36, 37, 38, 39], and some numerical approaches have been presented [23, 25]. Among these works, one pioneering idea was introduced by Pedrycz in [26].
Given an inconsistent system, Pedrycz proposes to slightly modify its second member in order to obtain a consistent system. Cuninghame-Green and Cechlárová [7] and later Li and Fang [21] each proposed an algorithm to measure the minimal distance, expressed with the \(L_{\infty}\) norm, \(\Delta=\inf_{c\in\mathcal{C}}\|b-c\|\), where \(b\) is the second member of a considered inconsistent system and \(\mathcal{C}\) is the set of the second members of the consistent systems defined with the same matrix as that of the inconsistent system. This minimal distance is called the Chebyshev distance associated to the second member of the inconsistent system. In this article, the first main result of our work is an explicit analytical formula (Theorem 1) to compute, for a system whose matrix and second member are respectively denoted \(A\) and \(b\), the Chebyshev distance associated to its second member \(b\). The Chebyshev distance is denoted \(\Delta=\Delta(A,b)\) and is obtained by _elementary calculations_ involving only the components of the matrix \(A\) and those of the second member \(b\). Then, we tackle the study of the set \(\mathcal{C}_{b}\) of Chebyshev approximations of the second member of the system, where a Chebyshev approximation is a vector \(c\) such that \(\|b-c\|=\Delta\) and the system formed by the matrix \(A\) and the vector \(c\) as second member is a consistent system. Moreover, we define the approximate solutions set \(\Lambda_{b}\) of the system, and we relate \(\Lambda_{b}\) to \(\mathcal{C}_{b}\) in the following sense: an element of \(\Lambda_{b}\) is a solution vector \(x^{*}\) of a system whose matrix is \(A\) and whose second member is a Chebyshev approximation of \(b\). Motivated by Sanchez's seminal results [30], we introduce an idempotent application, denoted \(F\), see (9), to check whether the system defined by a fixed matrix and a given vector as second member is consistent. The properties of \(F\) allow us to compute the greatest element of each of the sets \(\mathcal{C}_{b}\) and \(\Lambda_{b}\), see (Proposition 5) and (Proposition 8), i.e., we compute the greatest Chebyshev approximation of \(b\) and the greatest approximate solution of the system from the components of the matrix \(A\) and those of the vector \(b\). Then, in order to give the structure of the set \(\mathcal{C}_{b}\), we study its minimal elements. For this purpose, we give a first characterization of the set \(\Lambda_{b}\) (Proposition 9), which involves a system of \(\max-\min\) inequalities. By relying on the results of [22], we give a method for constructing the set \(\mathcal{C}_{b,\min}\) of minimal Chebyshev approximations (Corollary 2) and we prove that it is non-empty and finite (Corollary 3). The complete structure of the set \(\mathcal{C}_{b}\) follows from these results, see (Theorem 3). In addition, we prove a structure theorem for the set \(\Lambda_{b}\), see (Theorem 2). All these results let us introduce a paradigm to learn approximately a weight matrix relating input and output data from training data. To our knowledge, the \(\max-\min\) learning of a weight matrix is commonly tackled by trying to adapt the classical gradient descent method to \(\max-\min\) fuzzy neural networks [3, 4, 5, 6, 8, 14, 15, 16, 17, 19, 23, 27, 28, 29, 32, 33, 42] with the aim of minimizing the learning error, which is expressed in terms of the \(L_{2}\) norm.
However, the non-differentiability of the functions \(\max\) and \(\min\) is very challenging for developing an efficient gradient descent method for \(\max-\min\) neural networks. In our learning paradigm, we choose to express the learning error in terms of the \(L_{\infty}\) norm. We give an _explicit formula for computing the minimal value (denoted by \(\mu\)) of the learning error according to the training data_, see (Definition 7) and (Corollary 5). The value \(\mu\) is computed in terms of the Chebyshev distances of the second members of systems of \(\max-\min\) fuzzy relational equations associated to the training data. We then give a method (Method 1) for constructing approximate weight matrices, i.e., _matrices whose learning error is equal to \(\mu\)_. Finally, we introduce, for a system of \(\min-\max\) fuzzy relational equations, tools analogous to those already introduced for a system of \(\max-\min\) fuzzy relational equations and we show their correspondences (Table 4). This allows us to extend our results in [1], i.e., we give a method for approximately learning the rule parameters of a possibilistic rule-based system according to multiple training data. The article is structured as follows. In (Section 2), we recall necessary and sufficient conditions for a system of \(\max-\min\) fuzzy relational equations to be consistent. We introduce the application \(F\) and we give some of its useful properties. In (Section 3), we give the explicit analytical formula for computing the Chebyshev distance associated to the second member of a system. In (Section 4), we define the set of Chebyshev approximations of the second member and compute the greatest Chebyshev approximation. In (Section 5), we describe the structure of the set of Chebyshev approximations and that of the approximate solutions set of the system. In (Section 6), we introduce our learning paradigm. In (Section 7), we show the correspondences between a system of \(\min-\max\) fuzzy relational equations and a system of \(\max-\min\) fuzzy relational equations and we present our method for approximately learning the rule parameters of a possibilistic rule-based system according to multiple training data. Finally, we conclude with some perspectives. ## 2 Background In this section, we give the necessary background for solving a system of \(\max-\min\) fuzzy relational equations. We recall Sanchez's necessary and sufficient condition for a system of \(\max-\min\) fuzzy relational equations to be consistent. We reformulate this result as a fixed point property of a certain idempotent and increasing application, which we explicitly define. We show some of its useful properties. ### Solving a system of \(\max-\min\) fuzzy relational equations We use the following notation: **Notation 1**.: \([0,1]^{n\times m}\) _denotes the set of matrices of size \((n,m)\) i.e., \(n\) rows and \(m\) columns, whose components are in \([0,1]\). In particular:_ * \([0,1]^{n\times 1}\) _denotes the set of column vectors of_ \(n\) _components,_ * \([0,1]^{1\times m}\) _denotes the set of row matrices of_ \(m\) _components._ _In the set \([0,1]^{n\times m}\), we use the order relation \(\leq\) defined by:_ \[A\leq B\quad\text{iff we have}\quad a_{ij}\leq b_{ij}\quad\text{ for all}\quad 1\leq i\leq n,1\leq j\leq m,\] _where \(A=[a_{ij}]_{1\leq i\leq n,1\leq j\leq m}\) and \(B=[b_{ij}]_{1\leq i\leq n,1\leq j\leq m}\)._ Let \(A=[a_{ij}]\in[0,1]^{n\times m}\) be a matrix of size \((n,m)\) and \(b=[b_{i}]\in[0,1]^{n\times 1}\) be a vector of \(n\) components.
The system of \(\max-\min\) fuzzy relational equations associated to \((A,b)\) is of the form: \[(S):A\square_{\min}^{\max}x=b, \tag{1}\] where \(x=[x_{j}]_{1\leq j\leq m}\in[0,1]^{m\times 1}\) is an unknown vector of \(m\) components and the operator \(\square_{\min}^{\max}\) is the matrix product that uses the t-norm \(\min\) as the product and \(\max\) as the addition. The system can also be written as: \[\max_{1\leq j\leq m}\min(a_{ij},x_{j})=b_{i},\,\forall i\in\{1,2,\ldots,n\}.\] There are two competing notation conventions for studying systems of fuzzy relational equations: they differ in whether the unknown part and the second member are column vectors or row vectors. These two conventions are equivalent and the transpose map allows us to switch from one to the other. To check if the system \((S)\) is consistent, we compute the following vector: \[e=A^{t}\square_{\rightarrow_{G}}^{\min}b, \tag{2}\] where \(A^{t}\) is the transpose of \(A\) and the matrix product \(\square_{\rightarrow_{G}}^{\min}\) uses the Gödel implication \(\rightarrow_{G}\) as the product and \(\min\) as the addition. The Gödel implication is defined by: \[x\rightarrow_{G}y=\begin{cases}1&\text{ if }x\leq y\\ y&\text{ if }x>y\end{cases}. \tag{3}\] Thanks to Sanchez's seminal work [30], we have the following equivalence: \[(S)\text{ is consistent}\Longleftrightarrow A\square_{\min}^{\max}e=b. \tag{4}\] The set of solutions of the system \((S)\) is denoted by: \[\mathcal{S}=\mathcal{S}(A,b)=\{v\in[0,1]^{m\times 1}\,\mid\,A\,\square_{\min}^{\max}v=b\}. \tag{5}\] If the system \((S)\) is consistent, the vector \(e\), see (2), is the greatest solution of the system \((S)\). Sanchez also showed in [31] that the system \((S)\) has many minimal solutions and he described its set \(\mathcal{S}\) of solutions. We begin our study with the following useful result: **Lemma 1**.: _The maps:_ \[[0,1]^{m\times 1}\rightarrow[0,1]^{n\times 1}:x\mapsto A\square_{\min}^{\max}x, \tag{6}\] \[[0,1]^{n\times 1}\rightarrow[0,1]^{m\times 1}:c\mapsto A^{t}\square_{\rightarrow_{G}}^{\min}c \tag{7}\] _are increasing with respect to the usual order relation between vectors._ Proof.: The first map is increasing because the \(\max\) and \(\min\) functions are increasing. For the second map, one can use that for a fixed \(x\in[0,1]\), the map \(y\mapsto(x\rightarrow_{G}y)\) is increasing. As a consequence, we have the following well-known result: **Lemma 2**.: _Let \(c,c^{\prime}\in[0,1]^{n\times 1}\) be such that \(c\leq c^{\prime}\). Then we have:_ \[\forall v\in[0,1]^{m\times 1},A\square_{\min}^{\max}v=c\Longrightarrow v\leq A^{t}\square_{\rightarrow_{G}}^{\min}c^{\prime}. \tag{8}\] Proof.: Let us recall that \(e=A^{t}\square_{\rightarrow_{G}}^{\min}c\) and \(e^{\prime}=A^{t}\square_{\rightarrow_{G}}^{\min}c^{\prime}\) are the potential greatest solutions of the systems \(A\square_{\min}^{\max}x=c\) and \(A\square_{\min}^{\max}x=c^{\prime}\) respectively. Then, from (7) we deduce \(e\leq e^{\prime}\). Let \(v\in[0,1]^{m\times 1}\) be such that \(A\square_{\min}^{\max}v=c\). Then the system \(A\square_{\min}^{\max}x=c\) is consistent and \(v\leq e\). By transitivity of the order relation, we get \(v\leq e^{\prime}\). We illustrate the solving of the system \((S)\) with an example: **Example 1**.: _Let:_ \[A=\begin{bmatrix}0.06&0.87&0.95\\ 0.75&0.13&0.88\\ 0.82&0.06&0.19\end{bmatrix}\text{ and }b=\begin{bmatrix}0.4\\ 0.7\\ 0.7\end{bmatrix}.\] _We have: \(A^{t}=\begin{bmatrix}0.06&0.75&0.82\\ 0.87&0.13&0.06\\ 0.95&0.88&0.19\end{bmatrix}\)._
We compute the potential greatest solution:_ \[e=A^{t}\square_{\rightarrow_{G}}^{\min}b=\begin{bmatrix}\min(1.0,0.7,0.7)\\ \min(0.4,1.0,1.0)\\ \min(0.4,0.7,1.0)\end{bmatrix}=\begin{bmatrix}0.7\\ 0.4\\ 0.4\end{bmatrix}.\] _The system \(A\square_{\min}^{\max}x=b\) is consistent because:_ \[A\square_{\min}^{\max}e=\begin{bmatrix}0.4\\ 0.7\\ 0.7\end{bmatrix}=b.\] ### Reformulation of Sanchez's condition as a fixed point property For the system \((S)\), we introduce the following application: \[F:[0,1]^{n\times 1}\to[0,1]^{n\times 1}:c\mapsto F(c)=A\,\square_{\min}^{\max}(A^{t}\,\square_{\rightarrow_{G}}^{\min}c). \tag{9}\] The application \(F\) allows us to check if a system of \(\max-\min\) fuzzy relational equations is consistent: **Proposition 1**.: _For any vector \(c\in[0,1]^{n\times 1}\) the following conditions are equivalent:_ 1. \(F(c)=c\)_,_ 2. _the system_ \(A\square_{\min}^{\max}x=c\) _is consistent._ Proof.: Reformulation of Sanchez's result: by (4), the system \(A\square_{\min}^{\max}x=c\) is consistent iff \(A\square_{\min}^{\max}(A^{t}\square_{\rightarrow_{G}}^{\min}c)=c\), i.e., iff \(F(c)=c\). The idempotence, monotonicity and right-continuity properties of the application \(F\) justify its introduction: **Proposition 2**.: 1. \(\forall c\in[0,1]^{n\times 1}\)_,_ \(F(c)\leq c\)_._ 2. \(F\) _is idempotent i.e.,_ \(\forall c\in[0,1]^{n\times 1},F(F(c))=F(c)\)_._ 3. \(F\) _is increasing and right-continuous._ The application \(F\) being right-continuous at a point \(c\in[0,1]^{n\times 1}\) means: for any sequence \((c^{(k)})\) in \([0,1]^{n\times 1}\) such that \((c^{(k)})\) converges to \(c\) when \(k\to\infty\) and verifying \(\forall k,c^{(k)}\geq c\), we have: \[F(c^{(k)})\to F(c)\text{ when }k\to\infty.\] Proof.: 1. Let \(i\in\{1,2,\ldots,n\}\); we denote by \(F(c)_{i}\) (resp. \(c_{i}\)) the \(i\)-th component of the vector \(F(c)\) (resp. \(c\)) and we must prove \(F(c)_{i}\leq c_{i}\). We have: \[F(c)_{i} =\max_{1\leq j\leq m}\min[a_{ij},\min_{1\leq k\leq n}a_{kj}\to_{G}c_{k}]\] \[\leq\max_{1\leq j\leq m}\min[a_{ij},a_{ij}\to_{G}c_{i}]\] \[=\max_{1\leq j\leq m}\min[a_{ij},c_{i}]\quad(\text{because}\,\min(x,x\to_{G}y)=\min(x,y))\] \[\leq c_{i}.\] 2. Consider the system \(A\square_{\min}^{\max}x=F(c)\). By definition of the application \(F\), we have \(F(c)=A\,\square_{\min}^{\max}e\) with \(e=A^{t}\square_{\rightarrow_{G}}^{\min}c\). By Sanchez's result, we have: \[e\leq A^{t}\square_{\rightarrow_{G}}^{\min}F(c).\] From (6) we get: \[F(c)=A\square_{\min}^{\max}e\leq A\square_{\min}^{\max}(A^{t}\square_{\rightarrow_{G}}^{\min}F(c))=F(F(c)).\] But from the first statement of (Proposition 2), we know that \(F(F(c))\leq F(c)\). Therefore, we have \(F(F(c))=F(c)\). 3. This follows from the fact that for a fixed \(x\in[0,1]\), the map \(y\mapsto(x\to_{G}y)\) is right-continuous. We illustrate the use of the application \(F\): **Example 2**.: _(continued) Based on the computations in the previous example, we check that \(F(b)=b\). Let \(c=\begin{bmatrix}0.36\\ 0.57\\ 0.24\end{bmatrix}\). The potential greatest solution of the system \(A\square_{\min}^{\max}x=c\) is \(\begin{bmatrix}0.24\\ 0.36\\ 0.36\end{bmatrix}\). We have \(F(c)=\begin{bmatrix}0.36\\ 0.36\\ 0.24\end{bmatrix}\neq c\), so the system \(A\square_{\min}^{\max}x=c\) is not consistent._
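To make the consistency test concrete, here is a minimal NumPy sketch (the helper names are ours, not from the paper) implementing the products \(\square_{\min}^{\max}\) and \(\square_{\rightarrow_{G}}^{\min}\) and the application \(F\); it reproduces (Example 1) and (Example 2):

```python
import numpy as np

def max_min(A, x):
    # (A box^max_min x)_i = max_j min(a_ij, x_j)
    return np.max(np.minimum(A, x[None, :]), axis=1)

def godel(x, y):
    # Goedel implication x ->_G y, applied componentwise, see (3)
    return np.where(x <= y, 1.0, y)

def potential_greatest_solution(A, b):
    # e = A^t box^min_{->_G} b, i.e. e_j = min_i (a_ij ->_G b_i), see (2)
    return np.min(godel(A, b[:, None]), axis=0)

def F(A, c):
    # F(c) = A box^max_min (A^t box^min_{->_G} c); F(c) = c iff the system is consistent
    return max_min(A, potential_greatest_solution(A, c))

A = np.array([[0.06, 0.87, 0.95],
              [0.75, 0.13, 0.88],
              [0.82, 0.06, 0.19]])
b = np.array([0.4, 0.7, 0.7])
print(potential_greatest_solution(A, b))  # [0.7 0.4 0.4], as in Example 1
print(np.allclose(F(A, b), b))            # True: the system is consistent

c = np.array([0.36, 0.57, 0.24])
print(F(A, c))                            # [0.36 0.36 0.24] != c: inconsistent (Example 2)
```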
## 3 Chebyshev distance associated to the second member of the system \((S)\) In this section, we give an analytical method for computing the Chebyshev distance associated to the second member of the system \((S)\), see (1). For this purpose, we begin by giving some notations and studying two inequalities involved in the computation of this Chebyshev distance. We relate the fundamental result (Theorem 1 of [7]) to the properties of the studied inequalities. This allows us to give an explicit formula for computing the Chebyshev distance associated to the second member \(b\) of the system \((S)\). ### Notations For \(x,y,z,u,\delta\in[0,1]\), we use the following notations: * \(x^{+}=\max(x,0)\), * \(\overline{z}(\delta)=\min(z+\delta,1)\), * \(\underline{z}(\delta)=\max(z-\delta,0)=(z-\delta)^{+}\). We remark that we have the following equivalence in \([0,1]\): \[\mid x-y\mid\leq\delta\Longleftrightarrow\underline{x}(\delta)\leq y\leq\overline{x}(\delta). \tag{10}\] For our work, to the second member \(b=[b_{i}]_{1\leq i\leq n}\) of the system \((S)\) and a number \(\delta\in[0,1]\), we associate two vectors: \[\underline{b}(\delta)=[(b_{i}-\delta)^{+}]_{1\leq i\leq n}\quad\text{and}\quad\overline{b}(\delta)=[\min(b_{i}+\delta,1)]_{1\leq i\leq n}. \tag{11}\] The vectors \(\underline{b}(\delta)\) and \(\overline{b}(\delta)\) were already introduced, e.g., in [7] (with other notations) and in [21]. Then, from (10), we deduce for any \(c=[c_{i}]_{1\leq i\leq n}\in[0,1]^{n\times 1}\): \[\|b-c\|\leq\delta\Longleftrightarrow\underline{b}(\delta)\leq c\leq\overline{b}(\delta). \tag{12}\] where \(\|b-c\|=\max_{1\leq i\leq n}\mid b_{i}-c_{i}\mid\). ### Preliminaries Let \(x,y,z,u\in[0,1]\) be fixed. We study how to obtain the smallest value of \(\delta\in[0,1]\) so that the following inequality is true: \[(x-\delta)^{+}\leq y.\] **Proposition 3**.: _For any \(\delta\in[0,1]\), we have:_ \[(x-\delta)^{+}\leq y\Longleftrightarrow(x-y)^{+}\leq\delta. \tag{13}\] Proof.: For any \(\delta\in[0,1]\), we have: \[(x-\delta)^{+}\leq y \Longleftrightarrow\max(x-\delta,0)-y\leq 0\] \[\Longleftrightarrow\max(x-y-\delta,-y)\leq 0\] \[\Longleftrightarrow x-y-\delta\leq 0\] \[\Longleftrightarrow x-y\leq\delta\] \[\Longleftrightarrow(x-y)^{+}\leq\delta.\] We deduce from (Proposition 3) that we have: \[(x-y)^{+}=\min\{\delta\in[0,1]\mid(x-\delta)^{+}\leq y\}. \tag{14}\] Let us now study the following inequality, which involves the Gödel implication \(\rightarrow_{G}\), see (3): \[\underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta), \tag{15}\] where: \(y\rightarrow_{G}\overline{z}(\delta)=\begin{cases}1&\text{if }y-z\leq\delta\\ z+\delta&\text{if }y-z>\delta\end{cases}.\) Let: \[\sigma_{G}(x,y,z)=\min(\frac{(x-z)^{+}}{2},(y-z)^{+}). \tag{16}\] Then: **Proposition 4**.: _For any \(\delta\in[0,1]\), we have:_ \[\underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta)\Longleftrightarrow\sigma_{G}(x,y,z)\leq\delta. \tag{17}\] Proof.: \(\Longrightarrow\) Let us assume \(\underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta)\) and prove \(\sigma_{G}(x,y,z)\leq\delta\). We remark that: * If \(y\leq z\) or \(x\leq z\), then \(\sigma_{G}(x,y,z)=0\leq\delta\). It remains for us to study the case where \(y>z\) and \(x>z\). * If \(y-z\leq\delta\), then \(\sigma_{G}(x,y,z)=\min(\frac{x-z}{2},y-z)\leq y-z\leq\delta\). It remains for us to study the case where \(y-z>\delta\). We have: \[\underline{x}(\delta)-(y\rightarrow_{G}\overline{z}(\delta)) =\max(x-\delta,0)-z-\delta\] \[=\max(x-\delta-z-\delta,-z-\delta)\] \[=\max(x-z-2\delta,-z-\delta)\leq 0,\] so \(x-z-2\delta\leq 0\) and \(\sigma_{G}(x,y,z)=\min(\frac{x-z}{2},y-z)\leq\frac{x-z}{2}\leq\delta\). \(\Longleftarrow\) Let us assume \(\sigma_{G}(x,y,z)\leq\delta\) and prove \(\underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta)\).
If \((x-\delta)^{+}=0\) or \(y\rightarrow_{G}\overline{z}(\delta)=1\), we trivially get the inequality \((x-\delta)^{+}\leq y\rightarrow_{G}\overline{z}(\delta)\). It remains for us to study the case where \((x-\delta)^{+}=x-\delta>0\) and \(y\rightarrow_{G}\overline{z}(\delta)<1\). From the inequality \(y\rightarrow_{G}\overline{z}(\delta)<1\), we deduce: \[y-z>\delta\text{ and }y\rightarrow_{G}\overline{z}(\delta)=z+\delta.\] As \(\sigma_{G}(x,y,z)=\min(\frac{(x-z)^{+}}{2},(y-z)^{+})=\min(\frac{(x-z)^{+}}{2},y-z)\leq\delta\), we obtain: \[\sigma_{G}(x,y,z)=\frac{(x-z)^{+}}{2}\leq\delta.\] This last inequality is equivalent to: \[\max(x-z-2\delta,-2\delta)\leq 0.\] So \(x-z-2\delta\leq 0\), which implies: \[(x-\delta)^{+}=x-\delta\leq z+\delta=y\rightarrow_{G}\overline{z}(\delta).\] We deduce from (Proposition 4) that we have: \[\sigma_{G}(x,y,z)=\min\{\delta\in[0,1]\ |\ \underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta)\}. \tag{18}\] We illustrate this result: **Example 3**.: _Let \(x=0.56,y=0.87\) and \(z=0.36\). We want to obtain the smallest value of \(\delta\in[0,1]\) so that \(\underline{x}(\delta)\leq y\rightarrow_{G}\overline{z}(\delta)\) is true. We have \(y\rightarrow_{G}z=z\) and \(x>z\)._ \[\delta =\sigma_{G}(x,y,z)\] \[=\min(\frac{(x-z)^{+}}{2},(y-z)^{+})\] \[=\min(\frac{(0.56-0.36)^{+}}{2},(0.87-0.36)^{+})\] \[=\min(\frac{0.20}{2},0.51)\] \[=0.10.\] _We have \(\underline{x}(\delta)=x-0.10=0.46\) and \(\overline{z}(\delta)=z+0.10=0.46\). Therefore:_ \[\underline{x}(\delta)=y\rightarrow_{G}\overline{z}(\delta).\] ### Analytical formula for computing the Chebyshev distance associated to the second member of the system \((S)\) To the matrix \(A\) and the vector \(b\) of the system \((S)\), let us associate the set of vectors \(c=[c_{i}]\in[0,1]^{n\times 1}\) such that the system \(A\square_{\min}^{\max}x=c\) is consistent: \[\mathcal{C}=\{c=[c_{i}]\in[0,1]^{n\times 1}\ |\ A\square_{\min}^{\max}x=c\text{ is consistent}\}. \tag{19}\] This set allows us to define the Chebyshev distance associated to the second member \(b\) of the system \((S)\). **Definition 1**.: _The Chebyshev distance associated to the second member \(b\) of the system \((S):A\square_{\min}^{\max}x=b\) is:_ \[\Delta=\Delta(A,b)=\inf_{c\in\mathcal{C}}\|b-c\| \tag{20}\] where: \[\|b-c\|=\max_{1\leq i\leq n}\ |\ b_{i}-c_{i}\ |\.\] We have the following fundamental result, already proven in [7]: \[\Delta=\min\{\delta\in[0,1]\ |\ \underline{b}(\delta)\leq F(\overline{b}(\delta))\}. \tag{21}\] In the following, using only (21), we prove that the Chebyshev distance \(\Delta\) associated to the second member \(b\) of the system \((S)\) is given by the following formula: **Theorem 1**.: \[\Delta=\max_{1\leq i\leq n}\;\delta_{i}\] (22) _where for \(i=1,2,\ldots,n\):_ \[\delta_{i}=\min_{1\leq j\leq m}\;\max[(b_{i}-a_{ij})^{+},\max_{1\leq k\leq n}\;\sigma_{G}\left(b_{i},a_{kj},b_{k}\right)]. \tag{23}\] To prove this formula, let us first introduce some notations and a lemma: **Notation 2**.: For \(1\leq i,k\leq n\) and \(1\leq j\leq m\) let: * \(K_{i}=\{\delta\in[0,1]\mid\underline{b}(\delta)_{i}\leq F(\overline{b}(\delta))_{i}\}\), where \(\underline{b}(\delta)_{i}\) (resp. \(F(\overline{b}(\delta))_{i}\)) is the \(i\)-th component of the vector \(\underline{b}(\delta)\) (resp.
\(F(\overline{b}(\delta))\)), * \(\beta_{j}=\min_{1\leq k\leq n}\;a_{kj}\rightarrow_{G}\overline{b}(\delta)_{k}\) where \(\overline{b}(\delta)_{k}\) is the \(k\)-th component of the vector \(\overline{b}(\delta)\), * \(D^{A}_{ij}=\{\delta\in[0,1]\mid\underline{b}(\delta)_{i}\leq a_{ij}\}\), * \(D^{\beta}_{ij}=\{\delta\in[0,1]\mid\underline{b}(\delta)_{i}\leq\beta_{j}\}\), * \(D_{ijk}=\{\delta\in[0,1]\mid\underline{b}(\delta)_{i}\leq a_{kj}\rightarrow_{G}\overline{b}(\delta)_{k}\}\). **Lemma 3**.: _We have:_ \[K_{i}=\bigcup_{1\leq j\leq m}\;D^{A}_{ij}\;\cap\;D^{\beta}_{ij}\text{ and }D^{\beta}_{ij}=\bigcap_{1\leq k\leq n}\;D_{ijk}.\] Proof.: By definition of the application \(F\), we have: \[F(\overline{b}(\delta))_{i}=\max_{1\leq j\leq m}\;\min(a_{ij},\beta_{j}).\] This directly implies that we have: \[K_{i}=\bigcup_{1\leq j\leq m}\;D^{A}_{ij}\cap D^{\beta}_{ij}.\] As \(\beta_{j}=\min_{1\leq k\leq n}\;a_{kj}\rightarrow_{G}\overline{b}(\delta)_{k}\), we also have: \[D^{\beta}_{ij}=\bigcap_{1\leq k\leq n}\;D_{ijk}.\] The proof of (Theorem 1) is given in the following. Proof.: For any \(i=1,2,\ldots,n\) and \(j=1,2,\ldots,m\), we deduce from (Proposition 3) and (Proposition 4) that for any \(\delta\in[0,1]\), we have: \[\delta\in D^{A}_{ij}\iff\delta\geq(b_{i}-a_{ij})^{+}\text{ and }\delta\in D^{\beta}_{ij}\iff\delta\geq\max_{1\leq k\leq n}\;\sigma_{G}\left(b_{i},a_{kj},b_{k}\right).\] Using (Lemma 3), we get: \[\delta\in K_{i}\iff\exists\,j\in\{1,2,\ldots,m\}\text{ such that }\delta\geq\max[(b_{i}-a_{ij})^{+},\max_{1\leq k\leq n}\;\sigma_{G}\left(b_{i},a_{kj},b_{k}\right)].\] So, we obtain: \[\delta\in K_{i}\iff\delta\geq\min_{1\leq j\leq m}\;\max[(b_{i}-a_{ij})^{+},\max_{1\leq k\leq n}\;\sigma_{G}\left(b_{i},a_{kj},b_{k}\right)].\] As by definition \(\delta\in K_{i}\iff\underline{b}(\delta)_{i}\leq F(\overline{b}(\delta))_{i}\) and \(\Delta=\min\{\delta\in[0,1]\mid\underline{b}(\delta)\leq F(\overline{b}(\delta))\}\), see (21), we get: \[\Delta=\max_{1\leq i\leq n}\;\min_{1\leq j\leq m}\;\max[(b_{i}-a_{ij})^{+},\max_{1\leq k\leq n}\;\sigma_{G}\left(b_{i},a_{kj},b_{k}\right)].\] The following example illustrates the computation of the Chebyshev distance associated to the second member of the system \((S)\): **Example 4**.: _Let:_ \[A=\begin{bmatrix}0.03&0.38&0.26\\ 0.98&0.10&0.03\\ 0.77&0.15&0.85\end{bmatrix}\text{ and }b=\begin{bmatrix}0.54\\ 0.13\\ 0.87\end{bmatrix}. \tag{24}\] _We apply (Theorem 1)._
We compute:_ \[\delta_{1}=\min_{1\leq j\leq 3}\max[(b_{1}-a_{1j})^{+},\max_{1\leq k\leq 3}\ \sigma_{G}\left(b_{1},a_{kj},b_{k}\right)].\] _We have:_ \[[(b_{1}-a_{1j})^{+}]_{1\leq j\leq 3}=\begin{bmatrix}0.54-0.03\\ 0.54-0.38\\ 0.54-0.26\end{bmatrix}=\begin{bmatrix}0.51\\ 0.16\\ 0.28\end{bmatrix},\] \[[\sigma_{G}\left(b_{1},a_{kj},b_{k}\right)]_{1\leq k\leq 3,1\leq j\leq 3}= \begin{bmatrix}\sigma_{G}\left(b_{1},a_{11},b_{1}\right)&\sigma_{G}\left(b_{1},a_{12},b_{1}\right)&\sigma_{G}\left(b_{1},a_{13},b_{1}\right)\\ \sigma_{G}\left(b_{1},a_{21},b_{2}\right)&\sigma_{G}\left(b_{1},a_{22},b_{2}\right)&\sigma_{G}\left(b_{1},a_{23},b_{2}\right)\\ \sigma_{G}\left(b_{1},a_{31},b_{3}\right)&\sigma_{G}\left(b_{1},a_{32},b_{3}\right)&\sigma_{G}\left(b_{1},a_{33},b_{3}\right)\end{bmatrix}\] \[= \begin{bmatrix}\sigma_{G}\left(0.54,0.03,0.54\right)&\sigma_{G}\left(0.54,0.38,0.54\right)&\sigma_{G}\left(0.54,0.26,0.54\right)\\ \sigma_{G}\left(0.54,0.98,0.13\right)&\sigma_{G}\left(0.54,0.10,0.13\right)&\sigma_{G}\left(0.54,0.03,0.13\right)\\ \sigma_{G}\left(0.54,0.77,0.87\right)&\sigma_{G}\left(0.54,0.15,0.87\right)&\sigma_{G}\left(0.54,0.85,0.87\right)\end{bmatrix}\] \[= \begin{bmatrix}0.0&0.0&0.0\\ 0.205&0.0&0.0\\ 0.0&0.0&0.0\end{bmatrix}.\] _Therefore:_ \[\delta_{1}=\min(\max(0.51,0.205),\max(0.16,0),\max(0.28,0))=0.16.\] _Similarly, we obtain: \(\delta_{2}=0\) and \(\delta_{3}=0.02\). The Chebyshev distance associated to \(b\) is \(\Delta=\max(\delta_{1},\delta_{2},\delta_{3})=0.16\)._

## 4 Chebyshev approximations of the second member of the system \((S)\)

In this section, we study the Chebyshev approximations of the second member \(b\) of the system \((S)\), which are vectors \(c\in[0,1]^{n\times 1}\) such that \(\|b-c\|=\Delta\) and the system \(A\square_{\min}^{\max}x=c\) is consistent. We show that there is a greatest Chebyshev approximation, which we compute. We give the definition of the set of minimal Chebyshev approximations, which will be useful to determine the structure of the set of Chebyshev approximations. We define the set of Chebyshev approximations of \(b\): **Definition 2**.: _The set of Chebyshev approximations of \(b\) is defined using the set \(\mathcal{C}\), see (19), and the Chebyshev distance associated to \(b\) (Definition 1):_ \[\mathcal{C}_{b}=\{c\in\mathcal{C}\ |\ \|b-c\|=\Delta(A,b)\}. \tag{25}\] In the following, to prove that the set \(\mathcal{C}_{b}\) is non-empty, we show that the greatest Chebyshev approximation exists, according to the usual order relation between vectors of \([0,1]^{n\times 1}\). **Proposition 5**.: 1. \(F(\overline{b}(\Delta))\in\mathcal{C}_{b}\)_,_ 2. \(\forall c\in\mathcal{C}_{b},\,c\leq F(\overline{b}(\Delta)).\)__ _So, \(F(\overline{b}(\Delta))\) is the greatest Chebyshev approximation of \(b\)._ Proof.: We deduce from (Proposition 1) and (Proposition 2) that \(F(\overline{b}(\Delta))\in\mathcal{C}\). From (21) and (Proposition 2) we deduce: \[\underline{b}(\Delta)\leq F(\overline{b}(\Delta))\leq\overline{b}(\Delta).\] From relation (10), we deduce \(\|F(\overline{b}(\Delta))-b\|\leq\Delta\). But \(\Delta=\inf_{c\in\mathcal{C}}\|b-c\|\) (Definition 1), so \(\|F(\overline{b}(\Delta))-b\|\geq\Delta\). Finally, \(\|F(\overline{b}(\Delta))-b\|=\Delta\) i.e., \(F(\overline{b}(\Delta))\in\mathcal{C}_{b}\). Let \(c\) be a vector in \(\mathcal{C}_{b}\). As \(\|b-c\|=\Delta\), we deduce \(c\leq\overline{b}(\Delta)\). Using that \(F\) is increasing (Proposition 2), we have \(F(c)\leq F(\overline{b}(\Delta))\).
But \(F(c)=c\) (Proposition 1), so \(c\leq F(\overline{b}(\Delta))\). As a consequence of the first statement in (Proposition 5), we have: **Corollary 1**.: \[\Delta=\min_{c\in\mathcal{C}}\|b-c\|.\] \[\Delta=0\Longleftrightarrow\text{ the system $(S)$ is consistent.}\] Therefore, \(\Delta=0\) is a necessary and sufficient condition for the system \((S)\) to be consistent. It is much more difficult to obtain minimal Chebyshev approximations of \(b\). In a fairly abstract way, one can prove that the set: \[\mathcal{C}_{b,\min}=\{c\in\mathcal{C}_{b}\mid c\text{ minimal in }\mathcal{C}_{b}\} \tag{26}\] is non-empty and finite. In fact, we will show in the next section how to construct elements of \(\mathcal{C}_{b,\min}\) and prove that this set is finite. We illustrate the computation of the greatest Chebyshev approximation of the second member of the system \((S)\): **Example 5**.: _(continued) We continue with the matrix \(A\) and the vector \(b\), see (24), used in (Example 4). We remind that the Chebyshev distance associated to the second member \(b\) of the system \(A\square_{\min}^{\max}x=b\) is \(\Delta=0.16\)._ _From \(b=\begin{bmatrix}0.54\\ 0.13\\ 0.87\end{bmatrix}\), we compute \(\overline{b}(\Delta)=\begin{bmatrix}0.70\\ 0.29\\ 1.00\end{bmatrix}\)._ _Then, the greatest Chebyshev approximation of \(b\) is:_ \[F(\overline{b}(\Delta))=A\,\square_{\min}^{\max}(A^{t}\,\square_{\to_{G}}^{\min}\overline{b}(\Delta))=\begin{bmatrix}0.38\\ 0.29\\ 0.85\end{bmatrix}.\] _We check that the distance between the greatest Chebyshev approximation \(\begin{bmatrix}0.38\\ 0.29\\ 0.85\end{bmatrix}\) and \(b=\begin{bmatrix}0.54\\ 0.13\\ 0.87\end{bmatrix}\) is equal to \(\Delta\)._

## 5 Relating the approximate solutions set to the Chebyshev approximations set

In this section, we study the approximate solutions set of the system \((S):A\square_{\min}^{\max}x=b\), which we denote by \(\Lambda_{b}\), and the set \(\mathcal{C}_{b}\) of Chebyshev approximations of the second member \(b\), see (25). By definition, an approximate solution \(x^{*}\in\Lambda_{b}\) is a column vector such that the vector \(c=A\square_{\min}^{\max}x^{*}\) is a Chebyshev approximation of \(b\) i.e., \(c\in\mathcal{C}_{b}\). Moreover, for all \(c\in\mathcal{C}_{b}\), the solutions of the system \(A\square_{\min}^{\max}x=c\) belong to \(\Lambda_{b}\) i.e., they are approximate solutions. This section is structured as follows. We begin by defining the set \(\Lambda_{b}\) and we relate it to the set \(\mathcal{C}_{b}\) (Subsection 5.1). We show that the set \(\Lambda_{b}\) is non-empty (Proposition 7) and has a greatest element (Proposition 8). In (Subsection 5.2), we then give a characterization of \(\Lambda_{b}\) (Proposition 9) and we describe the structure of \(\Lambda_{b}\) in (Theorem 2). These two results allow us to study the minimal elements of \(\mathcal{C}_{b}\) and \(\Lambda_{b}\) (Subsection 5.3). Using the results and the method of [22] to construct minimal elements of systems of \(\max-\min\) inequalities, we show that the set of minimal Chebyshev approximations \(\mathcal{C}_{b,\min}\) is non-empty and finite. We also give a finite set of minimal approximate solutions denoted \(\Lambda_{b,\min}\) associated to \(\mathcal{C}_{b,\min}\) by the following equality: \(\mathcal{C}_{b,\min}=\{A\square_{\min}^{\max}x\mid x\in\Lambda_{b,\min}\}\). Finally, we describe the structure of the set \(\mathcal{C}_{b}\) of Chebyshev approximations of \(b\) (Theorem 3).
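Before studying these sets, the quantities introduced so far can be made concrete numerically. The following sketch (ours, with hypothetical helper names, not part of the original development) transcribes \(\sigma_{G}\), the formula (22)-(23) of (Theorem 1) and the map \(F\), and reproduces the values \(\Delta=0.16\) and \(F(\overline{b}(\Delta))=(0.38,0.29,0.85)^{t}\) of (Example 4) and (Example 5):

```python
# A minimal numerical sketch: direct transcription of sigma_G, of (22)-(23),
# and of the greatest Chebyshev approximation F(b_up(Delta)).

def pos(t):                          # t^+ = max(t, 0)
    return max(t, 0.0)

def sigma_G(x, y, z):                # min((x - z)^+ / 2, (y - z)^+)
    return min(pos(x - z) / 2.0, pos(y - z))

def godel_imp(a, b):                 # Goedel implication a ->_G b
    return 1.0 if a <= b else b

def maxmin_prod(A, x):               # (A box_min^max x)_i = max_j min(a_ij, x_j)
    return [max(min(aij, xj) for aij, xj in zip(row, x)) for row in A]

def chebyshev_distance(A, b):        # Theorem 1: max_i min_j max[(b_i-a_ij)^+, max_k sigma_G(b_i,a_kj,b_k)]
    n, m = len(A), len(A[0])
    return max(min(max(pos(b[i] - A[i][j]),
                       max(sigma_G(b[i], A[k][j], b[k]) for k in range(n)))
                   for j in range(m))
               for i in range(n))

def greatest_cheb_approx(A, b, delta):   # F(b_up(Delta)) = A box (A^t box_{->G} b_up(Delta))
    n, m = len(A), len(A[0])
    b_up = [min(bi + delta, 1.0) for bi in b]
    eta = [min(godel_imp(A[i][j], b_up[i]) for i in range(n)) for j in range(m)]
    return maxmin_prod(A, eta), eta

A = [[0.03, 0.38, 0.26], [0.98, 0.10, 0.03], [0.77, 0.15, 0.85]]
b = [0.54, 0.13, 0.87]
delta = chebyshev_distance(A, b)         # approx. 0.16, as in (Example 4)
approx, eta = greatest_cheb_approx(A, b, delta)
print(delta, approx, eta)                # approx. 0.16, [0.38, 0.29, 0.85], [0.29, 1.0, 1.0]
```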
### Approximate solutions set \(\Lambda_{b}\) We introduce a new notation and a new application: **Notation 3**.: \(\Lambda=[0,1]^{m\times 1}\)_._ **Proposition 6**.: \[\theta:\Lambda\to\mathcal{C}:x\mapsto A\square_{\min}^{\max}x\] (27) _where \(\mathcal{C}\) is defined in ( 19). The application \(\theta\) is a surjective and increasing map from \(\Lambda\) onto \(\mathcal{C}\)._ Proof.: By (Lemma 1), we know that the map \(\theta\) is increasing. As any \(c\in\mathcal{C}\) gives rise to a consistent system \(A\square_{\min}^{\max}x=c\), the map \(\theta\) is surjective. We remark that: * We have \(\mathcal{C}=\{\theta(x)\mid x\in\Lambda\}\). * For any \(u\in[0,1]^{n\times 1}\) we have: \[F(u)=\theta(e)\text{ where }e=A^{t}\square_{\to_{G}}^{\min}F(u).\] (28) This is an equivalent reformulation of the idempotence property of the application \(F\), see (Proposition 2). In the following, we introduce the subset \(\Lambda_{b}\subseteq\Lambda\) which is the reciprocal image of the set \(\mathcal{C}_{b}\) by the map \(\theta\), i.e.: **Definition 3**.: _The approximate solutions set of the system \((S)\) is:_ \[\Lambda_{b}=\theta^{-1}(\mathcal{C}_{b})=\{x\in\Lambda\mid\theta(x)\in \mathcal{C}_{b}\}. \tag{29}\] As \(\theta:\Lambda\to\mathcal{C}\) is a _surjective_ map, we have: \[\mathcal{C}_{b}=\{\theta(x)\mid x\in\Lambda_{b}\}. \tag{30}\] We define: **Definition 4**.: _A vector \(x\in\Lambda_{b}\) is called an approximate solution of the system \((S)\). By definition of the set \(\Lambda_{b}\), we have for all \(x\in\Lambda\):_ \[x\in\Lambda_{b}\Longleftrightarrow\|A\square_{\min}^{\max}x-b\|=\Delta(A,b). \tag{31}\] _where \(\Delta(A,b)\) is the Chebyshev distance associated to the second member \(b\) of the system \((S)\), see (Theorem 1)._ (see other definitions of approximate solutions using another choice of norms in [36, 37]). We have: **Proposition 7**.: _The approximate solutions set \(\Lambda_{b}\) is non-empty._ Proof.: As we know by (Proposition 5) that the set \(\mathcal{C}_{b}\) is non-empty, we conclude by (30) that the set \(\Lambda_{b}\) is also non-empty. In fact, we have a particular (and important) element in \(\Lambda_{b}\): **Notation 4**.: \(\eta:=A^{t}\square_{\to_{G}}^{\min}F(\overline{b}(\Delta))\)_._ **Proposition 8**.: _The column vector \(\eta\) satisfies the equality \(\theta(\eta)=F(\overline{b}(\Delta))\), therefore \(\eta\in\Lambda_{b}\) and \(\eta\) is the greatest approximate solution, i.e., the greatest element of \(\Lambda_{b}\)._ Proof.: The equality \(\theta(\eta)=F(\overline{b}(\Delta))\) follows from the idempotence property of the application \(F\) (Proposition 2). As by (Proposition 5), \(F(\overline{b}(\Delta))\in\mathcal{C}_{b}\), we obtain that \(\eta\in\Lambda_{b}\), see (Definition 3). Let us show \(x\in\Lambda_{b}\Longrightarrow x\leq\eta\). Set \(c=\theta(x)\) and \(c^{\prime}=F(\overline{b}(\Delta))\). As \(c=\theta(x)\in\mathcal{C}_{b}\), we have \(c\leq F(\overline{b}(\Delta))=c^{\prime}\) (Proposition 5). We apply (Lemma 2) to obtain \(x\leq A^{t}\square_{\to_{G}}^{\min}F(\overline{b}(\Delta))=\eta\). In what follows, we shall look for a finite non-empty set denoted \(\Lambda_{b,\min}\) of minimal approximate solutions, which satisfies: \[\Lambda_{b,\min}\subseteq\Lambda_{b}\text{ and }\mathcal{C}_{b,\min}=\{ \theta(x)\mid x\in\Lambda_{b,\min}\}. 
\tag{32}\] The existence of such a set \(\Lambda_{b,\min}\), which implies that the set \(\mathcal{C}_{b,\min}\) is also non-empty and finite, will be deduced from a characterization (Proposition 9) of the set \(\Lambda_{b}\) and a sharp result of [22] on the solving of a system of inequalities. ### Characterizing the approximate solutions set \(\Lambda_{b}\) We give the following first characterization of \(\Lambda_{b}\): **Proposition 9**.: _For any \(x\in\Lambda\), we have:_ \[x\text{ is an approximate solution i.e., }x\in\Lambda_{b}\Longleftrightarrow\underline{b}( \Delta)\leq\theta(x)\text{ and }x\leq\eta. \tag{33}\] Proof.: \(\implies\) \(\bullet\) We know from (Proposition 8) that \(x\in\Lambda_{b}\Longrightarrow x\leq\eta\). \(\bullet\) Let us show \(x\in\Lambda_{b}\Longrightarrow\underline{b}(\Delta)\leq\theta(x)\). If \(x\in\Lambda_{b}\), then \(\|b-\theta(x)\|=\Delta\) which implies that for any \(i\in\{1,2,\ldots,n\}\), \(b_{i}-\theta(x)_{i}\leq\Delta\) rewritten as \(b_{i}-\Delta\leq\theta(x)_{i}\). As \(\theta(x)_{i}\geq 0\), we deduce that: \[\forall i\in\{1,2,\ldots,n\},\left(b_{i}-\Delta\right)^{+}\leq\theta(x)_{i}.\] \(\Longleftarrow\) We suppose \(\underline{b}(\Delta)\leq\theta(x)\) and \(x\leq\eta\) and we must prove \(\|b-\theta(x)\|=\Delta\). As \(x\leq\eta\) and \(\theta\) is increasing, we have \(\theta(x)\leq\theta(\eta)=F(\overline{b}(\Delta))\). As \(F(\overline{b}(\Delta))\in\mathcal{C}_{b}\) (Proposition 5), then, for any \(i\in\{1,2,\ldots,n\}\), we have: \[-\Delta\leq b_{i}-\theta(\eta)_{i}\leq b_{i}-\theta(x)_{i}.\] On the other hand, \(\underline{b}(\Delta)\leq\theta(x)\) implies that for any \(i\in\{1,2,\ldots,n\}\): \[b_{i}-\theta(x)_{i}\leq\Delta.\] In conclusion, we have for \(i\in\{1,2,\ldots,n\}\): \[-\Delta\leq b_{i}-\theta(x)_{i}\leq\Delta,\] which is equivalent to \(\|b-\theta(x)\|\leq\Delta\). But, \(\theta(x)\in\mathcal{C}\), so \(\|b-\theta(x)\|\geq\Delta\). Therefore, \(\|b-\theta(x)\|=\Delta\) i.e., \(x\in\Lambda_{b}\). To introduce a sharp characterization of \(\Lambda_{b}\), which describes completely its structure, we will first give some notations and a lemma. **Notation 5**.: * _For_ \(j=1,2,\ldots,m\)_, let:_ \[H_{j}=\{i\in\{1,2,\ldots,n\}\mid a_{ij}<b_{i}-\Delta\},\] * _For any_ \(T\subseteq\{1,\ldots,m\}\)_, we denote by_ \(T^{c}\) _the complement of_ \(T\)_._ * _For_ \(T\subseteq\{1,\ldots,m\}\)_, we put:_ \[I_{T}=\bigcap_{j\in T}H_{j}\text{ and }\xi_{T}=\max_{i\in I_{T}}(b_{i}-\Delta)^{+},\] _with the convention_ \(\max_{\emptyset}=0\)_._ The map \(T\mapsto\xi_{T}\) has the following properties: **Lemma 4**.: 1. _For_ \(T=\emptyset\)_, we have_ \(\xi_{\emptyset}=\max_{i\in\{1,2,\ldots,n\}}(b_{i}-\Delta)^{+}\)_._ 2. _The map_ \(T\mapsto\xi_{T}\) _is decreasing i.e.,_ \(T\subseteq T^{\prime}\Longrightarrow\xi_{T^{\prime}}\leq\xi_{T}\)_._ 3. \(\xi_{\{1,2,\ldots,m\}}=0\)_._ Proof.: 1. This is true because \(I_{\emptyset}=\bigcap\limits_{j\in\emptyset}H_{j}=\{1,2,\ldots,n\}\). 2. If \(T\subseteq T^{\prime}\), then \(I_{T^{\prime}}\subseteq I_{T}\) and \(\xi_{T^{\prime}}=\max_{i\in I_{T^{\prime}}}(b_{i}-\Delta)^{+}\leq\xi_{T}=\max _{i\in I_{T}}(b_{i}-\Delta)^{+}\). 3. From (Proposition 8) and (Proposition 9), we deduce: \[\underline{b}(\Delta)\leq\theta(\eta).\] Let us show \(I_{\{1,2,\ldots,m\}}=\emptyset\). 
In fact, for any \(1\leq i\leq n\), the inequality \(\underline{b}(\Delta)_{i}\leq\theta(\eta)_{i}\) implies that there exists \(1\leq j\leq m\) such that: \[\underline{b}(\Delta)_{i}=(b_{i}-\Delta)^{+}\leq\min(a_{ij},\eta_{j})\leq a_{ ij}.\] Then, \(i\notin H_{j}\), so \(I_{\{1,2,\ldots,m\}}=\bigcap\limits_{j\in\{1,2,\ldots,m\}}H_{j}=\emptyset\) and by the convention \(\max_{\emptyset}=0\), we have \(\xi_{\{1,2,\ldots,m\}}=0\). The main characterization of the approximate solutions set \(\Lambda_{b}\) is: **Theorem 2**.: _For any \(x=\left[x_{j}\right]_{1\leq j\leq m}\in[0,1]^{m\times 1}\), we have:_ \[x\in\Lambda_{b}\iff\forall T\subseteq\{1,2,\ldots,m\},\quad\xi_{T}\leq\max _{j\in T^{c}}x_{j}\text{ and }x\leq\eta. \tag{34}\] For the proof of (Theorem 2), we need first to establish for any \(x=\left[x_{j}\right]_{1\leq j\leq m}\in[0,1]^{m\times 1}\): **Proposition 10**.: \[\underline{b}(\Delta)\leq\theta(x)\iff\forall i\in\{1,2,\ldots,n\},\exists j \in\{1,2,\ldots,m\},\text{ such that }i\in H_{j}{}^{c}\text{ and }(b_{i}-\Delta)^{+}\leq x_{j}.\] Proof.: Let \(i\in\{1,2,\ldots,n\}\). We have: \[(b_{i}-\Delta)^{+}\leq\theta(x)_{i} \iff\exists j\in\{1,2,\ldots,m\}\text{ such that }(b_{i}-\Delta)^{+}\leq\min(a_{ij},x_{j})\] \[\iff\exists j\in\{1,2,\ldots,m\}\text{ such that }(b_{i}-\Delta)^{+}\leq a_{ij}\text{ and }(b_{i}-\Delta)^{+}\leq x_{j}\] \[\iff\exists j\in\{1,2,\ldots,m\},\text{ such that }i\in H_{j}{}^{c}\text{ and }(b_{i}-\Delta)^{+}\leq x_{j}.\] The proof of (Theorem 2) is given in the following. Proof.: \(\Longrightarrow\) We know by (Proposition 9) that \(x\leq\eta\). Let \(T\subseteq\{1,2,\ldots,m\}\) and we must show \(\xi_{T}\leq\max_{j\in T^{c}}x_{j}\). * If \(I_{T}=\emptyset\), we have \(\xi_{T}=0\leq\max_{j\in T^{c}}x_{j}\). * If \(I_{T}\neq\emptyset\), then take \(i\in I_{T}\) such that \(\xi_{T}=(b_{i}-\Delta)^{+}\). Using (Proposition 10), we have \(j\in\{1,2,\ldots,m\}\), such that \(i\in H_{j}{}^{c}\) (which means that \((b_{i}-\Delta)^{+}\leq a_{ij}\)) and \((b_{i}-\Delta)^{+}\leq x_{j}\). We conclude that \(j\notin T\) i.e., \(j\in T^{c}\) and: \[\xi_{T}=(b_{i}-\Delta)^{+}\leq x_{j}\leq\max_{l\in T^{c}}x_{l}.\] \(\Longleftarrow\) To prove that \(x\in\Lambda_{b}\), by (Proposition 9), it is sufficient to have \(\underline{b}(\Delta)\leq\theta(x)\). Let \(i\in\{1,2,\ldots,n\}\), we must show that \((b_{i}-\Delta)^{+}\leq\theta(x)_{i}\). Take \(T=\{j\in\{1,2,\ldots,m\}\mid i\in H_{j}\}\). Clearly, \(i\in I_{T}\) and then: \[(b_{i}-\Delta)^{+}\leq\xi_{T}\leq\max_{l\in T^{c}}x_{l}.\] We distinguish two cases: * \(T^{c}=\emptyset\), then \(\max_{l\in T^{c}}x_{l}=0\), and \((b_{i}-\Delta)^{+}=0\leq\theta(x)_{i}\). * \(T^{c}\neq\emptyset\), and let \(l^{\prime}\in T^{c}\) such that \(x_{l^{\prime}}=\max_{l\in T^{c}}x_{l}\). We have: \[(b_{i}-\Delta)^{+}\leq\xi_{T}\leq x_{l^{\prime}}.\] But, \(l^{\prime}\in T^{c}\) means that \(i\in{H_{l^{\prime}}}^{c}\) i.e., \((b_{i}-\Delta)^{+}\leq a_{il^{\prime}}\). Finally, \[(b_{i}-\Delta)^{+}\leq\min(a_{il^{\prime}},x_{l^{\prime}})\leq\theta(x)_{i}.\] ### Obtaining minimal Chebyshev approximations from minimal approximate solutions From a practical point of view, one can obtain all the minimal Chebyshev approximations of the second member \(b\) of the system \((S)\). 
For this purpose, we use (Proposition 9) and the results of [22], where the authors showed that a system of \(\max-\min\) relational inequalities has a finite non-empty set of solutions, and they gave an algorithm to obtain the minimal solutions of such a system that are lower than a given solution. In the following, we use this result of [22]: **Notation 6**.: _We denote by \(\{v^{(1)},v^{(2)},\ldots,v^{(h)}\}\) the set of minimal solutions of the system of inequalities \(\underline{b}(\Delta)\leq A\square_{\min}^{\max}x\) (obtained using the algorithm of [22]) such that \(\forall i\in\{1,2,\ldots,h\},v^{(i)}\leq\eta\)._ We have: **Proposition 11**.: 1. \(\{v^{(1)},v^{(2)},\ldots,v^{(h)}\}\subseteq\Lambda_{b}\) _and_ \(\{\theta(v^{(1)}),\theta(v^{(2)}),\ldots,\theta(v^{(h)})\}\subseteq\theta( \Lambda_{b})=\mathcal{C}_{b}\)_,_ 2. \(\forall x\in\Lambda_{b},\exists i\in\{1,2,\ldots,h\},\) _such that_ \(v^{(i)}\leq x\)_,_ 3. \(\forall c\in\mathcal{C}_{b,\min},\exists i\in\{1,2,\ldots,h\},\) _such that_ \(c=\theta(v^{(i)})\)_._ Proof.: For the proof of the first statement, we observe that \(\{v^{(1)},v^{(2)},\ldots,v^{(h)}\}\subseteq\Lambda_{b}\) is a consequence of (Proposition 9) and \(\{\theta(v^{(1)}),\theta(v^{(2)}),\ldots,\theta(v^{(h)})\}\subseteq\theta( \Lambda_{b})=\mathcal{C}_{b}\) is a consequence of (30). To prove the second statement, let \(x_{0}\in\Lambda_{b}\). From (Proposition 9), we deduce: \[\underline{b}(\Delta)\leq A\square_{\min}^{\max}x_{0}=\theta(x_{0})\quad \text{and}\quad x_{0}\leq\eta.\] By the algorithm of [22], there is a minimal solution \(v\) of the system of inequalities \(\underline{b}(\Delta)\leq A\square_{\min}^{\max}x\) such that \(v\leq x_{0}\). As we have \(x_{0}\leq\eta\), we also have \(v\leq\eta\), so there is an index \(i\in\{1,2,\ldots,h\}\) such that \(v=v^{(i)}\leq x_{0}\). To prove the last statement, let \(c\in\mathcal{C}_{b,\min}\). From (30), there is an element \(x_{0}\in\Lambda_{b}\) such that \(c=\theta(x_{0})\) and from the second statement (of (Proposition 11)), there is an index \(i\in\{1,2,\ldots,h\}\) such that \(v^{(i)}\leq x_{0}\). From the increasing of \(\theta\) and \(\theta(v^{(i)})\in\mathcal{C}_{b}\), we deduce: \[\theta(v^{(i)})\leq\theta(x_{0})=c.\] By minimality of \(c\), we conclude that \(c=\theta(v^{(i)})\). The following corollary allows us to efficiently obtain the minimal Chebyshev approximations in practice. **Corollary 2**.: _Using (Notation 6), we put:_ \[\widetilde{\mathcal{C}}=\{\theta(v^{(1)}),\theta(v^{(2)}),\ldots,\theta(v^{(h) })\} \tag{35}\] _and_ \[(\widetilde{\mathcal{C}})_{\min}=\{c\in\widetilde{\mathcal{C}}\mid c\text{ is minimal in }\widetilde{\mathcal{C}}\}. \tag{36}\] _Then, we have:_ \[\widetilde{\mathcal{C}}\subseteq\mathcal{C}_{b}\text{ and }\mathcal{C}_{b,\min} =(\widetilde{\mathcal{C}})_{\min}.\] Proof.: By the first statement of (Proposition 11), we have \(\widetilde{\mathcal{C}}\subseteq\mathcal{C}_{b}\). By the third statement of (Proposition 11), we have \(\,\mathcal{C}_{b,\min}\subseteq\widetilde{\mathcal{C}}\). As \(\widetilde{\mathcal{C}}\subseteq\mathcal{C}_{b}\), we deduce \(\mathcal{C}_{b,\min}\subseteq(\widetilde{\mathcal{C}})_{\min}\). Let \(c\in(\widetilde{\mathcal{C}})_{\min}\). To prove that \(c\in\mathcal{C}_{b,\min}\), let \(c^{\prime}\in\mathcal{C}_{b}\) such that \(c^{\prime}\leq c\). We must prove that \(c^{\prime}=c\). By (30), there is an element \(x_{0}\in\Lambda_{b}\) such that \(c^{\prime}=\theta(x_{0})\). 
Using the second statement of (Proposition 11), we obtain an index \(i\in\{1,2,\ldots,h\},\) such that \(v^{(i)}\leq x_{0}\). Then we have \(\theta(v^{(i)})\in\widetilde{\mathcal{C}}\) and by the increasing of \(\theta\), we get: \[\theta(v^{(i)})\leq\theta(x_{0})=c^{\prime}\leq c.\] By minimality of \(c\) in \(\widetilde{\mathcal{C}}\), we obtain \(\theta(v^{(i)})=c\), so \(c^{\prime}=c\). We have: **Corollary 3**.: _The set \(\mathcal{C}_{b,\min}\) is non-empty and finite._ Proof.: As \(\widetilde{\mathcal{C}}\) is a finite non-empty ordered set, the set \((\widetilde{\mathcal{C}})_{\min}=\mathcal{C}_{b,\min}\) is also finite and non-empty. We are able to define a set of minimal approximate solutions \(\Lambda_{b,\min}\), see (32): **Definition 5**.: \[\Lambda_{b,\min}=\{x\in\{v^{(1)},v^{(2)},\ldots,v^{(h)}\}\mid\theta(x)\in\mathcal{C}_{b,\min}\}.\] (37) It follows from the first and the last statements of (Proposition 11) that we have: \[\Lambda_{b,\min}\subseteq\Lambda_{b}\text{ and }\mathcal{C}_{b,\min}=\{\theta(x)\mid x\in\Lambda_{b,\min}\}.\] _Therefore, the set \(\Lambda_{b,\min}\) is non-empty and finite._ The structure of the set \(\mathcal{C}_{b}\) is described by the following result: **Theorem 3**.: _For all \(c\in[0,1]^{n\times 1}\), we have:_ \[c\text{ is a Chebyshev approximation of }b\text{ i.e., }c\in\mathcal{C}_{b}\iff F(c)=c\text{ and }\exists\,c^{\prime}\in\mathcal{C}_{b,\min}\text{ s.t. }c^{\prime}\leq c\leq F(\overline{b}(\Delta)). \tag{38}\] Proof.: \(\Longrightarrow\) Let \(c\in\mathcal{C}_{b}\). As \(\mathcal{C}_{b}\subseteq\mathcal{C}\), we know from (Proposition 1) that \(F(c)=c\). From (30), we have an approximate solution \(x_{0}\in\Lambda_{b}\) such that \(c=\theta(x_{0})\). Then, by the second statement of (Proposition 11), there is an index \(i\in\{1,2,\ldots,h\}\) such that \(v^{(i)}\leq x_{0}\). Set \(c_{1}=\theta(v^{(i)})\). Then, by (Corollary 2), we have \(c_{1}\in\widetilde{\mathcal{C}}\) and there exists an element \(c^{\prime}\in(\widetilde{\mathcal{C}})_{\min}=\mathcal{C}_{b,\min}\) such that \(c^{\prime}\leq c_{1}\). As \(\theta\) is increasing and using (Proposition 5), we have: \[c^{\prime}\leq c_{1}=\theta(v^{(i)})\leq\theta(x_{0})=c\leq F(\overline{b}(\Delta)).\] \(\Longleftarrow\) As \(F(c)=c\), by (Proposition 1), we have \(c\in\mathcal{C}\). It remains for us to prove that \(\|b-c\|=\Delta\). Let \(c^{\prime}\in\mathcal{C}_{b,\min}\) such that \(c^{\prime}\leq c\). As we have: \[\|b-c^{\prime}\|=\|b-F(\overline{b}(\Delta))\|=\Delta\quad\text{and}\quad c^{\prime}\leq c\leq F(\overline{b}(\Delta)),\] we deduce for all \(i\in\{1,2,\ldots,n\}:\) \[-\Delta\leq b_{i}-F(\overline{b}(\Delta))_{i}\leq b_{i}-c_{i}\leq b_{i}-c^{\prime}_{i}\leq\Delta,\] so \(\|b-c\|\leq\Delta\). As \(c\in\mathcal{C}\), we have also \(\|b-c\|\geq\Delta\). Finally, we conclude \(\|b-c\|=\Delta\), so \(c\) is a Chebyshev approximation of \(b\), i.e., \(c\in\mathcal{C}_{b}\).
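The construction of (Notation 6) can be prototyped as well. Since the algorithm of [22] is not reproduced in this paper, the following brute-force sketch (ours) enumerates the minimal solutions of \(\underline{b}(\Delta)\leq A\square_{\min}^{\max}x\) that are lower than \(\eta\), by assigning to each constraint \(i\) with threshold \((b_{i}-\Delta)^{+}>0\) one admissible column; by (Corollary 2), the minimal elements among the images by \(\theta\) of the returned vectors form \(\mathcal{C}_{b,\min}\). It reuses the helpers and data of the previous sketch:

```python
from itertools import product

# Brute-force stand-in for the algorithm of [22]: minimal solutions of
# b_low <= A box_min^max x that are lower than eta. Each candidate assigns,
# to every active constraint i, one column j with A[i][j] >= b_low[i] and
# eta[j] >= b_low[i], and sets x_j to the largest threshold assigned to j.

EPS = 1e-9                                       # guards against floating-point ties

def minimal_solutions_below_eta(A, b_low, eta):
    n, m = len(A), len(A[0])
    active = [i for i in range(n) if b_low[i] > EPS]
    choices = [[j for j in range(m)
                if A[i][j] >= b_low[i] - EPS and eta[j] >= b_low[i] - EPS]
               for i in active]
    cands = set()
    for pick in product(*choices):               # one admissible column per constraint
        x = [0.0] * m
        for i, j in zip(active, pick):
            x[j] = max(x[j], b_low[i])
        cands.add(tuple(x))
    # keep only the candidates that are minimal for the componentwise order
    return [x for x in cands
            if not any(y != x and all(a <= c for a, c in zip(y, x)) for y in cands)]

b_low = [pos(bi - delta) for bi in b]            # b_low(Delta) approx. [0.38, 0.00, 0.71]
vs = minimal_solutions_below_eta(A, b_low, eta)  # one vector approx. (0.00, 0.38, 0.71)
print([maxmin_prod(A, list(v)) for v in vs])     # approx. [[0.38, 0.10, 0.71]]
```

On the running example this recovers the single vector \(v\) of (Example 6) below, and its image by \(\theta\) is the unique minimal Chebyshev approximation.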
We illustrate our method for obtaining the minimal Chebyshev approximations of \(b\). **Example 6**.: _(continued) We continue with the results in (Example 5). We remind that the Chebyshev distance associated to the second member \(b\) of the system \(A\square_{\min}^{\max}x=b\) is \(\Delta=0.16\). We compute:_ \[\underline{b}(\Delta)=\begin{bmatrix}0.38\\ 0.00\\ 0.71\end{bmatrix},\overline{b}(\Delta)=\begin{bmatrix}0.70\\ 0.29\\ 1.00\end{bmatrix}\text{ and }\eta=A^{t}\square_{\to_{G}}^{\min}F(\overline{b}(\Delta))=\begin{bmatrix}0.29\\ 1\\ 1\end{bmatrix}.\] _The vector \(\theta(\eta)=\begin{bmatrix}0.38\\ 0.29\\ 0.85\end{bmatrix}\) is the greatest Chebyshev approximation of the second member \(b\) of the system._ _The system of inequalities \(\underline{b}(\Delta)\leq A\square_{\min}^{\max}x\) is: \(\begin{bmatrix}0.38\\ 0.00\\ 0.71\end{bmatrix}\leq\begin{bmatrix}0.03&0.38&0.26\\ 0.98&0.10&0.03\\ 0.77&0.15&0.85\end{bmatrix}\square_{\min}^{\max}\begin{bmatrix}x_{1}\\ x_{2}\\ x_{3}\end{bmatrix}\)._ _Using the approach of [22], we obtain two minimal solutions: \(v=\begin{bmatrix}0.00\\ 0.38\\ 0.71\end{bmatrix}\) and \(v^{\prime}=\begin{bmatrix}0.71\\ 0.38\\ 0.00\end{bmatrix}\) of the system of inequalities. Among these minimal solutions, only \(v\) is lower than \(\eta\)._ _The set \(\widetilde{\mathcal{C}}\), see (35), contains one element, which is \(A\square_{\min}^{\max}v=\begin{bmatrix}0.38\\ 0.10\\ 0.71\end{bmatrix}\) and we have \(\widetilde{\mathcal{C}}=(\widetilde{\mathcal{C}})_{\min}\). Therefore, from (Corollary 2), the unique minimal Chebyshev approximation of \(b\) is \(\hat{b}=\begin{bmatrix}0.38\\ 0.10\\ 0.71\end{bmatrix}\)._ _Some approximate solutions of the system \((S)\) are the solutions of the system \(\theta(\eta)=A\square_{\min}^{\max}x\) and the solutions of the system \(\hat{b}=A\square_{\min}^{\max}x\)._

## 6 Learning approximate weight matrices according to training data

Numerous approaches have been proposed for learning a weight matrix relating input data to output data by \(\max-\min\) composition [3, 4, 5, 6, 8, 14, 15, 16, 17, 19, 23, 27, 28, 29, 32, 33, 42]. One of the pioneering works is that of Pedrycz [27]. He highlighted that we can represent a system of \(\max-\min\) fuzzy relational equations \(W\square_{\min}^{\max}x=y\) by a neural network, where \(W=[w_{ij}]_{1\leq i\leq n,1\leq j\leq m}\in[0,1]^{n\times m}\) is called a weight matrix and \(x=\left[x_{j}\right]_{1\leq j\leq m}\in[0,1]^{m\times 1}\) and \(y=\left[y_{i}\right]_{1\leq i\leq n}\in[0,1]^{n\times 1}\) are column vectors. The neural network (Figure 1) has \(m\) input nodes corresponding to the components \(x_{1},x_{2},\cdots,x_{m}\) of \(x\), \(n\) output nodes corresponding to the components \(y_{1},y_{2},\cdots,y_{n}\) of \(y\) and \(n\cdot m\) edges such that each of the edges is weighted by the component \(w_{ij}\) of \(W\), and connects the input node \(x_{j}\) to the output node \(y_{i}\). For \(1\leq i\leq n\), the value of the output node \(y_{i}\) is given by \(y_{i}=\max_{1\leq j\leq m}\min(w_{ij},x_{j})\).

Figure 1: A system of \(\max-\min\) fuzzy relational equations represented by a \(\max-\min\) neural network. Green nodes are input nodes and red nodes are output nodes.

To learn the weight matrix \(W\) according to training data, most of the approaches try to adapt the classical gradient descent method to such a \(\max-\min\) fuzzy neural network in order to minimize the learning error \(E(W)\) expressed in the \(L_{2}\) norm. However, since the functions \(\max\) and \(\min\) are not fully differentiable, it is rather difficult to adapt the classical gradient descent to this framework. This issue was recently again encountered in [34].
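For concreteness, the forward pass of such a network is a two-line function. The toy sketch below (ours) also makes plain why gradient-based training is awkward: a weight that does not attain the maximum or the minimum leaves the output locally unchanged, so the partial derivatives vanish on large regions.

```python
# Toy sketch of the network of Figure 1: output node i computes
# y_i = max_j min(w_ij, x_j); no differentiable operations are involved.

def forward(W, x):
    return [max(min(wij, xj) for wij, xj in zip(row, x)) for row in W]

W = [[0.5, 0.9], [1.0, 0.2]]   # weight matrix in [0,1]^{2x2}
x = [0.6, 0.8]                 # input node values
print(forward(W, x))           # [0.8, 0.6]
```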
In these approaches, it seems that the choice of the \(L_{2}\) norm is motivated by its suitability for differential calculus, while being equivalent to the \(L_{\infty}\) norm (any two norms on the vector space \(\mathbb{R}^{n}\) are equivalent). In this section, based on our results, we introduce a paradigm to approximately learn a weight matrix relating input and output data from the following training data: \[(x^{(i)})_{1\leq i\leq N},x^{(i)}\in[0,1]^{m\times 1}\quad;\quad(y^{(i)})_{1\leq i\leq N},y^{(i)}\in[0,1]^{n\times 1}. \tag{39}\] For \(i=1,2,\ldots,N\), each pair \((x^{(i)},y^{(i)})\) is a training datum, where \(x^{(i)}\) is the input data vector and \(y^{(i)}\) is the targeted output data vector. Our choice of norm to express the learning error is the \(L_{\infty}\) norm: \[E(W)=\max_{1\leq i\leq N}\|y^{(i)}-W\Box_{\min}^{\max}x^{(i)}\| \tag{40}\] _where the norm of a vector \(z\) of \(n\) components is \(\|z\|=\max_{1\leq k\leq n}\mid z_{k}\mid\)._ The first main result of this section is that we can compute by an analytical formula a positive constant \(\mu\), which depends only on the training data, such that the following equality holds: \[\mu=\min_{W\in[0,1]^{n\times m}}E(W). \tag{41}\] In other words, our positive constant \(\mu\) minimizes the learning error. Whether \(\mu=0\) or \(\mu>0\), we give a method to get a weight matrix \(W^{*}\) such that \(E(W^{*})=\mu\). If \(\mu=0\), this method is based on the solving of \(n\) consistent systems of \(\max-\min\) fuzzy relational equations constructed from the training data that we will introduce. Otherwise, if \(\mu>0\), we get an approximate weight matrix \(W^{*}\) by gathering approximate solutions (Definition 4) of these same systems using (Section 5). This section is structured as follows. Considering training data where the outputs are scalars, we begin by relating the problem of learning a weight matrix connecting input data to output data to the solving of a system canonically associated to this training data (Subsection 6.1). Then, we tackle the general problem (Subsection 6.2). After giving some notations (Subsection 6.3) and defining the positive constant \(\mu\) (Definition 7) of (Subsection 6.4), we prove (41) and give a method ((Method 1) of (Subsection 6.5)) for constructing approximate weight matrices i.e., matrices \(W\) such that \(E(W)=\mu\). Finally, in (Subsection 6.6), we illustrate our results with two examples.

### Relating the problem of learning a weight matrix connecting input data to output data to the solving of a system canonically associated to these data

Assume training data composed of \(N\) pieces of data as follows: \[(x^{(i)})_{1\leq i\leq N},x^{(i)}\in[0,1]^{m\times 1}\quad;\quad(y^{(i)})_{1\leq i\leq N},y^{(i)}\in[0,1]. \tag{42}\] For \(i=1,2,\ldots,N\), each pair \((x^{(i)},y^{(i)})\) is a training datum, where \(x^{(i)}\) is an input data vector and \(y^{(i)}\) is the targeted output data value in \([0,1]\). We want to learn a weight matrix \(V\in[0,1]^{1\times m}\) such that: \[\forall i\in\{1,2,\ldots,N\},\ V\Box_{\min}^{\max}x^{(i)}=y^{(i)}.
\tag{43}\] To tackle this problem, the idea is to introduce the following system which is canonically associated to the training data: \[(S):L\Box_{\min}^{\max}u=b, \tag{44}\] where: \[L=\begin{bmatrix}x_{j}^{(i)}\\ \end{bmatrix}_{1\leq i\leq N,1\leq j\leq m}=\begin{bmatrix}x_{1}^{(1)}&x_{2}^ {(1)}&\ldots&x_{m}^{(1)}\\ x_{1}^{(2)}&x_{2}^{(2)}&\ldots&x_{m}^{(2)}\\ \vdots&\vdots&\vdots&\vdots\\ x_{1}^{(N)}&x_{2}^{(N)}&\ldots&x_{m}^{(N)}\end{bmatrix}\quad\text{and}\quad b =[y^{(i)}]_{1\leq i\leq N}=\begin{bmatrix}y^{(1)}\\ y^{(2)}\\ \vdots\\ y^{(N)}\end{bmatrix}. \tag{45}\] So the rows of \(L\) are the transpose of the input data column vectors \(x^{(1)},x^{(2)},\ldots,x^{(N)}\) and the components of \(b\) are the targeted output values \(y^{(1)},y^{(2)},\ldots,y^{(N)}\). To relate the problem formulated in (43) to the system \((S)\), we will use the following lemma: **Lemma 5**.: _Let \(v=\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{m}\end{bmatrix}\in[0,1]^{m\times 1}\) be a column-vector and \(V=[v_{1}\quad v_{2}\quad\cdots\quad v_{m}]\in[0,1]^{1\times m}\) is the row matrix which is the transpose of \(v\). We put \(v^{\prime}=[v^{\prime}_{i}]_{1\leq i\leq N}=L\square_{\min}^{\max}v\). Then, we have:_ 1. \(\forall i\in\left\{1,2,\ldots,N\right\},\,v^{\prime}_{i}=V\square_{\min}^{ \max}x^{(i)}\in[0,1]\)_,_ 2. \(\|b-v^{\prime}\|=\max_{1\leq i\leq N}\mid y^{(i)}-V\square_{\min}^{\max}x^{(i) }\mid\)_._ _The second statement implies that \(V\) is a weight matrix of the training data \(((x^{(i)})_{1\leq i\leq N},(y^{(i)})_{1\leq i\leq N})\), see (43), if and only if \(v\) is a solution of the system \((S)\)._ Proof.: We have: \[\forall i\in\left\{1,2,\ldots,N\right\},\,v^{\prime}_{i} =\max_{1\leq j\leq m}\min(l_{ij},v_{j})\] \[=\max_{1\leq j\leq m}\min(x^{(i)}_{j},v_{j})\] \[=V\square_{\min}^{\max}x^{(i)}.\] From these computations, we deduce the second statement: \[\|b-v^{\prime}\|=\max_{1\leq i\leq N}\mid y^{(i)}-v^{\prime}_{i} \mid=\max_{1\leq i\leq N}\mid y^{(i)}-V\square_{\min}^{\max}x^{(i)}\mid.\] The problem formulated in (43) is related to the system \((S)\) by: **Proposition 12**.: _Let \(v=\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{m}\end{bmatrix}\in[0,1]^{m\times 1}\) be a column-vector and \(V=[v_{1}\quad v_{2}\quad\cdots\quad v_{m}]\in[0,1]^{1\times m}\) is the row matrix which is the transpose of \(v\). We have:_ \[v\text{ is a solution of the system }(S)\iff\forall i\in\left\{1,2,\ldots,N \right\}V\square_{\min}^{\max}x^{(i)}=y^{(i)}.\] Proof.: The proof of this proposition follows directly from the second statement of (Lemma 5). We have: **Remark 1**.: _The transpose map \([0,1]^{m\times 1}\rightarrow[0,1]^{1\times m}:v\mapsto V=v^{t}\) defines a bijective correspondence between solutions of the system \((S)\) and weight matrices associated to the training data._ In the case where the system \((S):L\square_{\min}^{\max}u=b\) is inconsistent, we will show that the transpose map still defines a bijective correspondence between approximate solutions of the system \((S)\) (Definition 4) and approximate weight matrices \(V\) i.e., matrices satisfying the following equality: \[\max_{1\leq i\leq N}\mid y^{(i)}-V\square_{\min}^{\max}x^{(i)}\mid=\Delta(L,b), \tag{46}\] where \(\Delta(L,b)\) is the Chebyshev distance associated to the second member \(b\) of the system \((S)\), see (Definition 1). The definition (46) of an approximate weight matrix \(V\) is justified by: 1. 
For any approximate solution \(v\in[0,1]^{m\times 1}\) of the system \((S)\), see (44), we have \(\|b-L\square_{\min}^{\max}v\|=\Delta(L,b)\) (Definition 4). 2. It follows from the second statement of (Lemma 5) and (Definition 1) that for any \(V\in[0,1]^{1\times m}\), we have: \[\max_{1\leq i\leq N}\mid y^{(i)}-V\square_{\min}^{\max}x^{(i)}\mid=\|b-L\square_{\min}^{\max}v\|\geq\Delta(L,b).\] (47) This leads to the definition of the positive constant \(\mu\): **Definition 6**.: _The positive constant \(\mu\) minimizing the learning error \(E(V)=\max_{1\leq i\leq N}\mid y^{(i)}-V\Box_{\min}^{\max}x^{(i)}\mid\), see (40), according to the training data, is the Chebyshev distance associated to the second member \(b\) of the system \((S)\):_ \[\mu=\Delta(L,b). \tag{48}\] This definition is justified by (47), which we rewrite as \[\forall V\in[0,1]^{1\times m},\,E(V)\geq\mu. \tag{49}\] To get the equality \(\mu=\min_{V\in[0,1]^{1\times m}}E(V)\), see (41), we establish the following result: **Proposition 13**.: _Let \(v=\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{m}\end{bmatrix}\in[0,1]^{m\times 1}\) be a column-vector and \(V=[v_{1}\quad v_{2}\quad\cdots\quad v_{m}]\in[0,1]^{1\times m}\) is the row matrix which is the transpose of \(v\). We have:_ \[v\text{ is an approximate solution of the system }(S)\iff\max_{1\leq i\leq N}\mid y^{(i)}-V\Box_{\min}^{\max}x^{(i)}\mid=\Delta(L,b)=\mu.\] Proof.: This equivalence is deduced from the second statement of (Lemma 5) and the equivalence (31). We deduce: **Corollary 4**.: _The equality \(\mu=\min_{V\in[0,1]^{1\times m}}E(V)\) holds._ Proof.: This result is a consequence of the fact that the approximate solution set \(\Lambda_{b}\) is non-empty, see (Proposition 7). We illustrate this construction. **Example 7**.: _Let us consider the following training data:_ \[\begin{array}{ll}x^{(1)}=(0.7,0.4,0.4)^{t}&y^{(1)}=0.7\\ x^{(2)}=(1.0,0.2,0.5)^{t}&y^{(2)}=1.0\\ x^{(3)}=(0.2,0.3,0.8)^{t}&y^{(3)}=0.3\\ \end{array}\] _We construct the system \((S):L\Box_{\min}^{\max}u=b\) where \(L=\begin{bmatrix}0.7&0.4&0.4\\ 1.0&0.2&0.5\\ 0.2&0.3&0.8\end{bmatrix}\) and \(b=\begin{bmatrix}0.7\\ 1.0\\ 0.3\end{bmatrix}\). The system is consistent because the Chebyshev distance associated to \(b\) is equal to zero: \(\Delta(L,b)=0\), so \(\mu=\Delta(L,b)=0\). The greatest solution of \((S)\) is \(\begin{bmatrix}1.0\\ 1.0\\ 0.3\end{bmatrix}\) and there are two minimal solutions \(\begin{bmatrix}1.0\\ 0.3\\ 0.0\end{bmatrix}\) and \(\begin{bmatrix}1.0\\ 0.0\\ 0.3\end{bmatrix}\) computed using the algorithm of [22]. Let us use the solution \(v=\begin{bmatrix}1.0\\ 0.7\\ 0.3\end{bmatrix}\) of the system \((S)\) and we put \(V=v^{t}=[1.0\quad 0.7\quad 0.3]\). The weight matrix \(V\) relates input and output data of the training data:_ \[\begin{array}{ll}V\Box_{\min}^{\max}x^{(1)}=y^{(1)},\\ V\Box_{\min}^{\max}x^{(2)}=y^{(2)},\\ V\Box_{\min}^{\max}x^{(3)}=y^{(3)}.\end{array}\]

### Learning approximate weight matrices in the general case

We shall extend the above results in the case where the outputs of the training data are column vectors of \(n\) components in \([0,1]\).
Let us consider \(N\) training datum as follows: \[(x^{(i)})_{1\leq i\leq N},x^{(i)}\in[0,1]^{m\times 1}\quad;\quad(y^{(i)})_{1\leq i \leq N},y^{(i)}\in[0,1]^{n\times 1}. \tag{50}\] For \(i=1,2,\ldots,N\), each pair \((x^{(i)},y^{(i)})\) is a training datum, where \(x^{(i)}\) is the input data vector and \(y^{(i)}\) is the targeted output data vector. We study the following problems: 1. Is there a weight matrix \(W\) of size \((n,m)\) such that: \[\forall i\in\{1,2,\cdots,N\},\ W\Box_{\min}^{\max}x^{(i)}=y^{(i)}.\] 2. If this not the case, how to define and get a suitable approximate weight matrix \(W\)? We will prove the following results: 1. There is a positive constant denoted \(\mu\) which can be computed by an analytical formula according to the training data and which satisfies: \[\forall W\in[0,1]^{n\times m},\max_{1\leq i\leq N}\ \|y^{(i)}-W\Box_{\min}^{\max}x^{(i)}\|\geq\mu.\] (51) This positive constant minimizes the learning error \(E(W)\), see (40), and is expressed in terms of Chebyshev distances associated to the second member of systems of \(\max-\min\) fuzzy relational equations that we will introduce. 2. We will show the following equivalence: having a weight matrix that perfectly relates the input data to the output data is equivalent to having \(\mu=0\) i.e., \[\exists W\in[0,1]^{n\times m},\ \text{s.t.}\ \forall i\in\{1,2,\cdots,N\},W\Box_{\min}^{\max}x^{(i)}=y^{(i)} \iff\mu=0.\] (52) 3. We will show that the set of approximate weight matrices: \[\mathcal{A}=\left\{W\in[0,1]^{n\times m}\,\big{|}\,\max_{1\leq i\leq N}\,\|y^ {(i)}-W\Box_{\min}^{\max}x^{(i)}\|=\mu\right\}\] (53) is non-empty. This implies that \(\mu=\min_{W\in[0,1]^{n\times m}}E(W)\), see (41). In the following, we begin by giving some notations, then we define the positive constant \(\mu\) and introduce our method for constructing an approximate weight matrix \(W\) according to training data. ### Notations We reuse the matrix \(L=[l_{ij}]_{1\leq i\leq N,1\leq j\leq m}=[x_{j}^{(i)}]_{1\leq i\leq N,1\leq j \leq m}\) of size \((N,m)\), see (45), which is defined by the transpose of the input data column vectors \(x^{(1)},x^{(2)},\ldots,x^{(N)}\). To extend to the case where the output data are vectors of \(n\) components, we associate to the training data \(n\) systems of \(\max-\min\) fuzzy relational equations denoted by \((S_{1}),(S_{2}),\ldots,(S_{n})\), which all use the same matrix \(L\) and whose second members are \(b^{(1)},b^{(2)},\ldots,b^{(n)}\). For \(1\leq k\leq n\), the system \((S_{k})\) is of the form: \[(S_{k}):L\Box_{\min}^{\max}u=b^{(k)}, \tag{54}\] where the unknown part is a column vector \(u\in[0,1]^{m\times 1}\) and for \(k=1,2,\ldots,n\), the components of the column vector \(b^{(k)}=[b_{i}^{(k)}]_{1\leq i\leq N}\) are defined by: \[b_{i}^{(k)}=y_{k}^{(i)}\ ;1\leq i\leq N. \tag{55}\] We remark that for \(k=1,2,\ldots,n\) and \(i=1,2,\ldots,N\), each component \(b_{i}^{(k)}\) of the second member \(b^{(k)}\) of the system \((S_{k})\) is equal to the component \(y_{k}^{(i)}\) of the targeted output data vector \(y^{(i)}\): \[b^{(k)}=\begin{bmatrix}y_{k}^{(1)}\\ y_{k}^{(2)}\\ \vdots\\ y_{k}^{(N)}\end{bmatrix}. \tag{56}\] To any matrix \(W=[w_{kj}]_{1\leq k\leq n,1\leq j\leq m}\), we associate the \(n\) - tuple of column-vectors \((u^{(1)},u^{(2)},\ldots,u^{(n)})\) where for all \(1\leq k\leq n\), the column vector \(u^{(k)}=\left[u^{(k)}_{j}\right]_{1\leq j\leq m}\) is the transpose of the \(k\)-th row of the matrix \(W\): \[u^{(k)}_{j}=w_{kj}\ ;1\leq k\leq n,1\leq j\leq m. 
\tag{57}\] This defines the following bijective map between the sets \([0,1]^{n\times m}\) and \(([0,1]^{m\times 1})^{n}\): \[[0,1]^{n\times m}\rightarrow([0,1]^{m\times 1})^{n}:W\mapsto(u^{(1)},u^{(2)},\ldots,u^{(n)}) \tag{58}\] Every \(n\)-tuple \((u^{(1)},u^{(2)},\ldots,u^{(n)})\) of column-vectors in \([0,1]^{m\times 1}\) is the image by the above map of a unique matrix \(W\in[0,1]^{n\times m}\). Graphically, if \((u^{(1)},u^{(2)},\ldots,u^{(n)})\) is the image of \(W\) by the above map, we have: \[W=\begin{bmatrix}w_{11}&w_{12}&\cdots&w_{1m}\\ w_{21}&w_{22}&\cdots&w_{2m}\\ \vdots&\vdots&\vdots&\vdots\\ w_{n1}&w_{n2}&\cdots&w_{nm}\end{bmatrix}=\begin{bmatrix}u^{(1)}{}^{t}\\ u^{(2)}{}^{t}\\ \vdots\\ u^{(n)}{}^{t}\end{bmatrix}.\]

### Definition of the positive constant \(\mu\) minimizing the learning error \(E(W)\)

We relate the systems \((S_{1}):L\square_{\min}^{\max}u=b^{(1)},(S_{2}):L\square_{\min}^{\max}u=b^{(2)},\ldots,(S_{n}):L\square_{\min}^{\max}u=b^{(n)}\), associated to the training data, to the learning error \(E(W)\), see (40), by the following useful result: **Lemma 6**.: _For all matrices \(W\) of size \((n,m)\), we have:_ \[E(W)=\max_{1\leq i\leq N}\ \|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\|=\max_{1\leq k\leq n}\ \|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|, \tag{59}\] _where \(u^{(k)}\) is the column vector corresponding to the transpose of the \(k\)-th row of the matrix \(W\)._ Proof.: For all \(1\leq i\leq N\), we have: \[\|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\| =\max_{1\leq k\leq n}\ |\ y^{(i)}_{k}-\max_{1\leq j\leq m}\min(w_{kj},x^{(i)}_{j})\ |\] \[=\max_{1\leq k\leq n}\ |\ b^{(k)}_{i}-\max_{1\leq j\leq m}\min(u^{(k)}_{j},l_{ij})\ |\.\] For all \(1\leq k\leq n\), we have: \[\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\| =\max_{1\leq i\leq N}\ |\ b^{(k)}_{i}-\max_{1\leq j\leq m}\min(l_{ij},u^{(k)}_{j})\ |\.\] Finally, we get: \[\max_{1\leq i\leq N}\ \|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\| =\max_{1\leq i\leq N}\ \max_{1\leq k\leq n}\ |\ b^{(k)}_{i}-\max_{1\leq j\leq m}\min(u^{(k)}_{j},l_{ij})\ |\] \[=\max_{1\leq k\leq n}\max_{1\leq i\leq N}\ |\ b^{(k)}_{i}-\max_{1\leq j\leq m}\min(l_{ij},u^{(k)}_{j})\ |\] \[=\max_{1\leq k\leq n}\ \|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|.\] We remark that for \(1\leq k\leq n\), we have: * If the system \((S_{k})\) is consistent, the Chebyshev distance associated to its second member \(b^{(k)}\), see (Definition 1), is equal to zero i.e., \(\Delta(L,b^{(k)})=0\), so obviously, we have: \[\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|\geq\Delta(L,b^{(k)})=0,\] where \(u^{(k)}\) is the transpose of the \(k\)-th row of \(W\). * If the system \((S_{k})\) is inconsistent, we note that the system formed by the matrix \(L\) and the vector \(L\square_{\min}^{\max}u^{(k)}\) as second member is consistent (one of its solutions is \(u^{(k)}\)). By definition of the Chebyshev distance \(\Delta(L,b^{(k)})\), we have: \[\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|\geq\Delta(L,b^{(k)})>0,\quad\text{see (Definition 1).}\] These remarks justify the introduction of the following definition: **Definition 7**.: _The positive constant \(\mu\) minimizing the learning error \(E(W)\), see (41), according to the training data is:_ \[\mu:=\max_{1\leq k\leq n}\;\Delta(L,b^{(k)}). \tag{60}\] From (59), (60) and the above remarks, we immediately justify that \(\mu\) minimizes the learning error \(E(W)\): **Proposition 14**.: _For any matrix \(W\) of size \((n,m)\), we have:_ \[E(W)=\max_{1\leq i\leq N}\;\|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\|\geq\mu. \tag{61}\]
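The constant \(\mu\) is therefore directly computable from the training data. The sketch below (ours) stacks the training data into \(L\) and the second members \(b^{(k)}\) as in (45) and (56) and evaluates (60), reusing the chebyshev_distance helper of the Section 3 sketch; the data are those of (Example 8) below:

```python
# Sketch: build L and the vectors b^(k) from the training data, then compute
# mu = max_k Delta(L, b^(k)) of (60) with chebyshev_distance from the earlier sketch.

def mu_constant(xs, ys):
    L = [list(x) for x in xs]                    # row i of L is the transpose of x^(i), see (45)
    n = len(ys[0])
    bs = [[y[k] for y in ys] for k in range(n)]  # b^(k) collects the k-th output components, see (56)
    return max(chebyshev_distance(L, bk) for bk in bs), L, bs

xs = [(0.7, 0.4, 0.4), (1.0, 0.2, 0.5)]          # training data of (Example 8) below
ys = [(0.7, 0.1, 0.3), (1.0, 0.7, 0.0)]
mu, L, bs = mu_constant(xs, ys)
print(mu)                                        # approx. 0.3 = max(0, 0.3, 0.15)
```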
We observe that having a weight matrix \(W\) that perfectly relates the input data to the output data i.e., \(E(W)=0\), implies having \(\mu=0\). In fact, we have: **Proposition 15**.: \[\exists W\in[0,1]^{n\times m},\text{ s.t. }\forall i\in\{1,2,\cdots,N\},W\square_{\min}^{\max}x^{(i)}=y^{(i)}\iff\mu=0.\] (62) Proof.: \(\implies\) follows from (61). \(\Longleftarrow\) If \(\mu=\max_{1\leq k\leq n}\;\Delta(L,b^{(k)})=0\), then all the systems \((S_{1}),(S_{2}),\ldots,(S_{n})\) are consistent. For all \(1\leq k\leq n\), let \(u^{(k)}\in[0,1]^{m\times 1}\) be a solution of the system \((S_{k})\). Denote by \(W\in[0,1]^{n\times m}\) the matrix whose rows are the transpose of the chosen column-vectors \((u^{(1)},u^{(2)}\ldots,u^{(n)})\) (see (58)). By (Lemma 6), we have: \[E(W)=\max_{1\leq i\leq N}\;\|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\|=\max_{1\leq k\leq n}\;\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|=0=\mu,\] i.e., \(\forall i\in\{1,2,\cdots,N\},W\square_{\min}^{\max}x^{(i)}=y^{(i)}\).

### Method for learning approximate weight matrices

In the proof of (Proposition 15), when \(\mu=0\), we have shown how to construct a weight matrix of the training data. We extend this construction to the general case where \(\mu\geq 0\), i.e., we give a method for constructing an approximate weight matrix \(W\) in the following sense: \[\max_{1\leq i\leq N}\;\|y^{(i)}-W\square_{\min}^{\max}x^{(i)}\|=\mu. \tag{63}\] By extending the method developed in (Subsection 6.1) to the case of \(n\) systems \((S_{1}),(S_{2}),\ldots,(S_{n})\), we construct such a matrix \(W\). **Method 1**.: _Let \(W\) be a matrix defined row by row, which satisfies the following conditions:_ * _If the system_ \((S_{k})\) _is consistent, we define the_ \(k\)_-th row of_ \(W\) _as the transpose of a solution_ \(u^{(k)}\) _of the system_ \((S_{k})\)_. For instance, its greatest solution_ \(L^{t}\square_{\to_{G}}^{\min}b^{(k)}\)_. With this choice, we have:_ \[\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|=0=\Delta(L,b^{(k)}).\] * _If the system_ \((S_{k})\) _is inconsistent, we take a Chebyshev approximation_ \(b^{(k),*}\) _of_ \(b^{(k)}\) _(an element of the non-empty set_ \(\mathcal{C}_{b^{(k)}}\)_, see (_25_)). With this choice, we define the_ \(k\)_-th row of_ \(W\) _as the transpose of a solution_ \(u^{(k)}\) _of the system_ \(L\square_{\min}^{\max}u=b^{(k),*}\)_, for instance the greatest solution_ \(L^{t}\square_{\to_{G}}^{\min}b^{(k),*}\)_. With this choice, we have:_ \[\|b^{(k)}-L\square_{\min}^{\max}u^{(k)}\|=\Delta(L,b^{(k)}).\] _Thus, any matrix \(W\) constructed row by row with the above procedure will satisfy (63)._ We remind that the set \(\mathcal{A}\), see (53), is the set formed by the matrices verifying (63). From (Method 1), we have: **Proposition 16**.: _The set \(\mathcal{A}\) is non-empty._ Proof.: For \(1\leq k\leq n\), we choose an approximate solution (or solution) \(u^{(k)}\) of the system \((S_{k})\). Let \(W\in[0,1]^{n\times m}\) be the matrix defined by: \[W=\begin{bmatrix}u^{(1)^{t}}\\ u^{(2)^{t}}\\ \vdots\\ u^{(n)^{t}}\end{bmatrix}.\] From (Lemma 6), (Definition 4) and (Definition 7), we deduce: \[E(W)=\max_{1\leq i\leq N}\|y^{(i)}-W\Box_{\min}^{\max}x^{(i)}\|=\max_{1\leq k\leq n}\|b^{(k)}-L\Box_{\min}^{\max}u^{(k)}\|=\max_{1\leq k\leq n}\Delta(L,b^{(k)})=\mu.\] Thus \(W\in\mathcal{A}\). We deduce our main result, i.e., the equality (41) holds: **Corollary 5**.: \[\mu=\min_{W\in[0,1]^{n\times m}}E(W).\] Proof.: This follows from (Proposition 14) and (Proposition 16).
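Continuing the sketch above, (Method 1) with the greatest-solution choice in both cases reduces to one call per row: the vector \(L^{t}\square_{\to_{G}}^{\min}\overline{b^{(k)}}(\Delta_{k})\) returned by greatest_cheb_approx coincides, as one can check, with the greatest solution \(L^{t}\square_{\to_{G}}^{\min}F(\overline{b^{(k)}}(\Delta_{k}))\) used in (Method 1), and reduces to the greatest solution of \((S_{k})\) when \(\Delta_{k}=0\).

```python
# Sketch of (Method 1): for each k, take as k-th row the greatest solution of
# L box u = b^(k) if (S_k) is consistent, and of L box u = F(b_up^(k)(Delta_k))
# otherwise. Reuses the helpers and the data L, bs, xs, ys of the sketches above.

def method_1(L, bs):
    W = []
    for bk in bs:
        dk = chebyshev_distance(L, bk)            # Delta(L, b^(k)); zero iff (S_k) is consistent
        _, eta_k = greatest_cheb_approx(L, bk, dk)
        W.append(eta_k)                           # k-th row of the approximate weight matrix
    return W

W = method_1(L, bs)
E = max(max(abs(yk - ok) for yk, ok in zip(y, forward(W, x)))
        for x, y in zip(xs, ys))
print(E)    # approx. 0.3, i.e. mu up to floating-point rounding, as (Corollary 5) predicts
```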
In what follows, we illustrate our constructions.

### Examples

The following examples illustrate the learning paradigm. In the first example we have \(\mu>0\), while in the second example, which was introduced by Pedrycz in [27], we have \(\mu=0\). **Example 8**.: _Let us consider the following training data (Table 2):_ \begin{table} \begin{tabular}{c c} \(x^{(1)}=(0.7,0.4,0.4)^{t}\) & \(y^{(1)}=(0.7,0.1,0.3)^{t}\) \\ \(x^{(2)}=(1.0,0.2,0.5)^{t}\) & \(y^{(2)}=(1.0,0.7,0.0)^{t}\) \\ \end{tabular} \end{table} Table 2: Training data of the example. We have \(N=2,m=3\) and \(n=3\). _We have \(L=\begin{bmatrix}0.7&0.4&0.4\\ 1.0&0.2&0.5\end{bmatrix}\), \(b^{(1)}=\begin{bmatrix}0.7\\ 1.0\end{bmatrix}\), \(b^{(2)}=\begin{bmatrix}0.1\\ 0.7\end{bmatrix}\) and \(b^{(3)}=\begin{bmatrix}0.3\\ 0.0\end{bmatrix}\). We form three systems \((S_{1}),(S_{2})\) and \((S_{3})\):_ \[(S_{1}):L\Box_{\min}^{\max}u_{1} =b^{(1)},\] \[(S_{2}):L\Box_{\min}^{\max}u_{2} =b^{(2)},\] \[(S_{3}):L\Box_{\min}^{\max}u_{3} =b^{(3)}.\] \(\bullet\) _The system \((S_{1})\) is consistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(1)})=0\). It has \(\begin{bmatrix}1\\ 1\\ 1\end{bmatrix}\) as greatest solution, and it has a unique minimal solution \(\begin{bmatrix}1\\ 0\\ 0\end{bmatrix}\) computed using the method of [22]._ \(\bullet\) _The system \((S_{2})\) is inconsistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(2)})=0.3\)._
_We get \(\eta=\begin{bmatrix}0.4\\ 1\\ 1\end{bmatrix}\) and the greatest Chebyshev approximation of \(b^{(2)}\) is: \(\begin{bmatrix}0.4\\ 0.5\end{bmatrix}\) because \(L\square_{\min}^{\max}\eta=\begin{bmatrix}0.4\\ 0.5\end{bmatrix}\). The vectors \(\begin{bmatrix}0.4\\ 0.0\\ 0.0\end{bmatrix}\) and \(\begin{bmatrix}0.0\\ 0.0\\ 0.4\end{bmatrix}\) are solutions of the system of inequalities \(\underline{b^{(2)}}(\Delta(L,b^{(2)}))\leq L\square_{\min}^{\max}x\) and lower than \(\eta\). We have \(L\square_{\min}^{\max}\begin{bmatrix}0.4\\ 0.0\\ 0.0\end{bmatrix}=L\square_{\min}^{\max}\begin{bmatrix}0.0\\ 0.0\\ 0.4\end{bmatrix}=\begin{bmatrix}0.4\\ 0.4\end{bmatrix}\), therefore, from (Corollary 2), we have a unique minimal Chebyshev approximation of \(b^{(2)}\) which is \(\begin{bmatrix}0.4\\ 0.4\end{bmatrix}\). We use the greatest Chebyshev approximation. The system \((S_{2}^{\prime}):\begin{bmatrix}0.4\\ 0.5\end{bmatrix}=\begin{bmatrix}0.7&0.4&0.4\\ 1.0&0.2&0.5\end{bmatrix}\square_{\min}^{\max}u_{2}^{\prime}\) is consistent and it has \(\begin{bmatrix}0.4\\ 1\\ 1\end{bmatrix}\) as greatest solution and one unique minimal solution \(\begin{bmatrix}0\\ 0\\ 0.5\end{bmatrix}\)._ \(\bullet\) _The system \((S_{3})\) is inconsistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(3)})=0.15\). We use the greatest Chebyshev approximation of \(b^{(3)}\): \(\begin{bmatrix}0.15\\ 0.15\end{bmatrix}\). The system \((S_{3}^{\prime}):\begin{bmatrix}0.15\\ 0.15\end{bmatrix}=\begin{bmatrix}0.7&0.4&0.4\\ 1.0&0.2&0.5\end{bmatrix}\square_{\min}^{\max}u_{3}^{\prime}\) is consistent and it has \(\begin{bmatrix}0.15\\ 0.15\\ 0.15\end{bmatrix}\) as greatest solution and three minimal solutions \(\begin{bmatrix}0.15\\ 0\\ 0\end{bmatrix}\), \(\begin{bmatrix}0\\ 0.15\\ 0\end{bmatrix}\) and \(\begin{bmatrix}0\\ 0\\ 0.15\end{bmatrix}\)._ _As we have \(\Delta(L,b^{(1)})=0\), \(\Delta(L,b^{(2)})=0.3\) and \(\Delta(L,b^{(3)})=0.15\), we have \(\mu=0.3\)._ _From the solutions of \((S_{1})\), \((S_{2}^{\prime})\) and \((S_{3}^{\prime})\), we can construct an approximate weight matrix \(W\) row by row. For instance, \(W=\begin{bmatrix}1&0&0.2\\ 0.2&1&0.5\\ 0.15&0.15&0.0\end{bmatrix}\) where \(\begin{bmatrix}1\\ 0\\ 0.2\end{bmatrix}\) is a solution of \((S_{1})\), \(\begin{bmatrix}0.2\\ 1.0\\ 0.5\end{bmatrix}\) is a solution of \((S_{2}^{\prime})\) and \(\begin{bmatrix}0.15\\ 0.15\\ 0\end{bmatrix}\) is a solution of \((S_{3}^{\prime})\). From the training data, we observe that:_ \[W\square_{\min}^{\max}x^{(1)} =\begin{bmatrix}0.7\\ 0.4\\ 0.15\end{bmatrix}\text{ and }\|\begin{bmatrix}0.7\\ 0.4\\ 0.15\end{bmatrix}-y^{(1)}\|=0.3=\mu,\] \[W\square_{\min}^{\max}x^{(2)} =\begin{bmatrix}1\\ 0.5\\ 0.15\end{bmatrix}\text{ and }\|\begin{bmatrix}1\\ 0.5\\ 0.15\end{bmatrix}-y^{(2)}\|=0.2<\mu.\] **Example 9**.: _In [27], Pedrycz learns a weight matrix according to the following training data (Table 3):_ \begin{table} \begin{tabular}{c c} \(x^{(1)}=(0.3,1.0,0.5,0.2)^{t}\) & \(y^{(1)}=(0.7,0.5,0.6)^{t}\) \\ \(x^{(2)}=(0.1,1.0,1.0,0.5)^{t}\) & \(y^{(2)}=(0.7,1.0,0.6)^{t}\) \\ \(x^{(3)}=(0.5,0.7,0.2,1.0)^{t}\) & \(y^{(3)}=(0.7,0.7,0.6)^{t}\) \\ \(x^{(4)}=(1.0,0.7,0.5,0.3)^{t}\) & \(y^{(4)}=(1.0,0.5,0.6)^{t}\) \\ \end{tabular} \end{table} Table 3: Training data used in [27]. We have \(N=4,m=4\) and \(n=3\). _We put \(L=\begin{bmatrix}0.3&1.0&0.5&0.2\\ 0.1&1.0&1.0&0.5\\ 0.5&0.7&0.2&1.0\\ 1.0&0.7&0.5&0.3\end{bmatrix}\), \(b^{(1)}=\begin{bmatrix}0.7\\ 0.7\\ 0.7\\ 1.0\end{bmatrix}\), \(b^{(2)}=\begin{bmatrix}0.5\\ 1.0\\ 0.7\\ 0.5\end{bmatrix}\) and \(b^{(3)}=\begin{bmatrix}0.6\\ 0.6\\ 0.6\\ 0.6\end{bmatrix}\). We form three systems \((S_{1}):L\square_{\min}^{\max}u_{1}=b^{(1)}\), \((S_{2}):L\square_{\min}^{\max}u_{2}=b^{(2)}\) and \((S_{3}):L\square_{\min}^{\max}u_{3}=b^{(3)}\)._ \(\bullet\) _The system \((S_{1})\) is consistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(1)})=0\). Its greatest solution is \(\begin{bmatrix}1.0\\ 0.7\\ 0.7\\ 0.7\end{bmatrix}\) and it has a unique minimal solution \(\begin{bmatrix}1.0\\ 0.7\\ 0.0\\ 0.0\end{bmatrix}\)._ \(\bullet\) _The system \((S_{2})\) is consistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(2)})=0\). Its greatest solution is \(\begin{bmatrix}0.5\\ 0.5\\ 1.0\\ 0.7\end{bmatrix}\) and it has a unique minimal solution \(\begin{bmatrix}0.0\\ 0.0\\ 1.0\\ 0.7\end{bmatrix}\)._ \(\bullet\) _The system \((S_{3})\) is consistent because the Chebyshev distance associated to its second member is \(\Delta(L,b^{(3)})=0\). Its greatest solution is \(\begin{bmatrix}0.6\\ 0.6\\ 0.6\\ 0.6\end{bmatrix}\) and it has a unique minimal solution \(\begin{bmatrix}0.0\\ 0.6\\ 0.0\\ 0.0\end{bmatrix}\)._ _As \(\Delta(L,b^{(1)})=\Delta(L,b^{(2)})=\Delta(L,b^{(3)})=0.0\), we have: \(\mu=0.0\)._
Therefore, in the set of matrices \(\mathcal{A}\), see (53), each of the weight matrices \(W\) has three rows, each bounded below by the minimal solution and above by the greatest solution of the corresponding system among \((S_{1}),(S_{2})\) and \((S_{3})\):_ \[\begin{bmatrix}1.0\\ 0.7\\ 0.0\\ 0.0\end{bmatrix}\leq u_{1}\leq\begin{bmatrix}1.0\\ 0.7\\ 0.7\\ 0.7\end{bmatrix},\ \begin{bmatrix}0.0\\ 0.0\\ 1.0\\ 0.7\end{bmatrix}\leq u_{2}\leq\begin{bmatrix}0.5\\ 0.5\\ 1.0\\ 0.7\end{bmatrix}\text{ and }\begin{bmatrix}0.0\\ 0.6\\ 0.0\\ 0.0\end{bmatrix}\leq u_{3}\leq\begin{bmatrix}0.6\\ 0.6\\ 0.6\\ 0.6\end{bmatrix}.\] _Let us consider for example the weight matrix \(W=\begin{bmatrix}1.0&0.7&0.3&0.3\\ 0.4&0.4&1.0&0.7\\ 0.1&0.6&0.2&0.2\end{bmatrix}\in\mathcal{A}\). One can check from the training data that:_ \[W\square_{\min}^{\max}x^{(1)} =y^{(1)},\] \[W\square_{\min}^{\max}x^{(2)} =y^{(2)},\] \[W\square_{\min}^{\max}x^{(3)} =y^{(3)},\] \[W\square_{\min}^{\max}x^{(4)} =y^{(4)}.\]

## 7 Application

In what follows, we study an application of our results: how to approximately learn the rule parameters of a possibilistic rule-based system. Recently, Dubois and Prade have emphasized the development of possibilistic learning methods that would be consistent with if-then rule-based reasoning [12]. For this purpose, the author of [1] introduced a system of \(\min-\max\) fuzzy relational equations for learning the rule parameters of a possibilistic rule-based system according to a training datum: \[(\Sigma):Y=\Gamma\square_{\max}^{\min}X,\] where \(\square_{\max}^{\min}\) is the matrix product which takes \(\max\) as the product and \(\min\) as the addition. In the equation system \((\Sigma)\), the second member \(Y\) describes an output possibility distribution, the matrix \(\Gamma\) contains the possibility degrees of the rule premises and \(X\) is an unknown vector containing the rule parameters. If the system \((\Sigma)\) is inconsistent, e.g., due to poor training data, an approximate solution is desirable. The general method that we introduced for obtaining approximate solutions of a system of \(\max-\min\) fuzzy relational equations can be applied to the case of a system of \(\min-\max\) fuzzy relational equations such as \((\Sigma)\). In what follows, we show how to switch from a system of \(\min-\max\) fuzzy relational equations such as \((\Sigma)\) to a system of \(\max-\min\) fuzzy relational equations and vice versa. We introduce analogous tools for a system of \(\min-\max\) fuzzy relational equations to those already introduced for a system of \(\max-\min\) fuzzy relational equations and we show their correspondences in (Table 4). From these results, we propose a method for finding approximate solutions of the rule parameters of a possibilistic rule-based system when we have multiple training data.

### Switching from a system of \(\max-\min\) fuzzy relational equations to a system of \(\min-\max\) fuzzy relational equations (and vice versa)

In this subsection, we use the following notation: **Notation 7**.: _To any matrix \(A=[a_{ij}]\), we associate the matrix \(A^{\circ}=[1-a_{ij}]\) and we have \((A^{\circ})^{\circ}=A\)._ Let \(A\) and \(B\) be matrices of respective sizes \((n,m)\) and \((m,p)\); the transformation \(A\mapsto A^{\circ}\) switches the two matrix products \(\square^{\min}_{\max}\) and \(\square^{\max}_{\min}\) in the following sense: \[(A\square^{\min}_{\max}B)^{\circ}=A^{\circ}\square^{\max}_{\min}B^{\circ}\text{ and }(A\square^{\max}_{\min}B)^{\circ}=A^{\circ}\square^{\min}_{\max}B^{\circ}. \tag{64}\]
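A quick numerical check of (64), given as a sketch of ours rather than as part of the original text:

```python
# Numerical check of (64): complementation M -> M_deg switches the two products.

def comp(M):                   # M_deg = [1 - m_ij]
    return [[1.0 - v for v in row] for row in M]

def max_min(A, B):             # (A box_min^max B)_ij = max_k min(a_ik, b_kj)
    return [[max(min(a, b) for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def min_max(A, B):             # (A box_max^min B)_ij = min_k max(a_ik, b_kj)
    return [[min(max(a, b) for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[0.2, 0.9], [0.6, 0.4]]
B = [[0.7, 0.1], [0.3, 0.8]]
print(comp(min_max(A, B)) == max_min(comp(A), comp(B)))   # True
print(comp(max_min(A, B)) == min_max(comp(A), comp(B)))   # True
```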
This transformation establishes that the study of systems of \(\max-\min\) fuzzy relational equations is equivalent to the study of systems of \(\min-\max\) fuzzy relational equations in a precise sense that we will develop in what follows and summarize in (Table 4). Let us remark that the transformation \(t\mapsto 1-t\) switches the Gödel product, see (3), to the \(\epsilon\)-product defined by: \[x\epsilon y=\left\{\begin{array}{ll}y&\text{if}\quad x<y\\ 0&\text{if}\quad x\geq y\end{array}\right.\text{ in }[0,1].\] Therefore, we deduce that the matrix product \(\square^{\min}_{\rightarrow_{G}}\) is switched to the matrix product \(\square^{\max}_{\epsilon}\) where we take the \(\epsilon\)-product as product and \(\max\) as addition. Let: * \(A\square^{\max}_{\min}x=b\) be a system of \(\max-\min\) fuzzy relational equations, * \(G\square^{\min}_{\max}x=d\) be a system of \(\min-\max\) fuzzy relational equations. In (Table 4), for a system \(G\square^{\min}_{\max}x=d\), we introduce analogous tools (second column) to those already introduced for a system \(A\square^{\max}_{\min}x=b\) (first column). The last column shows how to relate the tools of the two systems iff \[G=A^{\circ}\text{ and }d=b^{\circ}. \tag{65}\] The relations established for the first five rows of (Table 4) are justified by the general switch in (64). Assuming the relation in the sixth row is established, the remaining rows are also justified by the general switch in (64). In the following, we define the Chebyshev distance associated to the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\), denoted \(\nabla(G,d)\), and we prove the last four relations. **Definition 8**.: _The Chebyshev distance associated to the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\) is:_ \[\nabla(G,d)=\inf_{c\in\mathcal{T}}\|d-c\|.\] We remark that if \(d=b^{\circ}\), then for all \(c\in[0,1]^{n\times 1}\) we have \(\|d-c\|=\|b-c^{\circ}\|\). From this property, we deduce: **Proposition 17**.: _If \(G=A^{\circ}\) and \(d=b^{\circ}\), then the Chebyshev distance associated to the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\) is equal to the Chebyshev distance associated to the second member \(b\) of the system \(A\square_{\min}^{\max}x=b\):_ \[\nabla(G,d)=\Delta(A,b). \tag{66}\] Proof.: \[\nabla(G,d) =\inf_{c\in\mathcal{T}}\|d-c\|\] \[=\inf_{c\in\mathcal{T}}\|b-c^{\circ}\|\] \[=\inf_{c^{\prime}\in\mathcal{C}}\|b-c^{\prime}\|\qquad\text{(because }\mathcal{C}=\mathcal{T}^{\circ}\text{)}\] \[=\Delta(A,b)\qquad\qquad\text{(see (Definition 1))}\] The equality \(\nabla(G,d)=\Delta(A,b)\) allows us to establish analogous properties for \(\nabla(G,d)\): **Corollary 6**.: \(\nabla(G,d)=\min_{c\in\mathcal{T}}\|d-c\|\)_._ In order to give an explicit formula for \(\nabla(G,d)\), we will use the following lemma: **Lemma 7**.: _If \(G=A^{\circ}\) and \(d=b^{\circ}\), then we have:_ \[\forall c\in[0,1]^{n\times 1},\forall\delta\in[0,1],\quad U(\underline{c}(\delta))\leq\overline{c}(\delta)\iff\underline{c}^{\prime}(\delta)\leq F(\overline{c^{\prime}}(\delta)), \tag{67}\] _where \(c^{\prime}=c^{\circ}\)._ Proof.: This is a consequence of the relation \(\forall c\in[0,1]^{n\times 1},U(c)=F(c^{\circ})^{\circ}\).
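The switch (64) lends itself to a direct numerical check. Below is a minimal sketch, assuming only the definitions of the two matrix products recalled above; numpy and the random test matrices are our own conveniences, not part of the original development.

```python
import numpy as np

def max_min(A, B):
    # (A max-min B)_ij = max_k min(a_ik, b_kj)
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def min_max(A, B):
    # (A min-max B)_ij = min_k max(a_ik, b_kj)
    return np.min(np.maximum(A[:, :, None], B[None, :, :]), axis=1)

rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((4, 2))

# (A min-max B)° = A° max-min B°, cf. (64)
assert np.allclose(1.0 - min_max(A, B), max_min(1.0 - A, 1.0 - B))
# (A max-min B)° = A° min-max B°, cf. (64)
assert np.allclose(1.0 - max_min(A, B), min_max(1.0 - A, 1.0 - B))
```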
For a system of \(\min-\max\) fuzzy relational equations, (Theorem 1 of [7]) becomes: **Corollary 7**.: \(\nabla(G,d)=\min\{\delta\in[0,1]\mid U(\underline{d}(\delta))\leq\overline{d}(\delta)\}\)_._ For a system of \(\min-\max\) fuzzy relational equations, (Theorem 1) of this article becomes: **Corollary 8**.: _Let \(G=[g_{ij}]_{1\leq i\leq n,1\leq j\leq m}\in[0,1]^{n\times m}\) be a matrix and \(d=[d_{i}]_{1\leq i\leq n}\) be a column vector. The Chebyshev distance associated to the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\) is:_ \[\nabla=\nabla(G,d)=\max_{1\leq i\leq n}\ \nabla_{i} \tag{68}\] _where for \(i=1,2,\ldots,n\):_ \[\nabla_{i}=\min_{1\leq j\leq m}\ \max[(g_{ij}-d_{i})^{+},\max_{1\leq k\leq n}\ \sigma_{\epsilon}\left(d_{i},g_{kj},d_{k}\right)] \tag{69}\] _and_ \[\sigma_{\epsilon}\left(u,v,w\right)=\min(\frac{(w-u)^{+}}{2},(w-v)^{+}). \tag{70}\] Proof.: If we set \(A=G^{\circ}\) and \(b=d^{\circ}\), we deduce (68), (69) and (70) from the equality \(\Delta(A,b)=\nabla(G,d)\), see (66), (Theorem 1) and the relation \(\sigma_{G}(x,y,z)=\sigma_{\epsilon}(u,v,w)\) where \(x=u^{\circ}\), \(y=v^{\circ}\) and \(z=w^{\circ}\). We define the set of Chebyshev approximations of the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\): \[\mathcal{T}_{d}=\{c\in\mathcal{T}\mid\|d-c\|=\nabla(G,d)\}. \tag{71}\] If \(G=A^{\circ}\) and \(d=b^{\circ}\), then the equality \(\mathcal{T}_{d}=\mathcal{C}_{b}^{\circ}\) follows from (66). From \(U(\underline{d}(\nabla))=F(\overline{b}(\Delta))^{\circ}\), see (Table 4), we deduce: **Corollary 9**.: _The lowest Chebyshev approximation of the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\) is \(U(\underline{d}(\nabla))\)._ The method for obtaining maximal Chebyshev approximations of the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\) is analogous to the practical method presented in (Subsection 5.3). We use the following notation: **Notation 8**.: _Let \(\{w^{(1)},w^{(2)},\ldots,w^{(h)}\}\) be the set of maximal solutions of the system of inequalities \(G\square_{\max}^{\min}x\leq\overline{d}(\nabla)\) such that \(\forall i\in\{1,2,\ldots,h\},w^{(i)}\geq\nu=G^{t}\square_{\epsilon}^{\max}U(\underline{d}(\nabla))\)._ We have: **Corollary 10**.: _We put:_ \[\widetilde{\mathcal{T}}=\{\psi(w^{(1)}),\psi(w^{(2)}),\ldots,\psi(w^{(h)})\}\] _and_ \[(\widetilde{\mathcal{T}})_{\max}=\{c\in\widetilde{\mathcal{T}}\mid c\text{ is maximal in }\widetilde{\mathcal{T}}\}.\] _Then, we have:_ \[\widetilde{\mathcal{T}}\subseteq\mathcal{T}_{d}\text{ and }\mathcal{T}_{d,\max}=(\widetilde{\mathcal{T}})_{\max},\] _where \(\mathcal{T}_{d,\max}\) is the set formed by the maximal Chebyshev approximations of the second member \(d\) of the system \(G\square_{\max}^{\min}x=d\)._ Proof.: Let \(A=G^{\circ}\) and \(b=d^{\circ}\). From the equality \(\nabla(G,d)=\Delta(A,b)\) and the general switch in (64), we have for any \(x\in[0,1]^{m\times 1}\): \[G\square_{\mathrm{max}}^{\mathrm{min}}x\leq\overline{d}(\nabla)\Longleftrightarrow\underline{b}(\Delta)\leq A\square_{\mathrm{min}}^{\mathrm{max}}x^{\circ},\] \[x\geq\nu\Longleftrightarrow x^{\circ}\leq\eta=A^{t}\square_{\to G}^{\mathrm{min}}F(\overline{b}(\Delta)).\] From these two equivalences, we deduce: \[\{w^{(1)},w^{(2)},\ldots,w^{(h)}\}=\{{v^{(1)}}^{\circ},{v^{(2)}}^{\circ},\ldots,{v^{(h)}}^{\circ}\}\] where the set \(\{v^{(1)},v^{(2)},\ldots,v^{(h)}\}\) is defined in (Notation 6) for the system \(A\square_{\mathrm{min}}^{\mathrm{max}}x=b\).
Using the switch (64), this last equality implies the claims of the Corollary. **Corollary 11**.: _The set \(\mathcal{T}_{d,\mathrm{max}}\) is non-empty and finite._ Proof.: As \(\widetilde{\mathcal{T}}\) is a finite non-empty ordered set, the set \((\widetilde{\mathcal{T}})_{\mathrm{max}}=\mathcal{T}_{d,\mathrm{max}}\) is non-empty and finite. We study the approximate solutions set \(\Upsilon_{d}\) of the system \(G\square_{\mathrm{max}}^{\mathrm{min}}x=d\): **Definition 9**.: _The approximate solutions set of the system \(G\square_{\mathrm{max}}^{\mathrm{min}}x=d\) is:_ \[\Upsilon_{d}=\psi^{-1}(\mathcal{T}_{d})=\{x\in[0,1]^{\mathrm{m}\times 1}\mid\psi(x)\in\mathcal{T}_{d}\}.\] If \(A=G^{\circ}\) and \(b=d^{\circ}\), we have \(\Upsilon_{d}=\Lambda_{b}^{\circ}\). From this, we deduce a particular element of \(\Upsilon_{d}\): **Proposition 18**.: _The lowest approximate solution of the system \(G\square_{\mathrm{max}}^{\mathrm{min}}x=d\) is \(\nu=G^{t}\square_{\epsilon}^{\mathrm{max}}U(\underline{d}(\nabla))\)._ Proof.: This follows from the fact that, if \(A=G^{\circ}\) and \(b=d^{\circ}\), we have \(\nu=\eta^{\circ}\). We have a set of maximal approximate solutions \(\Upsilon_{d,\mathrm{max}}\) of the system \(G\square_{\mathrm{max}}^{\mathrm{min}}x=d\) that satisfies: \[\Upsilon_{d,\mathrm{max}}\subseteq\Upsilon_{d}\text{ and }\mathcal{T}_{d,\mathrm{max}}=\{\psi(x)\mid x\in\Upsilon_{d,\mathrm{max}}\}, \tag{72}\] which is defined by: **Definition 10**.: \[\Upsilon_{d,\mathrm{max}}=\{x\in\{w^{(1)},w^{(2)},\ldots,w^{(h)}\}\mid\psi(x)\in\mathcal{T}_{d,\mathrm{max}}\},\quad\text{see (Notation 8)}.\] We have: **Proposition 19**.: _With the above definition of \(\Upsilon_{d,\mathrm{max}}\), we have \(\Upsilon_{d,\mathrm{max}}\subseteq\Upsilon_{d}\) and \(\mathcal{T}_{d,\mathrm{max}}=\{\psi(x)\mid x\in\Upsilon_{d,\mathrm{max}}\}\). Therefore, the set \(\Upsilon_{d,\mathrm{max}}\) is non-empty and finite._ Proof.: If \(A=G^{\circ}\) and \(b=d^{\circ}\), we have: \(\psi(x)=\theta(x^{\circ})^{\circ}\), \(\Upsilon_{d}=\Lambda_{b}^{\circ}\) and \(\Upsilon_{d,\mathrm{max}}=\Lambda_{b,\mathrm{min}}^{\circ}\). From these three equalities, we deduce immediately \(\Upsilon_{d,\mathrm{max}}\subseteq\Upsilon_{d}\) and \(\mathcal{T}_{d,\mathrm{max}}=\{\psi(x)\mid x\in\Upsilon_{d,\mathrm{max}}\}\). As we know that the set \(\mathcal{T}_{d,\mathrm{max}}\) is non-empty and finite (Corollary 11), we deduce from the equality \(\mathcal{T}_{d,\mathrm{max}}=\{\psi(x)\mid x\in\Upsilon_{d,\mathrm{max}}\}\) that the set \(\Upsilon_{d,\mathrm{max}}\) is also non-empty and finite. The structure of the set \(\mathcal{T}_{d}\) is described by the following result: **Corollary 12**.: _For all \(c\in[0,1]^{n\times 1}\), we have:_ \[c\text{ is a Chebyshev approximation of }d\text{ i.e., }c\in\mathcal{T}_{d}\iff U(c)=c\text{ and }\exists\,c^{\prime}\in\mathcal{T}_{d,\mathrm{max}}\text{ s.t. }U(\underline{d}(\nabla))\leq c\leq c^{\prime}. \tag{73}\] Proof.: This follows from the fact that, if \(A=G^{\circ}\) and \(b=d^{\circ}\), we have \(\mathcal{T}_{d}=\mathcal{C}_{b}^{\circ}\), \(\mathcal{T}_{d,\mathrm{max}}=\mathcal{C}_{b,\mathrm{min}}^{\circ}\) and \(U(\underline{d}(\nabla))=F(\overline{b}(\Delta))^{\circ}\); by applying (Theorem 3), we get the result. In the following, we illustrate the switch from the system of \(\mathrm{min}-\mathrm{max}\) fuzzy relational equations \((\Sigma)\) of [1] to its associated system of \(\mathrm{max}-\mathrm{min}\) fuzzy relational equations.
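Before turning to the examples, we note that the explicit formula of (Corollary 8) is straightforward to transcribe into code. The sketch below assumes the notation of (68)-(70); the toy system at the end is ours and can be checked by hand: the second row of \(G\) forces the corresponding output component to \(1\) while the target asks for \(0.7\), so the distance is \(0.3\).

```python
import numpy as np

def sigma_eps(u, v, w):
    # sigma_epsilon(u, v, w) = min((w - u)^+ / 2, (w - v)^+), cf. (70)
    return min(max(w - u, 0.0) / 2.0, max(w - v, 0.0))

def nabla(G, d):
    # Chebyshev distance associated to the second member d of the min-max
    # system, transcribed from (68)-(69); it is 0 iff the system is consistent.
    n, m = G.shape
    result = 0.0
    for i in range(n):
        nabla_i = min(
            max(max(G[i, j] - d[i], 0.0),
                max(sigma_eps(d[i], G[k, j], d[k]) for k in range(n)))
            for j in range(m)
        )
        result = max(result, nabla_i)
    return result

G = np.array([[0.1, 1.0],
              [1.0, 1.0]])
d = np.array([0.3, 0.7])
assert abs(nabla(G, d) - 0.3) < 1e-12
```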
**Example 10**.: _Let us reuse the example in [1]._ \[(\Sigma):\quad Y=\Gamma\square_{\max}^{\min}X\] \[\begin{bmatrix}0.3\\ 1\\ 0.3\\ 0.8\\ 0.3\\ 0.7\\ 0.3\\ 0.7\end{bmatrix}=\begin{bmatrix}0.1&1&1&1&1&1\\ 1&1&1&1&1&1\\ 0.1&1&1&0.8&1&1\\ 1&1&1&0.8&1&1\\ 0.1&1&1&1&1&0.3\\ 1&1&1&1&1&0.3\\ 0.1&1&1&0.8&1&0.3\\ 1&1&1&0.8&1&0.3\end{bmatrix}\square_{\max}^{\min}\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\\ s_{3}\\ r_{3}\end{bmatrix},\] _where \(s_{1},r_{1},s_{2},r_{2},s_{3}\), and \(r_{3}\) are unknown rule parameters. The system \((\Sigma)\) is consistent. We have:_ \[X=\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\\ s_{3}\\ r_{3}\end{bmatrix}\text{ is a solution iff }\begin{bmatrix}0.3\\ 0\\ 0\\ 0\\ 0\\ 0.7\end{bmatrix}\leq\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\\ s_{3}\\ r_{3}\end{bmatrix}\leq\begin{bmatrix}0.3\\ 1\\ 1\\ 0.8\\ 1\\ 0.7\end{bmatrix}.\] _Let \(A=\Gamma^{\circ}\), \(x=X^{\circ}\) and \(b=Y^{\circ}\). We have:_ \[b=A\square_{\min}^{\max}x\] \[\begin{bmatrix}0.7\\ 0\\ 0.7\\ 0.2\\ 0.7\\ 0.3\\ 0.7\\ 0.3\end{bmatrix}=\begin{bmatrix}0.9&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0.9&0&0&0.2&0&0\\ 0&0&0&0.2&0&0\\ 0.9&0&0&0&0&0.7\\ 0&0&0&0&0&0.7\\ 0.9&0&0&0.2&0&0.7\\ 0&0&0&0.2&0&0.7\end{bmatrix}\square_{\min}^{\max}\begin{bmatrix}1-s_{1}\\ 1-r_{1}\\ 1-s_{2}\\ 1-r_{2}\\ 1-s_{3}\\ 1-r_{3}\end{bmatrix}.\] _We have:_ \[x=\begin{bmatrix}1-s_{1}\\ 1-r_{1}\\ 1-s_{2}\\ 1-r_{2}\\ 1-s_{3}\\ 1-r_{3}\end{bmatrix}\text{ is a solution iff }\begin{bmatrix}0.7\\ 0\\ 0\\ 0.2\\ 0\\ 0.3\end{bmatrix}\leq\begin{bmatrix}1-s_{1}\\ 1-r_{1}\\ 1-s_{2}\\ 1-r_{2}\\ 1-s_{3}\\ 1-r_{3}\end{bmatrix}\leq\begin{bmatrix}0.7\\ 1\\ 1\\ 1\\ 1\\ 0.3\end{bmatrix}.\] Let \((\Sigma)\) be an inconsistent system. Using our results, we can obtain approximate solutions of the system \((\Sigma)\), which are solutions of the consistent systems defined by the matrix of \((\Sigma)\) and a Chebyshev approximation of the second member of \((\Sigma)\). For obtaining maximal Chebyshev approximations of the second member of the system \((\Sigma)\), we have to use (Corollary 10), which requires solving a particular system of \(\min-\max\) fuzzy relational inequalities. Solving such a system can be done by adapting the results of [22] using an approach similar to the one we used to establish the correspondences (Table 4). We illustrate how to obtain approximate solutions of the system \((\Sigma)\) when it is inconsistent. **Example 11**.: _(continued) Let us reuse the matrix \(\Gamma\) of the previous example and a new second member \(Y=\begin{bmatrix}0.3\\ 1\\ 0.3\\ 0.8\\ 0.7\\ 0.7\\ 0.3\\ 0.7\end{bmatrix}.\)_ _In this case, the system \((\Sigma)\) is inconsistent and the Chebyshev distance associated to the second member \(Y\) of \((\Sigma)\) is \(\nabla=0.2\). The lowest Chebyshev approximation of \(Y\) is denoted \(\tilde{Y}\) and from (Corollary 10) we find that there is a unique maximal Chebyshev approximation of \(Y\), which is denoted \(\hat{Y}\):_ \[\tilde{Y}=\begin{bmatrix}0.5\\ 1\\ 0.5\\ 0.8\\ 0.5\\ 0.5\\ 0.5\\ 0.5\end{bmatrix}\quad\text{and}\quad\hat{Y}=\begin{bmatrix}0.5\\ 1\\ 0.5\\ 1\\ 0.5\\ 0.9\\ 0.5\\ 0.9\end{bmatrix}. \tag{74}\]
_Some approximate solutions of the system \((\Sigma):Y=\Gamma\square_{\max}^{\min}X\) are the solutions of the system \(\tilde{Y}=\Gamma\square_{\max}^{\min}X\), i.e., \(\begin{bmatrix}0.5\\ 0\\ 0\\ 0\\ 0\\ 0.5\end{bmatrix}\leq\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\\ s_{3}\\ r_{3}\end{bmatrix}\leq\begin{bmatrix}0.5\\ 1\\ 1\\ 0.8\\ 1\\ 0.5\end{bmatrix}\), and the solutions of the system \(\hat{Y}=\Gamma\square_{\max}^{\min}X\), i.e., \(\begin{bmatrix}0.5\\ 0\\ 0\\ 1\\ 0\\ 0.9\end{bmatrix}\leq\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\\ s_{3}\\ r_{3}\end{bmatrix}\leq\begin{bmatrix}0.5\\ 1\\ 1\\ 1\\ 1\\ 0.9\end{bmatrix}\). One can check that \(\begin{bmatrix}0.5\\ 1\\ 1\\ 1\\ 1\\ 0.9\end{bmatrix}\) belongs to \(\Upsilon_{Y,\max}\), i.e., it is a maximal approximate solution._

### Finding approximate solutions of the rule parameters according to multiple training data

The equation system \((\Sigma)\) has been introduced for learning the rule parameters according to a training datum [1]. Our results let us tackle the problem of determining values of the rule parameters when we have multiple training data as follows. Let us consider that we have \(N\) equation systems \((\Sigma_{1}):Y_{1}=\Gamma_{1}\square_{\max}^{\min}X,(\Sigma_{2}):Y_{2}=\Gamma_{2}\square_{\max}^{\min}X,\ldots,(\Sigma_{N}):Y_{N}=\Gamma_{N}\square_{\max}^{\min}X\), where each of them is formed from a training datum using the procedure introduced in [1]. From the matrices \(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{N}\) and the second members \(Y_{1},Y_{2},\ldots,Y_{N}\) of the equation systems, we form a new matrix and a new column vector by block matrix construction: \[\mathbf{\Gamma}=\begin{bmatrix}\Gamma_{1}\\ \Gamma_{2}\\ \vdots\\ \Gamma_{N}\end{bmatrix}\text{ and }\mathbf{Y}=\begin{bmatrix}Y_{1}\\ Y_{2}\\ \vdots\\ Y_{N}\end{bmatrix} \tag{75}\] We introduce the following equation system, which in fact stacks the equation systems \((\Sigma_{1}),(\Sigma_{2}),\ldots,(\Sigma_{N})\) into one: \[(\mathbf{\Sigma}):\mathbf{Y}=\mathbf{\Gamma}\square_{\max}^{\min}X. \tag{76}\] By solving \((\mathbf{\Sigma})\), we obtain solutions for the rule parameters that take into account all the training data. If the system \((\mathbf{\Sigma})\) is inconsistent, \(\nabla(\mathbf{\Gamma},\mathbf{Y})\) is the Chebyshev distance associated to its second member \(\mathbf{Y}\) and, using our results, we can obtain approximate values of the rule parameters which are approximate solutions of the system \((\mathbf{\Sigma})\). We illustrate this paradigm by the following example. **Example 12**.: _We consider two systems, each of them being built from a training datum using the method presented in [1]:_ \[\begin{array}{rll}(\Sigma_{1}):&Y_{1}&=\Gamma_{1}\square_{\max}^{\min}X\\ &\begin{bmatrix}0.3\\ 1\\ 0.3\\ 0.8\end{bmatrix}=\begin{bmatrix}0.4&1&1&1\\ 1&1&1&1\\ 0.4&1&1&0.8\\ 1&1&1&0.8\end{bmatrix}\square_{\max}^{\min}\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\end{bmatrix}\\ \end{array}\] _and_ \[(\Sigma_{2}):\;\;Y_{2}=\Gamma_{2}\square_{\max}^{\min}X\] \[\begin{bmatrix}1\\ 0.8\\ 0.3\\ 0.3\end{bmatrix}=\begin{bmatrix}1&1&1&1\\ 1&0.7&1&1\\ 1&1&1&0.1\\ 1&0.7&1&0.1\end{bmatrix}\square_{\max}^{\min}\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\end{bmatrix}.\] _Recall that \(s_{1},r_{1},s_{2},r_{2}\) are the unknown rule parameters.
We form the system \((\mathbf{\Sigma}):\;\mathbf{Y}=\mathbf{\Gamma}\square_{\max}^{\min}X\):_ \[(\mathbf{\Sigma}):\;\;\mathbf{Y}=\mathbf{\Gamma}\square_{\max}^{\min}X\] \[\begin{bmatrix}0.3\\ 1\\ 0.3\\ 0.8\\ 1\\ 0.8\\ 0.3\\ 0.3\end{bmatrix}=\begin{bmatrix}0.4&1&1&1\\ 1&1&1&1\\ 0.4&1&1&0.8\\ 1&1&1&0.8\\ 1&1&1&1\\ 1&0.7&1&1\\ 1&1&1&0.1\\ 1&0.7&1&0.1\end{bmatrix}\square_{\max}^{\min}\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\end{bmatrix}.\] _The system \((\mathbf{\Sigma})\) is inconsistent because the Chebyshev distance associated to its second member \(\mathbf{Y}\) is \(\nabla=0.1>0\). The lowest Chebyshev approximation of \(\mathbf{Y}\) is denoted \(\tilde{\mathbf{Y}}\) and we find that there is a unique maximal Chebyshev approximation of \(\mathbf{Y}\), denoted \(\hat{\mathbf{Y}}\):_ \[\tilde{\mathbf{Y}}=\begin{bmatrix}0.4\\ 1\\ 0.4\\ 0.8\\ 1\\ 0.7\\ 0.2\\ 0.2\end{bmatrix}\text{ and }\hat{\mathbf{Y}}=\begin{bmatrix}0.4\\ 1\\ 0.4\\ 0.8\\ 1\\ 0.9\\ 0.4\\ 0.4\end{bmatrix}.\] _Some approximate solutions of the system \((\mathbf{\Sigma})\) are the solutions of the system \(\tilde{\mathbf{Y}}=\mathbf{\Gamma}\square_{\max}^{\min}X\), i.e., \(\begin{bmatrix}0\\ 0\\ 0\\ 0.2\end{bmatrix}\leq\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\end{bmatrix}\leq\begin{bmatrix}0.4\\ 0.7\\ 1\\ 0.2\end{bmatrix}\), and the solutions of the system \(\hat{\mathbf{Y}}=\mathbf{\Gamma}\square_{\max}^{\min}X\), i.e., \(\begin{bmatrix}0\\ 0.9\\ 0\\ 0.4\end{bmatrix}\leq\begin{bmatrix}s_{1}\\ r_{1}\\ s_{2}\\ r_{2}\end{bmatrix}\leq\begin{bmatrix}0.4\\ 0.9\\ 1\\ 0.4\end{bmatrix}\)._

## 8 Conclusion

In this article, for an inconsistent system of \(\max-\min\) fuzzy relational equations denoted \((S):A\square_{\min}^{\max}x=b\), we have described the approximate solutions set of the system and the set of Chebyshev approximations of the second member \(b\). The main tool of our study is an explicit analytical formula to compute the Chebyshev distance \(\Delta=\inf_{c\in\mathcal{C}}\|b-c\|\), which is expressed in the \(L_{\infty}\) norm, and where \(\mathcal{C}\) is the set of second members of the consistent systems defined with the same matrix \(A\). The Chebyshev distance is obtained by elementary calculations involving only the components of the matrix \(A\) and those of the second member \(b\). We defined an approximate solution of an inconsistent system \(A\square_{\min}^{\max}x=b\) as a solution of a consistent system \(A\square_{\min}^{\max}x=c\), where \(c\) is a vector such that \(\|b-c\|=\Delta\); \(c\) is called a Chebyshev approximation of \(b\). We first related the approximate solutions set to the set of Chebyshev approximations of \(b\). We gave two sharp characterizations of the approximate solutions set and showed how to get minimal Chebyshev approximations of \(b\) from minimal approximate solutions. As a consequence of our results, we proved that the set of minimal Chebyshev approximations of \(b\) is non-empty and finite. Furthermore, we described the structure of the approximate solutions set and that of the set of Chebyshev approximations of \(b\). We introduced a paradigm for \(\max-\min\) learning of approximate weight matrices relating input and output data from training data, where the learning error is expressed in terms of the \(L_{\infty}\) norm. For this purpose, we canonically associated systems of \(\max-\min\) fuzzy relational equations to the training data. These systems allowed us to compute the minimal value \(\mu\) of the learning error according to the training data.
This minimal value \(\mu\) is expressed in terms of the Chebyshev distances associated to the second members of the systems already introduced. Moreover, we gave a method for constructing approximate weight matrices whose learning error is equal to \(\mu\). By introducing analogous tools for a system of \(\min-\max\) fuzzy relational equations to those already introduced for a system of \(\max-\min\) fuzzy relational equations, and then establishing the correspondences between them, we have shown that the study of the approximate solutions of a system of \(\max-\min\) fuzzy relational equations is equivalent to the study of the approximate solutions of a system of \(\min-\max\) fuzzy relational equations. This allowed us to extend the results of [1]: we gave a method to approximately learn the rule parameters of a possibilistic rule-based system according to multiple training data. As perspectives, we are currently working on the development of analogous tools for systems of \(\max-T\) fuzzy relational equations, where \(T\) is the product t-norm or the t-norm of Łukasiewicz. For these systems, we already have analytical formulas to compute the Chebyshev distance associated to their second member. As applications, for the problem of the \(\max-\min\) invertibility of a fuzzy relation, when a fuzzy matrix \(A\) has no preinverse (resp. postinverse), we know how to compute, using the \(L_{\infty}\) norm, an approximate preinverse (resp. postinverse) for \(A\). We also tackle the development of new applications based on systems of \(\max-T\) fuzzy relational equations, where \(T\) is one of the t-norms min, product, or Łukasiewicz.
2310.10062
A Comprehensive Evaluation of Tool-Assisted Generation Strategies
A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that *refine* incorrect outputs with tools outperform strategies that retrieve relevant information *ahead of* or *during generation*; (3) tool-assisted strategies are expensive in the number of tokens they require to work -- incurring additional costs by orders of magnitude -- which does not translate into significant improvement in performance. Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their *benefits* and *costs*.
Alon Jacovi, Avi Caciularu, Jonathan Herzig, Roee Aharoni, Bernd Bohnet, Mor Geva
2023-10-16T04:53:22Z
http://arxiv.org/abs/2310.10062v2
# A Comprehensive Evaluation of Tool-Assisted Generation Strategies ###### Abstract A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences). Various few-shot tool-usage strategies have been proposed. However, there is no systematic and fair comparison across different strategies, or between these strategies and strong baselines that do not leverage tools. We conduct an extensive empirical analysis, finding that (1) across various datasets, example difficulty levels, and models, strong no-tool baselines are competitive to tool-assisted strategies, implying that effectively using tools with in-context demonstrations is a difficult unsolved problem; (2) for knowledge-retrieval tasks, strategies that _refine_ incorrect outputs with tools outperform strategies that retrieve relevant information _ahead of_ or _during generation_; (3) tool-assisted strategies are expensive in the number of tokens they require to work--incurring additional costs by orders of magnitude--which does not translate into significant improvement in performance. Overall, our findings suggest that few-shot tool integration is still an open challenge, emphasizing the need for comprehensive evaluations of future strategies to accurately assess their _benefits_ and _costs_. ## 1 Introduction Augmenting language models (LMs) with tools has been proposed to overcome LMs' inherent weaknesses [12, 13], such as the lack of grounding to reliable or updated sources [11], incoherent logical ability [10, 13] and arithmetic ability [14], among others. This is done through _tool-assisted (TA) generation_, where LMs are trained or instructed to use external tools, such as search engines over the web--e.g., Google search [14, 15, 16, 17], Wikipedia search [18], a calculator [15], or a python interpreter [16]. Often, tool invocations are structured as _Chain-of-Thought_ (CoT) long-form answers [20]. Recent work proposed a variety of strategies for interfacing between the LM and the tool, such as through demonstrations of API calls [1] or using the tool to refine the model's output [14]--see Figure 2 for an overview. But what are the advantages and trade-offs of different TA strategies? For example, some strategies incur significantly higher _computation costs_ than others with little to no improvement in performance. There is a gap in the literature on the _evaluation_ of such strategies, in particular _against strong baselines_ and _against each other_. Concretely, works that report empirical evaluations are often restricted to comparisons of a single proposed strategy against a limited selection of non-TA baselines, using a limited selection of LMs or even a single LM, or focus on evaluating various LMs with a specific TA strategy [11]. Additionally, comparisons often do not consider the increase in computation that each TA strategy requires, which vary significantly, and have a large effect on inference time or cost. The above issues are only some of the pitfalls we observed in the literature, limiting the scope of current evaluations. In SS3, we analyze the literature for common pitfalls and collect a set of guidelines towards a fair and reliable evaluation procedure specifically for TA strategies. Next (SS4), we conduct a study which addresses all of the observed pitfalls, using GPT3, Flan-UL2 and Flan-PaLM, and complex reasoning benchmarks StrategyQA, MuSiQue, GSM8K, and DROP. 
We report a fair, systematic comparison of five few-shot TA strategies across multiple models and demonstrations, and all strategies use the same set of tools. We analyze the study results (§5) and arrive at surprising conclusions: (1) Non-TA baselines are stronger than initially reported. In most cases, TA strategies do not significantly or at all improve on non-TA strategies on popular Question Answering datasets. (2) For retrieval tools in knowledge tasks, TA strategies that fix model output after it is generated perform better than TA strategies that prompt the model to interface with the tool directly during generation. For calculator tools in calculation-intensive tasks, the relationship is not decisive. (3) TA strategies incur significantly higher computation costs than non-TA baselines by multiplicative factors, and there is no general correlation between computation cost and performance, with the exception that refinement strategies in retrieval settings are more costly than non-refinement strategies. In §6 we report a fine-grained analysis of the results. We investigate the effect of each example's difficulty (e.g., very large numbers, or very rare entities) on improvement from tool usage, and find that tools do not systematically improve model performance on harder examples, where they were expected to have the strongest improvement. Finally, based on an error analysis of failure cases, we find that the majority of mistakes follow incorrect tool invocations, rather than incorrect tool responses (in the case of the retrieval tool) or incorrect inferences based on correct tool usage. In conclusion, we conduct an extensive evaluation of few-shot TA strategies, finding that previous estimates of tool-usage performance are not representative. Overall, this suggests that few-shot tool integration is still an open challenge. We call the community to evaluate future strategies systematically, while taking into account the significant costs that these strategies require in comparison to their benefits. Towards this, we provide a set of concrete guidelines for fair and reliable evaluation of TA strategies. Moreover, we release the handcrafted collection of 184 demonstrations used in our study (to be available soon in the supplementary material).

## 2 Tool-Assisted Language Models

We describe existing few-shot strategies for augmenting LMs with tools and discuss related work.

### Few-shot TA strategies

Strategies for tool usage can be broadly divided into two categories: (a) using tools during generation and inserting the tools' outputs into the model's prompt (Figures 1a and 2a); (b) using tools to refine the LM's output after generation (Figures 1b and 2b). Strategies can be further categorized into settings where the tool is heuristically called in a pipeline or called when the model generates pre-specified tool calls. Refer to Mialon et al. (2023) for a review of the literature on TA strategies and models. Among TA strategies of type (a): **SelfAsk** (Press et al., 2023) decomposes the task into subtasks as simpler questions, such that a tool can be called on each question. A related strategy is _Demonstrate-Search-Predict_ (Khattab et al., 2023).
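Strategies of type (a) share the same control loop: generate until the model emits a tool query, call the tool, splice the tool output back into the prompt, and resume generation. The following is a minimal sketch of that loop in the SelfAsk style; the `lm` and `search_tool` functions are stand-in stubs (not any specific model or API), and the marker strings are illustrative assumptions.

```python
def lm(prompt: str) -> str:
    # Placeholder LM: a real implementation would call a language model here.
    if "Intermediate answer:" not in prompt:
        return "Follow up: Who directed Inception?"
    return "So the final answer is: Christopher Nolan"

def search_tool(query: str) -> str:
    # Placeholder retrieval tool (a search engine, in the paper's setup).
    return "Inception was directed by Christopher Nolan."

def answer(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = lm(prompt)
        prompt += step + "\n"
        if step.startswith("Follow up:"):
            # The model asked a sub-question: invoke the tool and insert
            # its output into the prompt before resuming generation.
            snippet = search_tool(step[len("Follow up:"):].strip())
            prompt += f"Intermediate answer: {snippet}\n"
        elif "final answer is:" in step:
            return step.split("final answer is:")[-1].strip()
    return ""  # give up after max_steps

print(answer("Who directed the movie Inception?"))
```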
**Inline** strategies such as Toolformer (Schick et al., 2023)1, ART (Paranjape et al., 2023), inter alia (Chen et al., 2022; Gao et al., 2023; Lyu et al., 2023) demonstrate tool usage with pre-defined words or tokens and tool arguments, halt generation when those tokens and arguments are generated, invoke the tool, and insert its output into the prompt to resume generation. **Interleaving Retrieval** (Trivedi et al., 2022) does not directly instruct the model to use tools, but calls the tool on each reasoning step, to provide the model with additional context for future steps. Jiang et al. (2023) propose a similar strategy, opting to re-write each step after using it as a query. There are also strategies such as **Decomposed Prompting** (Khot et al., 2023) that are generalizations of the previous strategies.

Footnote 1: Schick et al. primarily discusses tool usage with training. We adapt only the few-shot strategy in our experiments.

Figure 1: Illustration of tool-assistance strategies that invoke tools and insert their outputs into the prompt (a), and strategies that first generate some output, and only use tools to fix and refine it (b).

Among TA strategies of type (b): **RARR** (Gao et al., 2023) involves a pipeline designed for
knowledge-based tasks: verifying the relevance and factuality of each claim by generating questions based on the claim, retrieving snippets that answer these questions, and checking if the answers match the information in the claim. If not, the claim is refined to match the snippets. **Check & Fix**, a method we introduce in this work, uses each CoT step as a search query, and checks whether the step is entailed by the retrieved snippets by prompting the model to classify this entailment. This strategy is similar to Jiang et al. (2023, contemporaneous work), which additionally uses low-confidence filtering but omits the entailment verification.

### Related Work

**Training LMs to use tools.** While we are primarily concerned with few-shot tool assistance of LM generation, the literature also explores LMs which are trained to use specific tools (Parisi et al., 2022; Hao et al., 2023; Patil et al., 2023). These methods are constrained to the tools seen during training, and require data (annotated, bootstrapped, or synthetically constructed) of tool demonstrations. **Other tool-assisted neural networks.** There is adjacent research on augmenting neural networks, in ways besides textual interfaces, with tools (e.g., Andor et al., 2019; Jacovi et al., 2019) or training differentiable subnetworks that heavily mimic tools (Neelakantan et al., 2017; Trask et al., 2018).

## 3 Evaluation Pitfalls

While there is a plethora of TA strategies (§2.1), no systematic comparison of these strategies has been conducted. Research that proposes TA strategies in few-shot settings is often not focused on evaluating properties of those strategies, but other aspects of LM capabilities (Press et al., 2023; Gao et al., 2023), usage in particular strict contexts (Paranjape et al., 2023), evaluating various LM models themselves with a particular strategy (Mialon et al., 2023), and so on.

Figure 2: Overview of the TA strategies implemented in this work. Blue text marks tool queries, tool responses are in turquoise cells, refinement is in orange cells and dashed arrows, and yellow cells are LM generations.

Below we collect observations from the literature that demonstrate the limited evaluation scope of TA strategies, in an effort to establish a set of criteria for future evaluations to be reliable and fair (a summary is provided in Table 1). **(1) Coupling the TA strategy and the tool together.** Comparisons may vary the tools and methods together (e.g., a TA strategy \(A\) with a tool \(A\) versus a TA strategy \(B\) with a tool \(B\)). **(2) Forcing baselines to the framework of the TA strategy.** Typical baselines to a given TA strategy are to apply that strategy while letting the model generate the tool's output instead of the tool, and using CoT prompting. However, the optimal way to solve the problem without tools may not be the same as the TA strategy in question. In this work, we implement three different baselines (§4) and find that there is _no clear winner_ among two of them (we explore this empirically in §5). **(3) Using one model across all comparisons.** Often, a single model is chosen as the underlying model for the TA strategy. This limits the insights from the evaluation to this model in particular, since conclusions may not carry over to other models. In this work, we find that the _best-performing strategies vary significantly_ across different LMs (we explore this empirically in §5). **(4) Using one prompt and one set of demonstrations across all comparisons.** Few-shot evaluation is known to be unreliable when using a single set of demonstrations as a single prompt (Perez et al., 2021). Furthermore, some prompts used in TA strategy evaluations--in particular, CoT demonstrations--appear so often on the internet that they are suspected to be part of the models' training data, further compromising their function (Jacovi et al., 2023). **(5) Not considering TA strategy costs.** In many cases, the TA strategy requires significantly more compute than no-tool baselines, and different TA strategies also require different amounts of computation. Computation cost is not traditionally considered in comparisons.

\begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt}} \hline \hline & **Pitfall** & **Recommendation** \\ \hline **(1)** & Coupling the TA strategy and the tool together. & Comparisons of TA strategies should use the same tools across strategies. \\ **(2)** & Forcing no-tool baselines to the framework of the TA strategy. & The optimal way to solve the task without tools may be different from solving the task with tools: No-tool baselines should include multiple variants of both free-form and structured strategies, to ensure the TA strategies are not given an advantage. \\ \hline **(3)** & Using one model across all comparisons. & Different models may behave differently when it comes to using tools effectively, based on their training data. Multiple models should be tested, if possible. \\ **(4)** & Using one prompt and set of demonstrations across all comparisons. & Multiple different sets of demonstrations should be used to get reliable estimates of few-shot performance. \\ \hline **(5)** & Not considering TA strategy costs. & TA strategies can be efficient or inefficient with regards to the prompt tokens and generation tokens they require to work, with respect to no-tool baselines or with respect to each other. The differences can be significant (§5). Comparisons of TA strategies should factor the computation cost of the strategy, which we term as _token efficiency_. \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of evaluation pitfalls of TA strategies (§3) and recommendations to mitigate them.

## 4 Experimental Setup

Our goal is to conduct a fair and reliable comparison of _TA strategies_, without being influenced by properties of specific models, tools or prompts. To this end, we focus on _few-shot_ tool usage, a popular TA scheme that allows flexibility around using new tools and adapting tools to specific tasks. In what follows, we describe our experimental setup. What guides this experimental setup is to perform a comprehensive, rigorous evaluation without the pitfalls of §3. Our evaluation covers 5 different TA strategies, 4 recent LMs, 4 complex reasoning datasets, 3 few-shot prompts, and 2 tools. For each _TA strategy + dataset + model_ combination, we run three experiments with a different number of demonstrations. Overall, our evaluation includes an execution of 342 experiments, each of which generates 250 (GPT-3) or 500 (non-GPT-3) long-form answers. Additional implementation details are in Appendix A. **Tool-assisted strategies.** We evaluate the TA strategies shown in Figure 2: SelfAsk, Inline, Interleaving, C&F and RARR. We additionally include variants of SelfAsk and Inline where the model is separately called to summarize tool output in relevant context, as it can often be very long (SelfAskQA and InlineQA; see Appendix A for details). Finally, in the retrieval settings, we use Top-1 retrieval for all models, and additionally Top-5 retrieval for the Flan-PaLM-540B model (see "_Models_" below) to check whether additional retrieved information can improve performance despite the significantly longer input and processing cost. For SelfAsk and RARR we use the original implementation provided by the methods' creators. We implement Interleaving (Trivedi et al., 2022), as at the time of this research no implementation was available. Importantly, this implementation yields similar performance to that of existing approaches that combine CoT with retrieval from Wikipedia by He et al. (2022); Jiang et al. (2023) (see full results in Appendix B). Additionally, Jiang et al. (2023, Figure 4) implemented methods that apply retrieval and refinement over generated CoT that are similar to C&F and achieve similar performance to ours, as well (see Appendix B). For Inline, we are not aware of reports on few-shot performance of a similar strategy in the literature. **Baseline strategies.** We use no-tool versions of SelfAsk, Inline, and standard CoT prompting. The SelfAsk and Inline baselines simply involve giving the model the prompts used for the tool-based versions, while disabling tool calls (such that the model generates the output in-place of the tools). These are the baselines used by Press et al. (2023) and Schick et al. (2023), respectively. **Datasets.** We consider tasks that require complex reasoning, where models could potentially benefit from external tool usage. Specifically, we use StrategyQA (Geva et al., 2021) and MuSiQue (Trivedi et al., 2022), which require reasoning about entity knowledge, and GSM8k (Cobbe et al., 2021) and DROP (Dua et al., 2019) that evaluate arithmetic reasoning. In DROP we select examples that have numerical answers. We randomly sample 500 examples from the development set of each dataset (with the exception of StrategyQA, whose test set has 229 examples), and use it for performance evaluation of UL2, Flan-PaLM-540B and Flan-PaLM-62B. For GPT-3, we use a subset of 250 examples of that set, due to cost.
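As a reference point for the prompt construction described under "Few-shot demonstrations" below, here is a minimal sketch of assembling one k-shot prompt per requested k by sampling without replacement from a fixed pool of eight annotated demonstrations. The seeding, the joining format, and the toy demonstrations are our assumptions, not the paper's released resources.

```python
import random

def make_prompts(demonstrations, shots=(3, 5, 7), seed=0):
    # Build one k-shot prompt per k by sampling without replacement
    # from the same pool of annotated demonstrations.
    rng = random.Random(seed)
    return {k: "\n\n".join(rng.sample(demonstrations, k)) for k in shots}

demos = [f"Q: example question {i}\nA: worked answer {i}" for i in range(8)]
few_shot_prompts = make_prompts(demos)  # {3: "...", 5: "...", 7: "..."}
```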
We use standard evaluation measures for every dataset (F1 in the case of MuSiQue). We provide data examples in Appendix A. **Models.** We evaluate the methods across four LMs: Flan-UL2-20B (Tay et al., 2023), GPT-3 (text-davinci-003) (Brown et al., 2020), Flan-PaLM-540B and Flan-PaLM-62B (Chung et al., 2022). We omit GPT-3 experiments on RARR and Interleaving due to cost. Importantly, our focus is _not_ to compare the performance of these models, but to use them as samples of different model instances and training schemes against which to compare different TA strategies. **Tools.** We strictly use the same tools across all strategies, to ensure a fair comparison: Google Search (Press et al., 2023; Schick et al., 2023; Lewis et al., 2021) for knowledge tasks, and a calculator (Schick et al., 2023; Qin et al., 2023) for the calculation tasks. RARR, SelfAsk and Interleaving are designed for retrieval settings only, while Inline and Check & Fix can be used in all settings. For the retrieval settings using Google Search and Flan-PaLM-540B, we test retrieval with both the top 1 and top 5 tool-retrieved snippets: the two formats are designed to cover both cases where a shorter tool output may prevent the model's answer from degenerating, and a longer tool output may help the model with more relevant information. **Few-shot demonstrations.** In order to overcome bias from using demonstrations from prior work that were likely seen during training (Jacovi et al., 2023), we re-annotate prompts for all TA strategies, datasets and tools. We randomly sample 8 examples from each dataset's training set, and annotate each example with demonstrations for each TA strategy. Some of the strategies call the model multiple times with different prompts (e.g., Check & Fix, RARR), which requires separate annotations. This effort results in a total of 184 annotated demonstrations, which we release as a resource for future works on TA generation. From each set of 8 demonstrations, we then construct three separate prompts--3-shot, 5-shot and 7-shot--randomly sampled from the original 8 demonstrations, to get a better estimation of few-shot performance.

## 5 Comparative Results

### Organization of the results

Due to the

**Tool vs. no tool.** Previous work that proposes TA strategies found that using such strategies consistently improves performance in comparison to no-tool baselines (Press et al., 2023; Jiang et al., 2023; Trivedi et al., 2022a, inter alia). Figure 3 shows that the TA strategies do not improve performance over the no-tool baselines in our selection of datasets. The figure shows results against the average of the different few-shot scores, though we observe similar trends when using the maximum of scores as well. Full results are in Appendix B. Similarly to us, Gao et al. (2023, §6.2) found that StrategyQA performance slightly decreased with tools in RARR compared to no-tool baselines for PaLM-540B (Chowdhery et al., 2022), and Jiang et al. (2023, §6.2) found that performance decreased on StrategyQA in two settings comparable to our implementations of Interleaving and Check & Fix with GPT-3. We conclude that for the settings in this work, _the no-tool baselines are stronger than initially expected based on the literature._ More research is required to investigate whether this relationship holds in other contexts, though we note that the datasets and models used in our experiments are common in TA research (Mialon et al., 2023).

Additionally, our experiments provide empirical justification to Recommendations (2) and (3) in §3. First, we find that the CoT and Inline baselines outperform each other at a roughly equal rate, and neither emerges as a clear winner. This shows that different baselines obtain different results, and so, relying on only a single baseline in evaluation does not necessarily provide a good estimation for no-tool performance (recommendation (2)). Also, _the best-performing strategies vary significantly across models_, which highlights the importance of using multiple models for evaluation (recommendation (3))--for illustration, we report the highest-performing strategies in each setting in Table 2, to show that the overall conclusion can be distorted by choosing a particular model or strategy. Extended details are in Appendix B.1.

\begin{table} \begin{tabular}{l l l} \hline \hline Model & Dataset & Best strategy \\ \hline GPT-3 & StrategyQA & Baseline-Inline \\ GPT-3 & DROP & Baseline-Inline \\ GPT-3 & GSM8K & Check \& Fix \\ GPT-3 & MuSiQue & Inline \\ Flan-PaLM-540B & StrategyQA & Baseline-CoT \\ Flan-PaLM-540B & DROP & Baseline-Inline \\ Flan-PaLM-540B & GSM8K & Baseline-Inline \\ Flan-PaLM-540B & MuSiQue & RARR-Top5 \\ Flan-UL2-20B & StrategyQA & Baseline-Inline \\ Flan-UL2-20B & DROP & Baseline-Inline \\ Flan-UL2-20B & GSM8K & Inline \\ Flan-UL2-20B & MuSiQue & Baseline-CoT \\ Flan-PaLM-62B & StrategyQA & Baseline-CoT \\ Flan-PaLM-62B & DROP & Baseline-CoT \\ Flan-PaLM-62B & GSM8K & Inline \\ Flan-PaLM-62B & MuSiQue & Check \& Fix \\ \hline \hline \end{tabular} \end{table} Table 2: For each combination of dataset and model, we derive the best-performing strategy on the average score across the few-shot prompts. Notably, _the best-performing strategy varies across different models, datasets or prompts_, which means that it is necessary to evaluate over all axes to get a better estimation of general performance.

Figure 3: A comparison of evaluation scores across two areas (§5): (a) No-tool baselines vs. TA strategies; (b) Tool usage via refinement of generated text vs. tool usage during generation, where the generated text contains tool arguments and is conditioned on tool outputs. The dark line marks the confidence interval among samples.

**Tool use during generation vs. post-generation refinement.** In Figure 3 we compare the strategies that use tools during generation against the strategies that first generate an answer, and then use tools to improve the answer. For retrieval tasks, refinement clearly outperforms non-refinement strategies, but the same does not apply to the calculation tasks. We conjecture that planning calculations ahead of time during generation is more aligned with LM pretraining data, based on internet text, than planning retrieval queries in similar contexts. **Token efficiency.** TA strategies are typically evaluated in terms of task performance and properties such as factuality and logic correctness. We argue that computational cost is another important factor to consider. Specifically, we propose to evaluate _token efficiency_, that is, the amount of _prompt_ tokens and _generated_ tokens, which have a direct effect on the cost of the TA strategy. Notably, the cost of a TA strategy depends on various variables, including model size, GPU type, caching optimizations, vocabulary size, beam search size, and so on.
However, token counts can serve as a plausibly generic proxy for the purpose of comparing the cost of different TA strategies, as other factors are roughly equal across strategies, as long as the same models and tools are used. We consider prompt tokens and generated tokens separately, as they often have different consequences on cost.2

Footnote 2: Depending on model architecture and the number of times the same prompt is reused, prompt processing cost can be optimized, whereas the token generation cost varies with other factors such as vocabulary size.

Tables 3, 4 show both canonical and empirical comparisons across TA strategies with regard to token efficiency. The canonical comparison is a function of the relevant variables in the "canonical" setting where the model is expected to answer the question perfectly, and use the tool perfectly as intended.

\begin{table} \begin{tabular}{l l|r r r r} \hline \hline \multirow{3}{*}{TA strategy} & \multirow{3}{*}{ \begin{tabular}{c} Answer \\ tokens \\ _(canonical)_ \\ \end{tabular} } & \multicolumn{4}{c}{Answer tokens (_empirical_)} \\ & & \multicolumn{2}{c}{Retrieval} & \multicolumn{2}{c}{Calculator} \\ & & GPT-3 & Flan-PaLM-540B & GPT-3 & Flan-PaLM-540B \\ \hline Baseline & \(m\) & 44 & 42 & 58 & 88 \\ SelfAsk & \(m\) & 20 & 72 & - & - \\ SelfAskQA & \(2m\) & 59 & 64 & - & - \\ Inline & \(m\) & 103 & 248 & 62 & 102 \\ InlineQA & \(2m\) & 114 & 256 & - & - \\ Check \& fix & \(2m\) & 89 & 177 & 75 & 177 \\ RARR & \(3m\) & & 181 & - & - \\ Interleaving & \(m\) & & 72 & - & - \\ \hline \hline \end{tabular} \end{table} Table 4: Average number of answer tokens across the 5-shot experiments, for each strategy. The RARR formula assumes a single verification question per step.

\begin{table} \begin{tabular}{l l|r r r r} \hline \hline \multirow{3}{*}{TA strategy} & \multirow{3}{*}{ \begin{tabular}{c} Prompt \\ tokens (_canonical_) \\ \end{tabular} } & \multicolumn{4}{c}{Prompt tokens (_empirical_)} \\ & & \multicolumn{2}{c}{Retrieval} & \multicolumn{2}{c}{Calculator} \\ & & GPT-3 & Flan-PaLM-540B & GPT-3 & Flan-PaLM-540B \\ \hline Baseline & \(n\) & 353 & 353 & 1418 & 801 \\ SelfAsk & \(t(n+\frac{kt+1}{2})\) & 2281 & 1399 & - & - \\ SelfAskQA & \(t(2n+k)\) & 3589 & 2736 & - & - \\ Inline & \(t(n+\frac{kt+1}{2})\) & 1793 & 1775 & 3453 & 1083 \\ InlineQA & \(t(2n+k)\) & 3375 & 3672 & - & - \\ Check \& fix & \(t(2n+k)\) & 3839 & 3547 & 7548 & 3647 \\ \hline RARR & \(3n(t+1)\) & & 4729 & - & - \\ Interleaving & \(t(n+\frac{kt+1}{2})\) & & 3221 & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: Average number of prompt tokens per strategy (5-shot), with \(n\) as the CoT prompt length, \(t\) as the number of tool calls, \(k\) as the tool’s output length. Flan-PaLM-540B has a shorter context window than GPT-3, which limits prompt length. The canonical formula for RARR favorably assumes a single verification question.

Across all TA strategy experiments, we found _no general correlation_ between token efficiency and performance. Concretely: (1) All TA strategies are significantly more expensive than the no-tool baselines by orders of magnitude, while not incurring an improvement worthy of this extra cost. _Empirically, using tools in each case can incur extra costs by a factor of 5x to 10x for prompt processing, and 2x to 5x for generation._ (2) The refinement strategies are more expensive than the no-refinement strategies. So while they improve performance for retrieval tasks, it comes at a cost.
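To make the canonical cost model concrete, the closed-form prompt-token counts of Table 3 can be transcribed as a small calculator, with \(n\) the CoT prompt length, \(t\) the number of tool calls, and \(k\) the tool output length. Treating the formulas as exact counts rather than estimates, and the example values of \(n\), \(t\), \(k\) below, are our simplifications.

```python
def prompt_tokens(strategy: str, n: int, t: int, k: int) -> float:
    # Canonical prompt-token formulas, transcribed from Table 3.
    formulas = {
        "baseline": n,
        "selfask": t * (n + (k * t + 1) / 2),
        "selfask_qa": t * (2 * n + k),
        "inline": t * (n + (k * t + 1) / 2),
        "inline_qa": t * (2 * n + k),
        "check_and_fix": t * (2 * n + k),
        "rarr": 3 * n * (t + 1),  # favorably assumes one verification question
        "interleaving": t * (n + (k * t + 1) / 2),
    }
    return formulas[strategy]

# e.g., a 350-token prompt, 3 tool calls, 100-token tool outputs:
for s in ("baseline", "selfask", "check_and_fix", "rarr"):
    print(s, prompt_tokens(s, n=350, t=3, k=100))
```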
## 6 Analytical Results

We discuss further analyses of our results, finding that (a) our observations generally hold across different levels of example difficulty, and (b) most prediction errors of tool-augmented LMs stem from incorrect inputs to the tool and bad outputs from it, and not from a lack of tool usage.

### Example Difficulty

It has been shown that LMs have difficulty solving problems involving long-tail entities (Kandpal et al., 2022; Mallen et al., 2022) and complex mathematical reasoning challenges (Mishra et al., 2022; Imani et al., 2023). Accordingly, we ablate the results from §5 along the following axes of example difficulty, in order to understand how tools can affect performance on difficult examples. We provide an overview of the trends here, and extended results are available in Appendix B. **Measures of difficulty.** We investigate the effectiveness of tool usage across varying levels of example difficulty, which we approximate in two axes: _(A) Long-tail entities (retrieval):_ Following Mallen et al. (2022), we extract the entities from the question and associated gold answers in StrategyQA and MuSiQue, and use the corresponding entity Wikipedia page views as a measure of popularity. _(B) Large numbers (calculation):_ We segment the examples in the calculation tasks based on the range of the median and largest number in the example (question and gold solution in GSM8k, or question and context paragraph in DROP). **Results.** Performance across increasing levels of entity popularity and computation complexity, with different LMs and TA strategies, is shown in Figure 4(a) and Figure 4(b), respectively. We find that performance uniformly decreases for harder examples in the retrieval setting for all models, but in the calculation setting, this only manifests for Flan-UL2-20B (implying that the larger models are more robust to the numerical ranges in GSM8K and DROP). Overall, in all cases _tool use does not improve upon the baselines even when controlling for the harder cases where tools are expected to be more useful_. This conclusion is aligned with our error analysis in §6.3, which shows that the common errors stem from incorrect tool arguments, more than correct tool arguments but incorrect inferences based on them. Flan-UL2 with a calculator is an exception, where tool use indeed helps, though more so on the _easier_ examples, likely due to a higher rate of correct arguments to the calculator.

### Tool Usage Statistics

A possible explanation for the similar performance of no-tool baselines could be a lack of tool usage. To check this, we aggregate usage over the different TA strategies, and find that the models indeed use tools in the majority of the cases; 70%-80% in SelfAsk, and \(>\)90% in others (see Appendix B). We also investigate usage across other axes, such as models and number of demonstrations, and find similar trends. However, the datasets and tasks we investigate are designed to benefit from the tools in all cases, which shows that few-shot demonstrations are not always sufficient in inducing tool use in models. In particular, the SelfAsk strategies receive the lowest tool use, being the strategies that use natural language to query whether to use the tool (the answer begins with "Are follow up questions needed here:" to which the model answers "No" in the cases where the tool is not used).
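The difficulty segmentation of §6.1 can be sketched as a simple bucketing by percentile. Below is one way to compute per-bucket accuracy for the calculation tasks, using the largest number in each example as the difficulty proxy; the quartile edges and the synthetic data are our assumptions.

```python
import numpy as np

def accuracy_by_difficulty(largest_numbers, correct, n_buckets=4):
    # Bucket examples by the percentile of their largest number, then
    # report mean accuracy per bucket (higher bucket = harder example).
    nums = np.asarray(largest_numbers, dtype=float)
    edges = np.percentile(nums, np.linspace(0, 100, n_buckets + 1))
    buckets = np.clip(np.searchsorted(edges, nums, side="right") - 1,
                      0, n_buckets - 1)
    flags = np.asarray(correct, dtype=float)
    return {b: float(flags[buckets == b].mean()) for b in range(n_buckets)}

rng = np.random.default_rng(0)
nums = rng.integers(1, 10**6, size=200)   # largest number per example
correct = rng.random(200) < 0.6           # toy correctness flags
print(accuracy_by_difficulty(nums, correct))
```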
### Error Analysis

We sampled 50 instances for which an error was made by the TA models, randomly across the 5-shot experiments, and categorized them across three categories: (A) incorrect tool input; (B) incorrect tool output; (C) incorrect model inferences based on correct tool usage. Error B applies only to the retrieval settings, where the retrieval tool (Google Search in our case) retrieved a wrong or irrelevant snippet. The errors were distributed approximately as 60% (A), 10% (B), and 30% (C) in the retrieval setting, and 80% (A) and 20% (C) in the calculation setting. Li et al. (2023) reported an error analysis for tool-assistance in dialogue customer assistance settings, with similar conclusions regarding error A, although errors B and C do not apply in their context, and other error types manifest instead. Our results suggest that the majority of errors are not due to incorrect tool responses (i.e., issues with Google Search as a choice of retriever), and are overall more influenced by incorrectly invoking tools to begin with, in comparison to invoking them correctly but composing the solution incorrectly.

## 7 Conclusions and Takeaways

We conduct a comprehensive assessment of few-shot tool augmentation strategies for LMs, covering hundreds of experiments with multiple LMs, datasets, and tools. Our experiments show that current tool-usage integration approaches are presently a false promise; prompting strategies that do not use tools typically obtain similar task performance, without the high cost of tool execution. Controlling for example difficulty, where tools are expected to provide the most benefit, does not explain the relative strength of the no-tool baselines. Instead, the primary errors we observe are related to incorrect usage of the tools to begin with (i.e., generating incorrect arguments to the tool). Our findings call for more robust evaluation of future TA strategies, primarily in more practical settings where models are not expected to leverage inherent abilities to solve tasks. To this end, our work provides concrete evaluation guidelines, such as employing stronger baselines and factoring in computation costs.

## Limitations

While our study aims to provide a comprehensive evaluation of TA strategies, there are some limitations. First, recent work (Dodge et al., 2021; Magar and Schwartz, 2022; OpenAI, 2023) suggests that examples from public datasets, like those used in our evaluation, may have leaked to the training data of recent LMs. Such contamination can introduce biases to the evaluation, such as lack of need for external tools. We are not aware of alternatives without this issue at the time of this writing. Second, due to the high cost of executing large LMs in an exhaustive evaluation, we ran only a single experiment for each combination of TA strategy, model, dataset, and number of demonstrations. However, given the sensitivity of models to the demonstrations (Perez et al., 2021), future work should extend this evaluation to use multiple sets of demonstrations for each such combination. Last, while our findings show that non-tool models often perform on par with existing TA strategies, our setting favors tool usage. For example, our tasks only require a single type of tool such that the model does not need to choose between multiple tools.
Future work that investigates when and how tools _can_ improve performance should consider more realistic evaluation settings, for example, by considering tasks where the model may need to use multiple types of tools together, or tasks where tools may sometimes give unhelpful answers. Figure 4: We analyze performance of the strategies across two groups (no-tool baselines vs. TA strategies), conditioned on _example difficulty_ as defined by the existence of rare or common entities in the retrieval settings (via percentile of page views) and small or large numbers in the calculation settings (via percentile of numeric range). In (a), lower page views imply higher difficulty, and in (b), larger numbers imply higher difficulty.
2304.12932
Evolving Three Dimension (3D) Abstract Art: Fitting Concepts by Language
Computational creativity has contributed heavily to abstract art in the modern era, allowing artists to create high-quality, abstract two-dimensional (2D) art with a high level of controllability and expressibility. However, even with computational approaches that have promising results in making concrete 3D art, computationally addressing abstract 3D art with high quality and controllability remains an open question. To fill this gap, we propose to explore computational creativity in making abstract 3D art by bridging evolution strategies (ES) and 3D rendering through customizable parameterization of scenes. We demonstrate that our approach is capable of placing semi-transparent triangles in 3D scenes that, when viewed from specified angles, render into films that look like artists' specifications expressed in natural language. This provides a new way for the artist to easily express creative ideas for abstract 3D art. The supplementary material, which contains code, animation for all figures, and more examples, is here: https://es3dart.github.io/
Yingtao Tian
2023-04-24T07:47:48Z
http://arxiv.org/abs/2304.12932v1
# Evolving Three Dimension (3D) Abstract Art: Fitting Concepts by Language Yingtao Tian Google Research, Brain Team [email protected] ###### Abstract Computational creativity has contributed heavily to abstract art in the modern era, allowing artists to create high-quality, abstract two-dimensional (2D) art with a high level of controllability and expressibility. However, even with computational approaches that have promising results in making concrete 3D art, computationally addressing abstract 3D art with high quality and controllability remains an open question. To fill this gap, we propose to explore computational creativity in making abstract 3D art by bridging evolution strategies (ES) and 3D rendering through customizable parameterization of scenes. We demonstrate that our approach is capable of placing semi-transparent triangles in 3D scenes that, when viewed from specified angles, render into films that look like artists' specifications expressed in natural language. This provides a new way for the artist to easily express creative ideas for abstract 3D art. The supplementary material, which contains code, animation for all figures, and more examples, is here: [https://es3dart.github.io/](https://es3dart.github.io/). ## Introduction Through art history, a trend of abstract art has been influential since the beginning of the 20th century in the course of modernism [18], which focuses on abstract elements instead of traditional photo-realistic forms. Starting from the Cubism art movement [13] and geometric abstraction [15], the focus on abstraction leads to abstract expressionism [14] and minimalist art [15, 16]. They collectively have opened a new approach to painting where the subjective appreciation of the object or the feeling can be expressed, and the once dominant traditional focus on accurate representation is no longer the only standard. Computer art, in the broader sense of making art in a computational way, has also played a heavy role in this course of abstract art. Early works have brought forward the concept of artists generating art by designing mathematically, or more precisely algorithmically [11, 12]. The said algorithm, and its properties like its complexity [10], have become an important and intrinsic metric of art [12]. In this regard, a large body of pioneer artists including Frieder Nake, Vera Molnár, A. Michael Noll, Manfred Mohr, Leslie Mezei and Georg Nees have explored designing algorithms to produce abstract art composed of simple primitives like lines and polygons. More recently, modern approaches propose that artists could, instead of designing the algorithm directly, specify rules to find an algorithm that in turn produces the artwork. Figure 1: Our proposed method places semi-transparent triangles in three dimension (3D) spaces using Evolution Strategies [13, 14, 15]. Leveraging the ray-tracing based renderer Mitsuba 3 [16, 17], the rendered films at possibly multiple cameras are compared with their corresponding, user-specified text prompts using the distance between their representations embedded by CLIP [15]. Such distances, aggregated by average, are used as the fitness in the sense of Evolution Strategies, which optimize the parameters of the triangles to achieve better fitnesses.
Doing so becomes feasible thanks to the recent advances in evolution strategies (ES); for example, art generation using ES has been proposed to produce a wide range of simple (Johansson 2008; Alteredqualia 2008) and complex (Fogleman 2016; Cason 2016; Paauw and Van den Berg 2019; Shahrabi 2020; Tian and Ha 2022) art forms, where the artists can specify the rules using text or images as instructions. While arguably painting has always been one of the most dominant art forms, art concerning three-dimensional (3D) objects is an equally important field. For example, among 3D art forms, those with the longest traditions are sculpture (Rogers 2020) and architecture (Gowans et al. 2022), which start from classical antiquity and remain highly relevant today. Yet modern techniques and industries add new movements to 3D art, where we also see a trend of abstractness and modernism similar to the painting arts mentioned above. For example, the trend of modernism has led sculpture to go beyond the realm of solid, representational form, and artists started to produce "nonfunctional, nonrepresentational, three-dimensional works of art" (Rogers 2020). This particularly includes spatial sculpture (Conroy 1977; Kricke 1976; Caro 1962) where space becomes the subject of the 3D artworks, and the viewing angle as well as the relation of objects comes to be an important part of the art. In the realm of computational approaches to 3D art, early work explores rule-based generation (Broughton, Tan, and Coates 1997; Coates, Broughton, and Jackson 1999) where combinations of rules are evaluated by human-in-the-loop (Cook 2007). Later works focus on parameterization, such as parametric 3D surfaces (Chu 2021). A recent work (Hsiao, Huang, and Chu 2018) produces wire art that looks like predefined sketches by connecting vertices using a 3D path-finding algorithm. However, modern computational approaches to abstract 3D art remain an open gap to fill. This is more prominent given that high-quality and controllable _concrete_ computer 3D art has been achieved by recent advances. For example, it is now possible to generate high-quality 3D volumetric objects using recent generative models like NeRF (Mildenhall et al. 2021; Martin-Brualla et al. 2021) and text-to-image models like DALLE (Ramesh et al. 2022). Powerful image generative models like Imagen (Saharia et al. 2022) and Parti (Yu et al. 2022) open the door to works such as DreamFields (Jain et al. 2022), DreamFusion (Poole et al. 2022) and Magic 3D (Lin et al. 2022) where artists can easily control the generation of high-quality 3D objects by text prompts. On the other side, to the best of our knowledge, still missing are computational tools for high-quality _abstract_ 3D art creation that do not require artists to detail everything, but instead allow them to specify instructions in a way that is high-level and that humans can easily produce and understand. To bridge this gap, we propose to combine evolution strategies (ES) and 3D rendering through customized parameterization of scenes, which is later evaluated by a deep learning model, to address computational creativity in abstract 3D art. In doing so we leverage the recent advances in evolution algorithms applied to abstract 2D art generation, as well as ray-tracing rendering, which is vital to the rendering of physically sound transparent objects. The two components are bridged by immediate mode, a paradigm in computer graphics where scenes are parameterized.
Parameterization could be specified by the artist to customize scenes, allowing a new way for the artist to express creative ideas at a high level. We demonstrate that our approach is capable of placing shapes in 3D scenes that, when viewed from specified angles, look like artists' specifications expressed in natural language. This is facilitated by recent advances in deep learning, namely CLIP, which is also used in DALL-E (Ramesh et al. 2022) and connects the text and image domains. With all these components, the artist can freely express the idea of 3D abstract art by text, which is a more approachable way and allows a wider audience to participate in 3D art creativity. A quick summary of our proposed method and some exemplary artifacts are shown in Figure 1 and Figure 2. ## Related Works In this section we cover works that are the background of or related to our proposed methods. #### Computational Approach to Abstract Painting Art The computational approach to abstract and minimalist painting art has a long history before the era of computing. Early works discuss mathematical art (Malkevitch 2003), which establishes the connection between artworks and mathematical properties such as symmetry and polygons for paintings, and octaves for music. Since the inception of computers as a new means for human activity, algorithmic art (Verostko 1994) has been proposed as a new framing of art, where artworks are not produced by humans directly but by humans designing a mathematical process, or an algorithm, that produces the artifact. Furthermore, the properties of the said algorithm themselves could also be a subject of artistic discussion. One example is low-complexity art (Schmidhuber 1997) where the complexity of the said algorithm becomes a measure of the artwork. In this regard, a large body of pioneer artists have practically explored designing algorithms to produce computational abstract art composed of simple lines and polygons. This includes Frieder Nake (Medien Kunst Netz 2018), Vera Molnár (Vera Molnár 2023), Leslie Mezei (Zentrum fur Kunst und Medien 2023), A. Michael Noll, Manfred Mohr and Georg Nees. Collectively, they represent the artists putting the early concept of computer abstract art into practice. Naturally, artists explore whether it is possible to, instead of directly designing an algorithm, use rules that control the possible search space of the algorithm that actually makes the art. However, since the algorithm is generally not differentiable, gradients are unavailable or hard to define. From an optimization point of view, this non-differentiability makes it challenging to find an algorithm, since a wide range of optimization methods are gradient-based. This resonates with the challenge of looking for a better neural network architecture (Elsken, Metzen, and Hutter 2019). Figure 2: Several examples of the abstract 3D art produced by our method, where the evolution process places triangles inside the unit cube space (visualized by the black frame) and sets the triangles' colors and transparencies, forming a spatial configuration. In each example shown here, four cameras look at the unit cube space from four sides, although this is an arbitrary decision and cameras can have different numbers and directions. The film from each camera, capturing the rendered images, is compared with the prompt. It can be observed that our method is capable of making 3D art, following the spatial abstract art style, that looks like what humans can compose in natural language text.
To tackle it, previous works have explored leveraging evolution strategies (ES) in art generation, since ES belongs to the category of black-box optimization, which does not require differentiation. Such efforts can handle art forms ranging from simple (Johansson 2008; Alteredqualia 2008) to more complex (Fogleman 2016; Cason 2016; Paauw and Van den Berg 2019; Shahrabi 2020; Tian and Ha 2022) ones. #### 3D Rendering The development of Computer Graphics (Foley et al. 1994; Shirley, Ashikhmin, and Marschner 2009) is largely associated (Watt 1993) with the constant quest for better three-dimensional (3D) rendering. One of the drives in 3D rendering is the development of games (Gregory 2018), which naturally calls for high-quality rendering in real time (Akenine-Moller, Haines, and Hoffman 2019). Regarding rendering techniques, broadly speaking two ways exist: the first is rasterization (Shirley, Ashikhmin, and Marschner 2009), where polygons representing 3D objects are projected to pixels on a 2D screen. It is fast, widely adopted, and often good enough. Another is ray-tracing (Glassner 1989; Spencer and Murty 1962; Appel 1968; Whitted 2005), where rays are traced back from the camera, interacting with the objects they encounter according to the rendering equation (Kajiya 1986), all the way to the light source. It enables a high degree of physical plausibility, but at the cost of high computational requirements. In the practice of 3D rendering engines, two paradigms exist: one is retained mode graphics (Jin 2006), where the application issues scenes to the graphics libraries. This is the dominating practice due to its efficiency. Another is the immediate mode paradigm (Radich and Satran 2019), where the application builds the scene and only issues drawing primitives to the graphics libraries. It is less efficient, but allows more flexibility and expression, which can be helpful in creativity settings. Evolution Strategies (ES) (Beyer 2001; Beyer and Schwefel 2002), as an optimization method, have been applied to many problems. Inspired by biological evolution, their high-level idea consists of iteratively changing parameters and keeping the sets of parameters that are most fitting. At the end of evolution the best, or most fitting, solutions remain. A straightforward realization of this idea is to iteratively perturb parameters randomly and keep a change only if it leads to better fitness, as sketched in the example below. Unfortunately, it is often computationally inefficient. Recent advances in ES have largely improved the efficiency. For example, PGPE (Sehnke et al. 2010) proposes to estimate the gradients in linear time, which can be used by gradient-based optimizers like Adam (Kingma and Ba 2014) and ClipUp (Toklu, Liskowski, and Srivastava 2020). On the other hand, CMA-ES (Hansen 2000; Hansen 2006) estimates the covariance matrix of the parameters, which provides better performance using quadratic running time. Notably, unlike gradient-based optimization, evolution strategies do not require the optimized problem to be differentiable, so they can effectively serve as black-box optimization solvers where only the evaluation of fitness is needed. This leads to a wide range of applications. For example, recent advances in neuroevolution (Such et al. 2017) allow efficient optimization of neural networks, and EvoJAX (Tang, Tian, and Ha 2022) fully leverages hardware acceleration for a wide range of evolution tasks.
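As a concrete illustration of the straightforward realization described above, the following minimal sketch implements a (1+1)-style evolution strategy that randomly perturbs parameters and keeps a change only if it improves fitness. It is illustrative only, and is not the PGPE or CMA-ES algorithms discussed here.

```python
import numpy as np

def simple_es(fitness_fn, dim, steps=1000, sigma=0.1, seed=0):
    """Naive (1+1) evolution strategy: perturb, keep the change only if fitness improves."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)           # current parameters
    best = fitness_fn(x)               # higher fitness is better
    for _ in range(steps):
        candidate = x + sigma * rng.normal(size=dim)
        f = fitness_fn(candidate)
        if f > best:                   # keep only improving perturbations
            x, best = candidate, f
    return x, best

# Toy fitness: negative squared distance to a target vector.
target = np.arange(5, dtype=float)
x, best = simple_es(lambda v: -np.sum((v - target) ** 2), dim=5)
print(best)
```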
#### 3D Generative Models and Computational Creativity One early way of generating creative 3D objects starts with 3D point clouds (Nguyen and Le 2013; Guo et al. 2020), which consist of points with unit volume in 3D space. The 3D point cloud is easier to model, and is used in turn to generate the 3D shape by morphing the points (Mo et al. 2019; Li et al. 2021). Recently, we have seen a surge of high-quality generative models that directly model 3D objects. Especially in producing concrete, _volumetric_ 3D objects, works in the line of NeRF (Mildenhall et al. 2021; Martin-Brualla et al. 2021) represent the whole scene by a radiance field parameterized by neural base models. Research in 3D generation is not limited to the modality of 3D objects only. Multi-modal, text-to-image works such as DALLE (Ramesh et al. 2022), Imagen (Saharia et al. 2022) and Parti (Yu et al. 2022) allow creating high-quality images using text prompts as guidance. Based on them, text-to-3D generation has become possible. For example, DreamFields (Jain et al. 2022), DreamFusion (Poole et al. 2022), Magic 3D (Lin et al. 2022) and Imagen Video (Ho et al. 2022) are capable of generating photo-realistic _volumetric_ 3D objects following the description given in text. Figure 3: The architecture of our method, consisting of an outer loop of evolution strategies (ES) to find parameters leading to better fitness, and an inner, actual evaluation of fitness. The builder builds spatial 3D objects composed of semi-transparent triangles in the 3D space from the parameters. The renderer renders the 3D space from different cameras, producing corresponding images or "films", which are compared with the provided text prompts using the Cosine loss between the images and text prompts encoded by CLIP encoders. Such a loss is treated as the fitness of the parameters given back to ES. The user of our proposed method specifies the text prompt and the hyper-parameters governing the behavior of the builder and the renderer, allowing them to express creativity. Figure 4: Our method generating with the text prompt "Walt Disney World" with four cameras, with different numbers of triangles, namely 10, 25, 50 and 100 respectively. It can be seen that our method leverages the budget of triangles in increasing order of granularity, by first using triangles for the general shape and then moving towards fine-grained details. Besides generative models that model concrete and real-world 3D objects, similar problems have also been approached from a computational creativity point of view, which emphasizes the artistic creativity of the generated object. Early work explores rule-based generation [1, 13, 14] where the combination of rules is evaluated by either enabling human-in-the-loop [1] or parameterizing a single formula [15]. Also, a recent work produces wire art [10] resembling given sketches by first generating vertices and then connecting them by leveraging 3D path-finding algorithms. They are probably closest to our work, but crucial differences exist: as far as we know, we are the first work to address _spatial_, _abstract_ 3D generation with the expressiveness of modern neural-based models. ## Methodology We show the overall architecture of our proposed method in Figure 3. It contains two parts: the outer loop of evolution strategies, and the inner, actual evaluation of the 3D scene's fitness.
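Before detailing each part in the subsections below, the following sketch illustrates this two-part structure in code. The widely used pycma package stands in here for the EvoJAX CMA-ES implementation adopted in this work, and `evaluate_scene` is a hypothetical placeholder for the actual renderer-plus-CLIP inner evaluation.

```python
import cma  # pip install cma; a stand-in for the EvoJAX CMA-ES used in this work

N_TRIANGLES = 50
DIM = 13 * N_TRIANGLES  # 9 vertex coordinates + RGBA per triangle

def evaluate_scene(params):
    """Inner evaluation (placeholder): build the scene, render films, compare to prompts.
    Here we return a dummy loss; the real pipeline would call the renderer and CLIP."""
    return float((params ** 2).mean())  # CMA-ES minimizes, so return a loss

es = cma.CMAEvolutionStrategy(DIM * [0.0], 0.3, {"popsize": 128})
for _ in range(1200):                      # 1,200 evolution steps, as in the experiments
    solutions = es.ask()                   # sample a population of parameter sets
    losses = [evaluate_scene(s) for s in solutions]
    es.tell(solutions, losses)             # adapt the mean and covariance matrix
best_params = es.result.xbest
```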
### The outer loop of evolution strategies (ES) This is a black-box optimization that suggests multiple sets of parameters and adjusts them based on the fitness, or how well each set of parameters performs. At the end of several steps of optimization, ES gives parameters leading to better fitness. We use CMA-ES [1, 14], an algorithm that estimates the covariance matrix of the parameters, since it provides better performance than common alternatives like PGPE [10] while only incurring a marginal increase in running time in our case. Engineering-wise, we use the EvoJAX implementation of CMA-ES, which is based on JAX [1] and runs easily on accelerators like GPUs. ### The inner evaluation of 3D scene's fitness In our setting, the parameters literally parameterize the building and the rendering of 3D scenes. We first build the 3D scene, and then render it from multiple, user-specified cameras using a ray-tracing renderer engine. While the actual spatial objects are parameterized by the parameters, how the builder and the renderer interpret these parameters is governed by hyper-parameters that the artist users can control. Finally, the rendered images from each camera, or "films" following photography terms, are compared with the text prompts semantically, which is done by computing the Cosine loss between images and texts encoded by CLIP encoders. The mean loss over all pairs of images and texts is given back to the aforementioned evolution strategy for adjusting the parameters accordingly. Overall, the evaluation of 3D scenes through parameterization is the main contribution we devise to help artists express creativity. We detail the key decisions as follows: #### Parameterization Since our goal focuses on the computational creativity of _spatial_ 3D art, as motivated by the trend of abstract art in modern sculpture, we choose to place semi-transparent triangles of plastic material in the 3D space. Concretely, each of the \(N\) triangles is associated with \(13\) learnable parameters, namely the positions of its three vertices \((x_{1},y_{1},z_{1})\), \((x_{2},y_{2},z_{2})\), \((x_{3},y_{3},z_{3})\) and the color and transparency \((R,G,B,A)\), thus making \(13N\) parameters in total. It is possible to achieve photo-realistic rendering of these semi-transparent triangles with the help of a ray-tracing renderer, because light rays may pass through and bounce between them many times. Furthermore, doing so retains the possibility of reproducing a solution in the real world. The choice of semi-transparent triangles is inspired by a recent work on 2D abstract art [21], but going to 3D, as in our setting, makes our whole new pipeline necessary, since the technique and the optimization dynamics are completely different. #### Rendering In practice, for ray-tracing rendering we use the physics-based Mitsuba 3 renderer [20]. For each triangle, the bidirectional scattering distribution function (BSDF) for rendering is set as a mixture of BK7 glass, a thin dielectric material, and a _Lambertian_, an ideally diffuse material of the corresponding \(R,G,B\) value, mixed with ratio \(A\). Besides physically correct rendering, Mitsuba 3 also allows GPU-powered parallel sampling, which largely accelerates the ray-tracing rendering. For the sake of completeness, we note that Mitsuba 3 is also capable of sampling-based gradient estimation, but we do not leverage such capacity and leave the study of that behavior as an orthogonal research direction for future work. Figure 5: Our method generating with two independent runs, both with the prompt "A bright, vibrant, dynamic, spirited, vivid painting of a dog", viewed from four directions. Different runs lead to equally plausible yet largely different 3D art. An artist user can exercise discretion "in-the-loop" by choosing among the different variants from these runs.
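A minimal sketch of how the flat ES parameter vector might be decoded into triangles, following the 13-parameters-per-triangle layout described above. The `decode_triangles` function, the exact layout order, and the sigmoid squashing into the unit cube and the \([0,1]\) color range are our illustrative assumptions, not necessarily the exact mapping used in the pipeline.

```python
import numpy as np

def decode_triangles(params, n_triangles):
    """Decode a flat ES parameter vector into per-triangle vertices and colors.
    Layout assumed here: [x1,y1,z1, x2,y2,z2, x3,y3,z3, R,G,B,A] per triangle."""
    p = np.asarray(params).reshape(n_triangles, 13)
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    vertices = sigmoid(p[:, :9]).reshape(n_triangles, 3, 3)  # squash into the unit cube
    rgba = sigmoid(p[:, 9:])                                 # color/transparency in [0, 1]
    return vertices, rgba

vertices, rgba = decode_triangles(np.random.randn(13 * 50), n_triangles=50)
print(vertices.shape, rgba.shape)  # (50, 3, 3) (50, 4)
```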
#### Evaluation As we expect the pipeline to produce a scene that, when rendered from different cameras (we call what a camera produces "films"), looks semantically like the corresponding text, we measure this using CLIP, which provides an image encoder and a text encoder that project images and texts into a shared, comparable latent space with Cosine distance. Note that with multiple pairs of cameras and texts, we can make the produced 3D object look alike (or different) from different directions. In doing so, each film is encoded and compared with the corresponding encoded text, and the mean of the Cosine distances over all such pairs is used as the fitness, which is given back to the evolution strategies. ### Computation Platform Since both the rendering and the evolution strategies we use are fully runnable on GPU, the computation is fast and in our experience can be tens of times faster than on CPU, thus fully leveraging modern hardware accelerators.
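The fitness evaluation described above can be sketched as follows, here using OpenAI's reference CLIP package; the exact CLIP variant and implementation used in our pipeline may differ, and `films` stands for the rendered camera outputs.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_fitness(films, prompts):
    """Mean Cosine distance between each rendered film and its paired text prompt.
    `films` is a list of PIL images; `prompts` is a list of strings of equal length."""
    images = torch.stack([preprocess(f) for f in films]).to(device)
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(images)
        txt_emb = model.encode_text(tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    cosine_sim = (img_emb * txt_emb).sum(dim=-1)   # one similarity per (film, prompt) pair
    return (1.0 - cosine_sim).mean().item()        # mean Cosine distance, used as the loss

loss = clip_fitness([Image.new("RGB", (224, 224))] * 4, ["Walt Disney World"] * 4)
```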
## Experiments In this section we showcase our method with several experiments. In Figure 2, we show several examples of the evolved 3D art produced by our method, each with \(1,200\) steps of evolution and a population of \(128\) using CMA-ES. As shown here, our method demonstrates that a wide range of text prompts can be handled, producing spatial, abstract art that is both novel and consistent with human interpretation. Even given the abstract nature defined by the scene, our method can still handle both the spatial shape (first two examples) and the color (last two examples). In the rest of this section, we investigate how tuning several important hyper-parameters impacts the finally generated 3D art, showing the dynamics of our method, which can serve as guidance for artist users. Figure 6: Our method generating with the text prompt "Walt Disney World", with a fixed transparency of \(50\%\) and with the default setting of learnable transparency. While the fixed transparency setting allows more global control of the scene, the learnable one provides great flexibility in how triangles are related to the space. More results with different transparencies (\(0\%\) and \(80\%\)) can be found in the supplementary materials online for comparison. #### Different Number of Triangles The number of triangles can be used as a kind of "budget" that our method allocates when occupying the 3D space. In Figure 4, we show our method generating 3D artwork with 10, 25, 50 and 100 triangles, respectively. Our method thus places triangles in increasing order of granularity, where the general outline is first emphasized and details are then filled in. Note that the number of parameters increases in proportion to the number of triangles, requiring more computation time in rendering and optimization. Thus, this is a balance that artist users should decide. #### Different Runs of the Same Configuration One important aspect of computational creativity is the ability to produce variants of the art given the same instruction. Such a property not only allows artists to be "in-the-loop" to choose from a wide range of variants, but also shows the capacity of the generative model. In Figure 5, we show two independent runs of the same configuration. It is shown that different runs lead to equally plausible yet largely different 3D art. In doing so, our method can support artists "in-the-loop" of the creativity process. #### Fixed vs. Learnable Transparency Unlike in 2D art, in 3D art the transparency matters a lot, due to effects such as reflection and optical diffusion. This is especially true in the spatial setting we focus on. In Figure 6, we demonstrate several settings of transparency, including fixed transparency and the default setting of learnable transparency. It shows that while fixed transparency allows a more consistent global outlook, it nonetheless limits the expression by forcing large and small triangles to contribute equally to the images. In contrast, learnable transparency gives our method flexibility in how the triangles are related to the 3D space. #### Different Text Prompts at Different Cameras While in many examples we show the same text prompt for all cameras, this is not a requirement imposed by our method. On the contrary, our method allows pairs of texts and cameras in arbitrary combinations. Such a capacity allows a wider range of creativity from users, for example, generating 3D art that looks different from different angles. In Figure 7 we demonstrate one such case, where our method generates 3D art that looks like "Walt Disney World" from two directions but like "an annoyed cat" from the other two directions, even though these views are of the same, single 3D artwork. We argue that our method is the first capable of helping artists in such a creative process, which previously required lots of manual work [10]. ## Conclusion In this work we address a problem that has not previously been studied -- generating 3D, abstract, and spatial art that is semantically aligned with human interpretation. In doing so, we propose to leverage evolution strategies (ES) with ray-tracing rendering of parameterized 3D scenes, along with the CLIP model for measuring semantic similarity. We demonstrate through several experiments that our approach is capable of producing 3D art, and that it provides the flexibility for artists or users to fine-tune for the desired result. Nonetheless, our proposed method is best suited as a call for further future study of computational approaches to 3D art. For example, it remains unclear whether optimization using the differentiability of the renderer would lead to a different dynamic and thus a different art style. Also, designing the parameterized scene is time-consuming, requiring extensive knowledge of 3D rendering and optimization, so whether it can be improved through (semi-)automation should be studied too. Figure 7: Our method generating with different text prompts at different cameras. The text prompt for cameras 1 and 3 is "Walt Disney World" and for cameras 2 and 4 is "an annoyed cat". Our method produces one 3D artwork, and successfully allows it to look different from different angles.
2302.10273
ViGU: Vision GNN U-Net for Fast MRI
Deep learning models have been widely applied for fast MRI. The majority of existing deep learning models, e.g., convolutional neural networks, work on data with Euclidean or regular grid structures. However, high-dimensional features extracted from MR data could be encapsulated in non-Euclidean manifolds. This disparity between the go-to assumption of existing models and data requirements limits the flexibility to capture irregular anatomical features in MR data. In this work, we introduce a novel Vision GNN type network for fast MRI called Vision GNN U-Net (ViGU). More precisely, the pixel array is first embedded into patches and then converted into a graph. Secondly, a U-shape network is developed using several graph blocks in symmetrical encoder and decoder paths. Moreover, we show that the proposed ViGU can also benefit from Generative Adversarial Networks, yielding its variant ViGU-GAN. We demonstrate, through numerical and visual experiments, that the proposed ViGU and its GAN variant outperform existing CNN and GAN-based methods. Moreover, we show that the proposed network readily competes with approaches based on Transformers while requiring a fraction of the computational cost. More importantly, the graph structure of the network reveals how the network extracts features from MR images, providing intuitive explainability.
Jiahao Huang, Angelica Aviles-Rivero, Carola-Bibiane Schonlieb, Guang Yang
2023-01-23T12:51:57Z
http://arxiv.org/abs/2302.10273v1
# ViGU: Vision GNN U-Net for Fast MRI ###### Abstract Deep learning models have been widely applied for fast MRI. The majority of existing deep learning models, e.g., convolutional neural networks, work on data with Euclidean or regular grid structures. However, high-dimensional features extracted from MR data could be encapsulated in non-Euclidean manifolds. This disparity between the go-to assumption of existing models and data requirements limits the flexibility to capture irregular anatomical features in MR data. In this work, we introduce a novel Vision GNN type network for fast MRI called Vision GNN U-Net (ViGU). More precisely, the pixel array is first embedded into patches and then converted into a graph. Secondly, a U-shape network is developed using several graph blocks in symmetrical encoder and decoder paths. Moreover, we show that the proposed ViGU can also benefit from Generative Adversarial Networks, yielding its variant ViGU-GAN. We demonstrate, through numerical and visual experiments, that the proposed ViGU and its GAN variant outperform existing CNN and GAN-based methods. Moreover, we show that the proposed network readily competes with approaches based on Transformers while requiring a fraction of the computational cost. More importantly, the graph structure of the network reveals how the network extracts features from MR images, providing intuitive explainability. Jiahao Huang\({}^{1,2,\star}\), Angelica I. Aviles-Rivero\({}^{3}\), Carola-Bibiane Schonlieb\({}^{3}\), Guang Yang\({}^{1,2,\star}\)\({}^{1}\) National Heart and Lung Institute, Imperial College London, United Kingdom \({}^{2}\) Cardiovascular Research Centre, Royal Brompton Hospital, United Kingdom \({}^{3}\) Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom \({}^{\star}\) Send correspondence to {j.huang21,g.yang}@imperial.ac.uk Fast MRI, Graph Neural Network (GNN) ## 1 Introduction Magnetic Resonance Imaging (MRI) is one of the most important clinical tools. It provides high-resolution and non-invasive imaging for diagnosis and prognosis in a harmless manner. However, MRI has an inherently slow scanning time, since the raw data is acquired in _k_-space, and the minimum scanning time is decided by the selection of temporal and spatial resolution as well as the field of view, constrained by the Nyquist theorem. The prolonged scanning time leads to artefacts from the voluntary and involuntary physiological movements of the patients [1]. With the thriving development of artificial intelligence technologies, deep learning-based models have been promptly developed for fast MRI [2, 3]. Convolutional neural networks (CNNs) have dominated research studies in computer vision (CV) and medical image analysis, including MRI reconstruction [3, 4, 5], taking advantage of the inductive biases of locality and weight sharing, and their hierarchical structures. Recently, Transformers [6] have shown superiority for CV tasks, bolstered by their global sensitivity and long-range dependency. Transformer-based MRI reconstruction methods [7, 8, 9, 10] have been proposed and achieved promising results, even though their increased computational cost is still a challenge for wider application. General CNN and Transformer backbones treat image data differently (Figure 1). The 2D convolution (Conv2D) in CNNs applies a sliding kernel to pixels in a regular grid, exploiting shift-invariance and the local prior.
The multi-head self-attention (MSA) in Transformers (specifically ViT [6]) embeds different ranges of pixels into patches, then converts them into sequences, introducing global sensitivity and long-range dependency. However, both Conv2D and MSA operations are usually based on the regular pixel grid in Euclidean space [11]. Recently, Han et al. [11] proposed the Vision GNN (ViG) backbone, combining the patch embedding from ViT [6] and the idea of Graph Convolutional Networks (GCNs) [12], and treating images with more flexibility from the graph perspective. Figure 1: Data Structures of Convolutional Neural Networks, Vision Transformers and Vision GNN U-Net. GCNs were originally designed for tackling specific tasks on non-Euclidean data, e.g., point clouds, social networks, and biochemical graphs. Vision GNN fills the technological gap between GNNs and image data for computer vision tasks, and achieves state-of-the-art results in high-level tasks like classification and detection. For MR images, the shapes of anatomical structures are irregular, leading to redundancy and inflexibility when using the conventional grid or sequence data structures. We hypothesise that treating MR images as graphs (Figure 1) can provide a comprehensive understanding of the anatomical structures in MR images. Specifically, the image is first converted into patches by a shallow CNN and then regarded as nodes in a graph. Nodes with similar features can be gathered and connected using the K-nearest neighbours (KNN) algorithm, through which information exchange can be conducted. Different anatomical structures can be recognised as sub-graphs of the whole graph (for an image). The edge connections within and between sub-graphs can be learnt to reflect the intra- and inter-relationships of anatomical structures. In this paper, we exploit how ViG works for a specific low-level image restoration task, i.e., MR reconstruction, by introducing a ViG-based U-Net, namely ViGU, and its variant based on Generative Adversarial Networks (GANs), namely ViGU-GAN. Experiments have shown that our proposed ViGU and ViGU-GAN can outperform CNN-based and GAN-based MRI reconstruction methods and can achieve results comparable with Transformer-based methods at a much lower computational cost. The edge connections of ViGU show that the proposed ViGU can learn the intra- and inter-relationships of different anatomical structures, providing model explainability. ## 2 Methods This section describes in detail the key parts of the proposed ViGU network and its variant. ### U-Net Based Architecture The architecture of the proposed ViGU is displayed in Fig. 2 (A). CNN-based input and output modules are applied at the beginning and end of our ViGU, converting between images \(\mathbb{R}^{h\times w\times 1}\) and patch vectors \(\mathbb{R}^{N\times C}\). We denote \(r\) and \(C\) as the patch size and embedding channel number respectively. We then define the number of patches as \(N=H\times W=h/r\times w/r\). Relative position embedding is applied for each patch, which is omitted for brevity. Three encoder blocks (EncB) and three decoder blocks (DecB) are symmetrically arranged in the encoder and decoder paths correspondingly, between which a bottleneck block (BnB) is placed. The EncB, DecB and BnB are composed of one or multiple ViG Blocks, which are the basic computation blocks of ViGU. The resolution of the feature maps is gradually decreased and increased along the encoder and decoder paths.
Information is passed, via the skip connection and concatenation operation, from the encoder to the decoder path between feature maps with the same resolution. A residual connection is applied to convert the ViGU into a refinement function: \(\hat{x}_{u}=\text{H}_{\text{ViGU}}(x_{u})+x_{u}\). ### Graph-level Operation A key step is how to transform an image into a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) composed of a set of nodes \(\mathcal{V}\) connected by a set of edges \(\mathcal{E}\). For each feature map, we have a group of patches \(X=\{x_{1},x_{2},...,x_{N}\}\), which are viewed as a set of unordered nodes \(\mathcal{V}=\{v_{1},v_{2},...,v_{N}\}\). For a single node \(v_{i}\), \(K\) edges \(\mathcal{E}_{i}=\{e_{1i},e_{2i},...,e_{Ki}\}\) are acquired from its \(K\) nearest neighbours \(\mathcal{N}(v_{i})\), where \(e_{ji}\) indicates the edge from node \(v_{j}\) to node \(v_{i}\). The graph representation of feature map \(X\) can be expressed as \(\mathcal{G}(X)\). A graph convolution operation \(\text{H}_{\text{GConv}}\) is expressed as: \[\mathcal{G}^{\prime} =\text{H}_{\text{GConv}}(\mathcal{G}(X),W)\] \[=\text{H}_{\text{Update}}(\text{H}_{\text{Aggregate}}(\mathcal{G}(X),W_{\text{Aggregate}}),W_{\text{Update}}), \tag{1}\] in which \(\text{H}_{\text{Aggregate}}\) and \(\text{H}_{\text{Update}}\) refer to the Aggregate and Update operations in the graph convolution, with learnable parameters \(W_{\text{Aggregate}}\) and \(W_{\text{Update}}\)[11]. ### ViG Block As Figure 2 (B) shows, the ViG Block adopts the structure of the ViT Block [6], which can be expressed as: \[X^{\prime}=\text{FC}(\text{GraphConv}(\text{FC}(X)))+X \tag{2}\] \[X^{\prime\prime}=\text{MLP}(X^{\prime})+X^{\prime}, \tag{3}\] where \(X\) and \(X^{\prime\prime}\) are the input and output of the ViG Block. \(\text{GraphConv}(\cdot)\) and \(\text{MLP}(\cdot)\) denote the graph convolution and the multi-layer perceptron. \(\text{FC}(\cdot)\) denotes the fully connected layer, which is applied before and after the graph convolution, with the purpose of keeping the domain consistency between node and image features and increasing the feature diversity. All the normalisation and activation functions are omitted for brevity. Figure 2: (A) The network architecture of our ViGU; (B) The structure of the ViG Blocks.
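To make Eqs. (1)-(3) concrete, the following minimal PyTorch sketch builds a KNN graph over patch features and applies a simplified ViG Block, using a max-relative graph convolution as one possible instance of the Aggregate and Update operations of Eq. (1); the normalisation, activation and multi-resolution details of the actual ViGU are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

def knn_graph(x, k=9):
    """Connect each patch (node) to its k nearest neighbours in feature space."""
    with torch.no_grad():
        dist = torch.cdist(x, x)              # (N, N) pairwise distances
        dist.fill_diagonal_(float("inf"))     # exclude self-loops
        idx = dist.topk(k, largest=False).indices
    return idx                                # (N, k) neighbour indices

class ViGBlock(nn.Module):
    """Simplified ViG Block, Eqs. (2)-(3): FC -> graph conv -> FC plus residual,
    followed by an MLP plus residual."""
    def __init__(self, c, k=9):
        super().__init__()
        self.k = k
        self.fc_in = nn.Linear(c, c)
        self.fc_out = nn.Linear(2 * c, c)     # the graph conv below doubles the width
        self.mlp = nn.Sequential(nn.Linear(c, 4 * c), nn.GELU(), nn.Linear(4 * c, c))

    def graph_conv(self, x):
        # Max-relative aggregation: each node gathers max(neighbour - node) over its
        # neighbours ("Aggregate"); fc_out then plays the role of "Update" in Eq. (1).
        idx = knn_graph(x, self.k)                                # (N, k)
        aggregated = (x[idx] - x.unsqueeze(1)).max(dim=1).values  # (N, C)
        return torch.cat([x, aggregated], dim=-1)                 # (N, 2C)

    def forward(self, x):
        x = self.fc_out(self.graph_conv(self.fc_in(x))) + x       # Eq. (2)
        return self.mlp(x) + x                                    # Eq. (3)

tokens = torch.randn(4096, 96)   # e.g., a 256x256 image with patch size r=4 gives 64*64 nodes
out = ViGBlock(96)(tokens)       # (4096, 96)
```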
The total loss of ViGU, \(\mathcal{L}_{\mathrm{ViGU}}(\theta)\), using is computed as: \[\mathcal{L}_{\mathrm{ViGU}}(\theta)=\alpha\mathcal{L}_{\mathrm{img}}(\theta) +\beta\mathcal{L}_{\mathrm{freq}}(\theta)+\gamma\mathcal{L}_{\mathrm{perc}}( \theta), \tag{7}\] where \(\alpha\), \(\beta\) and \(\gamma\) are weighting parameters balancing the importance of each term. Our ViGU can also benefit from GAN principles yielding to a new variant called ViGU-GAN. For the GAN-based variant, the proposed ViGU is the generator \(G_{\theta_{G}}\) parameterised by \(\theta_{G}\) (same with the \(\theta\) in ViGU), and a U-Net based discriminator [13], \(D_{\theta_{D}}\), is applied for adversarial training. The adversarial loss \(\mathcal{L}_{\mathrm{adv}}(\theta_{G},\theta_{D})\) is then given by: \[\underset{\theta_{G}}{\text{min}}\,\underset{\theta_{D}}{ \text{max}}\,\mathcal{L}(\theta_{G},\theta_{D}) \tag{8}\] \[=\mathbb{E}_{x\sim p_{t}(x)}[\log D_{\theta_{D}}(x)]-\mathbb{E}_{ x_{u}\sim p_{u}(x_{u})}[\log D_{\theta_{D}}(\hat{x}_{u})].\] The total loss of ViGU-GAN, \(\mathcal{L}_{\mathrm{ViGU-GAN}}(\theta)\), reads: \[\mathcal{L}_{\mathrm{ViGU-GAN}}(\theta_{G},\theta_{D})=\mathcal{L}_{\mathrm{ ViGU}}(\theta_{G})+\mathcal{L}(\theta_{G},\theta_{D}). \tag{9}\] ## 3 Experimental Settings and Results This section describes in detail the set of experiments conducted to validate the proposed ViGU and variant. ### Implementation Details We evaluate our approach using the Calgary-Campinas Public Dataset [14]. It is composed of 67 cases of T1-weight 3D brains, and randomly divided into training, validation and testing datasets following a ratio of 6:1:3. The multi-channel data was converted into single-channel MR images using the root sum square method. The top and bottom slices in each case were discarded, and the rest of the 100 slices were chosen for experiments. The number of ViG Blocks and embedding channels was set to \([3,3,3,1,3,3,3]\) and \([96,192,384,768,384,192,96]\) respectively. ViGU\({}_{x}\) indicated the proposed ViGU with a patch size of \(x\). The initial learning rate was set to \(6\times 10^{-4}\) and decays every 10,000 steps by 0.5 from the 50,000\({}^{\text{th}}\) step. The weighting parameters in the loss function \(\alpha\), \(\beta\) and \(\gamma\) were set to 15, 0.1 and 0.0025. For training the ViGU-GAN, the parameter of the discriminator is updated every 5 steps, to prevent training an "overly strong" discriminator and compromising the training of the generator. We compared the proposed ViGU and ViGU-GAN against MRI reconstruction methods of DAGAN [4], nPIDD-GAN [5] and SwinMR [8] with Gaussian 1D 30% (G1D30%) and radial 10% (R10%) masks. For quantitative results, we use Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Frechet Inception Distance (FID) [15]. Multiply Accumulate Operations (MACs) were utilised to estimate the computational complexity with an input size of \(1\times 256\times 256\). ### Comparison Experiments Table 1 and Figure 3 show the quantitative results and visualised samples of the comparison experiments, respectively. The proposed ViGU and ViGU-GAN outperformed other CNN and GAN-based methods, and achieved comparable results compared to the Transformer-based method SwinMR, Figure 3: Visual comparison of our ViGU/ViGU-GAN vs existing techniques. Results display SSIM and PSNR results. with only 1.9% and 9.1% MACs depending on the patch size. 
## 3 Experimental Settings and Results This section describes in detail the set of experiments conducted to validate the proposed ViGU and its variant. ### Implementation Details We evaluate our approach using the Calgary-Campinas Public Dataset [14]. It is composed of 67 cases of T1-weighted 3D brain scans, randomly divided into training, validation and testing sets following a ratio of 6:1:3. The multi-channel data was converted into single-channel MR images using the root sum of squares method. The top and bottom slices in each case were discarded, and the remaining 100 slices were chosen for the experiments. The numbers of ViG Blocks and embedding channels were set to \([3,3,3,1,3,3,3]\) and \([96,192,384,768,384,192,96]\) respectively. ViGU\({}_{x}\) indicates the proposed ViGU with a patch size of \(x\). The initial learning rate was set to \(6\times 10^{-4}\) and decays by 0.5 every 10,000 steps from the 50,000\({}^{\text{th}}\) step. The weighting parameters in the loss function, \(\alpha\), \(\beta\) and \(\gamma\), were set to 15, 0.1 and 0.0025. For training the ViGU-GAN, the parameters of the discriminator are updated every 5 steps, to prevent training an "overly strong" discriminator and compromising the training of the generator. We compared the proposed ViGU and ViGU-GAN against the MRI reconstruction methods DAGAN [4], nPIDD-GAN [5] and SwinMR [8] with Gaussian 1D 30% (G1D30%) and radial 10% (R10%) masks. For the quantitative results, we use the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Frechet Inception Distance (FID) [15]. Multiply Accumulate Operations (MACs) were utilised to estimate the computational complexity with an input size of \(1\times 256\times 256\). ### Comparison Experiments Table 1 and Figure 3 show the quantitative results and visual samples of the comparison experiments, respectively. The proposed ViGU and ViGU-GAN outperformed other CNN- and GAN-based methods, and achieved results comparable to the Transformer-based method SwinMR with only 1.9% or 9.1% of its MACs, depending on the patch size. Figure 3: Visual comparison of our ViGU/ViGU-GAN vs existing techniques, with SSIM and PSNR values displayed. Regarding the patch size setting, ViGU and ViGU-GAN with smaller patch sizes (larger patch-grid resolution) tend to produce better reconstruction results, albeit at the cost of larger MACs. For the GAN-based variant ViGU-GAN, the use of adversarial training mainly improves perceptual quality, reflected in a better FID score. However, the proposed ViGU-GAN leads to an unstable training process (see the abnormally poor performance of ViGU\({}_{4}\)-GAN using the R10% mask in Table 1), a prolonged convergence time and enlarged GPU memory requirements. Further research on and optimisation of the GAN-based ViGU should be conducted. ### Visualised Graph & Explainability Figure 4 shows the visualised graph connections of the proposed ViGU, including reference MR images (A), and graph connections from EncB2 (B-C) and EncB3 (D-E). For better visualisation, we only display a chosen node (red) and its first-order neighbours (green) connected by edges (green lines). In Figure 4 (B-C), a \(2\times 2\) maximum pooling operation was applied to the neighbour nodes to reduce the computational cost, which leads to the neighbour node areas being bigger than the chosen node area. The graph connections of the proposed ViGU model can provide explainability of how the network recognises and extracts the features of MR images. Figures 4 (B) and (D) show that a node of brain tissue tends to have more neighbour nodes containing brain tissue, which proves that the network can be trained to gather nodes with similar features and create connections between them. However, since no tag information is added to the network, it is hard for the proposed ViGU to learn the accurate borders of different anatomical structures without any supervision. Different anatomical structures with similar textures can also mislead the network. A node at an edge (a border between anatomical structures, not an edge in the graph) tends to have neighbour nodes that are also at edges, regardless of the anatomical structures (Figure 4 (C) and (E)). ## 4 Discussion This work has exploited how ViG works for MRI reconstruction, treating MR images as graphs instead of conventional grid- or sequence-structured data. Using graph-based operations, our proposed network can extract and process features more flexibly and efficiently, since irregular anatomical structures lead to redundancy and inflexibility under regular grid-based or sequence-based operations like CNNs and Transformers. In addition, the proposed ViGU can learn a comprehensive understanding of the features of MR images in a latent non-Euclidean space, gathering and linking different parts with similar features globally. In conclusion, we envisage that our proposed ViGU and ViGU-GAN can serve as U-Net-based backbones for graph-based MRI reconstruction, super-resolution and segmentation. For future work, segmentation information could be incorporated into the ViGU, guiding the network to build clinically-meaningful graphs and improving the reconstruction performance while providing better explainability.
Table 1: Quantitative results of the comparison experiments, reporting MACs together with the SSIM, PSNR and FID scores under the G1D30% and R10% masks for ZF, DAGAN, nPIDD-GAN, SwinMR and the proposed ViGU/ViGU-GAN variants. Figure 4: Visualised graphs of the proposed ViGU. (A): the original MR images; (B-C): graph connections from EncB2; (D-E): graph connections from EncB3. A chosen node (red) and its first-order neighbours (green) are connected by edges (green lines). In (B-C), a \(2\times 2\) maximum pooling operation was applied to the neighbour nodes to reduce the computational cost. ## 5 Acknowledgements This study was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\NSFC\211235), the NVIDIA Academic Hardware Grant Program, and the UKRI Future Leaders Fellowship (MR/V023799/1). CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute.
2308.05143
Pb$_9$Cu(PO4)$_6$(OH)$_2$: Phonon bands, Localized Flat Band Magnetism, Models, and Chemical Analysis
In a series of recent reports, doped lead apatite (LK-99) has been proposed as a candidate ambient temperature and pressure superconductor. However, from both an experimental and theoretical perspective, these claims are largely unsubstantiated. To this end, our synthesis and subsequent analysis of an LK-99 sample reveals a multiphase material that does not exhibit high-temperature superconductivity. We study the structure of this phase with single-crystal X-ray diffraction (SXRD) and find a structure consistent with doped $\text{Pb}_{10}(\text{PO}_4)_6(\text{OH})_2$. However, the material is transparent, which rules out a superconducting nature. From ab initio defect formation energy calculations, we find that the material likely hosts $\text{OH}^-$ anions, rather than divalent $\text{O}^{2-}$ anions, within the hexagonal channels, and that Cu substitution is highly thermodynamically disfavored. Phonon spectra on the equilibrium structures reveal numerous unstable phonon modes. Together, these calculations suggest it is doubtful that Cu enters the structure in meaningful concentrations, despite initial attempts to model LK-99 in this way. However, for the sake of completeness, we perform ab initio calculations of the topology, quantum geometry, and Wannier function localization in the Cu-dominated flat bands of four separate doped structures. In all cases, we find that they are atomically localized, as diagnosed by irreps, Wilson loops, and the Fubini-Study metric. It is unlikely that such bands can support strong superfluidity; instead, they are susceptible to ferromagnetism (or out-of-plane antiferromagnetism) at low temperatures, which we find in ab initio studies. In sum, $\text{Pb}_{9}\text{Cu}(\text{PO}_4)_6(\text{OH})_2$ could more likely be a magnet, rather than an ambient temperature and pressure superconductor.
Yi Jiang, Scott B. Lee, Jonah Herzog-Arbeitman, Jiabin Yu, Xiaolong Feng, Haoyu Hu, Dumitru Călugăru, Parker S. Brodale, Eoghan L. Gormley, Maia Garcia Vergniory, Claudia Felser, S. Blanco-Canosa, Christopher H. Hendon, Leslie M. Schoop, B. Andrei Bernevig
2023-08-09T17:41:05Z
http://arxiv.org/abs/2308.05143v2
Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\): Phonon bands, Localized Flat Band Magnetism, Models, and Chemical Analysis ###### Abstract In a series of recent reports, doped lead apatite (LK-99) has been proposed as a candidate ambient temperature and pressure superconductor. However, from both an experimental and theoretical perspective, these claims are largely unsubstantiated. To this end, our synthesis and subsequent analysis of an LK-99 sample reveals a multiphase material that does not exhibit high-temperature superconductivity. We study the structure of this phase with single-crystal X-ray diffraction (SXRD) and find a structure consistent with doped Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). However, the material is transparent, which rules out a superconducting nature. From _ab initio_ defect formation energy calculations, we find that the material likely hosts OH\({}^{-}\) anions, rather than divalent \(O^{2-}\) anions, within the hexagonal channels, and that Cu substitution is highly thermodynamically disfavored. Phonon spectra on the equilibrium structures reveal numerous unstable phonon modes. Together, these calculations suggest it is doubtful that Cu enters the structure in meaningful concentrations, despite initial attempts to model LK-99 in this way. However, for the sake of completeness, we perform _ab initio_ calculations of the topology, quantum geometry, and Wannier function localization in the Cu-dominated flat bands of four separate doped structures. In all cases, we find that they are atomically localized, as diagnosed by irreps, Wilson loops, and the Fubini-Study metric. It is unlikely that such bands can support strong superfluidity; instead, they are susceptible to ferromagnetism (or out-of-plane antiferromagnetism) at low temperatures, which we find in _ab initio_ studies. In sum, Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) could more likely be a magnet, rather than an ambient temperature and pressure superconductor. + Footnote †: These authors contributed equally. Donostia International Physics Center (DIPC), P. Manuel de Lardizabal 4, 20018 Donostia-San Sebastian, Spain ## I Introduction A wave of scientific and social interest has followed a recent claim that LK-99[1; 2], with the proposed composition of Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O, exhibits ambient temperature and pressure superconductivity [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Although the data presented in the original reports is insufficient to support such a remarkable claim, a large body of immediate work has followed. Experimentally, it is unclear what the structure and composition of the material really are; most likely it is a multi-phase sample. Experiments have suggested diamagnetic behavior, and levitation experiments have discussed that it may arise from either diamagnetism or small ferromagnetic impurities [8]. As the sample is likely to contain multiple phases, it is possible that two different compounds contribute to each property, _i.e._ one part is diamagnetic, and the other metallic. Thus, clarification of the composition of the material is necessary _before_ trusting models based on postulated structures. Assuming some of the numerous possible compositions, a number of _ab initio_ band structures have been produced.
Claims of flat bands - and their positive influence on superconductivity - have been made. However, given the stakes, computation and prediction of physical properties requires an elevated level of accuracy. While flat bands provide a theoretical platform for high-temperature superconductivity, non-trivial quantum geometry is compulsory for superfluidity, i.e. the Meissner effect[22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. This is because the mass of the condensing Cooper pair is inversely proportional to the minimal Fubini-Study metric [33; 34], a rigorous measure of quantum geometry. A variety of nonzero lower bounds on the minimal Fubini-Study metric exist for non-atomic bands [22; 35; 36; 37; 38]. However, flat bands are in fact detrimental to superfluidity if they are atomically localized, i.e. if their narrow dispersion comes from Wannier localization as opposed to destructive interference [39; 40; 41]. The inescapability of this conclusion is seen in the limiting case of a tight-binding model with all hoppings vanishing. The perfectly flat band that results cannot support transport of any kind, much less superconductivity, even in the presence of attractive Hubbard interactions. The key physics of flat band superconductivity thus lies in deviations from this limit, measured by quantum geometry [42]. Furthermore, if Coulomb repulsion overwhelms the would-be attractive interaction, flat band ferromagnetism, rather than superconductivity, would be favored [43; 44]. Thus a comprehensive understanding of the active bands at the Fermi level is required for predictions of the many-body state. Due to new advances in the understanding and classification of band structures [47; 48], we can analyze their symmetry, localization, and topology with extreme accuracy that only depends on the accuracy of the DFT calculation. In this work, we study two lead apatites (see Fig. 1) available in materials databases, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O[45] and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)[46] - which we emphasize may not be the ultimate material structure. Making assumptions about the location of the fractionally occupied O anion and the Cu dopant, we obtain several microscopic models for the electron and phonon bands in this system. Our results can be summarized as follows. Firstly, in the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O initial structure (Fig. 1(a)), we consider Cu replacing Pb at two possible positions, either the 1b or 1c Wyckoff positions, referred to as the Cu\({}_{1}\)- and Cu\({}_{2}\)-dopings respectively. Both possibilities result in a set of two bands at the Fermi level dominated by Cu (see Sec. III.1). They are narrow, with a \(\sim 100\)-meV bandwidth, and form an elementary band representation [47] of the Cu \(d\)-orbitals. We compute their Fubini-Study metric and non-abelian Wilson loops, which show strong localization, although the Cu\({}_{2}\) structure does have a significantly reduced gap to the O bands below. For both possibilities, we obtain 4-band, short-ranged, symmetric tight-binding models which demonstrate that the weak dispersion of the Cu orbitals arises primarily from hybridization with nearby O bands. Secondly, we consider the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) structure (Fig. 1(b)). For both locations of the Cu dopant, we again find a set of Cu bands in an elementary band representation with \(\sim 100\)-meV bandwidth. However, the gap to the nearby O bands is much larger, and we provide a 2-band model built entirely from Cu Wannier functions. Again, the Fubini-Study metric indicates atomic localization. See Sec. III and Sec. IV for more details.
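For readers wishing to reproduce this kind of band-geometry diagnostic, the following sketch computes the gauge-invariant single-band Fubini-Study metric \(g_{ij}(\mathbf{k})=\frac{1}{2}\mathrm{Tr}[\partial_{i}P\,\partial_{j}P]\) by finite differences of the band projector \(P(\mathbf{k})\). The two-band lattice model below is a toy example for illustration only, not the actual apatite bands studied here.

```python
import numpy as np

def band_projector(h):
    """Projector onto the lowest band of a Hermitian Bloch Hamiltonian h(k)."""
    _, vecs = np.linalg.eigh(h)
    u = vecs[:, 0:1]                      # lowest eigenvector, shape (n, 1)
    return u @ u.conj().T                 # gauge-invariant rank-1 projector

def fubini_study_metric(hk, k, dk=1e-4):
    """g_ij(k) = (1/2) Tr[d_i P d_j P] for a single band, via central differences."""
    dP = []
    for i in range(2):
        e = np.zeros(2); e[i] = dk
        dP.append((band_projector(hk(k + e)) - band_projector(hk(k - e))) / (2 * dk))
    g = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            g[i, j] = 0.5 * np.real(np.trace(dP[i] @ dP[j]))
    return g

# Toy two-band model: h(k) = d(k) . sigma, a lattice Dirac-like Hamiltonian.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
hk = lambda k: np.sin(k[0]) * sx + np.sin(k[1]) * sy + (1 - np.cos(k[0]) - np.cos(k[1])) * sz
print(fubini_study_metric(hk, np.array([0.3, 0.2])))
```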
However, the gap to the nearby O bands is much larger, and we provide a 2-band model built entirely from Cu Wannier functions. Again, the Fubini-Study metric indicates atomic localization. See Sec. III and Sec. IV for more details. In all cases, the two bands at the Fermi level lack strong quantum geometry. Due to their flat, localized nature, ferromagnetism seems to be the preferred configuration of these states in _ab initio_ studies (see Sec. III.1). The absence of extended states in these bands does not support a theory of high-temperature superconductivity based on the flat bands we obtain in these structures. However, preliminary calculations of the phonon spectrum show that more careful relaxation of the doped compound is required to fully stabilize their structures, which may result in changes to the band geometry. A phonon-driven mechanism for superconductivity must also compete with the strong Hubbard repulsive interaction, which we also construct using _ab initio_ Hubbard-Kanamori parameters. In Refs. [1; 2], LK-99 shows a sharp drop of the resistivity around 400 K, towards a state claimed to be superconducting. Nevertheless, the reported value of resistivity is 2-3 orders of magnitude higher than that of good metals; for instance, Cu presents a resistivity value of \(10^{-6}\)\(\Omega\)cm. Moreover, the analysis of the reported specific heat shows no transition up to 400 K, in principle at odds with the jump expected due to the release of entropy at a second order phase transition. Indeed, the specific heat seems to drop with temperature, which adds more controversy to the claim of room-temperature superconductivity. Ref. [49] has recently found evidence that Cu\({}_{2}\)S, present in the multi-phase compound, is the source of the resistivity transition. Moreover, the presence of a diamagnetic signal does not necessarily imply superconductivity (SC). In Fig. 2(A), we show the magnetic response of the diamagnetic compound HOPG (Highly Oriented Pyrolytic Graphite). As we can see, the diamagnetism of HOPG is clearly different from any superconductor [50]. On the other hand, besides the typical hysteretic behavior of the magnetization expected in a SC, in Fig. 2(B) we show the magnetic torque response \(\mathbf{\tau}=\mathbf{M}\times\mathbf{B}\) of the type II SC NbSe\({}_{2}\). The hysteretic behavior between the sweep-up and -down curves defines the different field regions of a type II SC (vortex solid, vortex liquid, critical field, etc.) [51].

Figure 2: (A) Magnetization _vs_ field for HOPG (Highly Oriented Pyrolytic Graphite), showing a diamagnetic response. (B) Hystereses in the curves of the torque (\(\tau\)) _vs_ H for NbSe\({}_{2}\).

Figure 1: The crystal structure for (a) Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O [45] and (b) Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) [46]. The Wyckoff positions of atoms in space group 176 \(P6_{3}/m\) are labelled, with their coordinates given in Table. 1. The O atoms surrounding P atoms that form PO\({}_{4}\) are at \(6h\) or generic positions and are not labeled for simplicity. We also mark two possible Cu doping positions on Pb\({}_{2}\) with yellow and blue circles, which are \(1b\) and \(1c\) Wyckoff positions in space group 143 \(P3\) and will be called Cu\({}_{1}\)- and Cu\({}_{2}\)-doping, respectively. The H atoms, which are close to the trigonal O atoms at the \(4e\) Wyckoff position, are not shown in (b) for simplicity.

## II Chemical structure

Pb-apatite is structurally related to the parent compound, Ca-apatite, and much can be learned through their comparison.
This crystal type features 1-dimensional channels filled with the requisite charge-balancing anions. For example, the calcium-oxo analogue has been shown to only exist as Ca\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O at temperatures exceeding 1000 \({}^{\circ}\)C [52], below which it rapidly includes water into the lattice, forming the neutral hydrate Ca\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). While the Pb-based system may have different dehydration temperatures to interconvert between Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O [45] and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) [46] (see Fig. 1), historical data would indicate that the hydrated material is thermodynamically preferred. To this end, we performed a number of DFT calculations to assess the formation enthalpy of the hydroxy and oxo Pb apatites considered here. Our calculations reveal that the inclusion of H\({}_{2}\)O, forming Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), is exothermic, with its inclusion favored by 38.5 kcal/mol. These data suggest that while Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O may be a metastable form, direct syntheses of that material will tend to form the hydrate if sufficient protons are available. And even if Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O is formed, it will interconvert to the hydrate upon exposure to air. Structurally, the oxo and hydroxy apatites are better distinguished by their lattice parameters, and less so by direct crystallographic measurements. Ca-apatite exhibits a dramatic lattice contraction in both the \(a\) and \(c\) parameters [53], proportional to the extent of hydration, _i.e._ the conversion from O\({}^{2-}\) to OH\({}^{-}\) contracts the lattice [54]. Such contraction has been attributed to Cu-inclusion within the Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O lattice [2; 7], but a similar effect is predicted through simple hydration. While theoretical studies thus far have focused on the pure Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O or Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) phase, the reported synthesis methods cannot result in single-phase samples of either of them. In the final step of the reaction, lanarkite (Pb\({}_{2}\)(SO\({}_{4}\))O) is mixed in a 1:1 molar ratio with Cu\({}_{3}\)P, but this 2:1 Pb/P ratio is stoichiometrically inconsistent with the desired product. Even loosening the ratio of the reactants would create a significant amount of Cu impurities: the final product has 6 P, so there will be 18 Cu per formula unit of Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O or Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). The powder X-ray diffraction (PXRD) pattern presented in the original reports [1; 2] suggests that the majority of the sample exists in a structure related to Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), but also features many crystalline impurities. Due to the severe off-stoichiometry of the reaction, additional amorphous phases are also to be expected. Only a few Rietveld analyses have thus far been performed [12], and it is difficult to separate the structural effects of Cu inclusion versus hydration without high-quality single-crystal data. To this end, it is unclear if _any_ Cu atoms were actually incorporated into the structure. To address this, we compute the formation enthalpies of Cu substitutions at Pb lattice sites. Following the standard procedure [55], the Cu defects were computed using a 2\(\times\)2\(\times\)2 supercell containing 1831 electrons for the neutral substitution.
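For orientation, we recall the standard expression behind these numbers (a schematic form of the procedure of Ref. [55], in our notation):

\[E_{f}[X^{q}]=E_{\rm tot}[X^{q}]-E_{\rm tot}[{\rm bulk}]-\sum_{i}n_{i}\mu_{i}+q\,(E_{\rm VBM}+E_{F})+E_{\rm corr},\]

where \(n_{i}\) atoms with chemical potential \(\mu_{i}\) are added (\(n_{i}>0\)) or removed (\(n_{i}<0\)) to create the defect \(X\) in charge state \(q\), \(E_{F}\) is the Fermi level referenced to the valence band maximum \(E_{\rm VBM}\), and \(E_{\rm corr}\) is the finite-size correction (here of FNV type). The "rich" and "poor" limits below correspond to different choices of the reference chemical potentials \(\mu_{i}\).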
Computations were referenced to bulk Cu/Pb (rich potentials) and CuO/PbO (poor potentials), and the formation enthalpy was computed using a 2\(\times\)2\(\times\)2 k-mesh, with the FNV correction scheme employed for charged defects and averaged diagonalized dielectric tensors [56; 57], and a Hubbard model (U = 4.0 eV for Cu). The formation enthalpies for the defects are depicted in Fig. 3. Notably, the experimentally observed Cu\({}^{2+}\) (_i.e._ charge-neutral) substitution is predicted to form in _p_-type conditions, at minimum occurring with a 1.2 eV penalty depending on the reference potentials and amounting to a Cu concentration many orders of magnitude below stoichiometric, essentially forbidding its inclusion within the material. However, the charge transition from inclusion of Cu\({}^{2+}\) to Cu\({}^{1+}\) occurs very near the Fermi level for the undoped Pb-apatite material, and Cu\({}^{1+}\) inclusion becomes more favored in _n_-type conditions. The experimental absence of appreciable Cu\({}^{1+}\) suggests that if Cu\({}^{2+}\) is incorporating, it is being aided by other correlated defects not considered here. Additionally, other mechanisms of Cu inclusion are possible (_e.g._ interstitial formation, as observed in Pb\({}^{2+}\)-containing lead halide perovskites [58]). Pb-apatite may also host other dopants, _e.g._ S, which is also present during the reaction [59]. These may be studied in future work.

Figure 3: The defect formation enthalpies for Cu substitution in Pb lattice sites at both rich and poor potentials show that Cu\({}_{\rm Pb}\) is unfavorably incorporated into either Pb site within the apatite lattice.

To address whether Cu and OH\({}^{-}\) are included, and to generally deduce the structure, we performed a modified synthesis of the putative LK-99 compound described in Refs. [1; 2]. Sample purity of all intermediate reagents was confirmed via powder X-ray diffraction using a STOE Stadi P powder X-ray diffractometer equipped with a Mo K\(\alpha\) (\(\lambda\)=0.71073 Å) sealed-tube X-ray source and graphite monochromator at room temperature, in either Debye-Scherrer or transmission geometry (Fig. 25, Fig. 26). In the final step, Cu\({}_{3}\)P and Pb\({}_{2}\)(SO\({}_{4}\))O in a 1:1 molar ratio were ground into a fine powder in mortar and pestle. The powders were loaded into an alumina crucible, placed in a quartz tube, and sealed under dynamic vacuum without any argon back-filling. The tube was then loaded in a furnace and heated to 950 \({}^{\circ}\)C over 4 hours, kept at this temperature for 6 hours, and then shut off to cool quickly. (Upon pulling the sample out of the furnace we observed that our sample, due to its loading in an alumina crucible, did not attack our quartz tube, in contrast to Refs. [1; 2]. This suggests the possibility of further chemical reactions in the protocol of Refs. [1; 2].) The resulting product inside the crucible contained mostly white and orange colored powder in addition to metallic gray lumps, as seen in Fig. 28. A PXRD pattern of this sample can be seen in Fig. 27, showing that our sample has more impurity phases than the original LK-99 sample, but that the characteristic peaks are present; it thus seems possible to isolate the main phase reported in the original sample from our sample as well. Energy-dispersive X-ray spectroscopy (EDS) was also utilized to examine these impurity phases. In Fig. 29, we show that the transparent-orange phase likely gets its color from pockets of metallic Cu. Likewise, in Fig. 30
we observe islands of Cu\({}_{2}\)S embedded in a matrix of the transparent-white phase. As noted previously [60; 61], these Cu\({}_{2}\)S islands are likely the cause of the tenfold drop in resistivity observed at 104.8 \({}^{\circ}\)C in the original preprint. Under magnification, the white powder was observed to be transparent single crystals with a clear hexagonal rod habit (Fig. 4). Knowing that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O was reported to crystallize in a hexagonal space group, a transparent single crystal of dimensions (0.55 \(\times\) 0.092 \(\times\) 0.176 mm) was picked for single-crystal X-ray diffraction (SXRD) analysis using an APEX2 CCD diffractometer equipped with a Mo K\(\alpha\) (\(\lambda\)=0.71073 Å) sealed-tube X-ray source and graphite monochromator at room temperature. Initial unit cell refinement obtained a unit cell in a hexagonal setting with lattice parameters **a**=9.8508(1) Å and **c**=7.4395(2) Å, agreeing with other previously reported structures for lead apatite [45; 46]. Indexation and integration proceeded smoothly for a full hemisphere collection out to a resolution of 0.5 Å. Run list generation and frame data processing were done in APEX2 [62]. An analytical absorption correction was used to scale the data before importing the peak list into JANA2020 [63]. The initial structure solution was obtained in space group P6\({}_{3}/m\), consistent with the observed systematic absences (Fig. 31), from the charge-flipping algorithm as implemented in SUPERFLIP [64]. An initial refinement, done on \(F^{2}\), was needed to obtain a valid initial structure with 6 distinct sites (2 Pb, 1 P, and 3 O). Notably, during this initial refinement the oxygen atoms along the c-axis were removed. A Fourier transform of the residual electron density, visualized with the VESTA software package [65], reveals 2 unique crystallographic pockets of significant electron density around the points [0 0 0] (Wyckoff position 2\(b\)) and [0 0 1/4] (2\(a\)) in P6\({}_{3}/m\), shown in Fig. 5, suggestive of a Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) structural solution. After oxygen atoms were added to these points, freely refining their occupancies results in a site occupancy factor of the 2\(b\) oxygen centered at [0 0 0] of 1.163, an unphysical value for an OH\({}^{-}\) molecular unit. Clearly, an atom with more electron density is needed. Replacing the oxygen instead with sulfur, a chemically similar element that is also present in the reaction, seems like a likely candidate, although further elemental analysis methods will be needed to distinguish between SH\({}^{-}\), PH\({}_{2}^{-}\), or other possible dopants at this site. Freely refining occupancies for both an O at [0 0 1/4] and S located at [0 0 0] results in site occupancy factors of 0.54 and 0.56, respectively. That the freely refined occupancies sum very close to 1 suggests that this could be a physically real interpretation of the structural solution. After refining the isotropic parameters anisotropically, a final refinement converges with a goodness of fit (GoF) parameter of 1.22 and an R value of 3.94 compared to all reflections. Here, our site occupancies for O (1.08) and S (1.12) sum to above 1.

Figure 4: A single crystal similar in transparency, shape, and size to what we picked for SXRD.

Figure 5: Residual electron density located at the 2a and 2b positions along the c axis. Positive isosurfaces are drawn at 7e Å\({}^{-3}\).
If we decide to restrict the occupancies of the 2\(\delta\) S and 2\(a\) O to sum to a site occupancy factor of 1, we end refinement with a final composition Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\). This refinement has nominally the same refinement statistics (GoF(all) = 1.22, R(all) = 3.95). We also investigated two ways of possible Cu doping within our refinement: one in which the Cu atom substitutes on the Pb lattice site, and another which the Cu atom is inserted along the chain. Attempts to dope both Pb sites with Cu results in a refined composition of Pb\({}_{9.55}\)Cu\({}_{0.45}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\) with similar statistics (GoF(all)= 1.21, R(all) = 3.93). We found the Cu has to be added in as a split site after Pb position has been refined completely. The last two steps of occupancy and anisotropic parameter refinement had to be done by restraining the Pb/Cu split site position with automatic refining keys switched off. The compositional refinement of this structure certainly warrants skepticism. Unfortunately for us, this system has complications for EDS, namely, the Pb L\({}_{\alpha 1}\) edge lies at the same energy as the S K\({}_{\alpha 1}\) edge. This, along with neutron diffraction being needed to confirm H incorporation means future studies are needed. However, in search of confirming the refinements Cu incorporation, we carbon coated a small transparent needle and ran EDS on a Quanta environmental scanning electron microscope. Using the elemental mapping technique, we were surprised to find that we were able to detect Cu, and it is homogeneously distributed within the crystal. Furthermore, in Fig. 6 the Pb:Cu ratio we detected over a roughly 15 minute period was 58.3:2.8, which roughly translates to a ratio of 9.55:0.46, an indicator that the Cu incorporation in our refined composition of Pb\({}_{9.55}\)Cu\({}_{0.45}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\) could be correct. However, we would like to acknowledge the fact that 2.8 atomic percent is extremely low for EDS characterization and future studies utilizing synchrotron radiation are needed to confirm incorporation. In addition, as there is a lot of Cu in the reaction it cannot be ruled out that Cu is at the surface of the crystal. We can also obtain a reasonable refinement to the data if we place Cu into the channel with composition Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH) \({}_{1.11}\) Cu\({}_{0.49}\) (GoF = 1.22, R = 3.93). Still, as the crystals are transparent, charge balance needs to be maintained and thus this last solution is not chemically reasonable. Full collection, integration, and refinement statistics can be found in App. C. We would to stress here again that the transparent nature of the crystal should rule out superconductivity as a property, it rather indicates a wide band gap. Next, we compare simulated powder patterns of our structural solution, the ICSD reported structures, and the relaxed DFT structures of Cu doped variants with the published PXRD pattern from [1; 2]. To do this, structural position files were loaded into VESTA, and patterns simulated for using a Cu K\(\alpha\) (\(\lambda=1.5406\)A) wavelength. Simulated patterns were then overlaid atop the experimental pattern, extracted using an in-house Mathematica code. For ease of visualization, we adjust the simulated patterns via a zero point shift to match the peaks expected around 18 degrees in the experimental data. 
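To make the peak-position arithmetic concrete, the following is a minimal sketch of the conversion and shift described above (the d-spacings and zero-point value below are placeholders for illustration, not our refined data):

```python
import numpy as np

LAMBDA_CU = 1.5406  # Cu K-alpha wavelength in Angstrom

def two_theta(d, zero_shift=0.0):
    """Bragg angle 2*theta (degrees) for d-spacing d, plus a rigid zero-point shift."""
    return 2 * np.degrees(np.arcsin(LAMBDA_CU / (2 * d))) + zero_shift

# Placeholder d-values (Angstrom); d ~ 4.93 A corresponds to 2-theta ~ 18 degrees
d_spacings = np.array([4.93, 4.06, 2.96])
print(two_theta(d_spacings, zero_shift=0.2))  # shifted simulated peak positions
```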
This zero-point correction is not uncommon for Rietveld refinements, and is needed if the diffractometer used for the experimental pattern is misaligned. Most zero-point corrections were minimal, i.e. 0.2\({}^{\circ}\). We find good agreement (see Fig. 32 and Fig. 33) of our SXRD solution with the reported data. Other structures fit the data less well, and a discussion is given in the SI. This analysis is preliminary, as the cropped data from the original preprint are not of high enough quality to perform a Rietveld analysis. It does, however, show that our structure obtained from SXRD, measured on a _transparent_ crystal, agrees with the powder pattern published in the original LK-99 paper. Further analysis of our samples will follow. Due to the numerous uncertainties regarding the ultimate structural composition of the LK-99 material, we will investigate a variety of scenarios using ab initio density functional theory. In our ab initio studies, we still focus on structures in which Cu substitutes for Pb, as we come to different conclusions than previous theoretical works assuming the same substitutions.

## III Ab initio results

We consider two different experimental structures of lead apatite for _ab-initio_ calculations in this work, i.e., Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O [45; 66] and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) [46; 67], both of which have space group (SG) 176 \(P6_{3}/m\) symmetry. Their crystal structures are shown in Fig. 1. The O atoms at the 4\(e\) Wyckoff position in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O have \(\frac{1}{4}\) occupancy, while the (OH)\({}_{2}\) atoms at 4\(e\) in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) have \(\frac{1}{2}\) occupancy. The atomic positions are summarized in Table. 1. We remark that lead apatite has many experimentally reported structures, with the position of the trigonal-O or (OH)\({}_{2}\) being slightly different. For example, the structures in Refs. [68; 69; 70; 71] have trigonal-(OH)\({}_{2}\) located at Wyckoff position \(2a=(0,0,\frac{1}{4}),(0,0,\frac{3}{4})\) without fractional occupancy. The LK-99 [1; 2] Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O (\(0.9<x<1.1\)) is hypothesized to be synthesized by doping the Pb atoms with Cu at the 4\(f\) Wyckoff positions.

### Electronic Structure

In this section, we discuss the electronic structures of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) in both undoped and Cu-doped phases. We first consider the undoped phase. As reported in the literature [45; 46], the trigonal-O in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O has \(\frac{1}{4}\) occupancy, while the trigonal-(OH)\({}_{2}\) in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) has \(\frac{1}{2}\) occupancy. Such fractional occupancy is difficult to treat in DFT. For simplicity, we fix their positions to remove the fractional occupancy, i.e., we fix O at \((0,0,0.634)\), and fix O in (OH)\({}_{2}\) at \((0,0,0.04),(0,0,0.54)\) and H at \((0,0,-0.10),(0,0,0.40)\), such that the total number of electrons in the unit cell is the same as in the fractionally occupied structure. Mind that the positions of the H atoms are not given in the original experimental structure [46] we use; they are manually added using the \(O\)-\(H\) bond length in the H\({}_{2}\)O molecule, i.e., about 1 Å, which also agrees with the \(O\)-\(H\) length reported in another experimental structure [72].

Figure 6: Energy-dispersive X-ray spectroscopy map of a translucent crystal coated in carbon.
Notice that after fixing the trigonal-O and (OH)\({}_{2}\) positions, the original SG 176 \(P6_{3}/m\) symmetry is lowered to SG 143 \(P3\) and SG 173 \(P6_{3}\) for the two structures, respectively. Remark that fixing the trigonal-O at any of the four \(4e\) positions is equivalent, as they are related by \(\{C_{6z}|00\frac{1}{2}\}\) and \(\{M_{z}|000\}\) in SG 176. For the trigonal (OH)\({}_{2}\) the scenario is similar. Thus we will focus on the aforementioned positions of the trigonal-O and (OH)\({}_{2}\) in the following. We relax the structures without fractional occupancy and obtain the relaxed lattice constants and atomic positions summarized in Appendix. A.1 (Table. 4, 5). The symmetry is maintained during the relaxation, i.e., SG 143 for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and SG 173 for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). The band structures for the two relaxed structures are shown in Fig. 7; they host a large band gap of 2.761 and 2.635 eV, respectively. The highest occupied bands are quasi-flat and mainly come from the \((p_{x},p_{y})\) orbitals of the trigonal-O atoms. We then consider the Cu-doped phases. As hypothesized in Refs. [1; 2], the Cu-doped LK-99 Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O (\(0.9<x<1.1\)) has Cu doping the Pb atoms at the \(4f\) position. Fixing \(x=1\) for simplicity, there exist four possible Cu-doping structures, obtained by placing the Cu atom at one of the \(4f\) positions. The four \(4f\) positions are equivalent in SG 176. However, after fixing (OH)\({}_{2}\) in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), the SG is lowered to SG 173 \(P6_{3}\), which gives two inequivalent Cu doping positions; we call the one at \((\frac{1}{3},\frac{2}{3},z)\) Cu\({}_{1}\)-doping and the one at \((\frac{2}{3},\frac{1}{3},-z)\) Cu\({}_{2}\)-doping, as marked using yellow and blue circles in Fig. 1. For Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, after fixing the position of the trigonal-O, the SG is lowered to 143 \(P3\), which makes all 4 positions inequivalent. For simplicity, we only consider the Cu\({}_{1}\)-doping and Cu\({}_{2}\)-doping, as the other two possible dopings have similar band structures, as reported in Ref. [14]. We relax the structures of Cu\({}_{1}\)- and Cu\({}_{2}\)-doping for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), both having SG 143 \(P3\) symmetry, with lattice constants and atomic positions summarized in Appendix. A.1 (Table. 4, 5). The four relaxed structures are used to perform DFT calculations. We consider the paramagnetic (PM) and ferromagnetic (FM) phases for each structure, with their band structures shown in Figs. 8, 9, and the orbital projections given in Appendix. A.2 (Fig. 13, 14).
In the PM phase, for all four structures, there exist two quasi-flat bands with \(\frac{3}{4}\) filling at the Fermi level \(E_{f}\), contributed mainly by the \((d_{xz},d_{yz})\) orbitals of Cu (with Cu \((d_{xz},d_{yz})\) weight about 50%, Cu \((d_{xy},d_{x^{2}-y^{2}})\) weight about 20%, and the \(p\) orbitals of the O atoms close to Cu weight about 30% in total),
### Phonon Spectrum In this section, we perform the phonon calculations to check the stability of proposed structures in the literature, including Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). We find that, in their nominal 111 unit cells, _all_ structures display imaginary phonon frequencies. We focus our discussion below on Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) since powder X-ray diffraction shows a strong similarity between it and LK-99. Supplemental figures for other structures may be found in App. A.4. For the undoped structures, both the Pb-O and Pb-(OH)\({}_{2}\) present negative/imaginary phonon at low-T and high-T with \(1\times 1\times 1\) cell, where the phonon of Pb-(OH)\({}_{2}\) is shown in Fig. 10(a,b). Here, the temperature effect is included via electronic smearing in the harmonic approximation level. As the atomic projection presents, the phonon instability is mainly contributed - as theoretically expected - by the heaviest Pb atoms at both Pb1 and Pb2 sites. As the temperature goes higher, the imaginary phonons harden, especially in the \(k_{z}=0\) plane where imaginary modes disappear at high-T. Since the calculation is performed in \(1\times 1\times 1\) cell with a short \(c\) compared to \(a\), the negative phonon in \(k_{z}=\pi\) plane may be caused by the short cutoff in the \(c\) direction. To verify this scenario, we also perform phonon calculation in \(1\times 1\times 2\) supercell at high-T. As shown in Fig. 10(c), the negative branches on the \(k_{z}=\pi\) plane in the \(1\times 1\times 1\) cell calculation become positive, as the cell is enlarged to include the force constants between atoms with longer distance. We expect that the residual imaginary phonon at the \(\Gamma\) point can be eliminated by performing phonon calculations in a larger supercell after relaxation. We also observe that the phonon spectrum presents a good separation of frequency based on the mass of elements: the heaviest Pb dominates the lowest frequency and gives the imaginary modes, H phonons lie at a much higher frequency, which is not shown in the plot, and P and O phonons lie in the middle. For the Cu-doped structure, previous DFT calculations suggest a (ferro/antiferro)magnetic ground state. Therefore, the phonon calculations are performed in both paramagnetic and ferromagnetic phases as presented in App. A.4 with a \(1\times 1\times 1\) cell. Similar to the undoped case, the doped structures show negative phonon modes, but tend to harden in the FM phase. Both O and the Cu dopant contribute to the imaginary phonon at low \(T\) (see App. A.4 ) and one should relax the structure with much care for the doping effect to obtain a stable phonon spectrum. The difficulty in obtaining a stable structure even for the undoped parent compounds, which exist in nature, underscores an important challenge for first principles studies of the doped compound, whatever its nature. 
\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Compound & Phase & PM & FM & AFM \\ \hline \multirow{2}{*}{Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O} & Cu\({}_{1}\)-doping & 0 & -0.129 & -0.130 \\ \cline{2-5} & Cu\({}_{2}\)-doping & 0.195 & 0.089 & 0.092 \\ \hline \multirow{2}{*}{Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)} & Cu\({}_{1}\)-doping & 0 & -0.136 & -0.136 \\ \cline{2-5} & Cu\({}_{2}\)-doping & -0.177 & -0.332 & -0.332 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the total energies of the PM, FM, and A-type AFM phases per unit cell (in eV) calculated in DFT. The total energies of the PM Cu\({}_{1}\)-doping phase in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) are used as zero. It can be seen that in all four cases, the FM and AFM phases have very close total energies, both lower than the PM phase. In Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O the Cu\({}_{1}\) doping has lower energy, while in Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) the Cu\({}_{2}\) phase is lower.

## IV Tight-binding models

In this section, we construct short-range tight-binding models for both the Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) compounds and for both considered positions of the Cu dopant. Such models are a prerequisite for studying the many-body phases that LK-99 is conjectured to realize. We emphasize that the precise chemical composition, purity, and structure of the supposed compound are far from being settled, and initial proposals may need to be re-examined. Nevertheless, we study the _proposed_ structures here for completeness. We find, in all cases and in agreement with independent calculations, that Cu forms a high-density flat band at the Fermi level. Based on this finding, we ask whether a flat band superconductivity scenario is viable. We perform calculations of the quantum geometry in these bands and find that they are atomically localized and will likely not favor superconductivity. In all cases, we find that the two bands straddling the Fermi level are atomic and dominated by the Cu \(d\)-orbitals. In the Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O structure, a four-band model is required due to the close proximity of the O bands, which hybridize with the Cu orbitals, particularly in the \(k_{z}=\pi\) plane. For this structure, the dispersion of the Cu bands is predominantly due to hybridization with O. In contrast, Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) shows well-isolated Cu bands at the Fermi level, and a two-band model can be constructed. App. B contains a complete discussion of the parameters, symmetry, and quantum geometry of the bands. A low-energy model of the flat bands can be constructed from the \(d_{yz},d_{xz}\) orbitals of Cu at the 1b/1c positions for Cu\({}_{1}\)/Cu\({}_{2}\) doping, and the \(p_{x},p_{y}\) orbitals of O at the 1a position in space group \(P3\) (see App. B for the conventions of the lattice). Our DFT calculations show degeneracies at \(\Gamma\) and \(A\) (which form double Weyl points [9]) that cannot be protected by \(C_{3}\) alone, since it forms an abelian group. We check that the separate spin sectors possess a spin-less time-reversal symmetry \(\mathcal{T}^{2}=+1\), which protects a 2D complex irrep \({}^{1}E^{2}E\) arising from the \(d\) and \(p\) orbitals.
The preservation of the spin-less \(\mathcal{T}\) in the FM phase comes from the fact that the magnetization is taken into account as the local moment term \(M(\mathbf{r})s_{z}\) (with real scalar \(M(\mathbf{r})\)). The spin-orbit coupling is negligible for the Cu and O atoms.

Figure 10: Phonon spectrum for the relaxed Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) (without Cu doping) structure at (a) low and (b) high temperature, calculated using a \(1\times 1\times 1\) supercell, which shows imaginary, unstable phonons in the \(k_{3}=\pi\) plane. (c) Phonon spectrum calculated using a \(1\times 1\times 2\) supercell at high temperature, which stabilizes the \(k_{3}=\pi\) modes but softens a single mode at the \(\Gamma\) point. In addition, we observe that the soft phonon modes shown in panels (a) and (b) are flat modes in the \(k_{3}=\pi\) plane. A series of flat phonon modes are also present at finite frequency in the high-\(T\) calculations, such as the flat modes around 1 THz in panel (c).

### Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O

In the Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O structure, it is necessary to construct a four-band model due to the O-dominated bands that appear closely below (and hybridize with) the Cu-dominated bands at the Fermi level. The model takes the form

\[h_{4}(\mathbf{k})=\begin{pmatrix}h_{C}(\mathbf{k})&h_{CO}(\mathbf{k})\\ h_{CO}^{\dagger}(\mathbf{k})&h_{O}(\mathbf{k})\end{pmatrix} \tag{1}\]

consistent with these symmetries. For Cu\({}_{1}\) doping, it is sufficient to include the three nearest-neighbor Cu-O hoppings (\(h_{CO}(\mathbf{k})\)), the six in-plane hoppings for O and Cu, and the two vertical O-O hoppings (\(h_{O}(\mathbf{k})\)). In fact, the Cu hoppings are less than 4 meV and can be safely dropped, effectively setting \(h_{C}(\mathbf{k})=0\). The Cu\({}_{2}\)-doping structure exhibits a smaller gap between the O and Cu bands and requires the inclusion of next-nearest-neighbor hoppings, with full expressions given in App. B. We find excellent agreement with the DFT spectrum and irreps in Fig. 11 within this approximation. Our tight-binding model shows that the dispersion of the Cu bands at the Fermi level arises essentially in its entirety from hybridization with O. Nevertheless, symmetry eigenvalues indicate that the bands are topologically trivial: O hybridization does not cause a topological change. To confirm this, we compute the 2D Fubini-Study quantum metric (Fig. 11(c,d))

\[g(\mathbf{k})=\frac{1}{2}\sum_{i=x,y}\mathrm{Tr}\,\partial_{i}P(\mathbf{k})\partial_{i}P(\mathbf{k}) \tag{2}\]

where \(P(\mathbf{k})=U(\mathbf{k})U^{\dagger}(\mathbf{k})\) is the projector onto the eigenvector matrix \(U(\mathbf{k})\) of the two Cu bands, and the non-abelian Wilson loop (Fig. 20)

\[W(k_{2},k_{3})=U^{\dagger}(2\pi,k_{2},k_{3})\prod_{k_{1}}^{2\pi\gets 0}P(\mathbf{k})\,U(0,k_{2},k_{3}) \tag{3}\]

computed over the set of two bands at the Fermi level. Both show strongly localized states, consistent with their elementary band representation. Furthermore, the basis Wannier functions are localized: the square root of the Wannier spread is about \(0.20a\approx 0.26c\) for Cu and about \(0.33a\approx 0.43c\) for O. As we can see, the Wannier function is more localized for Cu than for O, explaining why the hopping among Cu is smaller than that among O.
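The quantities in Eqs. (2) and (3) are straightforward to evaluate numerically from any Bloch Hamiltonian. As an illustration (not our production code; the toy two-band Hamiltonian below is a placeholder, not the fitted model of App. B), a minimal finite-difference sketch in Python:

```python
import numpy as np

def projector(hk, nocc):
    """Projector P = U U^dag onto the nocc lowest bands of a Bloch Hamiltonian."""
    _, vecs = np.linalg.eigh(hk)
    u = vecs[:, :nocc]
    return u @ u.conj().T

def quantum_metric(h, k, nocc, dk=1e-4):
    """Tr g(k) = (1/2) sum_i Tr[dP_i dP_i], Eq. (2), via central differences."""
    tr_g = 0.0
    for i in range(2):  # in-plane derivatives along k_1, k_2
        kp, km = k.copy(), k.copy()
        kp[i] += dk; km[i] -= dk
        dP = (projector(h(kp), nocc) - projector(h(km), nocc)) / (2 * dk)
        tr_g += 0.5 * np.trace(dP @ dP).real
    return tr_g

def wilson_loop(h, k2, k3, nocc, nk=101):
    """Eigenphases of the Wilson loop along k_1, cf. Eq. (3) (periodic gauge)."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    us = [np.linalg.eigh(h(np.array([k1, k2, k3])))[1][:, :nocc] for k1 in ks]
    us.append(us[0])  # close the loop; assumes h(k1 + 2pi) = h(k1)
    w = np.eye(nocc, dtype=complex)
    for a in range(nk):
        w = w @ (us[a].conj().T @ us[a + 1])
    return np.sort(np.angle(np.linalg.eigvals(w)))

# Placeholder two-band Hamiltonian (meV-scale toy dispersion, NOT the LK-99 fit)
def h_toy(k):
    d = 0.005 * (np.cos(k[1]) + 1j * np.sin(k[0]))
    return np.array([[0.01 * np.cos(k[0]), d], [np.conj(d), -0.01 * np.cos(k[0])]])

print(quantum_metric(h_toy, np.array([0.3, 0.2, np.pi]), nocc=1))
print(wilson_loop(h_toy, 0.2, np.pi, nocc=1))
```

For an atomically localized band, \(\mathrm{Tr}\,g(\mathbf{k})\) integrates to a small value and the Wilson loop eigenphases are flat as functions of \((k_{2},k_{3})\), which is what we find for the Cu bands.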
### Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)

We construct a 2-band model \(h_{2}(\mathbf{k})\) for the two bands near the Fermi level in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) for both Cu\({}_{1}\) doping and Cu\({}_{2}\) doping and in both the PM and FM phases. The 2-band model is constructed with \(d_{xz}\) and \(d_{yz}\) on Cu (at 1b for Cu\({}_{1}\) doping and at 1c for Cu\({}_{2}\) doping), and it only contains nearest-neighbor (NN) hopping along all three directions in addition to the onsite energy term. The form of the model is the same for both dopings and for both the PM and FM phases, since they all preserve the spinless TR and \(C_{3}\) symmetries for the 2 bands near the Fermi level according to the Wannierization of the DFT data. With parameter values determined from the Wannierization of the DFT data, the model gives bands very similar to the DFT band structure (Fig. 12). (The detailed expression and parameter values of the 2-band model can be found in Appendix. B.2.) In particular, for one specific Cu doping, we choose the same parameter values for the NN hoppings in the PM and FM phases, since the DFT values in the two phases are very close (difference smaller than 0.1 meV); the only non-negligible difference between PM and FM is an onsite energy shift, indicating that the effect of FM on the states near the Fermi level is just a uniform shift of their energy. The small bandwidth of the bands near the Fermi level comes from the small hopping magnitudes (maximum hopping about 12 meV), which in turn come from the small spread of the Wannier functions of the basis (the square root of the Wannier spread of each orbital is about \(0.24a\approx 0.32c\)). (See Appendix. B.2 for details.) The Wannier spread of the 2-band model is larger than that of the Cu in the 4-band model, which is consistent with the larger hopping here compared to the hopping among Cu in the 4-band model. In the DFT band structure, the symmetry-protected gapless points at \(\Gamma\) and \(A\) are double Weyl points with chirality \(\pm 2\). (See Appendix. B.2 for details.) However, the band splitting along \(\Gamma\)-A is very small (maximum splitting about 2 meV), and thus in our simplified NN-hopping model we neglect the band splitting along \(\Gamma\)-A for simplicity. Although this simplification makes the two double Weyl points merge into an accidental nodal line along \(\Gamma\)-A, it will be convenient for future studies of the correlated physics, since it makes the eigenvectors of the Hamiltonian independent of \(k_{z}\). Both Ref. [16] and Ref. [9] contain 2-band PM models constructed from \(d_{xz}\) and \(d_{yz}\) on Cu. However, the 2-band PM model in Ref. [16] has a mirror symmetry along \(y\), which we do not include in our model since the DFT calculation indicates a considerable breaking of this mirror symmetry. (See Appendix. B.2 for details.) The 2-band PM models in both Ref. [16] and Ref. [9] have considerably different parameter values than ours. Ref. [9] shows double Weyl points at \(\Gamma\) and A in their 2-band model, for which they choose to include the small band splitting along \(\Gamma\)-A.

Figure 11: Comparison of DFT and tight-binding model band structures for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. The relaxed DFT (red) and short-range tight-binding model (blue) are shown for Cu\({}_{1}\) doping (a) and for Cu\({}_{2}\) doping (b). The 2D quantum metric \(g(\mathbf{k})\) is computed on the plane \(k_{3}=\pi\) for the Cu\({}_{1}\) (c) and Cu\({}_{2}\) (d) models, showing peaked features where the Cu and O bands have the smallest direct gap.
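To make the structure of such a model concrete, the following is a minimal sketch of a \(C_{3}\)-symmetric two-band NN model on the triangular Cu lattice (the hopping values below are placeholders for illustration; the fitted parameters are listed in Appendix B.2):

```python
import numpy as np

# C3 rotation acting on the (d_xz, d_yz) doublet, which transforms as an
# in-plane vector under rotations about z
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])

# In-plane NN bond vectors of the triangular Cu lattice (lattice constant a = 1)
a1 = np.array([1.0, 0.0])
a2 = R @ a1
a3 = R @ a2

# Placeholder NN hopping matrix on the a1 bond (meV scale; NOT the fitted values
# of App. B.2). The other bonds follow from the C3 constraint T(C3 d) = R T(d) R^T.
T1 = np.array([[8.0, 3.0], [3.0, -4.0]])
T2 = R @ T1 @ R.T
T3 = R @ T2 @ R.T
tz, E0 = -2.0, 0.0  # vertical NN hopping and onsite energy (placeholders)

def h2(kx, ky, kz):
    """2-band Bloch Hamiltonian: onsite + in-plane NN + vertical NN. Real hoppings
    implement the spinless time-reversal symmetry h(-k) = h(k)*."""
    hk = (E0 + 2 * tz * np.cos(kz)) * np.eye(2, dtype=complex)
    for T, d in [(T1, a1), (T2, a2), (T3, a3)]:
        phase = np.exp(1j * (kx * d[0] + ky * d[1]))
        hk += T * phase + T.T * np.conj(phase)
    return hk

# The two bands are degenerate at Gamma and A, mimicking the 1E2E irrep pairing
for name, k in [("Gamma", (0.0, 0.0, 0.0)), ("A", (0.0, 0.0, np.pi))]:
    print(name, np.round(np.linalg.eigvalsh(h2(*k)), 3))
```

By construction, \(h_{2}(C_{3}\mathbf{k})=R\,h_{2}(\mathbf{k})R^{T}\), and with purely real hoppings the \(\Gamma\)-A splitting vanishes, realizing the accidental nodal line of the simplified model discussed above.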
## V Interacting Hamiltonian

We use the constrained random phase approximation (cRPA) method [73; 74; 75; 76] to compute the screened Coulomb interaction for the \((d_{xz},d_{yz})\) orbitals of Cu near \(E_{f}\) for the two Cu-doped structures of Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). In Table. 3, we list the values of the Hubbard-Kanamori parameters, i.e., the onsite intra-orbital Hubbard \(\mathcal{U}\), inter-orbital \(\mathcal{U}^{\prime}\), and onsite exchange \(\mathcal{J}\). The interacting Hamiltonian can be constructed as

\[\hat{H}_{\text{int}} =\mathcal{U}\sum_{im}n_{im\uparrow}n_{im\downarrow}+\mathcal{U}^{\prime}\sum_{i,m\neq m^{\prime}}n_{im\uparrow}n_{im^{\prime}\downarrow} \tag{4}\] \[+(\mathcal{U}^{\prime}-\mathcal{J})\sum_{i,m<m^{\prime},\sigma}n_{im\sigma}n_{im^{\prime}\sigma}\] \[+\mathcal{J}\sum_{i,m\neq m^{\prime}}(-c^{\dagger}_{im\uparrow}c_{im\downarrow}c^{\dagger}_{im^{\prime}\downarrow}c_{im^{\prime}\uparrow}+c^{\dagger}_{im\uparrow}c^{\dagger}_{im\downarrow}c_{im^{\prime}\downarrow}c_{im^{\prime}\uparrow}),\]

where \(i\) is the site index and \(m\) the orbital index. We remark that the _ab-initio_ Hubbard-Kanamori parameters may need to be renormalized, as the two quasi-flat bands near \(E_{f}\) also have weight from other orbitals.

### Comments on Flat Bands and Interactions

Flat bands are thought to be beneficial for strongly correlated phases, since the interaction strength necessarily dominates over the single-particle bandwidth. However, the quantum geometry of the flat bands is another essential ingredient in determining the nature of the resulting strongly correlated phases. For instance, in the single-band square-lattice Hubbard model at half filling, the flat-band limit \(t\to 0\) yields a fully decoupled lattice (the atomic limit), which is a perfect paramagnet. It is the \(O(t^{2}/U)\) corrections that stabilize an antiferromagnetic phase. If, however, the interaction strength is much larger than the bandwidth of a set of isolated bands but smaller than the gap between these bands and their complement, a different set of phases can emerge. In the repulsive case, ferromagnetism can be proven [43; 44], and in the attractive case, superconductivity (or phase separation) [26; 34; 43]. Features of the many-body phase, for instance the mass of the Cooper pair, the stiffness of the spin wave, and a lower bound on the electron-phonon coupling, are determined by quantum geometry [26; 34; 77]. To entertain the possibility of superconductivity in flat bands, we recall that the mean-field critical temperature in flat bands will be proportional to the interaction strength [22]. Although this scenario is exponentially improved over one-band BCS theory, it still requires an attractive interaction of roughly 100 meV. This order of magnitude is larger than the bandwidth of the two-band model in the OH structure, and roughly equal to the bandwidth of the upper Cu bands in the four-band model for the O structure. (We note in this case that the band gap is small, and projecting the interaction onto the flat bands may not be justified.) The repulsive Coulomb interaction we computed is of eV scale, and is much larger than the total bandwidth of both tight-binding models.
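As a concrete check of the energy scales in Eq. (4), one can diagonalize the Kanamori interaction exactly on a single Cu site. The sketch below is illustrative only (a brute-force Fock-space construction, using the cRPA values for the Cu\({}_{1}\)-doped hydroxide structure from Table. 3); it reproduces the textbook two-electron multiplet structure, with the spin-triplet at \(\mathcal{U}^{\prime}-\mathcal{J}\) lowest, consistent with the tendency toward magnetism discussed above:

```python
import numpy as np
from itertools import product

U, Up, J = 2.88, 1.99, 0.45  # cRPA values for Pb9Cu1(PO4)6(OH)2, Cu1 (Table 3, eV)
idx = lambda m, s: 2 * m + s  # spin-orbital index: orbital m in {0,1}, spin s in {0,1}

def apply_c(state, i, dagger):
    """Apply c_i (dagger=0) or c_i^dag (dagger=1) to a Fock bitstring, or None."""
    occ = (state >> i) & 1
    if dagger == occ:  # c^dag on occupied / c on empty annihilates the state
        return None
    sign = (-1) ** bin(state & ((1 << i) - 1)).count("1")  # fermionic string
    return sign, state ^ (1 << i)

def add_term(H, coef, ops):
    """Add coef * (operator string) to H; ops applied right-to-left, as written."""
    for src in range(16):
        amp, st = coef, src
        for i, dg in reversed(ops):
            r = apply_c(st, i, dg)
            if r is None:
                break
            sgn, st = r
            amp *= sgn
        else:
            H[st, src] += amp

n = lambda m, s: [(idx(m, s), 1), (idx(m, s), 0)]  # density n = c^dag c

H = np.zeros((16, 16))
for m in range(2):                       # intra-orbital Hubbard U
    add_term(H, U, n(m, 0) + n(m, 1))
for m, mp in product(range(2), repeat=2):
    if m != mp:                          # inter-orbital U' (opposite spins)
        add_term(H, Up, n(m, 0) + n(mp, 1))
for s in range(2):                       # (U' - J) for parallel spins, m < m'
    add_term(H, Up - J, n(0, s) + n(1, s))
for m, mp in product(range(2), repeat=2):
    if m != mp:                          # spin-flip and pair-hopping terms of Eq. (4)
        add_term(H, -J, [(idx(m, 0), 1), (idx(m, 1), 0), (idx(mp, 1), 1), (idx(mp, 0), 0)])
        add_term(H, J, [(idx(m, 0), 1), (idx(m, 1), 1), (idx(mp, 1), 0), (idx(mp, 0), 0)])

# Two electrons on the site: triplet at U'-J (3-fold) lies below the singlets
two_el = [s for s in range(16) if bin(s).count("1") == 2]
print(np.round(np.linalg.eigvalsh(H[np.ix_(two_el, two_el)]), 3))
```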
Two recent papers [9; 20] have proposed models where one of the Cu bands is perfectly flat. Although the two Cu bands together form an indecomposable elementary band representation with trivial topology, analyzing only one of the two bands (per spin) away from their degeneracy points at \(\Gamma\) and \(A\) can show strong quantum geometry. It is only appropriate to consider a strong-coupling ground state built from a single flat (gapless) band in the limit where the interaction strength is much smaller than the bandwidth. This limit, while interesting and worthy of study, places an upper bound on the interaction strength which limits its applicability to a high-temperature phase.

Figure 12: Comparison of the relaxed DFT (red) and 2-band tight-binding model band structures (blue) and irreps for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) in the specified Cu doping and magnetic phases. The relaxed DFT and the NN-hopping tight-binding model (blue) are in close agreement. The expression of the model and the values of the model parameters are specified in Appendix B.2. Note that this is a 2-band model, so the quantum geometric tensor of both bands, taken together, vanishes.

\begin{table} \begin{tabular}{c|c|c|c} \hline Phase & \(\mathcal{U}\) & \(\mathcal{U}^{\prime}\) & \(\mathcal{J}\) \\ \hline Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O Cu\({}_{1}\) & 2.75 & 1.71 & 0.52 \\ \hline Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O Cu\({}_{2}\) & 3.53 & 2.38 & 0.58 \\ \hline Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) Cu\({}_{1}\) & 2.88 & 1.99 & 0.45 \\ \hline Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) Cu\({}_{2}\) & 1.85 & 0.96 & 0.14 \\ \hline \end{tabular} \end{table} Table 3: The _ab-initio_ Hubbard-Kanamori parameters. In the table, \(\mathcal{U}\), \(\mathcal{U}^{\prime}\), \(\mathcal{J}\) denote the onsite intra-orbital Hubbard, inter-orbital Hubbard, and onsite exchange interaction. All numbers are in eV.

## VI Further verification and tests

### Immediate Experiments

As we experimentally found that the samples are multiphase, short-term experiments should focus on isolating the different phases and characterizing their compositions and structures. Also, several more syntheses should be performed to confirm that the outcomes are repeatable. Once we reliably know the outcome of the synthesis, the sample needs to be thoroughly characterized with a wide range of methods. SEM/EDX will be a good first attempt to analyze how many different phases are in the materials and what their respective compositions are. It would be ideal if those phases could be separated, either manually or in different synthesis attempts targeting the respective compositions found via SEM/EDX. As we showed, it is possible to pick single crystals from at least one phase.

### Chemical Structure Verification

Once phase-pure materials are obtained, they can be thoroughly characterized. If single crystals are obtained, SXRD is ideal, in combination with chemical analysis such as SEM/EDX or, if enough sample can be separated, ICP-OES. Otherwise, PXRD in combination with Rietveld refinement will be necessary. If the phases are not crystalline, the structural characterization becomes more complicated, and chemical analysis is a more important first step, likely to be followed by high-resolution electron microscopy. Characterization of physical properties is most meaningful on single-phase materials and can be performed after thorough characterization of all components of the samples.
Similarly, theoretical analysis of the electronic structures is most meaningful once the final crystal structures are known.

### Theoretical Analysis

Once the chemical structure is firmly established and the set of bands at the Fermi level is settled, the bands must be fully analyzed based on the principles outlined in this paper, including their orbital, quantum geometric, topological, and localization content in all the multiple phases that result from the reaction. The interacting Hamiltonian must then be derived, and with it the values of the Hubbard \(U\). Then the ground state of the system must be determined. Even if not superconducting, one must still explain the levitation properties shown in the initial experiments - either large diamagnetism or some flavor of ferromagnetism. The phonons, and more importantly the electron-phonon interaction, need to be obtained; an understanding is needed of the rather flat Pb phonon bands on the \(k_{z}=\pi\) plane that seem to become negative at low temperature. If, however, the sample does turn out to be superconducting, beyond being a fundamental discovery, it would also point out the limitations of our theoretical understanding of the mechanisms that create superconductivity.

## VII Conclusions

Assuming the reported structure and the location of the Cu dopants, we have performed first-principles calculations of the electronic structure of Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). We find that the bands are almost flat, localized on the Cu atoms, with weak quantum geometry and trivial topology. Hence, in a "flat-band superconductivity" scenario, already alluded to previously, our calculation of the quantum geometric properties of the active bands shows that such a phase would have extremely small superfluid stiffness even at zero temperature. Instead, their flatness and the strong Coulomb repulsion are compatible with the (anti)ferromagnetic ground states found in our calculations. The phonon spectra contain imaginary phonons for both the high- and low-temperature paramagnetic phases (with the reported unrelaxed structure). The experimental findings also do not, as of yet, strongly suggest the presence of superconductivity. Hence either the reported structure is incorrect; or the ab-initio calculations and the structure are correct, but then they furthermore give rise to a non-superconducting ground state; or the ab-initio calculations are incorrect due to large correlations or other factors. In fact, our preliminary experiments and structural solutions point to LK-99 being a multi-phase material, where the part that structurally agrees with a doped version of Pb apatite is transparent and thus probably not superconducting. Either way, this points to the difficulty of predicting and/or explaining superconductors by ab initio methods, even when they exist. It also suggests that extra care be taken in the literature, both experimental and theoretical. Experimentally, samples need to be much more carefully analyzed with a wide range of diffraction and spectroscopic methods. The individual phases should be isolated and their properties studied separately. Theoretically, one must at the very minimum check the correct symmetry of the states, their localization and topology, and try to obtain superconductivity from the first-principles Hamiltonian, rather than introducing it by hand in a BdG formalism. These represent hard challenges that only serious investigations can overcome.
## VIII Acknowledgements

H.H. and Y.J. were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 101020833). D.C. acknowledges the hospitality of the Donostia International Physics Center, at which this work was carried out. J.H-A. is supported by a Hertz Fellowship. D.C. and B.A.B. were supported by the Simons Investigator Grant No. 404513, the Gordon and Betty Moore Foundation through Grant No. GBMF8685 towards the Princeton theory program, the Gordon and Betty Moore Foundation's EPiQS Initiative (Grant No. GBMF11070), Office of Naval Research (ONR Grant No. N00014-20-1-2303), BSF Israel US foundation No. 2018226, and NSF-MRSEC (Grant No. DMR-2011750). J.Y. is supported by the Gordon and Betty Moore Foundation through Grant No. GBMF8685 towards the Princeton theory program. B.A.B. and C.F. are also part of the SuperC collaboration. Y.J. and S.B-C. acknowledge financial support from the MINECO of Spain through the project PID2021-122609NB-C21 and by MCIN and by the European Union Next Generation EU/PRTR-C17.I1, as well as by the IKUR Strategy under the collaboration agreement between the Ikerbasque Foundation and DIPC on behalf of the Department of Education of the Basque Government. B.A.B. also acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 101020833). J.H. and D.C. are supported by DOE Grant No. DE-SC0016239. J.H. is also supported by a Hertz Fellowship. C.H.H., P.S.B., and E.L.G. are supported by DMR-1956403 and the Research Corporation for Science Advancement (Cottrell Scholar Program) for non-tenured faculty. We are grateful to T. T. Debela, A. Walsh, D. Scanlon, A. Rosen, and C. Musgrave for helpful discussions related to defect formation enthalpy calculations. The authors acknowledge the use of Princeton's Imaging and Analysis Center, which is partially supported by the Princeton Center for Complex Materials, a National Science Foundation (NSF)-MRSEC program (DMR-2011750). The authors would also like to acknowledge G. Cheng for helping with carbon coating. L.M.S. is supported by the Gordon and Betty Moore Foundation's EPiQS initiative through Grant No. GBMF9064, as well as the David and Lucille Packard Foundation. S.B.L. is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2039656. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
###### Contents

* I Introduction
* II Chemical Structure
* III Ab initio Results
  * III.1 Electronic Structure
  * III.2 Phonon Spectrum
* IV Tight-Binding Models
  * IV.1 Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O
  * IV.2 Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)
* V Interacting Hamiltonian
  * V.1 Comments on Flat Bands and Interactions
* VI Further Verification and Tests
  * VI.1 Immediate Experiments
  * VI.2 Chemical Structure Verification
  * VI.3 Theoretical Analysis
* VII Conclusions
* VIII Acknowledgements
* A Additional DFT results
  * A.1 Crystal structure
  * A.2 Orbital projections
  * A.3 Bands of unrelaxed structures
  * A.4 Phonon Spectra
  * A.5 DFT computational details
* B Single-Particle Hamiltonians
  * B.1 Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O
  * B.2 Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)
* C Experimental Details
  * C.1 Synthesis
  * C.2 Refinement
  * C.3 Powder Diffraction Comparisons

## Appendix A Additional DFT results

### Crystal structure

In Table. 4, we list the experimental and DFT relaxed lattice constants in the undoped and Cu-doped phases of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). In Table. 5, we give the relaxed atomic positions.

### Orbital projections

In Fig. 13 and Fig. 14, we show the orbital projections of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), respectively.

### Bands of unrelaxed structures

As mentioned in the main text, the original experimental structures have fractional occupancy for O or (OH)\({}_{2}\). Hence the unrelaxed structures used here have fixed positions for O and (OH)\({}_{2}\), removing the fractional occupancy. This means that these unrelaxed structures are not experimental and need further relaxation in order to obtain stable structures for DFT. The Cu-doped phases also do not have experimental structures and need further relaxation. For completeness, we also show in Fig. 15
15 the bands of the unrelaxed structures for \(\mathrm{Pb_{10}(PO_{4})_{6}O}\) and \(\mathrm{Pb_{10}(PO_{4})_{6}(OH)_{2}}\), in the undoped, Cu\({}_{1}\)-doped, and Cu\({}_{2}\)-doped phases.

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \hline Compound & Phase & \(a\) & \(c\) & Volume \\
\hline
\multirow{4}{*}{Pb-O} & Experiment[45] & 9.865 & 7.431 & 626.251 \\
\cline{2-5} & undoped & 10.018 & 7.485 & 650.585 \\
\cline{2-5} & Cu\({}_{1}\)-doped & 9.795 & 7.339 & 609.820 \\
\cline{2-5} & Cu\({}_{2}\)-doped & 9.868 & 7.410 & 624.970 \\
\hline
\multirow{4}{*}{Pb-(OH)\({}_{2}\)} & Experiment[46] & 9.866 & 7.426 & 625.991 \\
\cline{2-5} & undoped & 9.866 & 7.426 & 625.991 \\
\cline{2-5} & Cu\({}_{1}\)-doped & 9.731 & 7.296 & 598.352 \\
\cline{2-5} & Cu\({}_{2}\)-doped & 9.718 & 7.301 & 597.151 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The experimental and relaxed lattice constants (\(a\), \(c\) in Å, volume in Å\({}^{3}\)) of the undoped and Cu-doped phases of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) (written as Pb-O and Pb-(OH)\({}_{2}\) for short). The experimental structures are given by Refs. [45; 46].

\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline Compound & Phase & Atom & Position \\
\hline
\multirow{5}{*}{Pb-O} & undoped & tri-O & \((0,0,0.726)\) \\
\cline{2-4} & \multirow{2}{*}{Cu\({}_{1}\)-doped} & tri-O & \((0,0,0.821)\) \\
& & Cu\({}_{1}\) & \((\frac{1}{3},\frac{2}{3},0.498)\) \\
\cline{2-4} & \multirow{2}{*}{Cu\({}_{2}\)-doped} & tri-O & \((0,0,0.746)\) \\
& & Cu\({}_{2}\) & \((\frac{2}{3},\frac{1}{3},0.502)\) \\
\hline
\multirow{8}{*}{Pb-(OH)\({}_{2}\)} & \multirow{2}{*}{undoped} & tri-O & \((0,0,0.040),(0,0,0.540)\) \\
& & tri-H & \((0,0,0.900),(0,0,0.400)\) \\
\cline{2-4} & \multirow{3}{*}{Cu\({}_{1}\)-doped} & tri-O & \((0,0,0.154),(0,0,0.762)\) \\
& & tri-H & \((0,0,0.019),(0,0,0.628)\) \\
& & Cu\({}_{1}\) & \((\frac{1}{3},\frac{2}{3},0.473)\) \\
\cline{2-4} & \multirow{3}{*}{Cu\({}_{2}\)-doped} & tri-O & \((0,0,0.154),(0,0,0.630)\) \\
& & tri-H & \((0,0,0.021),(0,0,0.497)\) \\
& & Cu\({}_{2}\) & \((\frac{2}{3},\frac{1}{3},0.498)\) \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The relaxed atomic positions (in reduced coordinates) in the undoped and Cu-doped phases of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) (written as Pb-O and Pb-(OH)\({}_{2}\) for short).

Figure 13: Orbital projections of Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O in the Cu\({}_{1}\)- and Cu\({}_{2}\)-doping, PM and FM phases, where the first row is the projection onto the \((d_{xz},d_{yz})\) orbitals of Cu, and the second row onto the \((p_{x},p_{y})\) orbitals of trigonal-O.

Figure 14: Orbital projections of Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) in the Cu\({}_{1}\)- and Cu\({}_{2}\)-doping, PM and FM phases, where the first row is the projection onto the \((d_{xz},d_{yz})\) orbitals of Cu, and the second row onto the \((p_{x},p_{y})\) orbitals of trigonal-O.

Figure 15: The DFT bands of the unrelaxed structures. (a)-(c) are the undoped, Cu\({}_{1}\)-doped, and Cu\({}_{2}\)-doped bands for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, and (d)-(f) are for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\).

### Phonon Spectra

Here, we present the phonon spectra of relaxed Pb-O, the doped Pb-O, and the Pb-(OH)\({}_{2}\) compounds. The temperature effect is included via the smearing method at the harmonic-approximation level. Unless otherwise indicated, these calculations are performed in \(1\times 1\times 1\) cells.
As shown below, all these phonon spectra present imaginary modes in the \(1\times 1\times 1\)-cell results. Similar to the phonon spectrum of undoped Pb-(OH)\({}_{2}\) in Sec. III.2, the phonons present a hardening trend as the temperature goes higher. For the imaginary modes in the \(k_{3}=\pi\) plane, we expect that they can be eased in larger supercell calculations, as more distant force constants are then included. In the doped structures, the O1 and Cu sites also contribute to the imaginary branches at low temperatures, which are absent in the high-temperature results. This is much more evident in the FM phonon spectrum. Since O and Cu are much lighter than Pb, the current imaginary phonons may not be physical, and further relaxation and larger supercell calculations may eliminate them, which we leave for future study. We also emphasize that the current instabilities do not settle the stability of these compounds, since larger supercells require considerably longer computation times.

### DFT computational details

We use the Vienna _ab-initio_ Simulation Package (VASP) [78, 79, 80, 81, 82] to perform the _ab-initio_ computations. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation potential [83] is adopted. A cutoff energy of 500 eV is adopted, and no extra Hubbard U correction is applied in the calculation of band structures. The maximally localized Wannier functions (MLWFs) are constructed using Wannier90 [84, 85, 86, 87] to obtain the onsite energies and hopping parameters of the orbitals. The phonon spectrum is calculated through density functional perturbation theory (DFPT) assisted by the Phonopy package [88]. Defect formation energy calculations were performed using a 2\(\times\)2\(\times\)2 supercell, computed with the same basis, a 500 eV cutoff, and a 4 eV Hubbard U correction for Cu. Bulk Cu, Cu\({}_{2}\)S, and Cu\({}_{3}\)P were used for the competing chemical phases.

## Appendix B Single-Particle Hamiltonians

In this Appendix, we give a detailed, symmetry-based construction of the single-particle bands for two possible structures of Cu-doped lead apatite. Both models show partially occupied, topologically trivial bands dominated by Cu \(d\)-orbitals at the Fermi level. We then fit our parameters to ab initio Wannier calculations.

Figure 16: Phonon spectrum for relaxed Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O without Cu doping. The negative branches get hardened in the \(k_{3}=0\) plane as the temperature goes higher. Instead, the imaginary phonons at \(k_{3}=\pi\) remain soft, which may be caused by the short cutoff in the \(c\)-direction.

Figure 17: Phonon spectrum for relaxed Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O in the Cu\({}_{1}\)- and Cu\({}_{2}\)-doping states. Different from the undoped structure, the doped ones present imaginary phonons contributed by O and Cu at low-T, while their contributions are eliminated at high-T.

Figure 18: Phonon spectrum for relaxed Pb-(OH)\({}_{2}\) in the Cu\({}_{1}\)- and Cu\({}_{2}\)-doping states. As in the Cu-doped Pb-O compounds, O also contributes to the imaginary phonon modes at low-T, which are absent at high-T.

### Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O

We first provide the details of the single-particle Hamiltonians in the Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O structure. The location of the Cu dopant, which is proposed to replace Pb (although experimental characterization of the system is crucial to test this hypothesis), can occur either at the 1b (Cu\({}_{1}\) doping) or 1c position (Cu\({}_{2}\) doping).
Note that Cu doping and structural relaxation reduce the space group of the original Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O compound to \(P31^{\prime}\), generated by translations, \(C_{3}\), and spin-less time-reversal \(\mathcal{T}\). In both cases, DFT orbital projections indicate that the dominant orbitals in the bands near the Fermi surface are the \(d\)-orbitals of the Cu atom at the 1b/1c position and the \(p\)-orbitals of the O atom at the 1a position (shown in Fig. 19). We use lattice vectors \(\mathbf{a}_{1}=a(1,0,0)\), \(\mathbf{a}_{2}=C_{3}\mathbf{a}_{1}\), \(\mathbf{a}_{3}=c(0,0,1)\), where \(C_{3}\) is a three-fold rotation. The Wannier centers/locations of the Cu atoms are \(\mathbf{r}_{1b}=(\mathbf{a}_{1}+2\mathbf{a}_{2})/3+z\mathbf{a}_{3}\) and \(\mathbf{r}_{1c}=(2\mathbf{a}_{1}+\mathbf{a}_{2})/3+z^{\prime}\mathbf{a}_{3}\). The realistic structures show a small vertical displacement between the O and Cu atoms (which is not fixed by symmetry since \(C_{3}\) is in-plane), given by \(z,z^{\prime}\sim 0.25\). The Cu \(d\) orbitals and O \(p\) orbitals both transform in the 2D irrep \({}^{1}E^{2}E\) with angular momentum \(\pm 1\,\mathrm{mod}\,3\). Topological quantum chemistry [47] provides the following momentum-space irreps when these local orbitals are induced to the space group:

\[\begin{split}{}^{1}E^{2}E_{1a}\uparrow P31^{\prime}&=\Gamma_{2}\Gamma_{3}+A_{2}A_{3}+K_{2}+K_{3}+H_{2}+H_{3},\\ {}^{1}E^{2}E_{1b}\uparrow P31^{\prime}&=\Gamma_{2}\Gamma_{3}+A_{2}A_{3}+K_{1}+K_{3}+H_{1}+H_{3},\\ {}^{1}E^{2}E_{1c}\uparrow P31^{\prime}&=\Gamma_{2}\Gamma_{3}+A_{2}A_{3}+K_{1}+K_{2}+H_{1}+H_{2},\end{split} \tag{101}\]

and the irrep notation is contained in the character tables below:

\[\begin{array}{c|cc}31^{\prime}&1&C_{3}\\ \hline A&1&1\\ {}^{1}E^{2}E&2&-1\end{array}\qquad\begin{array}{c|cc}\Gamma&1&C_{3}\\ \hline\Gamma_{1}&1&1\\ \Gamma_{2}\Gamma_{3}&2&-1\end{array}\qquad\begin{array}{c|cc}A&1&C_{3}\\ \hline A_{1}&1&1\\ A_{2}A_{3}&2&-1\end{array}\qquad\begin{array}{c|cc}K&1&C_{3}\\ \hline K_{1}&1&1\\ K_{2}&1&e^{\frac{2\pi i}{3}}\\ K_{3}&1&e^{-\frac{2\pi i}{3}}\end{array}\qquad\begin{array}{c|cc}H&1&C_{3}\\ \hline H_{1}&1&1\\ H_{2}&1&e^{\frac{2\pi i}{3}}\\ H_{3}&1&e^{-\frac{2\pi i}{3}}\end{array} \tag{102}\]

Note that the \(K^{\prime}\) and \(H^{\prime}\) points can be obtained by time-reversal, and the only irreps at \(M\) and \(L\) are the trivial irrep of \(\mathcal{T}\). More details can be found on the Bilbao Crystallographic Server [https://www.cryst.ehu.es/](https://www.cryst.ehu.es/). We see that degeneracies occur at the \(\Gamma\) point and \(A\), and the rest of the bands are split. We now build a short-ranged tight-binding model from these orbitals. We will primarily discuss the paramagnetic (spin-unpolarized) DFT results, which provide a single-particle band structure. Including Coulomb repulsion favors a ferromagnetic ground state, which is expected due to the flat bands obtained in the tight-binding models. For both dopant positions, we write the Hamiltonian as

\[h_{4}(\mathbf{k})=\begin{pmatrix}h_{C}(\mathbf{k})&h_{CO}(\mathbf{k})\\ h_{CO}^{\dagger}(\mathbf{k})&h_{O}(\mathbf{k})\end{pmatrix} \tag{103}\]

Figure 19: Real-space lattice and hoppings. (a) Hopping structure of the 4-band model in 2D. The only relevant 3D term is an interlayer \(t^{\prime}\) hopping between Cu orbitals. (b) Hopping structure of the 2-band model in 2D.
describing the two Wannier orbitals dominated by Cu and O, respectively, and their coupling. The symmetries of the model have the representations

\[D[C_{3}]h({\bf k})D^{\dagger}[C_{3}]=h(C_{3}{\bf k}),\qquad h^{*}({\bf k})=h(-{\bf k}),\qquad h({\bf k}+{\bf G})=V[{\bf G}]h({\bf k})V^{\dagger}[{\bf G}] \tag{104}\]

where \(D[C_{3}]=R_{2\pi/3}\oplus R_{2\pi/3}\), \(V[{\bf G}]=e^{-i{\bf G}\cdot{\bf r}_{1b/1c}}\sigma_{0}\oplus\sigma_{0}\), and \(R_{2\pi/3}=e^{-i\frac{2\pi}{3}\sigma_{2}}\) is the 2D rotation matrix. The O block is dominated by a relative potential \(\mu_{O}\), an in-plane \({\bf a}_{1}\) hopping matrix \(t_{O}\), and an out-of-plane \({\bf a}_{3}\) hopping matrix \(t^{\prime}_{O}\). For higher accuracy, we also include \({\bf a}_{1}\pm{\bf a}_{3}\) hoppings denoted \(t^{\pm}_{O}\). The O-O Hamiltonian is

\[h_{O}({\bf k})=\mu_{O}\sigma_{0}+\left(t^{\prime}_{O}e^{-i{\bf k}\cdot{\bf a}_{3}}+\sum_{n=0}^{2}R^{n}_{2\pi/3}(t_{O}e^{-i{\bf k}\cdot C^{n}_{3}{\bf a}_{1}}+t^{+}_{O}e^{-i{\bf k}\cdot C^{n}_{3}({\bf a}_{1}+{\bf a}_{3})}+t^{-}_{O}e^{-i{\bf k}\cdot C^{n}_{3}({\bf a}_{1}-{\bf a}_{3})})R^{n}_{-2\pi/3}+h.c.\right). \tag{105}\]

The Cu \(d_{xz},d_{yz}\) orbitals have the same \(C_{3}\) representations, and their block has the analogous form (although we shall see that the hopping strengths are much weaker):

\[h_{C}({\bf k})=\mu_{Cu}\sigma_{0}+\left(t^{\prime}_{Cu}e^{-i{\bf k}\cdot{\bf a}_{3}}+\sum_{n=0}^{2}R^{n}_{2\pi/3}(t_{Cu}e^{-i{\bf k}\cdot C^{n}_{3}{\bf a}_{1}}+t^{+}_{Cu}e^{-i{\bf k}\cdot C^{n}_{3}({\bf a}_{1}+{\bf a}_{3})}+t^{-}_{Cu}e^{-i{\bf k}\cdot C^{n}_{3}({\bf a}_{1}-{\bf a}_{3})})R^{n}_{-2\pi/3}+h.c.\right). \tag{106}\]

Lastly, the hybridization term contains the nearest-neighbor Cu-O hopping \(T\), as well as the out-of-plane couplings \(T_{\pm}\):

\[h_{CO}({\bf k})=\sum_{n=0}^{2}R^{n}_{2\pi/3}(Te^{-i{\bf k}\cdot C^{n}_{3}\boldsymbol{\delta}}+T_{+}e^{-i{\bf k}\cdot C^{n}_{3}(\boldsymbol{\delta}+{\bf a}_{3})}+T_{-}e^{-i{\bf k}\cdot C^{n}_{3}(\boldsymbol{\delta}-{\bf a}_{3})})R^{n}_{-2\pi/3}. \tag{107}\]

Here \(\boldsymbol{\delta}=-{\bf r}_{1b}\) for the Cu\({}_{1}\)-doping model and \(\boldsymbol{\delta}=-{\bf r}_{1c}\) for the Cu\({}_{2}\)-doping model. Incorporating the dominant terms from DFT, we use the following hopping matrices for Cu\({}_{1}\) doping:

\[\begin{split}\mu_{O}&=3.803,\quad t_{O}=-0.018(\sigma_{1}-i\sigma_{2})/2,\quad t^{\prime}_{O}=-0.074\sigma_{0},\quad\mu_{C}=4.094,\quad t_{C}=-0.0035(\sigma_{1}-i\sigma_{2})/2\\ T&=\begin{pmatrix}0.056&0.019\\ -0.056&-0.045\end{pmatrix},\qquad\text{all other terms}=0\end{split} \tag{108}\]

and for Cu\({}_{2}\) doping:

\[\begin{split}\mu_{O}&=3.624,\quad t_{O}=-0.025(\sigma_{0}+\sigma_{3})/2,\quad t^{\prime}_{O}=-0.021\sigma_{0},\quad t^{+}_{O}=.007(\sigma_{1}-i\sigma_{2})/2,\quad t^{-}_{O}=.003(\sigma_{1}-i\sigma_{2})/2\\ \mu_{C}&=3.736,\quad t_{C}=R_{-2\pi/3}\begin{pmatrix}0&0\\ .006&-.002\end{pmatrix}R_{2\pi/3},\quad t^{\prime}_{C}=-.004\sigma_{0},\quad t^{+}_{C}=0,\quad t^{-}_{C}=-.003(\sigma_{0}+\sigma_{3})/2\\ T&=\left(\begin{array}{cc}-0.005&0.021\\ -0.012&0.004\end{array}\right),\quad T_{+}=-.004(\sigma_{1}+i\sigma_{2})/2,\quad T_{-}=-.006(\sigma_{1}+i\sigma_{2})/2.\end{split} \tag{109}\]

The single-particle second-quantized Hamiltonian is

\[H_{4}=\sum_{{\bf k},\alpha\beta,\sigma=\uparrow,\downarrow}c^{\dagger}_{{\bf k}\alpha,\sigma}[h_{4}({\bf k})]_{\alpha\beta}c_{{\bf k}\beta,\sigma} \tag{110}\]

using \(SU(2)\) spin symmetry due to the small spin-orbit coupling.
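To make the construction concrete, the following is a minimal numpy sketch (ours, not code from the original work) that assembles \(h_{4}(\mathbf{k})\) from Eqs. (103) and (105)-(107) with the nonzero Cu\({}_{1}\)-doping parameters of Eq. (108), setting \(a=c=1\); all function and variable names are our own.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

R = rot2d(2 * np.pi / 3)                         # orbital rotation R_{2pi/3}
C3 = np.eye(3)
C3[:2, :2] = rot2d(2 * np.pi / 3).real           # in-plane lattice rotation

a1 = np.array([1.0, 0.0, 0.0])                   # a = 1
a2 = C3 @ a1
a3 = np.array([0.0, 0.0, 1.0])                   # c = 1
r1b = (a1 + 2 * a2) / 3 + 0.25 * a3              # Cu_1 Wannier center, z ~ 0.25
delta = -r1b                                     # Cu-O bond vector for Cu_1 doping

# Nonzero Cu_1-doping parameters of Eq. (108), in eV
mu_O, mu_C = 3.803, 4.094
t_O = -0.018 * (s1 - 1j * s2) / 2
tp_O = -0.074 * s0
t_C = -0.0035 * (s1 - 1j * s2) / 2
T = np.array([[0.056, 0.019], [-0.056, -0.045]], dtype=complex)

def c3_sum(k, t, d):
    """sum_n R^n t e^{-i k . C3^n d} R^{-n}, the C3-symmetrized hopping."""
    out = np.zeros((2, 2), dtype=complex)
    for n in range(3):
        Rn = np.linalg.matrix_power(R, n)
        out += np.exp(-1j * k @ (np.linalg.matrix_power(C3, n) @ d)) * (Rn @ t @ Rn.conj().T)
    return out

def h4(k):
    """4-band Bloch Hamiltonian of Eq. (103) for Cu_1 doping."""
    m_O = tp_O * np.exp(-1j * k @ a3) + c3_sum(k, t_O, a1)
    h_O = mu_O * s0 + m_O + m_O.conj().T
    m_C = c3_sum(k, t_C, a1)
    h_C = mu_C * s0 + m_C + m_C.conj().T
    h_CO = c3_sum(k, T, delta)
    return np.block([[h_C, h_CO], [h_CO.conj().T, h_O]])

# Example: eigenvalues along Gamma -> A; the two Cu-dominated bands come out
# nearly flat and sit above the more dispersive O bands, as stated in the text.
for kz in np.linspace(0.0, np.pi, 4):
    print(np.round(np.linalg.eigvalsh(h4(np.array([0.0, 0.0, kz]))), 4))
```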
The Fermi level of the single-particle model occurs at \(3/4\) filling of the upper two bands in each spin sector, leading to a total of one hole per unit cell in the spinful model. In addition to an accurate modeling of the spectrum, it is crucial to study the localization and topological properties of the bands, whose effect on the many-body physics is magnified by a large density of states. To do so, we compute the two-band non-abelian Wilson loop in Fig. 20 and the two-band Fubini-Study metric at \(k_{z}=0,\pi\) in Fig. 21. Our results are consistent with tight localization, as expected from the atomic band representation formed by the Cu \(d\)-orbitals. We emphasize that, although the two Cu bands together are trivial and form an EBR, the double degeneracy enforced by \(C_{3}\) and \(\mathcal{T}\) (the \(\Gamma_{2}\Gamma_{3}\) and \(A_{2}A_{3}\) irreps) creates non-trivial quantum geometry within a _single_ band. In particular, the Fubini-Study metric will diverge at the touching, as required by the Berry curvature monopole (pointed out in Ref. [9]) or double Weyl point. Fig. 11 shows that the Cu and O DFT bands nearly touch at the \(H\) point, indicating close proximity to a band inversion. We check that tuning the \(t^{\prime}\) vertical hopping of the O orbitals realizes this band crossing, resulting in a topological phase transition into a semi-metal (see Fig. 22).

Figure 21: The 2D Fubini-Study metric \(g(\mathbf{k})=\frac{1}{2}\operatorname{Tr}(\partial_{x}P\partial_{x}P+\partial_{y}P\partial_{y}P)\) (in units of \(a^{2}\)) of the upper two bands for the DFT calculation and the tight-binding model of Eq. (103).

Figure 22: Inverted semimetal phase obtained for the Cu\({}_{1}\) structure by increasing the O out-of-plane hopping \(t^{\prime}_{O}\) in \(h_{4}(\mathbf{k})\). We compare the dispersions of the \(C_{3}\) irreps along the \(KH\) line in the normal (a) and inverted (b) phases. The inverted phase is a topological semimetal due to the protected crossing of the different \(C_{3}\) eigenvalues. Note that \(C_{3}\) is preserved at all points along \(KH\). (c) shows the band structure in the inverted phase.

DFT shows that ferromagnetism (and also a degenerate anti-ferromagnetism) is favored over the paramagnetic (spin-degenerate) single-particle bands given above. We find that the spin-polarized bands at the Fermi surface have essentially identical parameters to Eq. (108), whereas small adjustments in the parameters are required for Cu\({}_{2}\) doping:

\[\begin{split}\mu_{O}&=3.695,\quad t_{O}=-0.025(\sigma_{0}+\sigma_{3})/2,\quad t_{O}^{\prime}=-0.022\sigma_{0},\quad t_{O}^{+}=.006(\sigma_{1}-i\sigma_{2})/2,\quad t_{O}^{-}=.003(\sigma_{1}-i\sigma_{2})/2\\ \mu_{C}&=3.751,\quad t_{C}=R_{-2\pi/3}\begin{pmatrix}0&0\\ .006&-.002\end{pmatrix}R_{2\pi/3},\quad t_{C}^{\prime}=-.004\sigma_{0},\quad t_{C}^{+}=0,\quad t_{C}^{-}=-.003(\sigma_{0}+\sigma_{3})/2\\ T&=\left(\begin{array}{cc}-0.005&0.021\\ -0.013&0.005\end{array}\right),\quad T_{+}=0,\quad T_{-}=-.006(\sigma_{1}+i\sigma_{2})/2.\end{split} \tag{111}\]

Their main effect is to further shrink the gap between the O and Cu bands, as can be seen in Fig. 23.

### Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)

We provide the details of the single-particle Hamiltonians in the Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) structure for both Cu\({}_{1}\) and Cu\({}_{2}\) doping and in both the PM and FM phases. For this structure, we have space group P3. In all cases, we only have two bands near the Fermi level (Fig. 9);
they are trivial atomic bands given by the \(d_{xz}\) and \(d_{yz}\) orbitals of Cu. (The Cu atoms are at 1b for Cu\({}_{1}\) and at 1c for Cu\({}_{2}\).) Wannierization of the DFT bands directly generates a DFT 2-band tight-binding model, whose band structure perfectly matches the DFT one. The DFT 2-band tight-binding model shows that the hoppings longer than NN are small (_i.e._, less than 1 meV), and thus we will build a 2-band NN-hopping model for the two bands in each case. Briefly, the form of the model is the same for all cases due to the same symmetries. The hopping parameter values in the model are the same for the PM and FM phases for one specific Cu doping, though the onsite energies are different. The hopping values do change between the two Cu dopings, as indicated by the different bands in Fig. 9. Details are discussed below. The 2-band NN-hopping model that we build, with \(d_{xz}\) and \(d_{yz}\) at the same position in each unit cell, reads

\[H_{2band}=\sum_{\mathbf{k}}c_{\mathbf{k},Cu}^{\dagger}\left[E_{Cu}+2t_{z}\cos(k_{z}c)+\left(\sum_{n=0,1,2}R_{2\pi/3}^{n}t_{\mathbf{a}_{1}}R_{-2\pi/3}^{n}e^{-\mathrm{i}(C_{3}^{n}\mathbf{a}_{1})\cdot\mathbf{k}}+h.c.\right)\right]c_{\mathbf{k},Cu}\, \tag{112}\]

where

\[t_{\mathbf{a}_{1}}=\begin{pmatrix}t_{xx}&t_{xy}\\ t_{yx}&t_{yy}\end{pmatrix} \tag{113}\]

is the hopping matrix along \(\mathbf{a}_{1}\), and \(c_{\mathbf{k},Cu}^{\dagger}=(c_{\mathbf{k},Cu,d_{xz}}^{\dagger},c_{\mathbf{k},Cu,d_{yz}}^{\dagger})\).

Figure 23: The ferromagnetic bands are very similar to the paramagnetic bands (see Main Text). The essential difference between the two cases is the Fermi level, which fills \(3/4\) of the spin-degenerate Cu bands in the paramagnetic case, but fills \(1/2\) of the spin-polarized Cu bands in the ferromagnetic case.

For Cu\({}_{1}\) doping, the parameter values are

\[E_{Cu}=-0.0326\,\mathrm{eV},\ t_{z}=-0.0062\,\mathrm{eV},\ t_{xx}=-0.0094\,\mathrm{eV},\ t_{xy}=0.0066\,\mathrm{eV},\ t_{yx}=-0.0115\,\mathrm{eV},\ t_{yy}=0. \tag{114}\]

The in-plane hopping is shown in Fig. 19(b). Compared to the most general symmetry-allowed NN hoppings, we directly neglect the off-diagonal part of the \(\mathbf{a}_{3}\) hopping (as it is about \(0.5\,\mathrm{meV}\), equivalent to a maximum band splitting of about \(2\,\mathrm{meV}\) along \(\Gamma\)-A). We also set \(t_{yy}=0\) since it is smaller than \(2\,\mathrm{meV}\). The band structure from the model Eq. (112) is shown in Fig. 12(a), which has a good agreement with the DFT band structure. The flatness of the bands, which form an indecomposable atomic representation of the Cu orbitals, is merely due to the small hopping amplitudes/localized Wannier functions. To be specific, the square root of the Wannier spread of each orbital is \(2.402\,\mathrm{\AA}=0.2469a=0.3293c\), indicating that the Wannier function is very localized. Owing to \(t_{yx}/t_{xy}=-1.74\neq-1\), the mirror symmetry along \(y\) is broken.
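As an illustration, here is a minimal numpy sketch (ours, not code from the original work) of the NN model in Eq. (112) with the Cu\({}_{1}\)-doping PM parameters of Eq. (114), setting \(a=c=1\); it also gives a quick numerical check that the bands are flat on the tens-of-meV scale.

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = rot2d(2 * np.pi / 3)            # R_{2pi/3}; also the in-plane C3 for a = 1
a1 = np.array([1.0, 0.0])

# Cu_1 doping, PM phase (Eq. 114), in eV
E_Cu, t_z = -0.0326, -0.0062
t_a1 = np.array([[-0.0094, 0.0066],
                 [-0.0115, 0.0]])

def h2(kx, ky, kz):
    """2-band NN Bloch Hamiltonian of Eq. (112), with a = c = 1."""
    kpar = np.array([kx, ky])
    m = np.zeros((2, 2), dtype=complex)
    for n in range(3):
        Rn = np.linalg.matrix_power(R, n)
        m += np.exp(-1j * kpar @ (Rn @ a1)) * (Rn @ t_a1 @ Rn.T)
    return (E_Cu + 2 * t_z * np.cos(kz)) * np.eye(2) + m + m.conj().T

# Band extremes over a coarse in-plane grid (the grid overcovers the hexagonal
# BZ, which is harmless since the spectrum is periodic): a spread of only tens
# of meV, i.e. nearly flat bands.
ks = np.linspace(-np.pi, np.pi, 25)
bands = np.array([np.linalg.eigvalsh(h2(kx, ky, 0.0)) for kx in ks for ky in ks])
print(bands.min(axis=0), bands.max(axis=0))
```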
Since this is a 2-band model, the geometric properties (_e.g._, the Fubini-Study metric) of the two bands, taken together, vanish. However, we may look at the Wilson loop of one of the two bands, for which we choose the lower band. The Wilson loop is ill-defined on the \(k_{z}=0\) and \(k_{z}=\pi\) planes, since the two bands touch on these two planes at \(\Gamma\) or A (enforced by symmetries), and thus we look at \(k_{z}=\pm\pi/2\) for the DFT 2-band tight-binding model, which is shown in Fig. 24(a). Clearly, the lower band of the DFT 2-band model has nonzero and opposite Chern numbers at \(k_{z}=\pm\pi/2\), showing that the gapless points at \(\Gamma\) and A are double Weyl points with chirality \(\pm 2\). However, in our simplified model (Eq. (112)), we neglect the band splitting along \(\Gamma\)-A, since it is very small. Although this simplification merges the two double Weyl points into an accidental nodal line along \(\Gamma\)-A, it is convenient for the later study of correlated phases based on our model, since it makes the eigenvectors of the Hamiltonian independent of \(k_{z}\).

Figure 24: The Wilson loop spectrum of the lower band of the DFT 2-band model in each specified case on \(k_{z}=\pi/2\). The Wilson loop \(W\) is calculated along \(\mathbf{b}_{2}\), and is plotted as a function of \(k_{1}\) along \(\mathbf{b}_{1}\).

Now we discuss the ferromagnetic case for Cu\({}_{1}\) doping. With ferromagnetism, only the spin-down bands remain near the Fermi level (Fig. 9). The DFT spin-down bands are very close to the DFT paramagnetic bands, as shown in Fig. 12(a,b). The effect of FM is approximately just a shift of the onsite energy for the bands near the Fermi level, as the hopping values for the DFT FM spin-down two bands are extremely similar to those for the DFT PM two bands (differences smaller than \(0.1\,\mathrm{meV}\)), except for a considerable shift of the onsite energy. Therefore, we can build a 2-band NN-hopping model that has the same form as Eq. (112), and choose the same hopping parameter values as Eq. (114), while shifting the onsite energy to

\[E_{Cu}=0.008907\,\mathrm{eV}. \tag{115}\]

Again, the band structure from the model has a good agreement with the DFT band structure, as shown in Fig. 12. Again, the DFT 2-band tight-binding model has double Weyl points with chirality \(\pm 2\) at \(\Gamma\) and A (Fig. 24(b)), which we choose to merge into an accidental nodal line along \(\Gamma\)-A in our simplified model.

We now discuss the Cu\({}_{2}\) doping parameters, which are very similar to those for Cu\({}_{1}\) doping. A key difference is that the Cu is now at 1c, but this is simply a choice of origin in the two-band model we build and does not affect the parameterization of the Hamiltonian. In the PM phase, the hopping parameter values read

\[\begin{split}& E_{Cu}=-0.0207\,\mathrm{eV},\ t_{z}=-0.0065\,\mathrm{eV},\ t_{xx}=\frac{1}{4}\left(3t_{2}-\sqrt{3}t_{1}\right),\ t_{xy}=\frac{1}{4}\left(-3t_{1}-\sqrt{3}t_{2}\right)\\ & t_{yx}=\frac{1}{4}\left(t_{1}-\sqrt{3}t_{2}\right),\ t_{yy}=\frac{1}{4}\left(\sqrt{3}t_{1}+t_{2}\right)\end{split} \tag{116}\]

with

\[t_{1}=0.0112\,\mathrm{eV},\ t_{2}=-0.0027\,\mathrm{eV}. \tag{117}\]

The reason we parameterize the in-plane hoppings by only \(t_{1}\) and \(t_{2}\) is that, along the \(\mathbf{a}_{2}\) direction, the hopping matrix approximately reads

\[R_{2\pi/3}t_{\mathbf{a}_{1}}R_{-2\pi/3}=t_{\mathbf{a}_{2}}=\begin{pmatrix}0&0\\ t_{1}&t_{2}\end{pmatrix} \tag{118}\]

according to the Wannierization of the DFT data, where the neglected elements are smaller than 1 meV. In particular, \(t_{xy}=-0.0072\,\mathrm{eV}\) and \(t_{yx}=0.0040\,\mathrm{eV}\) indicate the breaking of the mirror symmetry along \(y\), since \(t_{xy}/t_{yx}=-1.81\neq-1\). The band structure from the model Eq. (112) again has a good agreement with the DFT band structure, as shown in Fig. 12(b). The small hoppings again reflect the small spread of the Wannier functions of the basis: the square root of the Wannier spread of each orbital is \(2.271\,\mathrm{\AA}=0.2334a=0.3113c\).
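A discretized version of the Wilson loops shown in Fig. 24 can be sketched as follows (again our own illustration, reusing `h2` and `R` from the sketch above); note that in the simplified model the \(k_{1}=0\) loop passes through the accidental nodal line along \(\Gamma\)-A and must be avoided.

```python
import numpy as np

# Reciprocal vectors for a1 = (1, 0), a2 = R @ a1, obeying b_i . a_j = 2 pi delta_ij
b1 = 2 * np.pi * np.array([1.0, 1.0 / np.sqrt(3.0)])
b2 = 2 * np.pi * np.array([0.0, 2.0 / np.sqrt(3.0)])

def wilson_phase(k1, kz=np.pi / 2, nk=400):
    """Berry phase of the lower band along the b2 direction at fixed k1, kz."""
    vecs = []
    for j in range(nk):
        k = k1 * b1 + (j / nk) * b2
        _, v = np.linalg.eigh(h2(k[0], k[1], kz))
        vecs.append(v[:, 0])                 # lower band
    vecs.append(vecs[0])  # periodic gauge; both orbitals sit at the same site
    w = 1.0 + 0.0j
    for j in range(nk):
        w *= np.vdot(vecs[j], vecs[j + 1])
    return np.angle(w)

# Wilson loop spectrum as a function of k1 in reduced units (avoid k1 = 0)
for k1 in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(k1, round(wilson_phase(k1), 3))
```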
Again, the DFT 2-band tight-binding model has double Weyl points with chirality \(\pm 2\) at \(\Gamma\) and A (Fig. 24(c)), which we choose to merge into an accidental nodal line along \(\Gamma\)-A in our simplified model. In the ferromagnetic case, the model parameters again only differ from Eq. (116) by an onsite energy shift:

\[E_{Cu}=0.0023\,\mathrm{eV}. \tag{119}\]

Again, the band structure from the model has a good agreement with the DFT band structure, as shown in Fig. 12(d). Again, the DFT 2-band tight-binding model has double Weyl points with chirality \(\pm 2\) at \(\Gamma\) and A (Fig. 24(d)), which we choose to merge into an accidental nodal line along \(\Gamma\)-A in our simplified model.

## Appendix C Experimental Details

In this Appendix, we give detailed information on the synthesis and characterization of lead apatites.

### Synthesis

Well-homogenized powders of Pb\({}_{2}\)(SO\({}_{4}\))O were synthesized by grinding stoichiometric amounts of PbO (Sigma Aldrich, \(>\)99.0 percent) and PbSO\({}_{4}\) (Sigma Aldrich, \(>\)99.0 percent) in an agate mortar and pestle. The mixture was loaded in alumina crucibles in 3 g amounts and then placed in a quartz tube and sealed under dynamic vacuum. Experiments found that sealing with oxygen present (i.e. not backfilling with Ar) resulted in the highest purity of the product. Sample purity was confirmed via powder X-ray diffraction, using a STOE Stadi P powder X-ray diffractometer equipped with a Mo K\(\alpha\) (\(\lambda\) = 0.71073 Å) sealed-tube X-ray source and graphite monochromator, at room temperature in transmission geometry (Fig. 25).

Cu\({}_{3}\)P was grown phase-pure by mixing elemental Cu (Alfa Aesar, -170+270 mesh, 99.9 percent) and elemental red phosphorus (Alfa Aesar, 99.999+ percent) in a 3:1 stoichiometric ratio. The sample was placed in an alumina crucible and subsequently sealed in a quartz tube. For this synthetic step the quartz tube was evacuated three times and backfilled with argon to prevent any oxidation before sealing. The tube was then loaded in a furnace heated first to 450 \({}^{\circ}\)C over 2 hours and held there for 3 hours to prereact the phosphorus. The furnace was then ramped to 1050 \({}^{\circ}\)C over 4 hours, kept at this temperature for 8 hours, and then shut off to cool quickly. This saved nearly 24 hours in synthesis time compared to the original synthesis and results in a phase-pure product of the target compound; its diffraction pattern, collected in transmission geometry on the same instrument, is shown in Fig. 26.

Figure 25: Comparison of the experimental (black) and calculated (red) powder X-ray diffraction patterns of Pb\({}_{2}\)(SO\({}_{4}\))O.

Finally, a 1:1 molar ratio of Cu\({}_{3}\)P and Pb\({}_{2}\)(SO\({}_{4}\))O was ground into a fine powder in a mortar and pestle. The powders were once again loaded into an alumina crucible, placed in a quartz tube, and sealed under dynamic vacuum. The sample was then heated to 950 \({}^{\circ}\)C, kept at this temperature for 6 hours, and then shut off to cool quickly.

Figure 26: Comparison of the experimental (black) and calculated (red) powder X-ray diffraction patterns of Cu\({}_{3}\)P.

Figure 27: Comparison of the previously reported (black) and collected (red) powder X-ray data for the proposed Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O.

Figure 28: A representative sample of the final product phase, ground into a powder. We observe at least 3 phases with seemingly different properties.
Figure 29: EDS of the red and white transparent phase showing pockets of Cu embedded in a phase consistent with Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)(OH)(SH).

### Refinement

Calculated precession images for the data refined herein are shown below. Collection and integration statistics for the full hemisphere data set are shown below. Freely refining occupancies for both an O at [0 0 1/4] and S located at [0 0 0] results in site occupancy factors of 0.54 and 0.56, respectively. The freely refined occupancies summing very close to 1 suggests that this could be a physically real interpretation of the structural solution. After refining the thermal parameters anisotropically, a final refinement converges with a goodness-of-fit (GoF) parameter of 1.22 and an R value of 3.94 compared to all reflections.

Figure 31: Precession images for a thin transparent crystal. The only systematic absences are \(00l\): \(l=2n\), agreeing with the space group \(\mathrm{P}6_{3}/m\).

Figure 30: EDS of the metallic gray phase showing Cu\({}_{2}\)S embedded in a phase consistent with Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)(OH)(SH).

Here, our site occupancies for O (1.08) and S (1.12) sum to above 1. If we decide to restrict the occupancies of the \(2b\) S and \(2a\) O to sum to a site occupancy factor of 1, we end refinement with a final composition Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\). This refinement has nominally the same refinement statistics (GoF(all) = 1.22, R(all) = 3.95).

Table S VIII. Refined atomic coordinates for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\).

\begin{tabular}{l l l l l l}
\hline \hline Site & Wyckoff Position & x & y & z & Occupancy \\
\hline
Pb1 & 6h & 0.24486(4) & 0.24667(4) & 1/4 & 1 \\
Pb2 & 4f & 2/3 & 1/3 & -0.00339(4) & 1 \\
P1 & 6h & 0.3729(2) & 0.4020(2) & 3/4 & 1 \\
O1 & 6h & 0.4851(7) & 0.3366(8) & 3/4 & 1 \\
O2 & 12i & 0.2650(6) & 0.3480(6) & 0.5834(6) & 1 \\
O3 & 6h & 0.5337(8) & 0.4166(7) & 1/4 & 1 \\
S1 & 2b & 0 & 0 & 0 & 0.53 \\
O4 & 2a & 0 & 0 & 1/4 & 0.47 \\
\hline \hline
\end{tabular}

Table S VII. Reflection and integration statistics for lead apatite structures without Cu doping.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Crystal Dimension (mm) & 0.55 \(\times\) 0.092 \(\times\) 0.176 \\
Temperature (K) & 298(2) \\
Radiation Source, \(\lambda\) (Å) & Mo K\(\alpha\), 0.71073 \\
Absorption Correction & analytical \\
Space Group & P6\({}_{3}/m\) \\
\(a\) (Å) & 9.8508(1) \\
\(c\) (Å) & 7.4395(2) \\
Cell Volume (Å\({}^{3}\)) & 625.198(19) \\
Absorption Coefficient (mm\({}^{-1}\)) & 67.952 \\
\(\theta_{min}\), \(\theta_{max}\) & 2.39, 36.34 \\
Number of Reflections & 32013 \\
Unique Reflections (I \(>\) 3\(\sigma\), all) & 763, 1066 \\
Rint (I \(>\) 3\(\sigma\), all) & 6.54, 7.40 \\
\hline \hline
\end{tabular}
\end{table}
Table S VI. Collection and integration statistics for lead apatite structures.

We also investigated two possible ways of Cu doping within our refinement: one in which the Cu atom substitutes on the Pb lattice site, and another in which the Cu atom is inserted along the chain. Attempts to dope both Pb sites with Cu result in a refined composition of Pb\({}_{9.55}\)Cu\({}_{0.45}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\) with similar statistics (GoF = 1.21, R = 3.93). We found the Cu has to be added in as a split site after the Pb position has been refined completely.
The last two steps of occupancy and anisotropic parameter refinement had to be done by restraining the Pb/Cu split-site position with the automatic refining keys switched off. We can also obtain a reasonable refinement of the data if we place Cu into the channel, with composition Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{1.11}\)Cu\({}_{0.49}\) (GoF = 1.22, R = 3.93). Still, as the crystals are transparent, charge balance needs to be maintained, and thus this last solution is not chemically reasonable.

\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline Site & Wyckoff Position & x & y & z & Occupancy \\
\hline
Pb1 & 6h & 0.24487 & 0.24674 & 1/4 & 0.956 \\
Cu1’ & 6h & 0.24487 & 0.24674 & 1/4 & 0.044 \\
Pb2 & 4f & 2/3 & 1/3 & -0.00344 & 0.953 \\
Cu2’ & 4f & 2/3 & 1/3 & -0.00344 & 0.047 \\
P1 & 6h & 0.3728(2) & 0.4019(2) & 3/4 & 1 \\
O1 & 6h & 0.4853(7) & 0.3370(8) & 3/4 & 1 \\
O2 & 12i & 0.2652(6) & 0.3480(6) & 0.5834(6) & 1 \\
O3 & 6h & 0.5341(8) & 0.4168(7) & 1/4 & 1 \\
S1 & 2b & 0 & 0 & 0 & 0.53 \\
O4 & 2a & 0 & 0 & 1/4 & 0.47 \\
\hline
\end{tabular}
\end{table}
Table S XIV. Refined atomic coordinates and occupancies for the Pb/Cu split-site model of Pb\({}_{9.55}\)Cu\({}_{0.45}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\)(SH)\({}_{1.06}\).

### Powder Diffraction Comparisons

In this section, we compare the experimental data from [1; 2] with simulated patterns of the experimentally determined (Fig. 32) and computational (Fig. 33) structures used in this body of work. The first two comparisons show patterns from crystallographic information files not determined in this paper. Upon first glance, it is understandable why the original authors came to the conclusion that their structure resembles Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. The direct overlap of peaks resembles a good match to the experimental pattern; however, as previously stated, this structure is unstable at room temperature. In contrast, the simulated pattern for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) seems chemically reasonable and utilizes neutron diffraction to refine the positions of the H atoms, yet falls short in matching well with the data. The slow migration of the simulated peaks to higher \(2\theta\) values indicates that the simulated crystal structure needs to expand slightly to overlap with the experimental data. Comparing to the simulated patterns of three of our structure solutions reported in this paper, we see negligible changes in the way our data fit the experimental work. This is despite the fact that one of our structures, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)Cu\({}_{0.49}\), is chemically infeasible because it cannot be charge balanced. This leads us to the conclusion that even though the patterns match well, the match of experimental data is still not enough to accurately verify composition or structure. In Fig. 33, we compare the simulated patterns of the computationally relaxed structures with the experimental data. Intriguingly, as we relax the structures of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), both compounds expand, leading to Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) looking like a proper fit. This structure is then likely another chemically reasonable interpretation of the data. As we probed the Cu-doped structures, we noticed that all of these computationally modified structures grow another peak around 15 degrees, which varies in intensity based on the anion identity and Cu doping position, shown zoomed-in in Fig. 34.
The highest expected intensity at this angle is found in the Cu\({}_{2}\)-substituted Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O pattern, which, ignoring this peak, is the best-matched pattern of the four to the experimental data. Once again, we see crystallographic uncertainties with ordered Cu dopants. From these simulations, we are skeptical of any ordering of Cu dopants inside the lead apatite structure. Another interesting possibility, however, is the prospect of the Cu dopants not having a preferred site selectivity. We note that our models for Pb\({}_{9.55}\)Cu\({}_{0.45}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{0.94}\) and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{1.11}\)Cu\({}_{0.49}\) have somewhat significant Cu doping, yet they do not have any predicted peaks below 18 degrees. In conclusion, laboratory X-ray diffraction techniques such as SCXRD and PXRD are not yet enough to come to a conclusion on the exact structure of lead apatite synthesized via the Lanarkite plus Cu\({}_{3}\)P route, but we hope that complementary techniques such as neutron scattering and X-ray photoelectron spectroscopy will be used in the near future.
2310.13543
Shedding Light on Low Surface Brightness Galaxies in Dark Energy Survey with Transformers
Low surface brightness galaxies (LSBGs) which are defined as galaxies that are fainter than the night sky, play a crucial role in understanding galaxy evolution and cosmological models. Upcoming large-scale surveys like Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid are expected to observe billions of astronomical objects. In this context, using semi-automatic methods to identify LSBGs would be a highly challenging and time-consuming process and demand automated or machine learning-based methods to overcome this challenge. We study the use of transformer models in separating LSBGs from artefacts in the data from the Dark Energy Survey (DES) data release 1. Using the transformer models, we then search for new LSBGs from the DES that the previous searches may have missed. Properties of the newly found LSBGs are investigated, along with an analysis of the properties of the total LSBG sample in DES. We identified 4,083 new LSBGs in DES, adding an additional $\sim17\% $ to the LSBGs already known in DES. This also increased the number density of LSBGs in DES to 5.5 deg$^{-2}$. We performed a clustering analysis of the LSBGs in DES using an angular two-point auto-correlation function and found that LSBGs cluster more strongly than their high surface brightness counterparts. We associated 1310 LSBGs with galaxy clusters and identified 317 among them as ultra-diffuse galaxies (UDGs). We found that these cluster LSBGs are getting bluer and larger in size towards the edge of the clusters when compared with those in the centre. Transformer models have the potential to be on par with convolutional neural networks as state-of-the-art algorithms in analysing astronomical data.
H. Thuruthipilly, Junais, A. Pollo, U. Sureshkumar, M. Grespan, P. Sawant, K. Malek, A. Zadrozny
2023-10-20T14:40:45Z
http://arxiv.org/abs/2310.13543v1
# Shedding Light on Low Surface Brightness Galaxies in Dark Energy Survey with Transformers ###### Abstract Context: Low surface brightness galaxies (LSBGs), which are defined as galaxies that are fainter than the night sky, play a crucial role in understanding galaxy evolution and cosmological models. Upcoming large-scale surveys like the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid are expected to observe billions of astronomical objects. In this context, using semi-automatic methods to identify LSBGs would be a highly challenging and time-consuming process, demanding automated or machine learning-based methods to overcome this challenge. Aims: We study the use of transformer models in separating LSBGs from artefacts in the data from the Dark Energy Survey (DES) data release 1. Using the transformer models, we then search for new LSBGs from the DES that the previous searches may have missed. Properties of the newly found LSBGs are investigated, along with an analysis of the properties of the total LSBG sample in DES. Methods: We created eight different transformer models and used an ensemble of these eight models to identify LSBGs. This was followed by a single-component Sersic model fit and a final visual inspection to filter out false positives. Results: Transformer models achieved an accuracy of \(\sim 94\%\) in separating the LSBGs from artefacts. In addition, we identified \(4\,083\) new LSBGs in DES, adding an additional \(\sim 17\%\) to the LSBGs already known in DES. This also increased the number density of LSBGs in DES to \(5.5\) deg\({}^{-2}\). The new LSBG sample consists of mainly blue and compact galaxies. We performed a clustering analysis of the LSBGs in DES using an angular two-point auto-correlation function and found that LSBGs cluster more strongly than their high surface brightness counterparts. This effect is driven by the red LSBGs. We associated 1310 LSBGs with galaxy clusters and identified 317 among them as ultra-diffuse galaxies (UDGs). We found that these cluster LSBGs become bluer and larger in size towards the edge of the clusters when compared with those in the centre. Conclusions: Transformer models have the potential to be on par with convolutional neural networks as state-of-the-art algorithms in analysing astronomical data. The significant number of LSBGs identified from the same dataset using a different algorithm highlights the substantial impact of the methodology on finding LSBGs. The reported number density of LSBGs is only a lower estimate and can be expected to increase with the advent of surveys with better image quality and more advanced methodologies. ## 1 Introduction Low-surface-brightness galaxies (LSBGs) are most often defined as galaxies with a central surface brightness fainter than the night sky, or as galaxies with a \(B\)-band central surface brightness \(\mu_{0}(B)\) below a certain threshold value. In the literature, the threshold values of \(\mu_{0}(B)\) for classifying a galaxy as an LSBG vary among different works, ranging from \(\mu_{0}(B)\geq 23.0\) mag arcsec\({}^{-2}\) (Bothun et al., 1997) to \(\mu_{0}(B)\geq 22.0\) mag arcsec\({}^{-2}\) (Burkholder et al., 2001). It is estimated that LSBGs contribute only a small fraction (\(<10\%\)) of the local luminosity and stellar mass density of the observable universe (Bernstein et al., 1995; Driver, 1999; Hayward et al., 2005; Martin et al., 2019).
However, LSBGs are considered to account for a significant fraction (\(30\%\sim 60\%\)) of the total number density of galaxies (McGaugh, 1996; Bothun et al., 1997; O'Neil & Bothun, 2000; Haberzettl et al., 2007; Martin et al., 2019), and as much as \(15\%\) of the dynamical mass content of the universe (Driver, 1999; Minchin et al., 2004). These numbers imply that LSBGs can contribute significantly to our understanding of the physics of galaxy evolution and cosmological models. However, as their name indicates, LSBGs are very faint systems, and due to the observational challenges in detecting them, LSBGs remain a mostly unexplored realm.

In recent years, despite the observational challenges, advances in digital imaging have improved our ability to detect LSBGs. The first known and the largest LSBG to be identified and verified is Malin 1, serendipitously discovered by Bothun et al. (1987) during a survey of galaxies of low surface brightness in the Virgo cluster. Notably, Malin 1 is the largest spiral galaxy known until today (e.g., Impey et al., 1988; Junais et al., 2020; Galaz et al., 2022). Current searches for LSBGs have shown that they exhibit a wide range of physical sizes (Greene et al., 2022) and can be found in various types of environments, ranging from satellites of local nearby galaxies (Danieli et al., 2017; Cohen et al., 2018) and ultra-faint satellites of the Milky Way (McConnachie, 2012; Simon, 2019), to galaxies found in the field (Leisman et al., 2017; Prole et al., 2021) and members of massive galaxy clusters like Virgo (Mihos et al., 2015, 2017; Junais et al., 2022) and Coma (van Dokkum et al., 2015; Koda et al., 2015).

LSBGs also consist of several sub-classes based on their physical size, surface brightness, and gas content. Ultra-diffuse galaxies (UDGs) represent a subclass of LSBGs characterized by their considerable size, comparable to that of Milky Way-like galaxies, yet exhibiting very faint luminosities akin to dwarf galaxies. Although the term 'UDG' was coined by van Dokkum et al. (2015), such galaxies were identified in several earlier studies in the literature (Sandage and Binggeli, 1984; McGaugh and Bothun, 1994; Dalcanton et al., 1997; Conselice et al., 2003). Similarly, giant LSBGs (GLSBGs) form another sub-class of LSBGs that are extremely gas-rich (\(\rm{M_{HI}}>10^{10}\)\(M_{\odot}\)), faint, and extended (Sprayberry et al., 1995; Saburova et al., 2023). The formation and evolution of extreme classes like UDGs and GLSBGs are still debated (Amorisco and Loeb, 2016; Di Cintio et al., 2017; Saburova et al., 2021; Benavides et al., 2023; Laudato and Salzano, 2023). To comprehend the formation of the various types of LSBGs, studying them extensively across different environments (galaxy clusters vs. the field) over a large area of the sky is crucial.

Recently, Greco et al. (2018) detected 781 LSBGs in the Hyper Suprime-Cam Subaru Strategic Program (HSC SSP) in a blind search covering around 200 deg\({}^{2}\) of the sky from the Wide layer of the HSC SSP. Similarly, in a recent study, Tanoglidis et al. (2021) utilised a support vector machine (SVM) and visual inspection to analyse the first three years of data from the Dark Energy Survey (DES). They identified more than 20 000 LSBGs, which is currently the largest LSBG catalogue available. A common feature observed in both of these untargeted searches for LSBGs was the significant presence of low surface brightness artefacts. As pointed out in Tanoglidis et al.
(2021), these artefacts predominantly consist of diffuse light from nearby bright objects, galactic cirrus, star-forming tails of spiral arms, and tidal streams. These artefacts typically pass the simple selection cuts based on photometric measurements and often make up the majority of the LSBG candidate sample. These contaminants need to be removed, which is often accomplished using semi-automated methods with a low success rate and visual inspection, which is more precise but time-consuming. For example, in HSC SSP, Greco et al. (2018) applied selection cuts on the photometric measurements from SourceExtractor (Bertin and Arnouts, 1996). This led to the selection of 20 838 LSBG candidates. Using a galaxy modelling pipeline based on imfit (Erwin, 2015), the sample size was subsequently reduced to 1 521. However, after the visual inspection, only 781 candidates were considered confident LSBGs, which is around 4% of the preliminary candidate sample and 50% of the sample selected by the pipeline. Similarly, in DES, Tanoglidis et al. (2021) shortlisted 419 895 LSBG candidates using selection cuts on SourceExtractor photometric measurements. After applying a feature-based machine learning (ML) classification (SVM) to the photometric measurements, the candidate sample was further reduced to 44 979 objects. However, a significant number of false positives still remained, and only 23 790 were later classified as confident LSBGs. Therefore, these numbers indicate that the occurrence of LSBGs in these methods is roughly 5% for the initial selection and 50% for the subsequent selection.

The upcoming large-scale surveys, such as the Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) and Euclid (Euclid Collaboration et al., 2022), are expected to observe billions of astronomical objects. In this scenario, relying solely on photometric selection cuts or semi-automated methods such as galaxy model fitting would not be practical for identifying LSBGs confidently. Furthermore, the accuracy of the classification methodology separating LSBGs and artefacts must be improved to achieve meaningful results. Hence, this situation demands more effective and efficient automated methodologies for LSBG searches.

Recently, the advancements in deep learning have opened up a plethora of opportunities and have been widely applied in astronomy. Particularly for analysing astronomical images, convolutional neural networks (CNNs) have emerged as a state-of-the-art technique. For example, CNNs have been used for galaxy classification (Perez-Carrasco et al., 2019), galaxy merger identification (Pearson et al., 2022), supernova classification (Cabrera-Vives et al., 2017), and finding strong gravitational lenses (Schaefer et al., 2018; Davies et al., 2019; Rojas et al., 2022). One of the fascinating features of CNNs is their ability to directly process the image as input and learn the image features, making them one of the most popular and robust architectures in use today. Generally, the learning capacity of a neural network increases with the number of layers in the network. The first layers of the network learn the low-level features, and the last layers learn more complex features (Russakovsky et al., 2015; Simonyan and Zisserman, 2015). One of the main requirements for creating a trained CNN is a sufficiently large training dataset that can generalise the features of the data we are trying to analyse. Recently, Tanoglidis et al.
(2021) utilised a catalogue of over 20 000 LSBGs from DES to classify LSBGs from artefacts using a CNN for the first time, and achieved an accuracy of 92% and a true positive rate of 94%. While CNNs have been the dominant choice for analysing image data in astronomy, the current state-of-the-art models for computer vision are transformers. Transformers were initially introduced in natural language processing (NLP) as an attention-based model (Vaswani et al., 2017). The fundamental concept behind the transformer architecture is the attention mechanism, which has also found a broad range of applications in machine learning (Zhang et al., 2018; Fu et al., 2019; Parmar et al., 2019; Zhao et al., 2020; Tan et al., 2021). In the case of NLP, attention calculates the correlation of different positions of a single sequence to compute a representation of the sequence. Later, the idea was adapted to computer vision and has been used to produce state-of-the-art models for various image processing tasks like image classification (Wortsman et al., 2022) and image segmentation (Chen et al., 2023).

Generally, two categories of transformers are present in the literature. The first type integrates both CNNs and attention to perform the analysis. An example of this type is the Detection Transformer (DETR) proposed for end-to-end object detection by Carion et al. (2020). The key idea behind using CNNs and transformers together is to leverage the strengths of both architectures: CNNs excel at local feature extraction, capturing low-level details and spatial hierarchies, while attention layers excel at modelling global context and long-range dependencies. The second class of transformers comprises models that do not use a CNN and operate entirely based on self-attention mechanisms. An example of this type is the Vision Transformer (ViT) proposed for object classification by Dosovitskiy et al. (2021). ViTs have demonstrated remarkable performance in image classification tasks and have surpassed the accuracy of CNN-based models on various benchmark datasets (Dosovitskiy et al., 2021; Yu et al., 2022; Wortsman et al., 2022).

Even though transformers have been introduced very recently in astronomy, they have already found a wide variety of applications. For example, transformer models have been used to detect and analyse strong gravitational lensing systems (Thuruthipilly et al. 2022a,b; Huang et al. 2022; Jia et al. 2023), to represent light curves for downstream classification or regression (Allam and McEwen 2021), and to classify multi-band light curves of different supernova (SN) types (Pimentel et al. 2023).

In this paper, we explore the possibilities of transformers in classifying LSBGs from artefacts in DES and compare the performance of transformers with the CNNs presented in Tanoglidis et al. (2021). We also use the transformer models to look for new LSBGs that the previous searches may have missed. For comparison purposes, throughout this work, we follow the LSBG definition from Tanoglidis et al. (2021), based on the \(g\)-band mean surface brightness (\(\bar{\mu}_{eff}\)) and the half-light radius (\(r_{1/2}\)). We consider LSBGs as galaxies with \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\) and \(r_{1/2}>2.5^{\prime\prime}\).

The paper is organised as follows. Sect. 2 discusses the data we used to train our models and look for new LSBGs. Sect.
3 provides a brief overview of the methodology used in our study, including the models' architecture, information on how the models were trained, and the details about the visual inspection. The results of our analysis are presented in Sect. 4. A detailed discussion of our results and the properties of the newly identified LSBGs are presented in Sect. 5 and Sect. 6, respectively. Further analysis of the clustering of LSBGs is presented in Sect. 7, and a detailed discussion on the UDGs, which are identified as a sub-sample of LSBGs, is presented in Sect. 8. Sect. 9 concludes our analysis by highlighting the significance of LSBGs, the impact of methodology in finding LSBGs, and the future prospects with LSST.

## 2 Data

### Dark Energy Survey

The Dark Energy Survey (DES; Abbott et al. 2018, 2021) is a six-year observing program (2013-2019) covering \(\sim 5000\) deg\({}^{2}\) of the southern Galactic cap in the optical and near-infrared regime using the Dark Energy Camera (DECam) on the 4-m Blanco Telescope at the Cerro Tololo Inter-American Observatory (CTIO). The DECam focal plane comprises 62 2k \(\times\) 4k charge-coupled devices (CCDs) dedicated to science imaging and 12 2k \(\times\) 2k CCDs for guiding, focus, and alignment. The DECam field of view covers 3 deg\({}^{2}\) with a central pixel scale of 0.263 arcsec pixel\({}^{-1}\) (Flaugher et al. 2015). To address the gaps between CCDs, DES utilises a dithered exposure pattern (Neilsen et al. 2019) and combines the resulting individual exposures to form coadded images, which have dimensions of 0.73 \(\times\) 0.73 degrees (Morganson et al. 2018). The DES has observed the sky in the \(grizY\) photometric bands with approximately 10 overlapping dithered exposures in each filter (90 s in the \(griz\) bands and 45 s in the \(Y\) band).

### DES DR1 and the gold catalogue

In this work, we use the image data from the Dark Energy Survey Data Release 1 (DES DR1; Abbott et al. 2018) and the DES Y3 gold coadd object catalogue (DES Y3_gold_2_1.1) obtained from the first three years of the DES observations (Sevilla-Noarbe et al. 2021). The DES DR1 comprises optical and near-infrared imaging captured over 345 different nights between August 2013 and February 2016. The median \(3\sigma\) surface brightness limits of the \(g\), \(r\), and \(i\)-bands of DES DR1 are 28.26, 27.86, and 27.37 mag arcsec\({}^{-2}\), respectively (Tanoglidis et al. 2021). It is worth mentioning that the DES source detection pipeline has not been optimised for detecting large, low surface brightness objects (Morganson et al. 2018). Thus, the above-mentioned surface brightness values can be considered as the limits for detecting faint objects in each band.

The gold catalogue shares the same single-image processing, image coaddition, and object detection as the DES DR1. The objects in the gold catalogue were detected using SourceExtractor (Bertin and Arnouts 1996) and have undergone selection cuts on minimal image depth and quality, additional calibration, and deblending. The median coadd magnitude limit of the DES Y3 gold object catalogue at a signal-to-noise ratio (S/N) = 10 is \(g=24.3\) mag, \(r=24.0\) mag, and \(i=23.3\) mag (Sevilla-Noarbe et al. 2021). The gold catalogue contains around 319 million astronomical objects, which we used to search for LSBGs in DES. For a detailed review and discussion of the data from the DES, please refer to Abbott et al. (2018) and Sevilla-Noarbe et al. (2021). We reduced the number of objects processed in our study using preselections similar to Greco et al.
(2018) and Tanoglidis et al. (2021). We first removed objects classified as point-like in the gold catalogue, based on the \(i\)-band SourceExtractor SPREAD_MODEL parameter and EXTENDED_CLASS_COADD, as described in Tanoglidis et al. (2021). In addition, we constrained the \(g\)-band half-light radius (FLUX_RADIUS_G) and surface brightness (MUE_MEAN_MODEL_G) within the range of \(2.5^{\prime\prime}<r_{1/2}<20^{\prime\prime}\) and \(24.2<\bar{\mu}_{eff}<28.8\) mag arcsec\({}^{-2}\), respectively. Furthermore, we also limited our sample to objects with colors (using the MAG_AUTO magnitudes) in the range:

\[-0.1<g-i<1.4, \tag{1}\]
\[(g-r)>0.7\times(g-i)-0.4, \tag{2}\]
\[(g-r)<0.7\times(g-i)+0.4. \tag{3}\]

These color cuts are based on Greco et al. (2018) and Tanoglidis et al. (2021). As mentioned by Greco et al. (2018), these color requirements remove spurious detections due to optical artefacts detected in all bands and blends of high-redshift galaxies. Finally, we also restricted the axis ratio (B_IMAGE/A_IMAGE) of each object to be greater than 0.3, to remove artefacts such as highly elliptical diffraction spikes. Our complete selection criteria were based on the selection criteria presented in Appendix B of Tanoglidis et al. (2021). After the preliminary selections using the SourceExtractor parameters from the DES Y3 gold catalogue, our sample contains 419 784 objects. These cuts are sketched in code below.
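For concreteness, the catalogue-level preselection above (including the color cuts of Eqs. (1)-(3)) can be expressed as a short filter. This is our own illustrative sketch, and the exact gold-catalogue column names (e.g. `MAG_AUTO_G`) are assumptions based on the SourceExtractor parameters quoted in the text.

```python
import pandas as pd

def preselect_lsbg_candidates(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply the Sect. 2.2 preselection cuts to a gold-catalogue table."""
    g_i = cat["MAG_AUTO_G"] - cat["MAG_AUTO_I"]
    g_r = cat["MAG_AUTO_G"] - cat["MAG_AUTO_R"]
    mask = (
        (cat["FLUX_RADIUS_G"] > 2.5) & (cat["FLUX_RADIUS_G"] < 20.0)           # r_1/2 in arcsec
        & (cat["MUE_MEAN_MODEL_G"] > 24.2) & (cat["MUE_MEAN_MODEL_G"] < 28.8)  # mag arcsec^-2
        & (g_i > -0.1) & (g_i < 1.4)                                           # Eq. (1)
        & (g_r > 0.7 * g_i - 0.4) & (g_r < 0.7 * g_i + 0.4)                    # Eqs. (2)-(3)
        & (cat["B_IMAGE"] / cat["A_IMAGE"] > 0.3)                              # axis-ratio cut
    )
    return cat[mask]
```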
### Training data

All of the trained, validated, and tested models in this study used the labelled dataset of LSBGs and artefacts identified from DES by Tanoglidis et al. (2021). Below, we briefly summarise the primary steps taken by Tanoglidis et al. (2021) in constructing the LSBG catalogue.

1. The SourceExtractor parameters from the DES Y3 gold coadd object catalogue presented by Sevilla-Noarbe et al. (2021) were used to create the initial selection cuts, as discussed in Sect. 2.2.
2. The candidate sample was further reduced using an SVM to classify artefacts and LSBGs. The SVM was trained with a manually labelled set of \(\sim 8\,000\) objects (640 LSBGs), using the SourceExtractor parameters as features for learning.
3. From the candidate sample generated through the SVM, over 20 000 artefacts were excluded upon visual inspection. Most of the rejected objects that had passed the SVM's feature-based selection were found during visual inspection to be astronomical artefacts (such as galactic cirrus, star-forming extensions of spiral arms, and tidal streams) rather than instrumental artefacts (such as scattered light emitted by nearby bright objects).
4. Objects that passed the visual inspection were subjected to Sersic model fitting and Galactic extinction correction. Following this, new selection cuts were applied to the updated parameters, and the final LSBG catalogue containing 23 790 LSBGs was created.

For training our classification models, we selected LSBGs from the LSBG catalogue as the positive class (label - 1) and the objects rejected in the third step (visual inspection) by Tanoglidis et al. (2021) as the negative class (label - 0). The catalogues for the positive and negative classes are publicly available, and we used these catalogues to create our training dataset1. The selection of the artefacts and LSBGs for training was random, and after selection, we had 18 474 artefacts and 23 103 LSBGs for training. However, when we further inspected these LSBGs and artefacts, we found that 797 objects belonged to both classes. After conducting a thorough visual examination, we identified that these are, in fact, LSBGs that had been mistakenly categorised as artefacts in the publicly accessible artefact catalogue. We therefore removed these 797 objects from our training set to avoid contamination and ambiguity between the classes during training. We generated multi-band cutouts for each object in the flexible image transport system (FITS) format using the cutout service provided in the DES public data archive. Each cutout corresponds to a \(67.32\arcsec\times 67.32\arcsec\) (\(256\times 256\) pixels) region of the sky and is centred at the coordinates of the object (LSBG or artefact). We resized the cutouts from their initial size to \(64\times 64\) pixels to reduce the computational cost. The cutouts of the \(g,r\) and \(z\)-bands were stacked together to create the dataset for training the models. Examples of LSBGs and artefacts used for training the models are shown in Fig. 1. Our training catalogue contains 39 983 objects, out of which 22 306 are LSBGs and 17 677 are artefacts. Before training, we randomly split the full sample into a training set, a validation set and a test set, consisting of 35 000, 2 500, and 2 483 objects, respectively.

Footnote 1: [https://github.com/dtanoglidis/DeepShadows/blob/main/Datasets](https://github.com/dtanoglidis/DeepShadows/blob/main/Datasets)

## 3 Methodology

### Transformers and Attention

As mentioned in Sect. 1, the central idea behind every transformer architecture is attention. In multi-head attention, the input sequence is transformed into three vectors before attention is applied: query (\(Q\)), key (\(K\)), and value (\(V\)). The dot product between the query and key vectors is used to obtain attention scores. The attention scores are then used to weight the value vector, producing a context vector that is a weighted sum of the value vectors. For our work, the vectors (\(Q\), \(V\) and \(K\)) are identical, and this method is termed self-attention. This approach enables the transformer to model long-range dependencies and capture complex patterns in the input sequence. Mathematically, the attention function is defined as \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V, \tag{4}\] where \(Q,K,V\) are the query, key, and value vectors and \(d_{k}\) is the dimension of the vector \(K\). The softmax function, by definition, is the normalised exponential function that takes an input vector of \(K\) real numbers and normalises it into a probability distribution consisting of \(K\) probabilities proportional to the exponentials of the input numbers. The building blocks of our transformer models are layers applying self-attention and are termed transformer encoders. Please refer to Vaswani et al. (2017) for a detailed discussion on transformer encoders.
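To make Eq. (4) concrete, the scaled dot-product self-attention of a single head can be sketched in a few lines of NumPy. This is an illustrative sketch, not the exact layer implementation used in our models:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (Eq. 4).

    x : (n_tokens, d_model) input sequence; Q, K and V are projections
    of the same input, as in the self-attention used here.
    wq, wk, wv : (d_model, d_k) learned projection matrices.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (n_tokens, n_tokens)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted sum of values
```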
### LSBG Detection Transformer (LSBG DETR)

We implemented four transformer models that use a CNN backbone and self-attention layers to classify the labels, which we call LSBG DETR (LSBG detection transformer) models in general. The LSBG DETR architecture is inspired by the transformer models from Thuruthipilly et al. (2022), which explored diverse structures and hyperparameters to optimise classification performance. Each individual model is followed by a number indicating its chronological order of creation. The LSBG DETR models have an 8-layer CNN backbone to extract feature maps from the input image. The feature maps produced by the CNN backbone are then passed on to the transformer encoder layer to create an attention map that helps the transformer component focus on the most relevant features for classification. The transformer encoder layer has sub-components known as heads, which apply self-attention in parallel to the input vector split into smaller parts. The output generated by the transformer encoder is then passed on to a feed-forward neural network (FFN) layer to predict the probability of the input being an LSBG or not. Another point to note is that transformers are permutation invariant; hence, we add positional encoding to address this issue and retain the positional information of the features. For the LSBG DETR, we used a fixed positional encoding defined by the functions \[PE_{(pos,2i)}=\sin\!\left(pos/12800^{2i/d_{model}}\right)\!, \tag{5}\] \[PE_{(pos,2i+1)}=\cos\!\left(pos/12800^{2i/d_{model}}\right)\!, \tag{6}\] where \(pos\) is the position, \(i\) indexes the dimensions of the positional encoding vector, and \(d_{model}\) is the dimension of the input feature vector. We follow the positional encoding defined in Vaswani et al. (2017), and for a detailed discussion on positional encoding and its importance, we refer to Liutkus et al. (2021); Su et al. (2021); Chen et al. (2021). The general structure of the LSBG DETR is shown in Fig. 2. For a detailed discussion on transformer models similar to LSBG DETR, we refer to Carion et al. (2020) and Thuruthipilly et al. (2022).

### LSBG Vision

We have created four transformer models similar to the Vision Transformer (ViT) introduced by Google Brain (Dosovitskiy et al., 2021), which we call LSBG vision transformers (LSBG ViT) in general. Similar to the LSBG DETR models, each individual model is followed by a number indicating its chronological order of creation. One of the main features of the LSBG ViT models is that, unlike LSBG DETR, they do not use any convolutional layers to process the image. In the ViT architecture, the input image is divided into fixed-size patches, which are flattened into a sequence of 1D vectors. Since transformers are permutation invariant, a positional embedding is added to the patch embedding before they are fed into the transformer layers. The positional embedding is typically a fixed-length vector that is added to the patch embedding, and it is learned during training along with the other model parameters. The combined 1D sequence is then passed through a stack of transformer layers. An additional learnable (class) embedding is affixed to the input sequence, which encodes the class of the input image. This class embedding for each input is calculated by applying self-attention to the positionally embedded image patches. The output from the class embedding is passed on to a multi-layer perceptron (MLP) head to predict the output class. A schematic diagram of the vision transformer is shown in Fig. 3. For a detailed discussion on ViT models, please refer to Dosovitskiy et al. (2021).

### Training

All of the LSBG DETR and LSBG ViT models were trained with an initial learning rate of \(\alpha=10^{-4}\). We used the exponential linear unit (ELU) function as the activation function for all the layers in these models (Clevert et al., 2016). We initialised the weights of our models with the Xavier uniform initialiser (Glorot and Bengio, 2010), and all layers were trained from scratch with the ADAM optimiser using the default exponential decay rates (Kingma and Ba, 2015). We used the early stopping callback from Keras2 to monitor the validation loss of the model and stop training once the loss converged.
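A minimal Keras sketch of this training configuration is given below; `build_model` stands in for either architecture, and the loss, patience and epoch budget shown here are illustrative assumptions rather than the exact settings used:

```python
import tensorflow as tf

def train(build_model, train_ds, val_ds):
    """Illustrative training setup: Adam with lr = 1e-4 and default decay
    rates, ELU activations, Xavier (Glorot) uniform initialisation, and
    early stopping on the validation loss."""
    model = build_model(activation="elu",
                        kernel_initializer="glorot_uniform")  # Xavier uniform
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",  # assumed: probability of being an LSBG
        metrics=["accuracy"],
    )
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True)
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=200, callbacks=[early_stop])
```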
The models LSBG DETR 1 and 4 had 8 heads and were trained for 150 and 93 epochs, respectively. Similarly, LSBG DETR 2 and 3 had 12 heads and were trained for 134 and 105 epochs, respectively. The hyperparameters of all the LSBG DETR models were customised based on the results from Thuruthipilly et al. (2022), which extensively investigated the hyperparameter configurations of DETR models. For the LSBG ViT models, we maintained the hyperparameters of the LSBG DETR models, such as the learning rate and batch size, and varied the size of the image patches, the number of attention heads, and the number of transformer encoder layers; the four best models are presented in Table 1. In the spirit of reproducible research, our code for LSBG DETR and LSBG ViT is publicly available3.

Footnote 2: [https://keras.io/api/callbacks](https://keras.io/api/callbacks)

Footnote 3: [https://github.com/hareesht23/](https://github.com/hareesht23/)

### Ensemble Models

We had two classes of transformers (LSBG DETR and LSBG ViT) with four models in each class, and we used an ensemble model of the four models in each class to look for new LSBGs from DES DR1.

Figure 1: Four examples of LSBGs (1a) and artefacts (1b) used in the training data. Each image of the LSBG and artefact corresponds to a \(67.32^{\prime\prime}\times 67.32^{\prime\prime}\) region of the sky. Images were generated by combining the \(g,r\) and \(z\) bands using the APLpy package (Robitaille and Bressert, 2012).

Figure 2: Scheme of the general architecture of the Detection Transformer (LSBG DETR), taken from Thuruthipilly et al. (2022). The features of the input image extracted by the CNN backbone are combined with positional encoding and passed on to the encoder layer to assign attention scores to each feature. The weighted features are then passed to the feed-forward neural network (FFN) to predict the probability.

Ensemble models in deep learning refer to combining multiple models to create a single model that performs better than the individual models. The idea behind ensemble models is to reduce the generalisation error and increase the stability of the system by taking into account multiple sources of information. Various kinds of ensemble learning exist in the literature, and they have been found helpful in a broad range of machine learning problems (Wang et al., 2022). For a detailed review of ensemble methods, please refer to Domingos and Hulten (1999) and Dietterich (2000). One of the easiest and most common ensemble methods is model averaging. In model averaging, multiple models are trained independently on the same training data, and the outputs of the models are averaged to make the final prediction. One of the main advantages of model averaging is that it is computationally efficient and does not require any additional training time. It also allows the use of different types of model architectures, taking advantage of their strengths and weaknesses to improve overall performance. Here we use averaging to create the ensemble models for LSBG DETR and LSBG ViT.
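In a hedged sketch, assuming trained Keras-style models with a `predict` method, the model-averaging step reduces to:

```python
import numpy as np

def ensemble_probability(models, images):
    """Average the LSBG probabilities of the four independently trained
    models of one class (DETR or ViT); candidates are objects whose mean
    probability exceeds the 0.5 threshold."""
    probs = np.stack([m.predict(images).ravel() for m in models])
    return probs.mean(axis=0)

# Example usage:
# is_candidate = ensemble_probability(detr_models, cutouts) > 0.5
```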
### Sersic fitting

The candidates identified independently by both the LSBG DETR and LSBG ViT ensemble models were subjected to a single-component Sersic fitting using Galfit (Peng et al., 2002). This was done to re-estimate the \(\bar{\mu}_{eff}\) and \(r_{1/2}\) values of our LSBG candidates that we initially used for our sample selection. We employed a single-component Sersic fit to align with the LSBG search methodology of Tanoglidis et al. (2021), who utilised a similar approach. However, we also note that a Sersic fit does not always capture the full light of a galaxy. We used the magnitude (MAG_AUTO) and radius (FLUX_RADIUS) values from the gold catalogue as initial guesses for the Galfit procedure. Moreover, the Sersic index (\(n\)) and axis ratio (\(q\)) were initialised at a fixed value of 1 and were allowed to vary only within the ranges \(0.2<n<4.0\) and \(0.3<q\leq 1.0\), respectively. A similar fitting procedure was applied to both the \(g\)-band and \(i\)-band images of our sample. After the fitting, we excluded all sources with poor or failed fits, i.e. those with a reduced \(\chi^{2}>3\) or with Galfit magnitude estimates diverging from their initial MAG_AUTO values by more than one mag. We also excluded the cases where the estimated \(n\) and \(q\) values did not converge and lay at the edges of the ranges specified above. For the remaining galaxies, we re-applied our \(g\)-band sample selection criteria of \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\) and \(r_{1/2}>2.5^{\prime\prime}\), following Tanoglidis et al. (2021b). The \(\bar{\mu}_{eff}\) values were calculated using the relation given by Eq. 7: \[\bar{\mu}_{eff}=m+2.5\times log_{10}(2\pi r_{1/2}^{2}), \tag{7}\] where \(\bar{\mu}_{eff}\) is the mean surface brightness within the effective radius, \(m\) is the total magnitude and \(r_{1/2}\) is the half-light radius in a specific band estimated from Galfit. For all our measurements, we also applied a foreground Galactic extinction correction using the Schlegel et al. (1998) maps normalised by Schlafly & Finkbeiner (2011) and a Fitzpatrick (1999) dust extinction law.

| Model Name | s | h | T | e |
|---|---|---|---|---|
| LSBG VISION 1 | 4 | 12 | 4 | 55 |
| LSBG VISION 2 | 4 | 12 | 8 | 55 |
| LSBG VISION 3 | 6 | 12 | 4 | 67 |
| LSBG VISION 4 | 6 | 16 | 8 | 67 |

Table 1: Name of the model, size of the image patches (s), number of heads (h), number of transformer encoder layers (T) and the number of epochs taken to train (e) for the four vision models, in chronological order of creation.

Figure 3: Scheme of the general architecture of the LSBG vision transformer (LSBG ViT). The input image is split into small patches, which are flattened into a sequence of 1D vectors and combined with positional encoding. The numbered circular patches represent the position encoding, and their counterparts represent the flattened 1D sequence of the image patches. The combined 1D sequence is passed to the transformer layers. The extra learnable class embedding encodes the class of the input image after being updated by self-attention and passes it on to an MLP head to predict the output.
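For reference, the re-selection with the updated Galfit parameters amounts to evaluating Eq. (7) and re-applying the \(g\)-band cuts; a minimal sketch:

```python
import numpy as np

def mean_surface_brightness(m, r_half):
    """Eq. (7): mean surface brightness within the effective radius,
    with m the total magnitude and r_half the half-light radius in arcsec."""
    return m + 2.5 * np.log10(2.0 * np.pi * np.asarray(r_half) ** 2)

def passes_lsbg_cuts(m_g, r_half_g):
    """Re-apply the g-band LSBG criteria with the Galfit parameters."""
    mu_eff = mean_surface_brightness(m_g, r_half_g)
    return (mu_eff > 24.2) & (np.asarray(r_half_g) > 2.5)
```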
### Visual Inspection

Only the candidates that were identified independently by both the LSBG DETR and LSBG ViT ensemble models, and that passed the selection criteria for being an LSBG with the updated parameters from Galfit, were considered for visual inspection. This refined sample was subjected to visual inspection by two authors independently. Candidates identified as LSBGs by both authors were treated as confident LSBGs, and candidates identified as LSBGs by only one author were reinspected together to make a decision. Since visual inspection is time-consuming, we only resorted to it in the last step and tried as much as possible to reduce the number of candidates shortlisted for visual inspection. To aid the visual inspection, we used two images for every candidate: images enhancing the low surface brightness features, generated using the APLpy package (Robitaille & Bressert 2012), and images downloaded from the DESI Legacy Imaging Surveys Sky Viewer (Dey et al. 2019). Furthermore, the \(g\)-band Sersic models from Galfit were also used to visually inspect the quality of the model fitting. Each candidate was then categorised into one of three classes based on the Galfit model fit and the images: LSBG, non-LSBG (artefact), or misfitted LSBG. If the model of the galaxy was fitted correctly and the candidate showed LSBG features, it was classified as an LSBG. If the candidate showed LSBG features but was not fitted correctly, we classified it as a misfitted LSBG. Finally, if the candidate did not show the features of an LSBG, we classified it as an artefact or non-LSBG.

### Metrics for comparing models

Here, we use accuracy, true positive rate (TPR), false positive rate (FPR) and the area under the receiver operating characteristic (AUROC) curve as the metrics to compare the performance of the created transformer models. The classification accuracy of a model is defined as: \[\mathrm{Accuracy}=\frac{TP+TN}{TP+FP+TN+FN}, \tag{8}\] where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives. Since identifying LSBGs with less contamination is our primary focus, rather than the overall accuracy of the classifier, the TPR and FPR are more informative metrics for evaluating the classifier's performance. The TPR is the ratio of LSBGs identified by the model to the total number of LSBGs, which can be expressed as \[TPR=\frac{TP}{TP+FN}. \tag{9}\] In the literature, sensitivity is another term used to represent the true positive rate, and it measures how well a classifier detects positive instances (in this case, LSBGs) out of the total number of actual positive instances in a dataset. Similarly, the FPR can be considered a contamination rate because it measures how often the classifier incorrectly classifies negative instances as positive. The FPR is defined as \[FPR=\frac{FP}{FP+TN}. \tag{10}\] All the quantities defined above are threshold dependent and vary as a function of the chosen probability threshold. By constructing the receiver operating characteristic (ROC) curve and finding the AUROC, one can define a threshold-independent metric for comparing the models. The ROC curve is constructed by plotting the true positive rate and the false positive rate as a function of the threshold. The area under the ROC curve (AUROC) measures how well a classifier distinguishes between classes and, unlike the accuracy, which varies with the threshold, is a constant for the model. If the AUROC is 1.0, the classifier is perfect, with TPR = 1.0 and FPR = 0.0 at all thresholds. A random classifier has an AUROC \(\sim 0.5\), with TPR almost equal to FPR for all thresholds.
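These metrics can be computed directly from the predicted probabilities; a short sketch using scikit-learn for the AUROC (a standard, assumed implementation choice):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def classifier_metrics(y_true, p_lsbg, threshold=0.5):
    """Accuracy (Eq. 8), TPR (Eq. 9) and FPR (Eq. 10) at a fixed
    threshold, plus the threshold-independent AUROC."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(p_lsbg) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "tpr": tp / (tp + fn),
        "fpr": fp / (fp + tn),
        "auroc": roc_auc_score(y_true, p_lsbg),
    }
```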
## 4 Results

### Model performance on the testing set

We have created four models of each transformer class, namely LSBG DETR and LSBG ViT, with different hyperparameters to generalise our results for both transformers. Each model was implemented as a regression model to predict the probability of an input being an LSBG, and we set 0.5 as the threshold probability for classifying an input as an LSBG. Further, we use an ensemble of the four models as the final model for LSBG DETR and for LSBG ViT. Table 2 lists the accuracy and AUROC of all the models, including the ensemble models, on the test dataset described in Sect. 2.3. As mentioned earlier, the more insightful metrics are the true positive rate (TPR) and the false positive rate (FPR) rather than the overall accuracy. These metrics can be visualised using a confusion matrix, which is shown in Fig. 4 for the ensemble models using a threshold of 0.5.

| Model name | Accuracy (%) | TPR | FPR | AUROC |
|---|---|---|---|---|
| LSBG VISION 1 | 93.55 | 0.97 | 0.12 | 0.980 |
| LSBG VISION 2 | 93.79 | 0.97 | 0.11 | 0.980 |
| LSBG VISION 3 | 93.47 | 0.97 | 0.11 | 0.981 |
| LSBG VISION 4 | 93.51 | 0.97 | 0.11 | 0.980 |
| LSBG VISION Ensemble | **93.75** | **0.97** | **0.11** | **0.983** |
| LSBG DETR 1 | 94.36 | 0.97 | 0.09 | 0.982 |
| LSBG DETR 2 | 94.28 | 0.96 | 0.08 | 0.980 |
| LSBG DETR 3 | 94.36 | 0.96 | 0.08 | 0.982 |
| LSBG DETR 4 | 94.24 | 0.95 | 0.07 | 0.982 |
| LSBG DETR Ensemble | **94.60** | **0.96** | **0.07** | **0.984** |

Table 2: Accuracy, true positive rate (TPR), false positive rate (FPR) and AUROC of all the models, in chronological order of creation.

The LSBG DETR ensemble had a TPR of 0.96 and an FPR of 0.07, indicating that the LSBG DETR ensemble model can accurately identify 96% of all LSBGs in the DES data, with an estimated 7% contamination rate in the predicted sample. Similarly, the LSBG ViT ensemble model can identify 97% of all the LSBGs in DES, but with 11% contamination. The receiver operating characteristic (ROC) curves of the LSBG DETR and LSBG ViT ensemble models are shown in Fig. 5. In terms of accuracy and AUROC, the LSBG DETR models performed slightly better than the LSBG ViT models. It is clear from Fig. 5 that both ensemble models have a TPR \(\sim\) 0.75 even for a threshold as high as 0.9, indicating that both ensemble models can confidently identify around \(\sim\) 75% of all the LSBGs in DES and assign these candidates a probability greater than 0.9.

### Search for LSBGs in the full coverage of DES

Since the LSBG DETR model and the LSBG ViT model have different architectures and feature extraction principles, we regard the ensemble models of these two as separate, independent transformer classifiers. In order to search for new LSBGs in DES, we employed the two transformer ensemble models on the 419 784 objects that satisfied the selection criteria defined in Sect. 2.2. The candidates scoring above the threshold probability of 0.5 were catalogued as potential LSBG candidates. The LSBG DETR ensemble classified 27 977 objects as LSBGs, among which 21 005 were already identified by Tanoglidis et al. (2021). Similarly, the LSBG ViT ensemble classified 30 508 objects as LSBGs, among which 21 396 were in common with the sample identified by Tanoglidis et al. (2021).
So finally, 6 972 and 9 112 new candidates were classified as potential LSBGs by the LSBG DETR and LSBG ViT ensembles, respectively. However, only the 6 560 candidates identified independently by both ensemble models were considered for further analysis, in order to reduce the false positives. Since duplicates of the same candidates might exist in the selected sample, we ran an automated spatial crossmatch to remove duplicate objects separated by \(<\) 5". The origin of these duplicates can be traced back to the fragmentation of larger galaxies into smaller parts by SourceExtractor. After removing the duplicates, the number of potential LSBG candidates reduced from 6 560 to 6 445. As discussed in Sect. 3.6, these candidates were subjected to single-component Sersic model fitting using Galfit. During the Galfit modelling, 999 candidates had failed fits and were consequently removed from the sample, since our objective is to produce a high-purity sample with accurate Sersic parameters. We visually inspected these unsuccessful fits and found that, in most cases, the presence of a very bright object near the candidate was the cause of the poor Sersic fit. Of the remaining 5 446 candidates, 4 879 passed the \(\bar{\mu}_{eff}\) and \(r_{1/2}\) selection criteria outlined in Sect. 2.2 with the updated parameters. These 4 879 candidates were inspected visually to identify the genuine LSBGs. After independent visual inspections by the authors, 4 190 candidates were classified as LSBGs and 242 candidates were found to be non-LSBGs. During visual inspection, 447 candidates were found to be possible LSBGs with unreliable measurements from Galfit. These candidates are excluded from our final sample, and here we only report the most confident candidates that were identified as LSBGs during visual inspection. After applying the Galactic extinction correction, our final sample reduced to 4 083 new LSBGs from DES DR1. A schematic diagram showing the sequential selection steps used to find the new LSBG sample is shown in Fig. 6. A sample catalogue comprising the properties of the newly identified LSBGs is shown in Table 3, and some examples of the new LSBGs that we have found are plotted in Fig. 7. The distributions of the \(r_{1/2}\), \(\bar{\mu}_{eff}\), Sersic index (\(n\)), and axis ratio (\(q\)) of the new sample of LSBGs are plotted in Fig. 8. The majority of the LSBGs in this new sample have \(r_{1/2}<7^{\prime\prime}\) and \(\bar{\mu}_{eff}<26\) mag arcsec\({}^{-2}\). The Sersic index of the new LSBG sample predominantly lies between 0.5 and 1.5, with a median value of 0.85. This pattern is similar to the trend identified by Poulain et al. (2021) in the case of dwarf ellipticals, suggesting that a significant portion of the LSBG sample could consist of such sources. The new LSBG sample has a median axis ratio of 0.72, with a distribution spanning the range 0.3 to 1. The median value of 0.72 suggests that most galaxies in this sample have a slightly flattened or elongated shape. A detailed discussion of the properties of the new LSBGs identified in this work and their comparison with the LSBGs identified by Tanoglidis et al. (2021) is presented in Sect. 5.
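The duplicate-removal step described above can be sketched with astropy's catalogue-matching utilities; this is an illustrative sketch, not the exact script used:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def deduplicate(ra_deg, dec_deg, radius=5 * u.arcsec):
    """Flag one member of every pair of candidates separated by less than
    `radius` (duplicates from fragmented SourceExtractor detections)."""
    coords = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
    # nthneighbor=2 skips the trivial self-match of each object
    idx, sep2d, _ = coords.match_to_catalog_sky(coords, nthneighbor=2)
    is_close = sep2d < radius
    # for each close pair, keep only the lower-index member
    keep = ~is_close | (idx > np.arange(len(coords)))
    return keep
```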
Figure 4: Confusion matrices of the LSBG DETR and LSBG ViT ensemble models, plotted for a threshold = 0.5. Class 0 represents the artefacts, and Class 1 represents the low surface brightness galaxies.

Figure 5: Receiver operating characteristic (ROC) curves of the ensemble models. The red and blue lines represent the variation of the FPR and TPR as a function of the threshold for the LSBG DETR and LSBG ViT ensembles, respectively. The red and blue points mark the TPR and FPR for a threshold = 0.9.

## 5 Discussion

### Transformers as LSBG Detectors

In this study, we introduce the use of transformers as classifier models for finding undiscovered LSBGs in DES. Currently, one of the reported deep-learning-based models in the literature for classifying LSBGs and artefacts is a CNN model named DeepShadows, created by Tanoglidis et al. (2021). They used the catalogue of LSBGs and artefacts identified from DES reported in Tanoglidis et al. (2021) to generate their training data. The DeepShadows model achieved an accuracy of 92% in classifying LSBGs from artefacts and had a true positive rate of 94% with a threshold of 0.5. Moreover, the DeepShadows model also achieved an AUROC score of 0.974 on this training dataset. However, DeepShadows was never applied to the complete DES data to check how it would perform there. Nevertheless, DeepShadows was the first deep-learning model to classify LSBGs and artefacts. In addition, Tanoglidis et al. (2021) also showed that DeepShadows was a better classifier than support vector machine or random forest models. In our work, however, all of our transformer models were able to surpass the DeepShadows model in every metric individually, as can be seen from Table 2. Namely, in their respective classes, LSBG DETR 1 and LSBG ViT 2 had the highest accuracies (94.36% and 93.79%, respectively). Earlier searches for LSBGs have used semi-automated methods, such as pipelines based on imfit by Greco et al. (2018), or simple machine-learning models, such as the SVM used by Tanoglidis et al. (2021). However, the success rate of these methods was very low, and the final candidate samples produced by these methods contained around 50% false positives, which had to be removed by visual inspection. Here we explore the possibilities of transformer architectures in separating LSBGs from artefacts. We used two independent ensemble models of the LSBG DETR and LSBG ViT models, together with single-component Sersic model fitting, to filter the LSBG candidates. Our final sample contained only \(\sim\) 5% non-LSBGs, which is a significant improvement compared to the previous methods in the literature. Following the definition of an LSBG as described in Tanoglidis et al. (2021), we identified 4 083 new LSBGs from DES DR1, increasing the number of identified LSBGs in DES by 17%. Our results highlight the significant advantage of using deep-learning techniques to search for LSBGs in the upcoming large-scale surveys. To gain more insight into the fraction of false positives from our method, we checked the performance of these models on the test dataset: the samples selected by the LSBG DETR ensemble and the LSBG ViT ensemble contained around 7% and 11% artefacts, respectively. However, by using a combination of these models, we reduced the artefact fraction encountered during visual inspection to less than 5%. Most of the non-LSBGs we encountered during the visual inspection were faint compact objects blended into the diffuse light of nearby bright objects. We use the term 'non-LSBG' instead of artefacts here because, during the visual inspection, we classified some potential LSBGs as non-LSBGs; these are objects for which the \(g\)-band images had instrumental artefacts or lacked sufficient signal in the \(g\)-band.
Since the machine learning models take three bands as input (\(g,r\) and \(z\)), this suggests that the models learned and generalised the appearance of LSBGs in each band and could predict whether an object is an LSBG based on the signal from the other bands. However, since we define LSBGs based on their \(g\)-band surface brightness and radius in this work, we classified the galaxies without reliable \(g\)-band data as non-LSBGs. Some non-LSBGs we encountered during the visual inspection are shown in Fig. 9 and Fig. 10. With the deeper imaging of upcoming surveys, these galaxies might be classified as LSBGs, which might further reduce the non-LSBG fraction in our candidate sample. Another fact to note when discussing the non-LSBGs in the candidate sample is that some of the candidates identified as LSBGs by the ensemble models (567 out of 5 446) did not meet the selection criteria of being an LSBG after being fitted with Galfit. These galaxies had \(r_{1/2}\) ranging from 2\({}^{\prime\prime}\) to 20\({}^{\prime\prime}\), with a median of 3.85\({}^{\prime\prime}\), which is similar to the new LSBG sample we found. However, the majority of these galaxies have a mean surface brightness between 24.0 and 24.2 mag arcsec\({}^{-2}\), with a median of 24.16 mag arcsec\({}^{-2}\). This suggests that the machine learning model learned the angular-size criterion for LSBGs during its training, but did not learn the strict condition on the surface brightness. This situation is similar to a human expert analysing a galaxy image to determine whether it is an LSBG or not. Features such as the size of the galaxy are easily identifiable to the human eye. However, determining the surface brightness accurately with only the human eye would be challenging, and there may be possible errors for galaxies near the threshold region, similar to our machine learning model. So one could say that the machine learning model is behaving approximately like a human visual expert.

Figure 6: Schematic diagram showing the sequential selection steps used to find the new LSBG sample.

Figure 7: Cutouts of 9 confirmed new LSBGs after visual inspection. The unique identification number (coadd object id) for each galaxy in DES DR1 is given below each image. The images were generated by combining the \(g,r\) and \(z\) bands using the APLpy package (Robitaille & Bressert 2012), and each image corresponds to a 67.32\({}^{\prime\prime}\times 67.32^{\prime\prime}\) region of the sky with the LSBG at its centre.

Judging from the performance of our models on the training data, we cannot assert that we have discovered all the possible existing LSBGs in DES DR1. As we can see from Fig. 4, the TPRs of the individual ensemble models were 0.96 and 0.97, respectively. This means that the models have not found all the possible LSBGs, and a minor fraction of LSBGs is yet to be found in DES DR1. Moreover, to reduce the FPR and the burden during the visual inspection, we only visually inspected the candidates identified in common by both ensemble models that also passed the criteria for a correct fit by Galfit.

Figure 8: Normalised distributions of the half-light radius (top left panel), mean surface brightness (top right panel), Sersic index (bottom left panel) and axis ratio (bottom right panel) of the new sample of LSBGs. The dashed lines show the medians of the distributions.

Figure 9: Examples of candidates (coadd object ids 149796289 and 374192591) classified as non-LSBGs during visual inspection because of glitches in the \(g\)-band near the galaxy. The panels (a) and (c) show the RGB images created using the \(g,r\) and \(z\) bands with the APLpy package (Robitaille and Bressert, 2012). The panels (b) and (d) show the images in the \(g\) band. Each image corresponds to a \(67.32^{\prime\prime}\times 67.32^{\prime\prime}\) region of the sky with the candidate at its centre.
One of the notable things about the models in this work is that we use two different ensemble models, each comprising four models. As mentioned earlier, each ML model can be considered equivalent to a human inspector, and the ensemble helps balance out the weaknesses of the individual models in it. A closer look at the individual probability distributions of these models shows that 310 of the 4 083 confirmed LSBGs received a probability of less than 0.5 from at least one of the individual models. However, since we used an averaging ensemble, we were able to identify these LSBGs by balancing out the probabilities, which shows the advantage of using an ensemble model over a single model. Here, we use visual inspection as the final step to confirm the authenticity of an LSBG detected by the models. However, it is essential to acknowledge the potential for human bias during the visual inspection, which can impact the accuracy and reliability of the results. For example, during the visual inspection, the inspectors disagreed on the labels of approximately 10% of the candidate sample. Most of these galaxies had a mean surface brightness greater than 25.0 mag arcsec\({}^{-2}\), which suggests that even for human experts it is challenging to characterise extremely faint LSBGs. With better imaging this might change, but we must acknowledge that there will always be some human bias and error associated with human inspection. Also, we must consider that in the upcoming surveys, such as LSST or Euclid, the data size will be too large to inspect visually. In this scenario, relying solely on improved automated methods to purify the sample and accepting a small fraction of false positives could be a feasible solution.

## 6 The new sample of DES DR1 LSBGs

### The newly identified LSBGs sample

The optical color of a galaxy can give insights into its stellar population. Conventionally, galaxies are divided into red and blue galaxies based on their color, and it is known that color is strongly correlated with the morphology of a galaxy (Strateva et al. 2001). Blue galaxies are usually highly active star-forming spiral or irregular systems, whereas red galaxies are mostly spheroidal or elliptical. In addition, red galaxies have been found to cluster together more strongly than blue galaxies (Bamford et al. 2009). The LSBGs found by Tanoglidis et al. (2021) show a clear bimodality in the \(g-i\) color distribution, similar to the LSBGs found by Greco et al. (2018). In the left panel of Fig. 11, we present the \(g-i\) color distribution of the 4 083 new LSBGs and the 23 790 LSBGs found by Tanoglidis et al. (2021). We follow the criteria defined by Tanoglidis et al. (2021) to define red galaxies as galaxies having \(g-i>0.6\) and blue galaxies as galaxies having \(g-i<0.6\), where \(g\) and \(i\) represent the magnitude in each band. In the right panel of Fig. 11, we present the color as a function of the mean surface brightness in the \(g\)-band for the new sample identified in this work and the LSBGs identified by Tanoglidis et al.
(2021). There are 1 112 red LSBGs and 2 944 blue LSBGs in the new LSBG sample.4 From Fig. 11, we can see that we have identified a relatively large fraction of blue LSBGs compared to Tanoglidis et al. (2021), as well as a considerable fraction of new red LSBGs with \(g-i\geq 0.80\) and a mean surface brightness less than 25.0 mag arcsec\({}^{-2}\). The bias against blue LSBGs and highly red LSBGs in the sample created by Tanoglidis et al. (2021) may have been caused by a bias in the training set used to create the SVM that preselected the LSBG candidates. This bias could have occurred because a large fraction of their training set consisted of LSBGs near the Fornax cluster, which are mainly red LSBGs.

Footnote 4: 27 LSBGs failed the modelling using Galfit in the \(i\)-band, and they are not included in this color analysis.

Looking at the distribution of the \(\bar{\mu}_{eff}\) values of the new sample, both the red and blue LSBGs span a similar mean surface brightness range, with median \(\bar{\mu}_{eff}\) values of 24.75 and 24.68 mag arcsec\({}^{-2}\), respectively. Both the red and blue LSBG populations of the new sample have sizes ranging from 2.5\({}^{\prime\prime}\) to 20\({}^{\prime\prime}\). However, as mentioned earlier, most of these LSBGs have radii less than 7\({}^{\prime\prime}\), with a median of 4.01\({}^{\prime\prime}\) for blue LSBGs and 3.59\({}^{\prime\prime}\) for red LSBGs. Blue LSBGs thus tend to have larger angular radii than red LSBGs. The Sersic index distributions of the red and blue LSBGs in the new sample are similar and have almost equal median values (0.847 and 0.845 for red and blue LSBGs, respectively). A median Sersic index of around 0.84 indicates that the majority of the galaxies are closer to a disk-shaped geometry, irrespective of their color. The distributions of the axis ratio of the red and blue LSBGs from the new sample show a clear difference, as seen in Fig. 12. The medians of the axis ratio distributions of the blue and red LSBGs are 0.7 and 0.8, respectively. This indicates that, in general, the red LSBGs are rounder than the blue LSBGs.

Figure 10: Top panel (a): examples of candidates classified as non-LSBGs during visual inspection because of a lack of sufficient signal in the \(g\)-band. Lower panel (b): candidates classified as non-LSBGs during visual inspection because they are artefacts. The RGB images are created using the \(g\), \(r\) and \(z\) bands with the APLpy package (Robitaille & Bressert 2012). Each image corresponds to a 67.32′′ \(\times\) 67.32′′ region of the sky with the candidate at its centre.

### Why are there additional LSBGs?

One of the other things to investigate at this point is how different the new LSBG sample is from the LSBGs identified by Tanoglidis et al. (2021). Or, more precisely, one could wonder why so many LSBGs were missed previously and whether this is somehow related to the nature of these galaxies. Apart from the Sersic index, all other Sersic parameters of the new and the old samples have almost identical distributions. The distributions of the Sersic index for the new sample identified in this work and the LSBG sample identified by Tanoglidis et al. (2021) are shown in Fig. 13. The new LSBG sample has a Sersic index predominantly in the range \(n<1\), which is comparatively lower than the Sersic index distribution of the LSBGs identified by Tanoglidis et al. (2021). However, this does not point to any reason why these LSBGs were missed in the previous search, and moreover, Tanoglidis et al.
(2021) have also commented on the under-representation of red LSBGs with small Sersic indices in their sample. To answer the aforementioned question, a close inspection of the methodology of Tanoglidis et al. (2021) shows that most of the new LSBGs (82%) we identified here were missed by the SVM in their first pre-selection step. This shows the importance of the methodology used to preselect the samples. Since the methodologies used by Tanoglidis et al. (2021) and Greco et al. (2018) have considerable similarities (e.g., the usage of an SVM), this indicates that Greco et al. (2018) might also have missed some LSBGs in the HSC-SSP survey, and the missed fraction should be greater in comparison to Tanoglidis et al. (2021). It should be noted that there is a slight overlap between the regions observed by Greco et al. (2018) and DES, as shown in Fig. 14. There are 198 LSBGs identified by Greco et al. (2018) from HSC-SSP that lie in the field of view of DES and are detected in the DES Y3 gold catalogue. Among these 198 LSBGs, Tanoglidis et al. (2021) recovered 183 LSBGs, and we recovered 10 additional LSBGs from this field, bringing the total number of recovered LSBGs to 193. We would also like to point out that there are additional LSBGs (\(\sim 200\)) in our total sample in the same region that were missed by Greco et al. (2018), even though the HSC-SSP data used by Greco et al. (2018) is about 2 orders of magnitude deeper than the DES DR1. However, we have also missed some LSBGs (\(\sim 150\)) that were identified by Greco et al. (2018). These LSBGs were not detected in the DES Y3 gold catalogue and were consequently missed by the search of Tanoglidis et al. (2021) as well as ours. With DES data release 2 (DES DR2) having an improved depth (\(\sim 0.5\) mag; Abbott et al. 2021), we should expect an increase in the number of LSBGs from DES. Therefore, there is potential for using transfer learning with transformers in future searches for LSBGs in DES DR2 (Abbott et al. 2021) and HSC-SSP data release 3 (Aihara et al. 2022).

Figure 11: Normalised distribution of the color of the LSBGs from the new sample identified in this work and the LSBGs identified by Tanoglidis et al. (2021), plotted in the left panel. The right panel shows the color as a function of the mean surface brightness in the \(g\)-band for the new sample identified in this work and the LSBGs identified by Tanoglidis et al. (2021). The dashed line shows the separation between red and blue LSBGs.

Figure 12: Normalised distribution of the axis ratio (left panel) of red and blue LSBGs from the new sample. The vertical lines show the median for each class.

Figure 13: Normalised distribution of the Sersic index of the LSBGs identified in this work and by Tanoglidis et al. (2021). The vertical lines show the median for each class.

With the addition of the new 4 083 LSBGs, the number of LSBGs in DES increases to 27 873, effectively increasing the average number density of LSBGs in DES to \(\sim 5.5\) deg\({}^{-2}\). In addition, it should also be noted that there are still around \(\sim 3000\) candidates identified by the ensemble models that have not undergone further analysis to be verified as LSBGs, potentially indicating that the number of LSBGs in DES might increase further in the future. Hence, the average number density of \(5.5\) deg\({}^{-2}\) reported here can only be taken as a lower limit. Earlier, Greco et al. (2018) estimated the average number density of LSBGs in HSC-SSP to be \(\sim 3.9\) deg\({}^{-2}\).
However, this estimate was based on LSBG samples with \(\bar{\mu}_{eff}>24.3\) mag arcsec\({}^{-2}\), unlike the \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\) selection we adopted in this work. For a similar selection of \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\) in the combined sample presented here (the LSBGs identified in this work plus the LSBGs identified by Tanoglidis et al. (2021)), we obtain a higher number density of 4.9 deg\({}^{-2}\) compared to the previous estimates (\(3.9\) deg\({}^{-2}\) from Greco et al. (2018) and \(4.5\) deg\({}^{-2}\) from Tanoglidis et al. (2021)). As discussed above, the number density of the LSBGs is influenced by the methodology used to search for them. Similarly, another intrinsic factor that can influence the number density is the completeness of the survey. Improved imaging techniques can reveal fainter objects, leading to an increase in the number density. The completeness of a survey can be assessed by plotting the galaxy number counts, and one can also get a rough idea of the redshift distribution of the objects of interest by comparing them with the Euclidean number count. Fig. 15 shows the number counts of the LSBGs identified in DES (this work and Tanoglidis et al. (2021)) and in HSC (Greco et al. 2018). As expected, HSC has better completeness than DES. However, HSC still has a smaller number density than DES, which is evident from comparing the peaks of both number counts. The slopes of the number counts near 0.6 (representing Euclidean geometry) for both HSC and DES suggest that most identified LSBGs are local (Yasuda et al. 2001). Furthermore, Greene et al. (2022) analysed the LSBG sample from HSC and estimated that the 781 LSBGs identified by Greco et al. (2018) have redshifts less than 0.15. With the increasing number of LSBGs identified in different surveys, one of the other questions that needs to be addressed at this moment is the definition of an LSBG itself. One could adopt a different definition of an LSBG, consequently finding a completely different sample of LSBGs in the same dataset, which in turn can affect the conclusions of a study. One of the primary factors contributing to these discrepancies is the current reliance on surface-brightness-based definitions for LSBGs, which are contingent upon the observation band in use; different observation bands involve distinct threshold values, so the LSBG definition will likely vary depending on the band used. In this scenario, one potential solution is to define an LSBG based on the stellar mass density of the galaxy. Current definitions based on the stellar mass density define an LSBG as a galaxy with a stellar mass density \(\Sigma_{star}\lesssim 10^{7}\) M\({}_{\odot}\) kpc\({}^{-2}\) (e.g., Carleton et al. 2023). Following Eq. 1 of Chamba et al. (2022), we made an estimate of the stellar mass surface density using our observed \(i\)-band surface brightness \(\bar{\mu}_{eff}\) and the stellar mass-to-light ratio obtained from the \(g-i\) color (Du et al. 2020). The stellar mass surface density distributions of the LSBGs from DES and HSC-SSP are shown in Fig. 16. Here we can see that most of the LSBGs satisfy this condition, and only a small percentage lie above the threshold of \(10^{7}\) M\({}_{\odot}\) kpc\({}^{-2}\). On average, the LSBGs from DES have a higher stellar mass surface density than those from HSC-SSP, which could be attributed to the greater depth of the data used by Greco et al. (2018).
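Schematically, this estimate combines a color-based mass-to-light ratio with the standard conversion from surface brightness to surface density. The sketch below uses that standard conversion with placeholder coefficients: the values of `A_ML` and `B_ML` would have to be taken from Du et al. (2020), and the solar absolute magnitude is an approximate assumed value:

```python
import numpy as np

# Placeholders: coefficients of log10(M/L_i) = a + b * (g - i)
# must be taken from Du et al. (2020); these values are illustrative.
A_ML, B_ML = 0.0, 1.0
M_SUN_I = 4.53  # absolute i-band magnitude of the Sun (approximate)

def log_sigma_star(mu_eff_i, g_minus_i):
    """log10 of the stellar mass surface density in M_sun kpc^-2 from the
    i-band mean surface brightness (mag arcsec^-2) and the g-i color,
    using log10(Sigma / [M_sun pc^-2]) = log10(M/L) - 0.4*(mu - M_sun) + 8.629
    and converting from pc^-2 to kpc^-2."""
    log_ml = A_ML + B_ML * np.asarray(g_minus_i)
    log_sigma_pc2 = log_ml - 0.4 * (np.asarray(mu_eff_i) - M_SUN_I) + 8.629
    return log_sigma_pc2 + 6.0
```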
However, as argued by Chamba et al. (2022), an accurate estimation of the stellar mass density requires deep photometry in multiple bands. In our case, we employed a single color, and as a result, the constraints we derived on the stellar mass density may be limited in accuracy.

## 7 Clustering of LSBGs in DES

The on-sky distribution of the red and blue LSBGs identified in this work, along with those identified by Tanoglidis et al. (2021), is shown in Fig. 17 and Fig. 18. In the local universe, "normal" high surface brightness red galaxies tend to cluster together, while blue galaxies are much more dispersed in the field (Zehavi et al. 2005). Such a trend is also clearly visible for the LSBG sample. As seen in Fig. 17, red LSBGs tend to form concentrated nodes. In contrast, the blue LSBGs are distributed much more homogeneously in the sky, as seen in Fig. 18. The two-point auto-correlation function is a statistical tool commonly used to quantify galaxy clustering (Peebles 1980). Here we use the angular two-point auto-correlation function, \(\omega(\theta)\), computed using the Landy & Szalay (1993) estimator defined as \[\omega(\theta)=\frac{\hat{DD}(\theta)-2\hat{DR}(\theta)+\hat{RR}(\theta)}{\hat{RR}(\theta)}, \tag{11}\] where \[\hat{DD}(\theta)=\frac{DD(\theta)}{n_{d}(n_{d}-1)/2}, \tag{12}\] \[\hat{DR}(\theta)=\frac{DR(\theta)}{n_{d}n_{r}}, \tag{13}\] \[\hat{RR}(\theta)=\frac{RR(\theta)}{n_{r}(n_{r}-1)/2}. \tag{14}\] Here, DD(\(\theta\)) is the number of pairs in the real sample with angular separation \(\theta\), RR(\(\theta\)) is the number of pairs within the random sample, DR(\(\theta\)) is the number of cross pairs between the real and random samples, \(n_{d}\) is the total number of real data points, and \(n_{r}\) is the total number of random points. We use a random sample of 4 491 746 points generated from the DES footprint mask. To compute \(\omega(\theta)\) we employ treecorr (Jarvis 2015). Errors are estimated using jackknife resampling, where the sky is divided into 100 equal-sized batches for resampling (Efron & Gong 1983). For high surface brightness galaxy samples, the angular correlation function can very often be well fitted by a single power law (Peebles & Hauser 1974; Peebles 1980; Hewett 1982; Koo & Szalay 1984; Neuschaefer et al. 1991), \[\omega(\theta)=A\theta^{1-\gamma}, \tag{15}\] where \(A\) is the amplitude, which represents the strength of the clustering, and \(\gamma\) represents the rate at which the strength of the clustering decreases towards large angular scales. This power-law behaviour is usually observed over a wide range of angular scales; however, it is not universal, especially on the smallest scales. Full modelling of the shape of the correlation function requires taking into account the different processes governing galaxy clustering on small scales (corresponding to galaxies located in the same dark matter halo) and at larger scales (corresponding to the clustering of different haloes). This modelling is usually done using halo occupation distribution (HOD) models (Ma & Fry 2000; Peacock & Smith 2000; Zheng et al. 2005; Kobayashi et al. 2022). In this work, however, we perform only a preliminary analysis and base the interpretation of our data on the power-law fitting only.
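For reference, the estimator of Eq. (11) with jackknife errors can be computed with treecorr along the following lines; the binning shown here is an illustrative assumption, not our exact configuration:

```python
import treecorr

def angular_correlation(ra, dec, ra_rand, dec_rand):
    """Landy-Szalay w(theta) (Eq. 11) with jackknife errors from
    100 sky patches, as described above."""
    data = treecorr.Catalog(ra=ra, dec=dec,
                            ra_units="deg", dec_units="deg", npatch=100)
    rand = treecorr.Catalog(ra=ra_rand, dec=dec_rand,
                            ra_units="deg", dec_units="deg",
                            patch_centers=data.patch_centers)
    cfg = dict(min_sep=0.01, max_sep=10.0, nbins=20,
               sep_units="deg", var_method="jackknife")
    dd, dr, rr = (treecorr.NNCorrelation(**cfg) for _ in range(3))
    dd.process(data)
    dr.process(data, rand)
    rr.process(rand)
    w, var_w = dd.calculateXi(rr=rr, dr=dr)  # Landy-Szalay estimator
    return dd.meanr, w, var_w
```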
Figure 14: Sky distribution of the LSBGs identified from DES (black dots) by Tanoglidis et al. (2021) + this work and the LSBGs identified from HSC-SSP (blue dots) by Greco et al. (2018).

Figure 15: The number count of galaxies as a function of \(i\)-band magnitude, with the y-axis displaying the logarithm of the number density per apparent magnitude. The red line with the blue error bars represents the data from HSC, and the black dashed line with green error bars represents the data from DES.

Figure 16: Normalised distribution of the stellar mass surface density of LSBGs identified in HSC (red line) and DES (black line).

Figure 17: Sky distribution of the red LSBGs identified in this work (red dots) and the LSBGs identified (black dots) by Tanoglidis et al. (2021).

Figure 18: Sky distribution of the blue LSBGs identified from the new sample (blue dots) and the LSBGs identified (black dots) by Tanoglidis et al. (2021).

To compare the clustering of the LSBGs with the clustering of high surface brightness galaxies (HSBGs), we constructed a control sample of HSBGs from the DES data. For this purpose, we selected galaxies in the surface brightness range \(20.0<\bar{\mu}_{eff}<23\) mag arcsec\({}^{-2}\) and in the magnitude range \(17<g<23\) mag (which is the same magnitude range as our LSBG sample). Additionally, we applied a photometric redshift cut of \(z<0.1\) in order to keep the HSBG sample consistent with the LSBGs, which are also expected to be mostly local (Greene et al. 2022). For this purpose, we used the photometric redshifts from the DES Y3 gold catalogue calculated using the Directional Neighbourhood Fitting (DNF) algorithm (Sevilla-Noarbe et al. 2021; De Vicente et al. 2016). In addition, we also applied the selection cuts on the SourceExtractor parameters SPREAD_MODEL and EXTENDED_CLASS_COADD and on the colors (using the MAG_AUTO magnitudes), as described in Sect. 2.2. Initially, we computed the angular two-point auto-correlation function for the samples of LSBGs and HSBGs. Then we split the samples into red and blue galaxies to measure their clustering properties separately. For the LSBGs, we followed the criterion defined in Sect. 6, i.e. a color cut of \(g-i=0.6\) mag, to separate blue and red sources. As seen from the color histogram presented in Fig. 19, the HSBGs show a bimodality around \(g-i=1.0\) mag, which can most likely be attributed to their different stellar masses. Consequently, we use the boundary \(g-i=1.0\) mag to divide our HSBG sample into red and blue sub-samples. The properties of all the samples used for the measurement of the galaxy clustering, together with the best-fit power-law parameters, are listed in Table 4. The two-point auto-correlation functions for all the samples described above are shown in Fig. 20. As is clear from Fig. 20, the angular two-point auto-correlation function of the red LSBGs does not follow a power law at small angular scales. Therefore, the power-law fits were only performed in the range of \(0.15\) deg to \(7\) deg to avoid them being affected by one-halo effects. In the part well fitted by the power law, \(\omega(\theta)\) of the red LSBGs is significantly steeper than that of the blue LSBGs. However, it flattens at smaller scales, i.e. between \(0.01\) deg and \(0.2\) deg; this behaviour is also transmitted to the full sample of LSBGs. In contrast, the blue LSBGs follow a power-law behaviour, with a lower clustering amplitude and a much less steep slope, at almost all angular scales. This behaviour of the angular correlation function might be explained by the observations of van der Burg et al. (2016) and Wittmann et al. (2017) that the number of LSBGs decreases close to the cores of galaxy clusters.
Such suppression may reduce the clustering power on small scales, leading to a flattening of the auto-correlation function, which is seen for the red LSBGs, which are mostly associated with clusters. A comparison of the clustering of the LSBGs and the HSBGs also shows notable differences. Not surprisingly, the red samples, both of HSBGs and LSBGs, are more clustered than their blue counterparts. At the same time, the red LSBG sample has a significantly higher clustering amplitude than the reference red HSBG sample. Red LSBGs also display a steeper slope of \(\omega(\theta)\) at angular scales larger than \(0.15\) deg, but at smaller scales their \(\omega(\theta)\) flattens, unlike in the case of the red HSBGs, for which we can even observe a hint of an upturn that can be associated with a one-halo term. This picture is consistent with a scenario in which red LSBGs are mostly associated with dense structures like clusters but populate their outskirts rather than their centres. In contrast, red HSBGs display the usual behaviour of red passive galaxies, appearing in a variety of environments, with a tendency to cluster and to gather most strongly in the cluster centres. Blue LSBGs have a significantly lower clustering amplitude than their HSBG counterparts. At the same time, the slope of their \(\omega(\theta)\) at scales larger than \(0.15\) deg remains very similar. The blue HSBGs and LSBGs follow the usual distribution of blue star-forming galaxies, dispersed in the field and avoiding clusters. These results are consistent with the results obtained by Tanoglidis et al. (2021) for their sample of DES LSBGs. They compared the clustering of LSBGs with very bright galaxies in the magnitude range of \(14<g<18.5\) mag from the 2MPZ catalogue (Bilicki et al., 2014). They found that the LSBGs had a higher clustering amplitude in the range of 0.1 to 2 degrees, which is similar to our observations. However, our results contradict the early estimates of Bothun et al. (1993) and Mo et al. (1994), who inferred that LSBGs tend to cluster weakly in space. Their analyses, however, were limited by a small data sample (\(\sim\)400 LSBGs), a small area of the sky, and most likely selection biases.

| Sample | Number of galaxies | Median \(g\) (mag) | Median \(\bar{\mu}_{eff}\) (mag arcsec\({}^{-2}\)) | \(A\) | \(\gamma\) |
|---|---|---|---|---|---|
| All HSBGs | 451 310 | 18.84 | 21.66 | 0.091 | 1.651 |
| Red HSBGs | 103 900 | 17.96 | 21.21 | 0.245 ± 0.004 | 1.848 ± 0.012 |
| Blue HSBGs | 347 410 | 19.21 | 21.81 | 0.0648 | 1.631 |
| All LSBGs | 27 840 | 20.11 | 24.66 | 0.138 | 1.941 |
| Red LSBGs | 18 924 | 20.23 | 24.89 | 0.671 ± 0.079 | 2.090 ± 0.071 |
| Blue LSBGs | 8 916 | 20.07 | 24.59 | 0.051 | 1.620 |

Table 4: Best-fitting power-law parameters for the angular two-point auto-correlation functions of the HSBG and LSBG samples, along with the number of galaxies, the median \(g\)-band magnitude, and the median mean surface brightness of each sample.

Figure 19: Color distribution of the HSBGs from the DES DR1. The vertical line at \(g-i=1.0\) shows the color separation of the HSBGs into red and blue galaxies.
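The power-law fits of Table 4 can be reproduced with a least-squares sketch of the following form (the initial guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(theta, w, w_err, fit_range=(0.15, 7.0)):
    """Fit w(theta) = A * theta**(1 - gamma) (Eq. 15) over the angular
    range 0.15-7 deg used above, avoiding one-halo effects."""
    theta, w, w_err = map(np.asarray, (theta, w, w_err))
    sel = (theta >= fit_range[0]) & (theta <= fit_range[1])
    model = lambda t, A, gamma: A * t ** (1.0 - gamma)
    popt, pcov = curve_fit(model, theta[sel], w[sel], sigma=w_err[sel],
                           p0=(0.1, 1.8), absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))  # (A, gamma) and 1-sigma errors
```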
Given the low accuracy of photometric redshifts for LSBGs in our sample, we do not attempt to reconstruct their spatial clustering in this work. Further analysis is planned as a follow-up to this study.

## 8 Identification of ultra-diffuse galaxies

As discussed in Sect. 1, UDGs are a subclass of LSBGs that have extended half-light radii \(r_{1/2}\geq 1.5\) kpc and a central surface brightness \(\mu_{0}>24\) mag arcsec\({}^{-2}\) in the \(g\)-band (van Dokkum et al., 2015). A significant population of UDGs was discovered in the Coma cluster by van Dokkum et al. (2015), and other investigations have revealed a large number of UDGs in other galaxy clusters (Koda et al., 2015; Mihos et al., 2015; Lim et al., 2020; La Marca et al., 2022a,b). Later studies have shown that thousands of UDGs can be found in individual clusters and that the abundance of UDGs scales close to linearly with the host halo mass (van der Burg et al., 2016; Mancera Pina et al., 2018). In order to investigate if there are any cluster UDGs in the sample of LSBGs we identified in DES, we crossmatched our total LSBG sample (the 23 790 LSBGs from Tanoglidis et al. (2021) and the 4 083 new LSBGs we identified) with the X-ray-selected galaxy cluster catalogue from the ROSAT All-Sky Survey (RXGCC; Xu et al., 2022). All the LSBGs at an angular distance from the cluster centre smaller than the \(R_{200}\)5 virial radius of the cluster were associated with that cluster. Here, \(R_{200}\) is the radius at which the average density of a galaxy cluster is 200 times the critical density of the universe at that redshift. We found that 1 310 LSBGs from the combined catalogue and 123 LSBGs from our new sample were associated with 130 and 53 clusters, respectively. Using the redshift of the cluster provided by Xu et al. (2022), and assuming that the associated LSBG is at the same redshift as the cluster, we estimated the half-light radius of the LSBG and its projected comoving distance from the cluster centre. It should be noted that, since we perform our crossmatching with only projected distances, some of the LSBGs associated with clusters could be non-cluster members that are projected along the line of sight. However, this is unlikely to be the case for all of them, and since we do not have any other distance estimate for the LSBGs, we chose to adopt this method. It should also be noted that UDGs are not exclusively located in clusters; they can also be observed in groups (Cohen et al., 2018; Marleau et al., 2021) and even in field environments (Prole et al., 2019). In this section, we focus only on the LSBGs and UDGs associated with clusters.

Footnote 5: We used the \(R_{500}\) values and the redshifts provided by Xu et al. (2022) to obtain the \(R_{200}\) crossmatching radius. Following Ettori & Balestra (2009), we assume \(R_{200}\approx R_{500}/0.65\), where \(R_{500}\) is the radius at which the average density of a galaxy cluster is 500 times the critical density of the universe at that redshift.

Among the 1 310 cluster LSBGs, we further classify 317 cluster UDG candidates based on their half-light radius (\(r_{1/2}\geq 1.5\) kpc) and central surface brightness (\(\mu_{0}>24.0\) mag arcsec\({}^{-2}\)) in the \(g\)-band. Since we have not confirmed the physical distances to these galaxies, and hence their physical sizes, they can only be regarded as UDG candidates. From here onward, when referring to UDGs in this paper, it is important to note that we are addressing UDG candidates and not confirmed UDGs.
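The conversion from angular to physical size at the cluster redshift, and the resulting UDG cut, can be sketched with astropy; the choice of cosmology here (Planck18) is an assumption made for illustration:

```python
import astropy.units as u
from astropy.cosmology import Planck18  # assumed cosmology, for illustration

def is_udg_candidate(r_half_arcsec, mu0_g, z_cluster):
    """Flag cluster UDG candidates: physical r_1/2 >= 1.5 kpc (assuming the
    LSBG lies at the cluster redshift) and central g-band surface
    brightness mu_0 > 24.0 mag arcsec^-2."""
    scale = Planck18.kpc_proper_per_arcmin(z_cluster).to(u.kpc / u.arcsec)
    r_half_kpc = (r_half_arcsec * u.arcsec * scale).to_value(u.kpc)
    return (r_half_kpc >= 1.5) & (mu0_g > 24.0)
```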
These 317 UDGs are distributed within 80 clusters, making this the largest sample of clusters in which UDGs have been studied. Tanoglidis et al. (2021) also identified 41 UDGs from their LSBG sample in DES by associating the nine most overdense regions of LSBGs with known clusters. However, they did not study the properties of those 41 UDGs in detail, and 276 of the 317 UDGs reported here are completely new. The UDGs presented here have a median \(r_{1/2}\) of 2.75 kpc and a median \(\mu_{0}\) of 24.51 mag arcsec\({}^{-2}\). Six of the newly identified UDGs are shown in Fig. 21. As seen from Fig. 22, the majority of the cluster UDGs (253 out of 317) are red in color (\(g-i>0.6\) mag), similar to the trend of the cluster LSBGs (909 out of 1,310). This is consistent with theoretical predictions for cluster UDGs (Benavides et al., 2023). Mancera Piña et al. (2019) have also found a similar distribution for the \(g-r\) color of 442 UDGs observed in 8 galaxy clusters. The joint distribution of the red and blue UDGs in the space of \(r_{1/2}\) and \(\mu_{0}\) is shown in Fig. 23. The red UDGs presented here have a median \(r_{1/2}\) of 2.75 kpc and a median \(\mu_{0}\) of 24.52 mag arcsec\({}^{-2}\); similarly, the blue UDGs have a median \(r_{1/2}\) of 2.78 kpc and a median \(\mu_{0}\) of 24.41 mag arcsec\({}^{-2}\). Most of the red and blue UDGs have a half-light radius in the range \(1.5<r_{1/2}<6\) kpc. However, a small fraction of the UDGs (6 out of 317) have \(r_{1/2}>10\) kpc; all of these are red and have \(\mu_{0}<25.0\) mag arcsec\({}^{-2}\), which makes them good potential candidates for follow-up studies. For all the cluster LSBGs, we can see a gradient in color, as shown in Fig. 24, where LSBGs towards the outskirts of clusters tend to be bluer than those in the centre. This is similar to the behaviour found in Virgo cluster LSBGs by Junais et al. (2022). However, for the cluster UDGs presented in this study, the color gradient appears much weaker, showing an almost flat distribution in comparison to the LSBGs. A similar weak trend, where more blue UDGs are found towards the cluster centre, was also noted by Mancera Piña et al. (2019). On the other hand, Roman & Trujillo (2017) and Alabi et al. (2020) reported a more pronounced color trend as a function of cluster-centric distance, while La Marca et al. (2022b) did not find any significant trend.

Figure 21: Cutouts of 6 confirmed new UDGs. The unique identification number (coadd object id) for each galaxy in DES DR1 is given below each image. The images were generated by combining the \(g\), \(r\) and \(z\) bands using the APLpy package (Robitaille and Bressert, 2012), and each image corresponds to a \(33.66^{\prime\prime}\times 33.66^{\prime\prime}\) region of the sky with the UDG at its centre.

Figure 22: Color distribution of the 1,310 cluster LSBGs and 317 cluster UDGs from the DES DR1.
However, when directly comparing the trends in the color of UDGs within clusters, one should keep in mind that these trends are affected by several factors, such as the bands used for the color, the sample size, and the studied cluster, as the varied results in the literature show. For example, our sample size (\(>\)300) is similar to that of Mancera Piña et al. (2019), and we find similar results, whereas our findings differ from those of Roman & Trujillo (2017), Alabi et al. (2020) and La Marca et al. (2022b), which are based on smaller samples (\(<\)40). The trend observed in the half-light radius (Fig. 24) for both the cluster LSBGs and UDGs is quite evident: moving from the cluster centre towards the outer regions, both LSBGs and UDGs increase in size. This behaviour is in agreement with the findings of Roman & Trujillo (2017). The gradients we observe in color and size with respect to the cluster-centric distance are consistent with proposed UDG formation scenarios such as galaxy harassment (Conselice, 2018), tidal interactions (Mancera Piña et al., 2019), and ram-pressure stripping (Conselice et al., 2003b; Buyle et al., 2005). Such trends are also similar to what is observed for dwarf galaxies in the literature (Venhola et al., 2019), providing further support for the argument that UDGs can be considered a subset of dwarf galaxies (Conselice, 2018; Benavides et al., 2023). The sample of UDG candidates presented here will be the subject of a follow-up analysis. Additionally, it should be noted that all the UDGs reported here are cluster UDGs. The actual number of UDGs in the LSBG catalogue (including low-density environments) might be higher, and thus the reported number is only a lower limit on the total number of UDGs.

## 9 Conclusions

In this paper, we explore the possibility of using transformers to distinguish LSBGs from artefacts in optical imaging data. We implemented four transformer models that combine a CNN backbone with self-attention layers to classify the candidates; we call them the LSBG DETR (LSBG detection transformer) models. Similarly, we created four transformer models that apply attention directly to patches of the images, without any convolutions; these models we call the LSBG vision transformer (ViT) models. We compared the performance of these two architectures to the CNN-based LSBG identification model called DeepShadows presented in Tanoglidis et al. (2021). We found that the transformer models performed better than DeepShadows, and we then used the ensemble of our transformer models to look for new LSBGs in the DES DR1 data that the previous searches may have missed. We follow the definition of an LSBG used by Tanoglidis et al. (2021), i.e. we define LSBGs as galaxies having a \(g\)-band mean surface brightness \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\) and half-light radii \(r_{1/2}>2.5^{\prime\prime}\). Following this definition, we identified 4,083 new LSBGs from the DES DR1, increasing the number of identified LSBGs in DES by 17%. Our sample selection and LSBG identification pipeline consist of the following steps: 1. We preselected the objects from the DES Y3 Gold catalogue based on the selection criteria described in Tanoglidis et al. (2021), using the SourceExtractor parameters. 2. We applied the ensemble of transformer models to this sample of preselected objects.
We chose the objects identified independently by both the LSBG DETR ensemble and the LSBG ViT ensemble for a further follow-up, to be inspected for being an LSBG. 3. We performed a Sérsic fitting using Galfit and re-applied the selection cuts to further reduce the number of false positives. After this step, 4,879 LSBG candidates remained to be visually inspected. 4. After the visual inspection, we report the presence of 4,083 new LSBGs identified by the transformer ensemble models. Following Tanoglidis et al. (2021), we divided the total LSBG sample into two subsamples according to their \(g-i\) color. Among the 4,083 new LSBGs presented here, 72% were identified as blue LSBGs, which is higher than the 67% observed in the sample presented by Tanoglidis et al. (2021). Additionally, we found a larger fraction of red LSBGs with color \(g-i>0.8\) compared to the sample of LSBGs presented by Tanoglidis et al. (2021). We speculate that this bias might have originated from the training set used by Tanoglidis et al. (2021) to train the SVM model that preselected their LSBG candidate sample. By combining the previously identified 23,790 LSBGs from Tanoglidis et al. (2021) with the LSBGs newly identified in our work, the total number of known LSBGs in DES increases to 27,873. This raises the number density of LSBGs in DES from 4.13 to 4.91 deg\({}^{-2}\) for LSBGs with \(\bar{\mu}_{eff}>24.3\) mag arcsec\({}^{-2}\), and from 4.75 to 5.57 deg\({}^{-2}\) for LSBGs with \(\bar{\mu}_{eff}>24.2\) mag arcsec\({}^{-2}\). It should be stressed that this is a lower limit on the number density, which will increase in the future with the better imaging quality and improved methodology of surveys like LSST and Euclid. We also analysed the clustering of LSBGs in DES. We found that the LSBGs tend to cluster more strongly than the HSBGs from DES, similar to the findings of Tanoglidis et al. (2021). Upon further examination, we observed that the strong clustering tendency among the LSBGs primarily stems from the red LSBGs, while the behaviour of the blue LSBGs resembles that of the blue HSBGs, with weaker clustering tendencies. Additionally, we noted a decrease in the number of red LSBGs near the centres of galaxy clusters, resulting in a flattening of the auto-correlation function on smaller scales, similar to the conclusions of Wittmann et al. (2017). Furthermore, we crossmatched the LSBGs with the X-ray-selected galaxy cluster catalogue from the ROSAT All-Sky Survey (RXGCC; Xu et al., 2022) to find LSBGs associated with the clusters. Using the redshift information of the clusters, we identify 317 UDGs, among which 276 are reported for the first time. We also observed a color gradient among the cluster LSBGs, where LSBGs located towards the outskirts of clusters exhibit a bluer color compared to those at the centre, similar to the findings of Junais et al. (2022) for the Virgo cluster LSBGs. However, this trend is relatively weak for the cluster UDGs in our study, unlike for the LSBGs. A clear trend in the half-light radius of the cluster LSBGs and UDGs as a function of the cluster-centric distance is also visible.

Figure 23: Joint distribution of the red (red dots) and blue (blue crosses) UDGs in the space of \(r_{1/2}\) and \(\mu_{0}\) in the \(g\)-band. The vertical lines in the histograms on the x-axis and y-axis show the median for each class.
The LSBGs and UDGs grow in size going from the cluster centre to the outskirts. These coherent trends in color and size are in agreement with proposed UDG formation mechanisms such as galaxy harassment (Conselice, 2018), tidal interactions (Mancera Piña et al., 2019), and ram-pressure stripping (Conselice et al., 2003b; Buyle et al., 2005), giving more support to the argument that UDGs are a subset of dwarf galaxies (Conselice, 2018; Benavides et al., 2023). The upcoming large-scale surveys such as LSST and Euclid are expected to cover around 18,000 and 14,500 deg\({}^{2}\) of the sky, respectively (Ivezić et al., 2019; Euclid Collaboration et al., 2022). Extrapolating our results on the number density of LSBGs, we expect to find more than 100,000 and 80,000 LSBGs with LSST and Euclid, respectively. In this scenario, an improved and efficient methodology will be essential, and we propose that transformer models can meet this challenge. With the aid of transfer learning, we plan to extend our study to HSC SSP DR3 and thus pave a pathway for LSBG detection in LSST and Euclid.

###### Acknowledgements.

J and KM are grateful for support from the Polish National Science Centre via grant UMO-2018/30/E/ST9/00082. US acknowledges support from the National Research Foundation of South Africa (grant no. 137975). This research was partially supported by the Polish National Science Centre grant UMO-2018/30/M/ST9/00757 and the Polish Ministry of Science and Higher Education grant DIR/WK/2018/12.
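A quick numerical sanity check of the number densities and survey extrapolations quoted in the conclusions above (a sketch; the \(\sim\)5,000 deg\({}^{2}\) effective DES footprint is an assumed round value, not a number taken from this work):

```python
# Back-of-the-envelope check of the LSBG surface density and the LSST/Euclid
# extrapolations; the DES footprint value is an assumption.
n_lsbg_total = 23_790 + 4_083          # Tanoglidis et al. (2021) + this work
des_area_deg2 = 5_000.0                # assumed effective DES DR1 footprint

density = n_lsbg_total / des_area_deg2
print(f"LSBG surface density: {density:.2f} per deg^2")   # ~5.6 deg^-2

for survey, area in [("LSST", 18_000.0), ("Euclid", 14_500.0)]:
    print(f"{survey}: expect > {density * area:,.0f} LSBGs")
```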
2303.04107
Quadrupole instability of static scalarized black holes
The addition of a Ricci coupling to Einstein-scalar-Gauss-Bonnet theories makes general relativity a cosmological attractor. Previous work considered a quadratic coupling function with two independent coupling constants in such theories and showed that static, spherically symmetric, spontaneously scalarized black holes are radially stable beyond a critical value of the Ricci coupling constant. Here we demonstrate that these black holes are affected by a quadrupole instability which leads to two new branches of static, axially symmetric scalarized black holes. We discuss the properties of these solutions and provide embedding diagrams.
Burkhard Kleihaus, Jutta Kunz, Tim Utermöhlen, Emanuele Berti
2023-03-07T18:17:24Z
http://arxiv.org/abs/2303.04107v1
# Quadrupole instability of static scalarized black holes ###### Abstract The addition of a Ricci coupling to Einstein-scalar-Gauss-Bonnet theories makes general relativity a cosmological attractor. Previous work considered a quadratic coupling function with two independent coupling constants in such theories and showed that static, spherically symmetric, spontaneously scalarized black holes are radially stable beyond a critical value of the Ricci coupling constant. Here we demonstrate that these black holes are affected by a quadrupole instability which leads to two new branches of static, axially symmetric scalarized black holes. We discuss the properties of these solutions and provide embedding diagrams. pacs: 04.50.-h, 04.70.Bw, 97.60.Jd _Introduction._ In general relativity (GR), the no-hair theorems highly restrict the allowed black hole (BH) solutions and their properties [1; 2]. Real scalar fields, for instance, cannot lead to scalar hair. The situation is different in generalized theories of gravity intended to amend various shortcomings of GR (see, e.g., [3; 4]). A particularly attractive and well-studied class of theories contains a Gauss-Bonnet (GB) quadratic term coupled to a real scalar field. Such Einstein-scalar-Gauss-Bonnet (EsGB) theories lead to second order equations of motion and do not feature ghosts. In the low-energy limit of string theory, the scalar field corresponds to a dilaton [5; 6]. In these Einstein-dilaton-Gauss-Bonnet theories BHs always carry scalar hair [7; 8], and the vacuum BHs of GR are no longer solutions. However, EsGB theories allowing for different (non-dilatonic) coupling functions \(f(\phi)\) of the scalar field to the GB term admit GR BHs as special solutions of the field equations. For example, theories with coupling functions quadratic in \(\phi\) feature BH solutions with scalar hair that reduce to GR solutions for small couplings. These "spontaneously scalarized" BH solutions arise from a tachyonic instability of the GR BHs, due either to strong curvatures or to large spins [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Static, spherically symmetric, spontaneously scalarized BHs "branch off" the Schwarzschild BH solutions below a critical value of the mass (for fixed coupling constant), where the tachyonic instability produces a zero mode of the Schwarzschild BHs. For masses lower than this critical value, the Schwarzschild BHs possess an unstable radial mode [19; 20; 21] and a new branch of scalarized BHs (the "fundamental branch") emerges. The stability of scalarized BHs under radial perturbations depends on the specific choice of the coupling function and of the scalar field potential [19; 20; 21]. As shown in Refs. [19; 22; 23], linear mode stability applies to (most of) the fundamental scalarized branch of the extended scalar-tensor GB theories considered in Ref. [9]. In contrast, for a purely quadratic coupling function, \(f(\phi)=\phi^{2}/2\), the fundamental scalarized branch is radially unstable everywhere [19]. The fundamental branch can be made partially (radially) stable by including higher order terms or a potential. Then the radial instability sets in at a minimum of the scalarized BH mass [20; 21]. Recently, a partially radially stable fundamental branch of scalarized BHs was found in another interesting scenario [24; 25; 26]: Einstein-scalar-Gauss-Bonnet-Ricci (EsGBR) theories, which include also a term coupling the scalar field with the Ricci scalar via a quadratic coupling function (but with a different coupling constant). 
EsGBR theories are well motivated from a cosmological point of view, since they allow GR to be a cosmological attractor [24]: no fine-tuning of the scalar field in the early Universe is needed in order to have a vanishing scalar field at late times. Here we revisit scalarized BH solutions for the EsGBR action [24; 25; 26] \[\mathcal{S}=\frac{1}{16\pi}\int\mathrm{d}^{4}x\sqrt{-g}\left[R-\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{\phi^{2}}{2}\left(\frac{\beta}{2}R-\alpha R_{\rm GB}^{2}\right)\right] \tag{1}\] with a real scalar field \(\phi\), coupling constants \(\alpha\) and \(\beta\), and GB invariant \(R_{\rm GB}^{2}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2}\). We show that scalarized BHs in EsGBR theories have an intriguing new feature: radially stable, static, spherically symmetric BHs on the fundamental scalarized branch develop a quadrupole instability below a critical value of the BH mass. At this critical value of the mass the spherically symmetric solutions possess a zero mode, from which two branches of static, but only _axially_ symmetric, BH solutions arise. Therefore, stability under radial perturbations may not necessarily imply linear mode stability. We present the domain of existence of the new branches of BH solutions and study their physical properties. _General framework._ Variation of the action (1) with respect to the metric yields the generalized Einstein equations \[E_{\mu\nu}=G_{\mu\nu}-\frac{1}{2}T_{\mu\nu}^{\rm(eff)}=0\,, \tag{2}\] where the effective stress-energy tensor \[T_{\mu\nu}^{\rm(eff)}=T_{\mu\nu}^{(\phi)}-2\alpha T_{\mu\nu}^{\rm(GB)}+\beta T_{\mu\nu}^{\rm(R)} \tag{3}\] has contributions from the scalar, GB and Ricci terms, respectively. Variation with respect to the scalar field yields the generalized Klein-Gordon equation \[\nabla^{2}\phi-\left(\frac{\beta}{2}R-\alpha R_{\rm GB}^{2}\right)\phi=0\,. \tag{4}\] The latter features the effective mass \[m_{\rm eff}^{2}=\frac{\beta}{2}R-\alpha R_{\rm GB}^{2}\,, \tag{5}\] which allows for spontaneous scalarization and for the attractive cosmological features of the model. We find solutions corresponding to static, axially symmetric spacetimes by imposing the ansatz [27; 28] \[ds^{2}=-b\,e^{F_{0}}dt^{2}+e^{F_{1}}\left(dr^{2}+r^{2}d\theta^{2}\right)+e^{F_{2}}r^{2}\sin^{2}\theta\,d\varphi^{2}\,, \tag{6}\] with "quasi-isotropic" radial coordinate \(r\), auxiliary function \(b=\left(1-\frac{r_{\rm H}}{r}\right)^{2}\) (here \(r_{\rm H}\) is the isotropic horizon radius), and three unknown metric functions \(F_{0}\), \(F_{1}\), \(F_{2}\). These metric functions \(F_{i}\) (\(i=0,\,1,\,2\)) and the scalar field \(\phi\) depend only on the coordinates \(r\) and \(\theta\). Based on considerations of symmetry, regularity and asymptotic flatness of the solutions, we impose the following set of boundary conditions at spatial infinity, at the horizon and on the symmetry axis, respectively: \(F_{i}(\infty)=0\) \((i=0,\,1,\,2)\), \(\phi(\infty)=0\); \(\partial_{r}F_{0}(r_{\rm H})=1/r_{\rm H}\), \(\partial_{r}F_{1}(r_{\rm H})=-2/r_{\rm H}\), \(\partial_{r}F_{2}(r_{\rm H})=-2/r_{\rm H}\), \(\partial_{r}\phi(r_{\rm H})=0\); and \(\partial_{\theta}F_{i}|_{\theta=0,\pi}=0\) \((i=0,\,1,\,2)\), \(\partial_{\theta}\phi|_{\theta=0,\pi}=0\). Assuming the scalar field to be even under parity, a series expansion at infinity leads to \(\phi=Q/r+\ldots\), where \(Q\) represents the scalar charge. The BH mass \(M\) can be found from the metric function \(g_{tt}=-1+2M/r+\ldots\).
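As a quick illustration of Eq. (5): on a fixed Schwarzschild background one has \(R=0\) and \(R_{\rm GB}^{2}=48M^{2}/r^{6}\) (in \(G=c=1\) units), so for \(\alpha>0\) the squared effective mass is negative near the horizon, which is the tachyonic instability that triggers spontaneous scalarization. The following sketch uses arbitrary illustrative parameter values:

```python
# Effective mass of Eq. (5) on a Schwarzschild background; parameter values
# are illustrative only.
import numpy as np

def m_eff_squared(r, M=1.0, alpha=1.0, beta=2.0):
    ricci = 0.0                       # Schwarzschild is Ricci flat
    gb = 48.0 * M**2 / r**6           # Gauss-Bonnet invariant
    return 0.5 * beta * ricci - alpha * gb

r = np.linspace(2.0, 10.0, 5)         # from the horizon r = 2M outwards
print(m_eff_squared(r))               # negative everywhere: tachyonic
```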
Since all functions are even, we can limit the calculations to one quadrant only and impose the boundary conditions \(\partial_{\theta}F_{i}|_{\theta=\pi/2}=0\) \((i=0,\,1,\,2)\), \(\partial_{\theta}\phi|_{\theta=\pi/2}=0\) on the equatorial plane. The horizon metric determines the Hawking temperature [29] \[T_{\rm H}=\frac{1}{2\pi r_{\rm H}}e^{(F_{0}-F_{1})/2} \tag{7}\] and the horizon area \[A_{\rm H}=2\pi r_{\rm H}^{2}\int_{0}^{\pi}d\theta\sin\theta\,e^{(F_{1}+F_{2})/2}\,. \tag{8}\] The entropy, however, differs from the GR result, where it is simply a quarter of the horizon area [29]. For EsGBR BHs there are additional contributions [30], and the entropy is found as the following integral over the spatial cross section of the horizon: \[S=\frac{1}{4}\int_{\Sigma_{\rm H}}d^{2}x\sqrt{h}\left[1-\frac{\phi^{2}}{2}\left(\frac{\beta}{2}-2\alpha\hat{R}\right)\right]\,, \tag{9}\] where \(h\) is the determinant of the induced metric on the horizon and \(\hat{R}\) is the corresponding scalar curvature. _Numerical Results._ The numerical approach we employed previously for EsGB BHs [14; 18] is based on the finite difference solver FIDISOL/CADSOL [31; 32]. For EsGBR BHs, we supplemented this solver by a spectral solver. In both cases the unknown functions \((F_{0},F_{1},F_{2},\phi)\) are obtained for given values of the coupling constants \(\alpha\) and \(\beta\) and of the horizon radius \(r_{\rm H}\) by solving the chosen set of partial differential equations (PDEs) subject to the prescribed boundary conditions. In particular, we employ the linear combinations of the Einstein equations \(E_{t}^{t}\), \(E_{\varphi}^{\varphi}\) and \(E_{r}^{r}+E_{\theta}^{\theta}\) for the PDEs yielding the metric functions, and we treat the equations involving \(E_{r}^{r}-E_{\theta}^{\theta}\) and \(E_{r}^{\theta}\) as constraints. Introducing a compactified radial variable \(x=1-r_{\rm H}/r\), we map the interval \([r_{\rm H},\infty)\) to the finite interval \([0,1]\). We then discretize the equations on a nonequidistant grid in the variables \(x\) and \(\theta\), whose range is \(0\leq x\leq 1\) and \(0\leq\theta\leq\pi/2\) (a toy sketch of this compactification is given below). The resulting estimated numerical error is of order \(10^{-3}\) or less. We begin our discussion by recalling the fundamental scalarized, static, spherically symmetric BH branches [25; 26]. When \(\beta\) exceeds a critical value of about \(1.15\), radially stable BHs arise from the bifurcation point with the Schwarzschild BHs (see, e.g., Fig. 2 of Ref. [26]). Analogous to EsGB BHs [20; 21], for smaller values of \(\beta\) these branches feature a minimum of the mass, where the radial instability sets in. For larger values of \(\beta\) the branches are radially stable, and they terminate with a solution for which a certain radicand in the horizon expansion of the scalar field vanishes [7]. In Fig. 1 we show the scaled scalar charge \(Q/\sqrt{\alpha}\) versus the scaled mass \(M/\sqrt{\alpha}\) for two values of the coupling, \(\beta=2\) and \(\beta=5\). These results agree with those of Ref. [26]. However, contrary to what was previously assumed, radial stability along these branches does not imply general mode stability. At the locations marked by black dots in Fig. 1, the scalarized BHs gain a zero mode with respect to quadrupole deformations, and we find two branches of axially symmetric BH solutions (red dashed and red dotted lines in the insets).

Figure 1: Branches of scalarized BHs for \(\beta=2\) and \(5\): scaled scalar charge \(Q/\sqrt{\alpha}\) vs scaled mass \(M/\sqrt{\alpha}\) for the fundamental spherically symmetric branches (dotted blue) and the axially symmetric branches (red) in the insets. The critical bifurcation points \(P_{2}\) are shown by black dots.
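The radial compactification \(x=1-r_{\rm H}/r\) used above, together with the Jacobian needed to convert radial derivatives, is easy to set up. The toy sketch below uses an illustrative quadratic stretching for the nonequidistant grid, not the actual FIDISOL/CADSOL grid:

```python
# Toy sketch of the compactified radial grid x = 1 - r_H / r; grid sizes and
# stretching are illustrative choices.
import numpy as np

r_H = 1.0
n_x, n_theta = 100, 40

# Nonequidistant grid on [0, 1]: points accumulate near the horizon (x = 0),
# where the functions vary most strongly.
s = np.linspace(0.0, 1.0, n_x)
x = s**2
theta = np.linspace(0.0, np.pi / 2.0, n_theta)

r = r_H / (1.0 - x[:-1])              # x = 1 maps to spatial infinity
dx_dr = (1.0 - x[:-1])**2 / r_H       # d/dr = (dx/dr) d/dx

print(r[:5], dx_dr[:5])
```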
Let us denote the bifurcation points of the axial branches by \(P_{2}\). The blue line in Fig. 2 shows these bifurcation points, starting from the critical value \(\beta_{\rm cr}=1.3458\) where they first appear. We note that for \(\beta_{\rm cr}\leq\beta\lesssim 1.52\) the onset of the instability occurs on the radially unstable part of the corresponding spherically symmetric branch, whereas for \(\beta\gtrsim 1.52\) it occurs on the radially stable part. We also show the endpoints of the two axial branches. These are denoted by "dis = 0", since in the horizon expansion for these solutions a discriminant vanishes, and the existence of real BH solutions requires a positive sign [18]. Therefore the region between the blue line and the two black lines represents the domain of existence of the two axial branches for \(M/\sqrt{\alpha}>0.95\). The inset in Fig. 2 is a zoom-in of the axial branches for \(\beta=2\). Starting at the bifurcation with the spherical branch, the upper branch (oblate, red dashed line) curiously features a small portion where the BH mass decreases, but then it bends towards larger BH masses. In contrast, the lower branch (prolate, red dotted line) immediately moves in the direction of larger BH masses. This curious feature is lost for larger values of \(\beta\). At the endpoints of the axial branches, their deformation becomes maximal. The deformation of the two axial branches is illustrated in Fig. 3, where we show embeddings of the BH horizons at the endpoints of the two axial branches for \(\beta=5\) and compare them with the spherical horizon. Clearly, along the lower (dotted) axial branch the deformation becomes prolate, while along the upper (dashed) axial branch it becomes oblate. In Fig. 4 we turn to thermodynamic properties, plotting the scaled entropy \(S/(4\pi\alpha)\) (left panel) and the scaled Hawking temperature \(16\pi T_{H}\sqrt{\alpha}\) (right panel) versus the scaled mass \(M/\sqrt{\alpha}\) for the spherical branches with \(\beta=2\) and \(\beta=5\). Let us focus first on the left panel. The entropy of the axial branches for \(\beta=5\) (shown in the insets) is very close to the entropy along the spherical solutions, but a high magnification (bottom right inset) uncovers small deviations: the prolate lower branch (dotted) has slightly higher entropy than the spherical branch, while the oblate upper branch (dashed) has slightly smaller entropy. Therefore the BHs on the prolate branch are, if only slightly, entropically favored. The closeness of the entropy of the spherical and axial branches is somewhat surprising, because the scaled horizon area \(A_{\rm H}/\alpha\) of the spherical branch is considerably larger than the area of the axial branches away from the bifurcation, for BHs with the same scaled mass \(M/\sqrt{\alpha}\). The right panel of Fig. 4 shows that the temperature \(T_{\rm H}\sqrt{\alpha}\) of the scalarized BHs is a decreasing function of \(M/\sqrt{\alpha}\), as in the case of Schwarzschild BHs.

Figure 2: Domain of existence of axially symmetric scalarized BHs: scaled scalar charge \(Q/\sqrt{\alpha}\) vs scaled mass \(M/\sqrt{\alpha}\) for the endpoints of the two branches (black, determined by the vanishing of the respective discriminant) together with the critical points of the fundamental spherically symmetric branch (blue, determined by the zero mode).
The black star indicates the onset of the quadrupole instability at \(\beta=1.3458\). The numbers correspond to selected values of \(\beta\). The inset shows a zoom of the spherically and axially symmetric branches for \(\beta=2\).

Figure 3: Embedding of the horizon of scalarized BHs for \(\beta=5\): critical spherical BH (\(\alpha=13.79\)), and the endpoints of the upper (\(\alpha=14.05\)) and lower (\(\alpha=13.45\)) axial branches, for fixed circumferential radius of the horizon.

Both axial branches possess larger \(T_{\rm H}\sqrt{\alpha}\) than the spherical branch. The temperature of the axially symmetric BHs is larger than the temperature of spherically symmetric BHs on the radially stable branch, while it is smaller than the temperature of spherically symmetric BHs on the radially unstable branch (not shown in the figure). For small enough values of \(\beta\) (including the value 2, shown in the top right inset), there is still a part of the branch of oblate BHs (dashed red line) where the temperature increases with mass. This feature seems to disappear for \(\beta\gtrsim 3\). _Conclusions._ We have investigated spontaneously scalarized, static BH solutions in EsGBR theories with quadratic coupling functions. These cosmologically motivated theories allow for branches of static, spherically symmetric BHs that are (in part) stable with respect to radial perturbations [24; 25; 26]. We have found that radial stability does not, in general, imply perturbative mode stability. In particular, we have demonstrated the occurrence of a quadrupole instability along the spherical branches for a wide range of coupling constants. Two distinct, axially deformed branches of BHs arise from a zero mode of the spherical BHs with respect to quadrupole deformations. The lower branch is prolate and entropically favored, while the upper branch is oblate and entropically disfavored relative to the spherical branch, although the difference in entropy is very small (the differences in horizon area and temperature are larger). These static axial branches of BHs represent new counterexamples to Israel's theorem [33]. The occurrence of a quadrupole instability might be attributed to the presence of the Ricci coupling. However, we cannot exclude the possibility that such an instability may also occur in simpler EsGB theories with higher-order terms in the coupling function or with a nonzero potential [20; 21]. This is an interesting topic for further study. We have also considered the possibility of a dipole instability of EsGBR BHs, because the instabilities of Schwarzschild BHs in EsGB theories set in first for the monopole (i.e., the radial instability) and then for the dipole, before the quadrupole instability occurs [14]. However, our numerical investigations did not reveal any dipole instability. A perturbative study of the mode stability of the new axial branches will be technically challenging, and their nonlinear evolution is particularly interesting. Before tackling these difficult questions, it should be possible to understand various other physical properties of these solutions, such as their shadow. The axial symmetry of the static spacetimes should lead to deformations of the shadow, but without any distinction between co- and counter-rotation. A comparison with observations could then set constraints on the coupling constants of the theory [34]. The static axially symmetric branches can be set into rotation, just like the spherically symmetric branches.
This implies interesting overlapping domains of existence for rotating BHs in EsGBR theories, which will be presented elsewhere [35]. _Acknowledgments._ B.K. and J.K. gratefully acknowledge support by the DFG Research Training Group 1620 _Models of Gravity_ and DFG project Ku612/18-1. E.B. is supported by NSF Grants No. AST-2006538, PHY-2207502, PHY-090003 and PHY-20043, and NASA Grants No. 19-ATP19-0051, 20-LPS20-0011 and 21-ATP21-0010. This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 690904. This research project was conducted using computational resources at the Maryland Advanced Research Computing Center (MARCC).

Figure 4: Scaled entropy \(S/(4\pi\alpha)\) (left panel) and scaled Hawking temperature \(16\pi T_{H}\sqrt{\alpha}\) (right panel) versus the scaled mass \(M/\sqrt{\alpha}\) for the spherically symmetric fundamental branches (dotted blue) of scalarized BHs with \(\beta=2\) and \(\beta=5\). Black dots mark the critical bifurcation points. The prolate and oblate axially symmetric branches (red dotted and red dashed lines, respectively) are shown in the insets.
2302.02887
UVDoc: Neural Grid-based Document Unwarping
Restoring the original, flat appearance of a printed document from casual photographs of bent and wrinkled pages is a common everyday problem. In this paper we propose a novel method for grid-based single-image document unwarping. Our method performs geometric distortion correction via a fully convolutional deep neural network that learns to predict the 3D grid mesh of the document and the corresponding 2D unwarping grid in a dual-task fashion, implicitly encoding the coupling between the shape of a 3D piece of paper and its 2D image. In order to allow unwarping models to train on data that is more realistic in appearance than the commonly used synthetic Doc3D dataset, we create and publish our own dataset, called UVDoc, which combines pseudo-photorealistic document images with physically accurate 3D shape and unwarping function annotations. Our dataset is labeled with all the information necessary to train our unwarping network, without having to engineer separate loss functions that can deal with the lack of ground-truth typically found in document in the wild datasets. We perform an in-depth evaluation that demonstrates that with the inclusion of our novel pseudo-photorealistic dataset, our relatively small network architecture achieves state-of-the-art results on the DocUNet benchmark. We show that the pseudo-photorealistic nature of our UVDoc dataset allows for new and better evaluation methods, such as lighting-corrected MS-SSIM. We provide a novel benchmark dataset that facilitates such evaluations, and propose a metric that quantifies line straightness after unwarping. Our code, results and UVDoc dataset are available at https://github.com/tanguymagne/UVDoc.
Floor Verhoeven, Tanguy Magne, Olga Sorkine-Hornung
2023-02-06T15:53:34Z
http://arxiv.org/abs/2302.02887v2
# Neural Document Unwarping using Coupled Grids ###### Abstract. Restoring the original, flat appearance of a printed document from casual photographs of bent and wrinkled pages is a common everyday problem. In this paper we propose a novel method for grid-based single-image document unwarping. Our method performs geometric distortion correction via a deep fully convolutional neural network that learns to predict the 3D grid mesh of the document and the corresponding 2D unwarping grid in a multi-task fashion, implicitly encoding the coupling between the shape of a 3D piece of paper and its 2D image. We additionally create and publish our own dataset, called UVDoc, which combines pseudo-photorealistic document images with ground-truth grid-based physical 3D and unwarping information, allowing unwarping models to train on data that is more realistic in appearance than the commonly used synthetic Doc3D dataset, whilst also being more physically accurate. Our dataset is labeled with all the information necessary to train our unwarping network, without having to engineer separate loss functions that can deal with the lack of ground truth typically found in _document in the wild_ datasets. We include a thorough evaluation that demonstrates that our dual-task unwarping network trained on a mix of synthetic and pseudo-photorealistic images achieves state-of-the-art performance on the DocUNet benchmark dataset. Our code, results and UVDoc dataset will be made publicly available upon publication. Geometric optimization is generally slow and has limited approximation capabilities, making it unsuitable for practical applications. Data-driven approaches instead rely on training a neural network to perform the unwarping. These methods are fast at runtime but typically require a large amount of high-quality training data, which can be difficult to obtain. The available training data can roughly be categorized as either synthetic or _in the wild_ document images. The former group is generated by rendering images using dense 3D scans of real document geometries, whilst the latter simply consists of photographs of actual deformed documents. The synthetic category has the problem that dense 3D capture is often noisy and rendering photorealistic paper can be challenging, making the appearance of the generated data samples unrealistic. The challenge with the latter category is that ground-truth data, most notably the ground-truth unwarping function, is difficult to obtain. We use a neural approach, as it is more suited for everyday real-time applications and only requires a single RGB image as runtime input, whereas state-of-the-art model-driven approaches typically require multi-view images or additional 3D input data. We circumvent the training data problem by devising a custom data capture pipeline that allows the efficient generation of pseudo-photorealistic images with registered ground-truth 3D and unwarping information. Our main contribution is CGU-Net (Coupled Grids Unwarping Net), a deep convolutional network that performs document image unwarping from a single RGB image. It uses a dual-head approach to predict both a 3D grid mesh representing the 3D shape of the document, as well as a 2D unwarping grid representing a coarse backward map that can be used to perform the document unwarping.
This dual-task approach encodes an implicit coupling between the 2D and the 3D grid, just like there is a physical coupling between the 3D document shape and its 2D image. Because we learn a coarse 2D unwarping grid instead of a dense unwarping flow, our network size is greatly reduced compared to state-of-the-art methods. To decrease the domain gap between the synthetic Doc3D dataset (Ma et al., 2018), usually used to train models for the task of document unwarping, and real document photographs, we present our own dataset, which contains 10,000 pseudo-photorealistic images of documents, equipped with all the information required to train a coarse grid-based document unwarping network. We achieve a photorealistic appearance by using image compositing instead of rendering, thereby retaining the shading and material appearance from the original image capture. Because our dataset is tailored to a coarse grid-based approach, it is easy to produce even though it includes numerous ground-truth annotations. We combine training on the large Doc3D synthetic dataset with our own custom data and obtain state-of-the-art performance on the DocUNet benchmark for most evaluation criteria. ## 2. Related Work Document image unwarping is a widely studied topic, with contributions coming both from the field of geometry processing and from computer vision. We here divide previous work into two categories, namely those using a model-based approach and those using a data-driven approach. ### Model-based document unwarping Early works take a geometry-processing approach and try to unwarp document images by first creating a 3D reconstruction of the document surface, which is then flattened to the plane via an optimization problem. These related works commonly obtain an estimate of the 3D document surface with the help of some piece of auxiliary equipment. These setups include structured light (Brown and Seales, 2001, 2004), two structured laser beams (Meng et al., 2014) and laser range scanners (Zhang et al., 2008). Other model-based methods also rely on 3D reconstruction as a first step in the document unwarping process, but they use multi-view images instead of additional hardware to estimate the 3D shape of the document surface (Koo et al., 2009; Tsoi and Brown, 2007; Ulges et al., 2004; Yamashita et al., 2004; You et al., 2018). Once a 3D reconstruction of the document surface is in place, different methods are used to flatten it to the plane. Brown and Seales (2001, 2004) and Zhang et al. (2008) flatten the document surface using a simulation of a stiff mass-spring system falling down to a plane under gravity. Another common technique is to fit a (piecewise-)smooth parametric surface to the estimated 3D document surface and flatten it according to some parameterization. This approach can be realized with generalized cylinders (Kil et al., 2017; Kim et al., 2015; Koo et al., 2009; Meng et al., 2018; Zhang et al., 2004), generalized ruled surfaces (Meng et al., 2014; Tsoi and Brown, 2007), smooth developable surfaces (Liang et al., 2005, 2008) and NURBS-like surfaces (Yamashita et al., 2004; Zhang and Tan, 2005). Oftentimes these parametric approaches rely heavily on the texture flow of the text lines in the document to estimate parametric line directions, making them less suitable for documents that contain only sparse text. Additionally, due to their smooth nature, they cannot capture and accurately correct fine details like crumpling and folding.
Finally, their reliance on optimization makes them slow and unsuitable for real-time applications, and their dependence on auxiliary equipment makes their use in real-world scenarios inconvenient and costly. ### Data-driven document unwarping The second group of document unwarping methods works directly on a single RGB image of the document, employing deep learning approaches to infer 2D displacement fields or coarse grids that are used to transform the distorted image into the unwarped one. Ma et al. (2018) proposed such a network that predicts the forward mapping from a distorted document image to its unwarped version through two chained U-Nets (the first estimates an initial guess, and the second refines it). Their method is fast in comparison to the geometrical approaches described in Sec. 2.1, but does not perform well on real-world data, since it is trained on a synthetic dataset generated using non-physically-plausible 2D deformations. DewarpNet (Das et al., 2019) used a more realistic synthetic dataset that contains renderings of 3D-scanned, physically deformed documents. They employ two chained networks, the first one inferring 3D coordinates from an image of the warped document, and the second inferring the backward mapping from the predicted 3D world coordinates. Both Li et al. (2019) and Das et al. (2021) have used a patch-based approach, trying to predict the displacement field independently on different parts of the image. Since the distortions in each individual patch of the image are less complex than the distortion of the document as a whole, these methods handle local distortions better, at the cost of having to stitch the different patches together properly. Rather than predicting the complete deformation field in one go, Marior (Zhang et al., 2022) and DocScanner (Feng et al., 2021) use an iterative refinement approach, where they progressively refine the predicted warping flow field. They additionally predict a foreground-segmentation mask before starting this iterative rectification process, removing the burden of localizing the document boundaries in the image from the unwarping network. DocGeoNet (Feng et al., 2022) also removes the background before feeding the image into the network. This work trains one network to predict the 3D shape of the document and one to predict the locations of textlines inside the image. A concatenation of these two features, which encodes the link between the warped and unwarped image, is fed into a network that predicts the displacement map. Textline and document information detected by a network is also used by Jiang et al. (2022), as explicit constraints of an optimization problem that computes the forward map. Recently, Xie et al. (2021) and Xue et al. (2022) followed the approach of predicting a coarse grid of control points rather than a dense deformation field, which they combine with thin-plate spline interpolation to unwarp document images. Xue et al. (2022) additionally use a Fourier filtering step to enable a direct comparison of the unwarped image with the ground-truth flatbed scan of the document whilst removing the effect of differing lighting conditions, allowing their model to be trained on real data without backward-mapping ground truth. PaperEdge (Ma et al., 2022) uses a network architecture that can learn both from synthetic data (the Doc3D dataset) and from _in the wild_ captured images.
They predict an initial coarse unwarping using the edges of the document, which then gets refined in an image-to-image style network that is trained using a siamese loss on pairs of slightly perturbed images. A variety of network architectures has also been explored, with varying performance and efficiency. Liu et al. (2020) use a pyramid encoder-decoder network to predict the unwarping field at multiple resolutions. Xie et al. (2020) use a fully convolutional neural network to learn the displacement field and perform background segmentation in a post-processing step. DocTr (Feng et al., 2021) uses transformer architectures, aiming to capture the global nature of the unwarping task. All of these data-driven methods use the Doc3D dataset proposed by Das et al. (2019) for training. Even though the 3D shapes in this dataset are more realistic than those in (Ma et al., 2018), since they are based on depth captures of actually deformed papers, they are heavily smoothed compared to the original document shapes. Also, the rendered appearance is not very realistic, which causes performance degradation when using the network on real photos. In contrast, our dataset, made of real captured sheets of paper, is more realistic both visually and geometrically. Additionally, our dual-task network architecture pushes the model to predict physically plausible shapes and unwarping grids. Our CGU-Net processes input images in a single stage without any segmentation pre-processing and predicts a coarse backward mapping rather than a dense displacement field, making it very efficient. ## 3. Method In order to be able to completely unwarp the document, we assume that the input photograph is taken from a camera position from which the document's 3D shape can be represented as a height field, i.e., the entire document is visible and there are no occlusions or foldovers. We use a multi-head network to predict a \(45\times 31\) 2D unwarping grid \(G\) containing pixel coordinates, and a \(45\times 31\) grid mesh of 3D shape coordinates \(W\), from a warped 488-by-712 input image \(I_{w}\). We decided on this grid aspect ratio to approximate the aspect ratio of A4 paper in portrait mode, the most common paper type that documents are printed on. Compared to the commonly used square grid resolution, this allows for higher approximation power for the horizontal lines of text, which are perceived visually more strongly than the spacing of individual characters within a line. As illustrated in Fig. 2, the 2D unwarping grid \(G\) encodes the deformation that leads to the unwarped document: grid point \(G_{i,j}\) holds the pixel coordinates (relative to the image, in the range \([-1,1]\)) of the pixel that will be placed at position \((i,j)\) in the unwarped image (up to constant scaling). The grid \(G\) can also be seen as a coarse backward mapping. The grid \(G\) is bilinearly interpolated before applying the unwarping, to achieve a full-resolution unwarped image. ### Network architecture CGU-Net is a multi-head, fully convolutional encoder architecture inspired by the encoder part of the architecture used in (Xie et al., 2020). The input image goes through two convolutional downsampling layers that each use a \(5\times 5\) kernel and reduce the image size by a factor of two. This is followed by three dilated residual blocks, which lead to a spatial pyramid with stacked dilated convolutions. At the end, two heads with two convolutional layers each predict \(G\) and \(W\), respectively. Fig. 6 gives a detailed graphical overview of our architecture.
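The unwarping step just described — bilinearly upsampling the coarse backward map \(G\) and sampling the warped image with it — can be sketched in a few lines of PyTorch. The tensor layout and `grid_sample` conventions below are our own illustrative assumptions, not the authors' released code:

```python
# Sketch of grid-based unwarping: upsample the coarse 45x31 backward map G
# (values in [-1, 1]) and sample the warped input image with it.
import torch
import torch.nn.functional as F

def unwarp(image, grid2d, out_h=712, out_w=488):
    """image: (B, 3, H, W); grid2d: (B, 2, 45, 31) coarse backward map."""
    # Bilinearly upsample the coarse grid to the full output resolution.
    dense = F.interpolate(grid2d, size=(out_h, out_w),
                          mode="bilinear", align_corners=True)
    # grid_sample expects (B, H_out, W_out, 2) with (x, y) in [-1, 1].
    dense = dense.permute(0, 2, 3, 1)
    return F.grid_sample(image, dense, mode="bilinear", align_corners=True)

warped = torch.rand(1, 3, 712, 488)           # 488-wide, 712-tall input
G = torch.rand(1, 2, 45, 31) * 2 - 1          # stand-in network prediction
flat = unwarp(warped, G)                      # (1, 3, 712, 488)
```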
Figure 2. Our unwarping pipeline. We start with an RGB image of a warped document and feed it into our encoder-style network. The network predicts both a 3D grid mesh (top branch) and a 2D unwarping grid (bottom branch) in parallel. The 2D unwarping grid is then bilinearly interpolated to the desired output image resolution and is used to sample pixels from the input image to obtain the final unwarped document image.

### Training loss

We now denote ground-truth variables by their regular symbols (e.g., \(G\)) and their predicted counterparts with a hat (e.g., \(\hat{G}\)). Our training loss is a combination of mean-squared error (MSE) losses on both the 2D unwarping grid \(G\) and the 3D grid mesh \(W\): \[\mathcal{L}=\mathcal{L}_{G}+\mathcal{L}_{W}=\|G-\hat{G}\|_{2}^{2}+\|W-\hat{W}\|_{2}^{2}. \tag{1}\] We obtain the ground-truth \(G\) and \(W\) for the Doc3D dataset by sampling the ground-truth backward maps at a regular grid of \(45\times 31\) points covering the entire backward map. For our UVDoc dataset (see Sec. 4) we subsample the available high-resolution ground truths by a factor of 2.

### Training details

We use the ADAM optimizer (Kingma and Ba, 2015) with a batch size of 8. The initial learning rate is set to 0.001 for 10 epochs and linearly decays to 0 over 5 further epochs. We alternate optimization steps on a batch of Doc3D data with optimization steps on a batch of our UVDoc data, using the same loss function on both (a minimal sketch of this training step is given below). We visually augment both the Doc3D data and our own data with noise, color changes and other appearance transformations. Additionally, we augment our data with rotations, since our images are captured from a more uniform angle than the Doc3D data. All images are tightly cropped before being fed to the network.

## 4. The UVDoc Dataset

We create our own dataset, UVDoc, containing 10,000 pseudo-photorealistic images of warped documents. Our motivation is to obtain a dataset of photorealistic document images that has more ground-truth information available than _document in the wild_ images, and a more realistic appearance than synthetically generated renderings. This allows us to have a stronger supervision signal than what is available for general _document in the wild_ data, whilst also benefiting from a more realistic appearance. We compare the main characteristics of our dataset against other available datasets in Table 1.

_Capture._ We print regular grids of dots, with a grid size of \(89\times 61\), on A4-sized pieces of paper, using an inkjet printer with UV ink that is invisible to an RGB camera in regular light but becomes visible under UV light in an otherwise dark room. Note that on the paper boundary we deviate slightly from a perfectly regular grid by offsetting the border dots a little, so that they fully fit on the paper and can be detected more easily. Importantly, we use recycled paper, as it has fewer brightening agents that react with the UV light, giving a higher-contrast grid pattern, which is easier to detect in the UV-lit image. We fold and bend the pieces of paper in various ways to emulate common deformations. We then capture pairs of RGB-D images of the deformed papers using the Intel RealSense SR305 depth camera: one image in regular lighting and one in UV lighting (Fig. 2(b)). We used two commercially available 30 W, 395 nm UV lamps to reduce the amount of shadows in the UV-lit image. We control the camera and the lights using a laptop and remote switches, so that there is no movement between the two captured frames and the depth and pixel information are aligned. We capture various types of deformed paper, such as curved, folded, or crumpled, and we also vary the lighting conditions. The dataset contains a total of 388 distinct geometries, which we augment to 1,552 geometries by applying horizontal and/or vertical flips to each sample.
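Returning briefly to the training procedure of Secs. 3.2 and 3.3, a minimal PyTorch sketch of the dual-task objective of Eq. (1) with alternating Doc3D/UVDoc batches is given below; `model` and the data loaders are hypothetical placeholders:

```python
# Sketch of one optimization step with the dual MSE loss of Eq. (1);
# `model` returns (2D unwarping grid, 3D grid mesh) predictions.
import torch

mse = torch.nn.MSELoss()

def training_step(model, optimizer, batch):
    image, grid2d_gt, grid3d_gt = batch
    grid2d_pred, grid3d_pred = model(image)
    loss = mse(grid2d_pred, grid2d_gt) + mse(grid3d_pred, grid3d_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Alternate batches from the two datasets (loaders are placeholders):
# for doc3d_batch, uvdoc_batch in zip(doc3d_loader, uvdoc_loader):
#     training_step(model, optimizer, doc3d_batch)
#     training_step(model, optimizer, uvdoc_batch)
```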
_Recovering the grid._ Using the UV-lit image, where the printed grid is visible, we obtain the pixel coordinates of the grid points on the deformed piece of paper. To detect them we use OpenCV's SimpleBlobDetector, coupled with manual annotation for extreme cases where the automatic detection fails. Once all points have been detected, we compute their correspondences to the vertices of a regular grid, which is equivalent to ordering them as an \(89\times 61\) grid. The technical details of solving this ordering problem are described in the appendix. We call the ordered grid the 2D unwarping grid. Combining the coordinates of the 2D unwarping grid with the depth values at these same pixel coordinates and the intrinsics of the camera, we construct a 3D grid mesh corresponding to the 3D shape of the piece of paper.

\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & \# Samples & Type & BM \\ \hline Doc3D (Das et al., 2019) & 100,000 & Synthetic & ✓ \\ DIW (Ma et al., 2022) & 5,000 & Real & ✗ \\ WarpDoc (Xue et al., 2022) & 1,020 & Real & ✗ \\ Ours & 10,000 & Pseudo-real & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison between the different document unwarping datasets. The last column indicates whether the ground-truth backward mapping (BM) between the distorted and the unwarped document is available.

Figure 3. An overview of our data capture setup and sample data acquired in the process.

_Pseudo-photorealistic image generation._ Since we have a known mapping between the 2D unwarping grid and the original regular 2D grid, we can construct a coarse \(uv\)-parameterization of the 3D grid mesh. We use bilinear interpolation of the \(uv\)-parameterization when applying a texture to the geometry, and of the 2D unwarping grid when performing the unwarping, to obtain a full-resolution, dense backward mapping. The \(uv\)-parameterization is used to apply a document texture on top of the image of the blank warped document. As illustrated in Fig. 4, we blend the document texture with the lighting-baked blank document image by multiplying the two images. This gives a pseudo-photorealistic combination of the lighting and the texture. We also replace the background in the image with a background sampled from the _Describable Textures_ dataset (Cimpoi et al., 2014). Using this approach, we create a dataset of 10,000 images in total. We provide the original lighting-baked blank document image along with the \(uv\)-parameterization, so users of the dataset can easily replace the document and background textures if desired. At the end of our data capture pipeline we are equipped with a ground-truth 2D unwarping grid, a \(uv\)-parameterization and a 3D grid mesh for each sample in our dataset. The full UVDoc dataset will be made publicly available upon publication.
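To make the grid-recovery step above concrete, the following sketch unprojects the detected 2D grid points, the aligned depth values and the camera intrinsics into the 3D grid mesh under a standard pinhole model; the variable names are illustrative assumptions, not the authors' exact code:

```python
# Sketch of unprojecting detected grid points + aligned depth into 3D,
# assuming a standard pinhole camera model.
import numpy as np

def unproject_grid(px, depth, fx, fy, cx, cy):
    """px: (89*61, 2) grid points in pixels; depth: (H, W) map in meters.

    Returns the (89*61, 3) 3D grid mesh in camera coordinates.
    """
    u, v = px[:, 0], px[:, 1]
    z = depth[v.astype(int), u.astype(int)]   # depth aligned with the RGB frame
    x = (u - cx) / fx * z                     # X = (u - cx) * Z / fx
    y = (v - cy) / fy * z                     # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1)
```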
## 5. Experiments

### Evaluation metrics

We evaluate our method based on image similarity metrics as well as optical character recognition (OCR) performance. Following (Ma et al., 2018) and (Das et al., 2019), we use the multi-scale structural similarity (MS-SSIM) and the local distortion (LD) as metrics for the image similarity evaluation. We also evaluate our method using the aligned distortion (AD) metric, which corrects some of the flaws of the previous metrics. We evaluate OCR performance using the character error rate (CER) and the edit distance (ED). The structural similarity measure (SSIM) (Wang et al., 2004) quantifies the visual similarity between two images by measuring the similarity of the mean pixel values and the variance within image patches between the two images. The multi-scale variant (MS-SSIM) repeats this process at multiple scales using a Gaussian pyramid and computes a weighted average over the different scales as its final measure. We use the same weights as described in the original implementation (Wang et al., 2003). LD is computed using a dense SIFT flow mapping (Liu et al., 2008) from the ground-truth image to the rectified image. Using this registration, LD is computed as the mean \(L_{2}\) distance between mapped pixels (You et al., 2018), essentially measuring the average local deformation of the unwarped image. Aligned distortion (AD) is a more robust variant of the LD metric, introduced in (Ma et al., 2022). In contrast to LD, AD eliminates the error caused by a global translation and scaling of the image by factoring the optimal affine transformation out of the SIFT flow distortion. Such a global affine transformation can cause large LD values but does not greatly impact the human readability of the image. Additionally, AD weighs the error according to the magnitude of the gradient in the image, emphasizing interesting areas, such as text or image edges, rather than the background. Prior to computing these similarity metrics, we resize all images, both rectified and ground truth, to a 598,400-pixel area, as suggested in (Ma et al., 2018). In addition to the image similarity metrics, we evaluate OCR performance based on the character error rate (CER) and the edit distance (ED) (Navarro, 2001). The CER is defined as the ratio between the ED (the edit distance between the recognized and the reference text) and the number of characters in the reference text. We obtain the reference text by extracting it from the flatbed scans of the documents. The full definition of the CER then becomes \(\text{CER}=(s+i+d)/N\), where \(s,i,d\) are the numbers of substitutions, insertions and deletions, respectively, and \(N\) is the number of characters in the reference text.

### Evaluation

We evaluate our CGU-Net on the DocUNet benchmark dataset (Ma et al., 2018). It is composed of 65 documents; for each of them, 2 images deformed in a real-world scenario are provided. The ground-truth flatbed scans are also provided for comparison. Note that, similarly to Feng et al. (2022), we exclude the two images of document 64, as the real-world images are rotated by 180 degrees. We also exclude this document when computing the quantitative results for previous works.

_Quantitative evaluation._ We compare our method with several state-of-the-art deep learning methods. For each of them, we use the DocUNet result images published by the authors. All metric scores are evaluated using Tesseract v4.0.0, pytesseract v0.3.10, MATLAB R2022a, Levenshtein v0.20.8 and jiwer v2.5.1.

Figure 4. Left: the pipeline used to combine the captured image of a blank paper, the texture and the background. Right: some examples of document images from our own dataset, UVDoc. We capture samples with various geometric deformations and under varying lighting conditions.
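As a concrete reference for the OCR metrics of Sec. 5.1, a small sketch using the Levenshtein package listed above (the strings are toy examples):

```python
# Sketch of the ED and CER metrics; CER = (s + i + d) / N.
import Levenshtein

def ocr_metrics(recognized: str, reference: str):
    ed = Levenshtein.distance(recognized, reference)   # s + i + d
    cer = ed / max(len(reference), 1)                  # N = len(reference)
    return ed, cer

ed, cer = ocr_metrics("Helo wrld", "Hello world")
print(ed, f"{cer:.3f}")   # 2 edits over 11 reference characters
```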
All metric scores are evaluated using Tesseract v4.0.0, pytesseract v0.3.10, MATLAB R2022a, Levenshtein v0.20.8 and jiwer v2.5.1. The results are presented in Table 2. Compared to previous works, our CGU-Net achieves state-of-the-art LD and AD performance and a second-best MS-SSIM score. Our method ranks third on the CER metric, but nevertheless achieves a state-of-the-art ED score. In our experience, the performance on the OCR metrics is highly unstable and shows large variance, whereas the visual evaluation metrics (MS-SSIM, LD and AD) are far more stable over repeated experiments (see Table 3 and Table 4). Since the differences in CER and ED scores between our method and previous works are relatively small compared to the size of the respective standard deviations that we observe during our ablation study, we put less emphasis on these metrics than on the visual ones. Our approach builds on a grid-based unwarping method, thanks to which our network is significantly smaller than current state-of-the-art methods, whilst still achieving state-of-the-art performance. We compare our network size to previous works in the last column of Table 2.

Qualitative evaluation. In addition to the quantitative comparisons made in the previous section, we provide a qualitative comparison to previous works. We show a side-by-side comparison of images unwarped by several methods in Fig. 5. The input images in the left-most column are all from the "crop" subset. The images unwarped by our method are perceptually of high quality and exhibit good unwarping at the borders of the document as well, even though we do not include explicit handling of borders or segmentation, in contrast to (Feng et al., 2021, 2022; Ma et al., 2022; Zhang et al., 2022). Our method is also able to produce convincing unwarpings of images containing non-Latin characters. We present more qualitative results on real-world images in Figure 1. We include the unwarped images for all items in the DocUNet benchmark in the supplemental material.

### Ablation study

We show the effectiveness of the dual-task learning, i.e., jointly predicting the 3D and the 2D grid meshes during training, as well as the benefit of combined training on both Doc3D and our UVDoc dataset, via ablation experiments. As we notice large variance in the OCR performance during our experiments, here we use averages of 10 repeated experiments with constant settings to perform the ablation study. We first show that the dual task of learning the 3D grid \(W\) along with the 2D grid \(G\) improves the performance, by comparing against our full model with the loss on \(W\) removed. Table 3 shows that including the MSE loss on \(W\) greatly improves the MS-SSIM, LD and AD metrics. On average MS-SSIM improves by 4.7%, LD by 8.8% and AD by 3.3%. These improvements are statistically significant, as indicated by the \(p\)-values (one-tailed unpaired t-test) of \(p=7\times 10^{-10}\), \(p=7\times 10^{-9}\) and \(p=3\times 10^{-5}\), respectively. The average CER and ED scores degrade slightly, but these changes are not statistically significant, as indicated by the \(p\)-values (\(p=0.1\) and \(p=0.4\), respectively). We show in Table 4 that training on a combination of the Doc3D and UVDoc datasets significantly improves the performance on all metrics, compared to training only on the Doc3D data.
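Significance levels like the ones above can be obtained with a one-tailed unpaired t-test over the repeated runs; a sketch using SciPy (the `alternative` keyword requires SciPy ≥ 1.6, and the score arrays below are placeholders, not our measured runs):

```python
# One-tailed unpaired t-test, as used for the ablation p-values.
from scipy.stats import ttest_ind

# Placeholder MS-SSIM scores for ten repeated runs of each variant.
ms_ssim_full = [0.512, 0.515, 0.510, 0.514, 0.513,
                0.511, 0.516, 0.512, 0.514, 0.513]
ms_ssim_no_w = [0.489, 0.492, 0.487, 0.490, 0.491,
                0.488, 0.490, 0.489, 0.492, 0.490]

# H1: the full model scores higher than the ablated one (one-tailed).
t, p = ttest_ind(ms_ssim_full, ms_ssim_no_w, alternative="greater")
print(f"t = {t:.2f}, one-tailed p = {p:.1e}")
```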
To ensure a fair comparison between the two, we double the number of epochs for the Doc3D-only training (both the number of epochs at a constant learning rate and the number of epochs with a linearly decaying learning rate), such that both runs process the same number of samples and take an equal number of optimizer steps. Adding the UVDoc data to the training process leads to improvements of 4.5% for MS-SSIM, 7.6% for LD and 2.5% for AD (\(p=1\times 10^{-9}\), \(p=2.7\times 10^{-7}\) and \(p=6\times 10^{-4}\)). For CER and ED there is a small decrease in performance, but this change is not statistically significant (\(p=0.09\) and \(p=0.1\)). We attribute the improvement in the visual metrics to the fact that our data (UVDoc) is closer in appearance to the real document photographs in the DocUNet dataset, as well as the fact that the 3D ground-truth data in our dataset is more physically accurate (albeit coarser), since we do not apply any smoothing to it.

## 6. Conclusion

We presented CGU-Net, a multi-task deep learning approach to document image unwarping that implicitly encodes the coupling between the document's 3D shape and its appearance in a 2D photograph. Our grid-based approach to document unwarping leverages a new dataset that is pseudo-photorealistic in appearance and includes numerous ground truth annotations, enabling a strong supervision

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & MS-SSIM \(\uparrow\) & LD \(\downarrow\) & AD \(\downarrow\) & CER \(\downarrow\) & ED \(\downarrow\) & Para. \\ \hline DewarpNet & 0.472 & 8.38 & 0.396 & 0.217 & 834.32 & 86.9M \\ DispFlow & 0.432 & 7.62 & 0.395 & 0.292 & 1216.34 & 23.6M \\ DocTr & _0.509_ & 7.78 & 0.366 & 0.181 & 711.96 & 26.9M \\ PW Unwarping & 0.490 & 8.64 & 0.431 & 0.253 & 987.02 & - \\ DDCP & 0.473 & 8.93 & 0.423 & 0.278 & 1117.92 & 13.3M \\ FDRNet & **0.543** & 8.08 & 0.396 & 0.214 & 878.02 & - \\ RDGR & 0.495 & 8.50 & 0.432 & **0.171** & 732.36 & - \\ Marior & 0.476 & 7.37 & 0.404 & 0.199 & 797.00 & - \\ PaperEdge & 0.472 & 7.98 & _0.367_ & 0.193 & 763.48 & 36.6M \\ DocGeoNet & 0.504 & 7.70 & 0.378 & 0.190 & 735.82 & 24.8M \\ \hline Ours & 0.513 & **7.18** & **0.352** & _0.184_ & **689.54** & 8M \\ \hline \hline \end{tabular} \end{table} Table 2. Quantitative unwarping performance comparisons on the DocUNet benchmark dataset. **Bold** font indicates best, underline indicates second-best and _italic_ indicates third-best score. The last column compares the network sizes, expressed in number of parameters (millions). We compare our results against DewarpNet (Das et al., 2019), DispFlow (Xie et al., 2020), DocTr (Feng et al., 2021), PW Unwarping (Das et al., 2021), DDCP (Xie et al., 2021), FDRNet (Xue et al., 2022), RDGR (Jiang et al., 2022), Marior (Zhang et al., 2022), PaperEdge (Ma et al., 2022) and DocGeoNet (Feng et al., 2022).
2303.03668
Single-Shot Readout of a Superconducting Qubit Using a Thermal Detector
Measuring the state of qubits is one of the fundamental operations of a quantum computer. Currently, state-of-the-art high-fidelity single-shot readout of superconducting qubits relies on parametric amplifiers at the millikelvin stage. However, parametric amplifiers are challenging to scale beyond hundreds of qubits owing to practical size and power limitations. Nanobolometers have a multitude of properties that are advantageous for scalability and have recently shown sensitivity and speed promising for qubit readout, but such thermal detectors have not been demonstrated for this purpose. In this work, we utilize an ultrasensitive bolometer in place of a parametric amplifier to experimentally demonstrate single-shot qubit readout. With a readout duration of $13.9~\mu\mathrm{s}$, we achieve a single-shot fidelity of 0.618 which is mainly limited by the energy relaxation time of the qubit, $T_1 = 28~\mu\mathrm{s}$. Without the $T_1$ errors, we find the fidelity to be 0.927. In the future, high-fidelity single-shot readout may be achieved by straightforward improvements to the chip design and experimental setup, and perhaps most interestingly by the change of the bolometer absorber material to reduce the readout time to the hundred-nanosecond level and beyond.
András M. Gunyhó, Suman Kundu, Jian Ma, Wei Liu, Sakari Niemelä, Giacomo Catto, Vasilii Vadimov, Visa Vesterinen, Priyank Singh, Qiming Chen, Mikko Möttönen
2023-03-07T06:06:11Z
http://arxiv.org/abs/2303.03668v2
# Single-Shot Readout of a Superconducting Qubit Using a Thermal Detector

###### Abstract

Measuring the state of qubits is one of the fundamental operations of a quantum computer. Currently, state-of-the-art high-fidelity single-shot readout of superconducting qubits relies on parametric amplifiers at the millikelvin stage. However, parametric amplifiers are challenging to scale beyond hundreds of qubits owing to practical size and power limitations. Nanobolometers have a multitude of properties that are advantageous for scalability and have recently shown sensitivity and speed promising for qubit readout, but such thermal detectors have not been demonstrated for this purpose. In this work, we utilize an ultrasensitive bolometer in place of a parametric amplifier to experimentally demonstrate single-shot qubit readout. With a readout duration of 13.9 \(\mu\)s, we achieve a single-shot fidelity of 0.618, which is mainly limited by the energy relaxation time of the qubit, \(T_{1}=28\,\mu\)s. Without the \(T_{1}\) errors, we find the fidelity to be 0.927. In the future, high-fidelity single-shot readout may be achieved by straightforward improvements to the chip design and experimental setup, and perhaps most interestingly by the change of the bolometer absorber material to reduce the readout time to the hundred-nanosecond level and beyond.

Qubit readout is a fundamental operation in quantum computing, both for determining the result at the end of a computation [1; 2], as well as for error correction, which is necessary for fault-tolerance [3; 4; 5]. Currently, one of the most promising platforms for realizing a practically useful quantum computer is based on superconducting qubits [6; 7; 8]. Importantly, readout is currently one of the main bottlenecks on the way to quantum-error-corrected large-scale superconducting quantum processors: most of the error in the error correction cycles demonstrated in recent state-of-the-art experiments [9; 10] arose during the readout phase. Thus improving existing readout techniques, and especially discovering completely new advantageous ways to carry out readout, are of great and urgent importance.

The standard method of measuring the state of superconducting qubits in the framework of circuit quantum electrodynamics is referred to as _dispersive readout_ [6; 11; 12]. Here, a qubit is dispersively coupled to a readout resonator, the frequency of which shifts depending on the qubit state. The current state of the art in dispersive readout is a single-shot readout fidelity in excess of 99% with an averaging time of less than 100 ns for single qubits [13; 14], and 97-98% on average for the simultaneous multiplexed readout of several qubits [15; 10]. To achieve a signal-to-noise ratio (SNR) sufficient for high-fidelity single-shot dispersive readout, the output signal from the readout resonator is typically amplified at the millikelvin stage by a parametric amplifier, such as a Josephson parametric amplifier (JPA) [16] or a traveling-wave parametric amplifier (TWPA) [17]. These amplifiers can be quantum limited, i.e., the noise added by the amplification stems solely from the Heisenberg uncertainty principle for the in-phase and quadrature components of the amplified signal [18; 19; 20]. Parametric amplifiers are widely utilized due to their high gain and low noise, but they suffer from some drawbacks that challenge their applicability when scaling to large numbers of qubits.
Namely, JPAs have a fairly narrow bandwidth, which renders them less suitable for multiplexed qubit readout [15]. On the other hand, although TWPAs provide a broad bandwidth, they have limited dynamic range for the purposes of highly multiplexed readout and typically incorporate more than \(10^{3}\) Josephson junctions, rendering their on-chip footprint sizeable and high-yield large-scale fabrication challenging. Importantly, both of these amplifier types require strong isolation between the amplifier and the qubit-resonator system. In the case of JPAs, this is because the amplifier works by reflection, requiring the use of a circulator, and TWPAs require a relatively strong pumping tone possibly near the frequencies of the qubit and the readout resonator. Furthermore, TWPAs amplify vacuum and other noise, which may be reflected by later components in the amplification chain due to imperfect impedance matching. This noise may leak backwards to the TWPA input and cause decoherence in the qubit if not isolated. To mitigate these issues, permanent-magnet microwave isolators are placed between the readout transmission line and the amplifier input. Such isolators introduce losses in the signal, are very large in size, costly, and require bulky shielding to protect both the qubit and the amplifier from the magnetic fields they introduce.

These scalability issues have motivated the development of novel readout techniques that evade the need for parametric amplifiers. For example, Ref. [21] introduces a microwave photon counter instead of voltage amplification at the millikelvin stage. However, this method requires a more involved pulse sequence, and it suffers from backaction and the possible creation of quasiparticles due to their tunneling events. In Ref. [22], qubit readout is achieved without a parametric amplifier by driving the qubit to the second excited state before readout, at the cost of an increased measurement time. However, the non-parametric microwave amplifier used introduces high-temperature noise close to the qubit frequency, which calls for bulky microwave isolators. Another currently developing approach to scalability is to deliver signals to and from the cryostat in the optical domain [23; 24].

Interestingly, ultrasensitive nanobolometers [25; 26] have recently been demonstrated to be fast and sensitive enough for readout of superconducting qubits, reaching thermal time constants at the hundred-nanosecond scale and an energy resolution of a few typical microwave photons [27]. Such bolometers have a number of characteristic properties that are attractive for scalable qubit readout. In contrast to a parametric amplifier, the bolometer introduced in Ref. [25] is driven by a probe tone with a frequency below \(1\,\mathrm{GHz}\), well below the typical frequencies \(5\)-\(8\,\mathrm{GHz}\) of the qubit and the resonator. In addition, the bolometer can be probed with a low power of roughly \(-130\,\mathrm{dBm}\) at the chip, while for example TWPAs typically require pump powers of \(-75\,\mathrm{dBm}\) or higher [17]. Furthermore, the absorbing port of the bolometer can be conveniently matched to \(50\,\Omega\) [26]. This presents a cold bath to the readout transmission line, while allowing for the detection frequency to vary by orders of magnitude given a fixed probe frequency.
These features may eliminate the need for isolators between the qubit-resonator system and the bolometer, and together with the small size of the bolometers render them highly promising from the scalability point of view. Another advantage of bolometers compared with parametric amplifiers is that since a bolometer measures power, or photon number, it is not bound to add quantum noise stemming from the Heisenberg uncertainty principle. The vacuum noise does not promote detection events in the bolometer since no energy can be extracted from the vacuum. Thus bolometric readout is fundamentally different from the usual dispersive readout. In addition, bolometers are relatively simple to both fabricate and operate [26]. They do not require engineering of a large number of small Josephson junctions, and a bolometer requires only a single continuous probe tone with two parameters, power and frequency, to optimize performance. Owing to these appealing features, it is of great interest to study whether the fundamentally different operation principle of bolometers can be harnessed in the readout of superconducting qubits. However, no thermal detector to date has been employed for this purpose.

In this work, we integrate an ultrasensitive bolometer at millikelvin temperature into the readout circuitry of a superconducting qubit (Fig. 1). After the characterization of the bolometer and the qubit (Fig. 2), we demonstrate single-shot qubit readout at a fidelity of \(0.618\) (Fig. 3). Given that we have not used the fastest compatible bolometers [27], but those that are orders of magnitude slower [26], this first demonstration of bolometric single-shot qubit readout seems a promising milestone for future high-fidelity scalable readout of superconducting qubits.

### Experimental setup

A schematic diagram of the experimental setup is shown in Fig. 1. A standard flux-tunable Xmon qubit [28] with a transition frequency \(f_{\mathrm{q}}=7.655\,\mathrm{GHz}\) at the flux sweet spot, anharmonicity \(\alpha/(2\pi)=-273\,\mathrm{MHz}\), energy relaxation time \(T_{1}=28\,\mu\mathrm{s}\), and Ramsey dephasing time \(T_{2}=7.6\,\mu\mathrm{s}\) is capacitively coupled to a coplanar readout resonator in a notch configuration. The resonator has its fundamental resonance frequency at \(f_{\mathrm{r,g}}=5.473\,\mathrm{GHz}\) if the qubit is in its ground state \(\ket{\mathrm{g}}\) and a linewidth \(\kappa_{\mathrm{r}}/(2\pi)=1.0\,\mathrm{MHz}\). The resonator is capacitively coupled to the qubit with a coupling strength \(g/(2\pi)=61\,\mathrm{MHz}\). These parameters produce a dispersive shift of \(\chi/(2\pi)=-0.3\,\mathrm{MHz}\), i.e., the resonator frequency shifts to \(f_{\mathrm{r,e}}=f_{\mathrm{r,g}}+2\chi/(2\pi)\) if the qubit is in its excited state \(\ket{\mathrm{e}}\).

To read out the qubit state, a rectangular microwave pulse of length \(t_{\mathrm{RO}}\approx 10\,\mu\mathrm{s}\), frequency \(f_{\mathrm{d}}\), and power \(P_{\mathrm{d}}\) is applied to the feedline of the readout resonator. In typical dispersive readout, the drive frequency is chosen to be in the middle of the dressed resonator frequencies, \(f_{\mathrm{d}}=(f_{\mathrm{r,g}}+f_{\mathrm{r,e}})/2\), and the photons reflected from the resonator accumulate a phase shift depending on the qubit state [6]. In contrast, here we operate in a photodetection mode [29] by driving close to one of the dressed frequencies, \(f_{\mathrm{d}}\approx f_{\mathrm{r,g}}\).
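As a rough consistency check (ours, not from the paper), the quoted dispersive shift can be estimated from the standard transmon expression \(\chi=g^{2}\alpha/[\Delta(\Delta+\alpha)]\), where \(\Delta\) is the qubit-resonator detuning; with the parameter values above this gives about \(-0.24\,\mathrm{MHz}\), the same order as the reported \(-0.3\,\mathrm{MHz}\):

```python
# Rough consistency check of the dispersive shift via the textbook
# transmon formula chi = g^2 * alpha / (Delta * (Delta + alpha)).
# The formula is standard circuit QED, not taken from the paper itself.
g = 61e6          # coupling strength g/2pi in Hz
alpha = -273e6    # anharmonicity alpha/2pi in Hz
f_q = 7.655e9     # qubit frequency in Hz
f_r = 5.473e9     # resonator frequency in Hz

delta = f_q - f_r                      # qubit-resonator detuning
chi = g**2 * alpha / (delta * (delta + alpha))
print(f"chi/2pi ~ {chi / 1e6:.2f} MHz")  # ~ -0.24 MHz; paper quotes -0.3 MHz
```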
In this driving scheme, information about the qubit state is mostly carried by the power of the signal, i.e., the number of photons emitted by the resonator into the feedline. A power difference is necessary for the bolometer, since it is a thermal detector, sensitive to the power but insensitive to the phase of its input signal. Note that for the standard dispersive readout, the ratio \(|\chi|/\kappa_{\mathrm{r}}=1/2\) yields the optimal SNR [30], while for photodetection-based readout, the optimal ratio is higher [29].

The output of the feedline for the readout resonator is connected to the absorber port of the bolometer. The bolometer, similar to the device of Ref. [26], resides on a chip different from that of the qubit-resonator system as shown in Fig. 1. The main component of the bolometer is a resistive \(\mathrm{Au_{x}Pd_{1-x}}\) (\(x\approx 0.6\)) nanowire. A segment of the nanowire works as the resistive absorber and the remaining wire is interrupted by a series of superconducting Al islands, forming a chain of SNS junctions. The impedance \(Z(T_{\mathrm{e}})\) of this junction chain depends on the electron temperature of the nanowire \(T_{\mathrm{e}}\). The junction chain is embedded in an effective \(LC\) circuit formed by a shunt capacitor \(C_{1}=134\,\mathrm{pF}\) in parallel with \(Z(T_{\mathrm{e}})\), which can be modeled as a parallel resistance and inductance [25]. The nanowire is grounded between the absorber and the junctions, so that the essentially purely real-valued impedance of the absorber does not contribute to the \(LC\) circuit. The bolometer is probed by reflecting a continuous tone at power \(P_{\mathrm{p}}\) and frequency \(f_{\mathrm{p}}\) from the gate capacitor \(C_{\mathrm{g}}=0.87\,\mathrm{pF}\). With a low \(P_{\mathrm{p}}\), the \(LC\) circuit resonates at the frequency \(f_{\mathrm{b}}=585\,\mathrm{MHz}\) with a linewidth of \(7.6\,\mathrm{MHz}\). As radiation is absorbed by the absorber, \(T_{\mathrm{e}}\) increases, which shifts \(f_{\mathrm{b}}\) down, and thus by observing changes in the reflection coefficient \(\Gamma\) at the gate capacitor, it is possible to detect the radiation incident on the bolometer input.

The reflected probe signal is amplified by a low-noise high-electron-mobility-transistor (HEMT) amplifier at \(4\,\mathrm{K}\), and further amplified, demodulated and digitized at room temperature in a heterodyne configuration. The shift in \(f_{\mathrm{b}}\) due to the readout pulse incoming from the readout resonator is observed as a change in the digitized voltage [see Fig. 2(a)-(b)]. For low \(P_{\mathrm{p}}\), the readout pulse causes \(T_{\mathrm{e}}\) and thus the reflected signal to approach a steady-state value exponentially with a thermal time constant \(\tau_{\mathrm{b}}\). The time constant depends on \(f_{\mathrm{p}}\), \(P_{\mathrm{p}}\), and the power of the readout pulse \(P_{\mathrm{d}}\) in a non-trivial way due to electrothermal feedback [25, 26]. For the relevant parameter regime considered here, \(\tau_{\mathrm{b}}\) varies between \(10\,\mu\mathrm{s}\) and \(1\,\mathrm{ms}\). Notably, we have \(\tau_{\mathrm{b}}\gtrsim T_{1}\), which implies that for qubit readout, we must operate the bolometer in a calorimetric fashion, i.e., with \(t_{\mathrm{RO}}<\tau_{\mathrm{b}}\). This is highlighted in Fig. 2(a), where the time constant, extracted from a measurement with a long readout pulse \(t_{\mathrm{RO}}>1\,\mathrm{ms}\), is \(\tau_{\mathrm{b}}=36.2\,\mu\mathrm{s}\).
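Such a time-constant extraction amounts to fitting an exponential rise to the measured trace; a minimal sketch with `scipy.optimize.curve_fit`, using a synthetic trace in place of the measured data:

```python
# Sketch of extracting the bolometer thermal time constant tau_b by
# fitting an exponential rise V(t) = c * (1 - exp(-t/tau)) + V0.
# The "measured" trace below is synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def exp_rise(t, c, tau, v0):
    return c * (1.0 - np.exp(-t / tau)) + v0

t = np.linspace(0, 200e-6, 2000)                       # 200 us trace
v = exp_rise(t, c=0.18, tau=36.2e-6, v0=0.02)
v += np.random.default_rng(0).normal(0, 5e-3, t.size)  # measurement noise

popt, _ = curve_fit(exp_rise, t, v, p0=(0.1, 20e-6, 0.0))
print(f"tau_b = {popt[1] * 1e6:.1f} us")  # ~36 us for this synthetic trace
```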
With \(t_{\mathrm{RO}}=10\,\mu\mathrm{s}\), which is more feasible for qubit readout than \(t_{\mathrm{RO}}>\tau_{\mathrm{b}}\), the maximum signal level reached is still far from the steady state. For high enough \(P_{\mathrm{p}}\), the electrothermal feedback results in a bistability for the electron temperature of the SNS junction. This bistability can be exploited for high-fidelity photodetection by operating the bolometer in a latching mode using a pulsed probe tone [25]. However, this scheme introduces significant dead time for the detection, and requires detailed calibration of the pulse shape. For simplicity, we thus focus on the continuous detection scheme with sufficiently low \(P_{\mathrm{p}}\).

Using time-domain data digitized from the bolometer output, we define the detector signal as \[S=\bar{V}-\bar{V}_{0}, \tag{1}\] where \(\bar{V}_{0}\) is the time-average of the digitized voltage before the readout pulse and \(\bar{V}\) is the time-average of the voltage over some averaging window \([t_{0},t_{\mathrm{RO}}]\). Since \(t_{\mathrm{RO}}<\tau_{\mathrm{b}}\), increasing the time over which the average is taken can significantly decrease \(\bar{V}\), but may increase the SNR. In practice, we observe that choosing \(t_{0}=0.75\times t_{\mathrm{RO}}\) balances these two effects reasonably well.

Both the qubit-resonator and bolometer chips are mounted on their individual sample holders, which are placed in separate magnetic shields and attached to the mixing-chamber plate of a dilution refrigerator with a base temperature of \(12\,\mathrm{mK}\). Note that in the setup presented in Fig. 1, an additional microwave switch and directional coupler are placed between the qubit and bolometer. These allow measuring the qubit and driving the bolometer individually, and are only used for the initial separate characterization of the qubit and the bolometer. The auxiliary readout channel called for the use of an isolator between the chips, but this can be removed in future experiments.

Figure 1: **Experimental setup.** The readout resonator is driven at frequency \(f_{\mathrm{d}}\approx 5.5\,\mathrm{GHz}\) by a readout tone which is reflected off of the resonator and absorbed by a metallic nanowire in the absorber port of the bolometer. The absorbed radiation shifts the electron temperature \(T_{\mathrm{e}}\) and consequently the impedance \(Z(T_{\mathrm{e}})\) of a chain of SNS junctions, marked by a brown cross. A continuous probe tone with frequency \(f_{\mathrm{p}}\approx 580\,\mathrm{MHz}\) is reflected off of the gate capacitor \(C_{\mathrm{g}}\) of the bolometer as determined by the reflection coefficient \(\Gamma\). The reflected signal is amplified, digitized, and used to determine the qubit state.

## Results

During initial characterization, we apply no driving to the readout resonator of the qubit and find the bolometer resonance by measuring the reflection coefficient of the bolometer probe signal as a function of \(f_{\mathrm{p}}\) and \(P_{\mathrm{p}}\). Next, we apply pulses to the readout resonator with \(f_{\mathrm{d}}\) well detuned from \(f_{\mathrm{r,g}}\), and map the detector signal \(S\) as a function of \(f_{\mathrm{p}}\) and \(P_{\mathrm{p}}\). These results, shown in Fig. 2(c), provide us with a feasible operation point \((f_{\mathrm{p}},P_{\mathrm{p}})=(580.5\,\mathrm{MHz},-128.7\,\mathrm{dBm})\), where \(S\) is maximized with \(P_{\mathrm{p}}\) below the region where the electrothermal feedback induces bistability, as discussed above.
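As a concrete illustration of Eq. (1), the detector signal reduces to two boxcar averages over a digitized trace; a minimal sketch with a synthetic trace standing in for the measured one:

```python
# Sketch of computing the detector signal S = Vbar - Vbar0 of Eq. (1).
import numpy as np

def detector_signal(v: np.ndarray, t: np.ndarray,
                    t_pulse_start: float, t_ro: float) -> float:
    """Boxcar-average the trace before the pulse (Vbar0) and over
    [t0, t_RO] with t0 = 0.75 * t_RO, as described in the text."""
    t0 = t_pulse_start + 0.75 * t_ro
    v_bar0 = v[t < t_pulse_start].mean()
    v_bar = v[(t >= t0) & (t < t_pulse_start + t_ro)].mean()
    return v_bar - v_bar0

# Toy exponential-rise response with tau_b = 9.4 us, pulse starting at t = 0.
t = np.linspace(-5e-6, 15e-6, 4000)
v = np.where(t > 0, 0.1 * (1 - np.exp(-t / 9.4e-6)), 0.0)
print(detector_signal(v, t, t_pulse_start=0.0, t_ro=10e-6))
```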
Table 1 summarizes the parameter values used during the initial characterization. With the bolometer operation point fixed, we carry out standard qubit characterization measurements [31] while monitoring the bolometer signal. For example, Fig. 2(d) shows \(S\) as a function of the qubit readout power \(P_{\mathrm{d}}\), both on resonance \(f_{\mathrm{d}}=f_{\mathrm{r,g}}\) and off-resonance at \(f_{\mathrm{d}}=f_{\mathrm{r,g}}-70\,\mathrm{MHz}\), with the qubit drive turned off. Using \(P_{\mathrm{d}}=-126\,\mathrm{dBm}\), a maximal contrast of \(32\,\mathrm{dB}\) between the on- and off-resonance drives is achieved with a very long readout pulse. More importantly, Fig. 2(d) further highlights that the long bolometer time constant is limiting the readout signal, since the contrast is greatly reduced for the microsecond pulses, which are still roughly an order of magnitude longer than the current state-of-the-art qubit readout. A reasonable contrast is not achieved with a \(1\,\mu\mathrm{s}\) readout pulse even for the highest feasible readout power of \(-108\,\mathrm{dBm}\), beyond which the resonator becomes nonlinear and the quantum non-demolition nature of the readout breaks down [32, 33].

After finding the resonator and qubit frequencies using the standard single-tone and two-tone spectroscopies adjusted to our bolometric readout, we carry out a Rabi oscillation measurement. We initialize the qubit to its ground state, drive it with a rectangular pulse, and measure the readout resonator using the bolometer. Here, we employ a \(10\,\mu\mathrm{s}\) readout pulse and ensemble-average the result 512 times. When the length and the modulation frequency of the qubit drive pulse are varied, we obtain a clear chevron pattern as desired [see Fig. 2(e)], from which we determine a \(\pi\) pulse length of \(100\,\mathrm{ns}\).

With the qubit frequency and \(\pi\) pulse length calibrated, we carry out single-shot qubit readout by alternating between preparing the qubit in \(|\mathrm{g}\rangle\) and \(|\mathrm{e}\rangle\), and recording \(S\) with no ensemble-averaging. For each prepared state, we record \(10^{4}\) data points. The data are binned to produce a histogram, with the bin width chosen using Scott's rule [34]. To maximize the readout fidelity \(F=1-P(\mathrm{g}|\mathrm{e})-P(\mathrm{e}|\mathrm{g})\), we optimize the threshold value for \(S\) used to decide the measurement outcome, \(|\mathrm{g}\rangle\) or \(|\mathrm{e}\rangle\). Here, \(P(a|b)\) is the probability of measuring the qubit in the state \(|\mathrm{a}\rangle\) provided that it was prepared in the state \(|\mathrm{b}\rangle\), where the probabilities are obtained from the measured distributions. The highest fidelity is obtained by increasing the readout pulse length to \(t_{\mathrm{RO}}=20\,\mu\mathrm{s}\), so that the above-discussed choice of \(t_{0}=0.75\times t_{\mathrm{RO}}\) yields an averaging time of \(5\,\mu\mathrm{s}\) for \(\bar{V}\), and by setting the readout power to \(P_{\mathrm{d}}=-108\,\mathrm{dBm}\), which is just below the point of nonlinearity for the readout resonator.
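The threshold optimization described above amounts to a one-dimensional sweep over candidate thresholds; a sketch on synthetic Gaussian shot data (the means and widths are loosely inspired by the fitted values reported in Methods, not the measured distributions):

```python
# Sketch of maximizing F = 1 - P(g|e) - P(e|g) over the decision threshold.
import numpy as np

rng = np.random.default_rng(1)
shots_g = rng.normal(25, 17, 10_000)    # S for qubit prepared in |g>, mV
shots_e = rng.normal(182, 17, 10_000)   # S for qubit prepared in |e>, mV

thresholds = np.linspace(shots_g.min(), shots_e.max(), 2000)
# P(e|g): |g> shots above threshold; P(g|e): |e> shots below threshold.
p_eg = (shots_g[None, :] > thresholds[:, None]).mean(axis=1)
p_ge = (shots_e[None, :] < thresholds[:, None]).mean(axis=1)
fidelity = 1 - p_ge - p_eg

best = fidelity.argmax()
print(f"V_th = {thresholds[best]:.1f} mV, F = {fidelity[best]:.3f}")
```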
\begin{table} \begin{tabular}{r|c|c} Quantity & Symbol & Value \\ \hline Bolometer probe frequency & \(f_{\mathrm{p}}\) & \(580.5\,\mathrm{MHz}\) \\ Bolometer probe power (nominal) & \(P_{\mathrm{p}}\) & \(-128.7\,\mathrm{dBm}\) \\ Readout resonator drive frequency & \(f_{\mathrm{d}}\) & \(5.400\,\mathrm{GHz}\) or \(f_{\mathrm{r,g}}\) \\ Readout drive power (nominal) & \(P_{\mathrm{d}}\) & \(-107.8\,\mathrm{dBm}\) \\ Readout pulse length & \(t_{\mathrm{RO}}\) & \(10\,\mu\mathrm{s}\) \\ Number of ensemble-averages & & \(16\) \\ \end{tabular} \end{table} Table 1: Typical parameter values used during characterization measurements. Note that all powers we report in this manuscript are uncalibrated, nominal values reaching the corresponding chip, based on the estimated attenuation of the lines in our setup.

Figure 2: **Characterization experiments.** (a) Example time traces of the probe signal reflected from the bolometer with a qubit readout pulse length of \(t_{\mathrm{RO}}=10\,\mu\mathrm{s}\) (dark gray dots) and \(>1\,\mathrm{ms}\) (light gray dots). The solid green line is an exponentially rising fit to the long pulse, and the dashed horizontal lines indicate the extracted values of \(\bar{V}\) and \(\bar{V}_{0}\) used to define the detector signal \(S\) for the short pulse. The parameter values in all panels are as shown in Table 1 unless otherwise specified. (b) As (a) but only for the \(10\,\mu\mathrm{s}\) readout pulse. The blue and red-colored regions indicate where \(\bar{V}_{0}\) and \(\bar{V}\) are averaged, respectively. These data are taken with 128 ensemble averages. The signal appears noisier than in panel (a), since each data point in (a) is calculated by averaging 512 adjacent data points. (c) Detector signal \(S\) as a function of the probe frequency \(f_{\mathrm{p}}\) and probe power \(P_{\mathrm{p}}\). In the region shaded with red, where \(P_{\mathrm{p}}\gtrsim-128.5\,\mathrm{dBm}\), the bolometer exhibits bistability due to electrothermal feedback. The red cross indicates the chosen operation point for qubit readout. The colorbar is shared with panel (e). (d) Detector signal as a function of the nominal power of the readout pulse \(P_{\mathrm{d}}\) for various indicated readout pulse lengths and for the readout pulse applied off-resonance (\(f_{\mathrm{d}}=5.400\,\mathrm{GHz}\), filled markers) and on-resonance (\(f_{\mathrm{d}}=5.473\,\mathrm{GHz}=f_{\mathrm{r,g}}\), unfilled markers). The resonator becomes significantly nonlinear at approximately \(-108\,\mathrm{dBm}\), indicated by the black dotted vertical line. (e) Detector signal as a function of the resonant qubit drive pulse length and frequency, showing Rabi oscillation. These data are taken with 512 ensemble averages.

Figure 3(a) shows the measured probability distributions for the probe signal \(S\) with these parameters, from which we extract the fidelity \(F=0.49\). In an effort to further optimize the single-shot readout fidelity, we carry out an additional experiment, where we intentionally stretch the readout pulse to be unreasonably long, \(40\,\mu\mathrm{s}\), and instead of just storing \(S\) derived from the time-averaged quantities \(\bar{V}_{0}\) and \(\bar{V}\), we record the full time traces of the bolometer output signal for 1000 single shots. With these data, we may vary the effective readout pulse length and averaging time of \(\bar{V}\) in post processing.
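Schematically, such a post-processing sweep over the effective readout length and the averaging window could look as follows. The stored traces here are synthetic toys without \(T_{1}\) decay, so longer windows always win; on real data, the sweep trades SNR against qubit decay, producing a map like Fig. 3(c).

```python
# Schematic post-processing sweep over effective t_RO and averaging window.
import numpy as np

rng = np.random.default_rng(2)
fs = 4e6                                # sample rate after downsampling
t = np.arange(int(40e-6 * fs)) / fs     # 40 us stored traces
rise = 1 - np.exp(-t / 9.4e-6)          # bolometer response, tau_b = 9.4 us
traces_g = 0.02 * rise + rng.normal(0, 0.02, (1000, t.size))
traces_e = 0.18 * rise + rng.normal(0, 0.02, (1000, t.size))

def fidelity(s_g, s_e):
    """Best F = 1 - P(g|e) - P(e|g) over a sweep of thresholds."""
    ths = np.linspace(-0.1, 0.3, 500)
    return (1 - (s_e[None] < ths[:, None]).mean(1)
              - (s_g[None] > ths[:, None]).mean(1)).max()

best = max(
    ((t_ro, t_ro - t0,
      fidelity(traces_g[:, int(t0 * fs):int(t_ro * fs)].mean(1),
               traces_e[:, int(t0 * fs):int(t_ro * fs)].mean(1)))
     for t_ro in np.arange(4e-6, 40e-6, 2e-6)
     for t0 in np.arange(1e-6, t_ro - 1e-6, 2e-6)),
    key=lambda x: x[2],
)
print(f"t_RO = {best[0]*1e6:.1f} us, averaging = {best[1]*1e6:.1f} us, "
      f"F = {best[2]:.3f}")
```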
Figure 3(c) shows the resulting single-shot readout fidelity as a function of the digitally determined pulse length and averaging time. We observe a region of relatively high fidelity \(F>0.6\) [highlighted by the red boundary in Fig. 3(c)], with the highest fidelity of 0.618 achieved with \(t_{\mathrm{RO}}=13.9\,\mu\mathrm{s}\) and \(t_{\mathrm{RO}}-t_{0}=10.6\,\mu\mathrm{s}\). Figure 3(b) shows the probability distributions of the signal with these parameter values. The fidelity is overall higher in Fig. 3(b)-(c) than in Fig. 3(a). This is mostly because in panel (a), \(\bar{V}_{0}\) is calculated individually for each single shot. Since each value of \(\bar{V}_{0}\) is averaged over a relatively short, \(1.11\,\mu\mathrm{s}\) window, it introduces significant noise to the value of \(S\) calculated using Eq. (1). In contrast, a common value of \(\bar{V}_{0}\) is used for the whole data set of panels (b)-(c), meaning that only the fluctuations of \(\bar{V}\) contribute to the noise of \(S\). Rescaling the standard deviation of the data of panels (b)-(c) by a factor corresponding to the noise from this \(1.1\,\mu\mathrm{s}\) window reproduces the increased noise level of panel (a).

For the qubit prepared in \(\ket{\mathrm{e}}\), the single-shot probability distribution is clearly bimodal in Figs. 3(a)-(b). This is expected, since \(t_{\mathrm{RO}}\) is comparable to the qubit energy decay time \(T_{1}\), and thus the qubit experiences significant spontaneous relaxation during the readout. Even in the case of infinitely narrow distributions and ideal state preparation, the \(T_{1}\) decay produces an error of approximately \(1-\exp{[-t_{\mathrm{RO}}/(2T_{1})]}\approx 22\%\) for \(t_{\mathrm{RO}}=13.9\,\mu\mathrm{s}\) and \(T_{1}=28\,\mu\mathrm{s}\) [35]. The long readout time stems from the relatively long effective thermal time constant of the bolometer, \(\tau_{\mathrm{b}}=9.4\,\mu\mathrm{s}\), extracted from the data of Fig. 3(c) for the qubit prepared in \(\ket{\mathrm{g}}\). In Fig. 3(c), the trend of relatively high and constant fidelity obtained for simultaneously increasing readout pulse length and averaging time indicates that the increase in the fidelity owing to the increasing bolometer SNR approximately compensates the decrease in the fidelity owing to increasing qubit decay. With short readout times, the fidelity is low because the distributions corresponding to the different qubit states are not well separated.

The data of Fig. 3(a), as well as the \(\ket{\mathrm{g}}\) state data of Fig. 3(b), are well modeled by a sum of two Gaussian distributions. For the \(\ket{\mathrm{e}}\) state data of Fig. 3(b), we utilize a model that incorporates the decay of the qubit during the readout (see Methods). This advanced model is not needed for the \(\ket{\mathrm{e}}\) state data of Fig. 3(a), since the effect of the decay on the distribution is masked by noise. Using the parameters extracted from these fits, we determine the fidelity with \(t_{\mathrm{RO}}=13.9\,\mu\mathrm{s}\) and the \(T_{1}\) error removed to be \(F\approx 0.927\). The \(\ket{\mathrm{e}}\) state data in Fig. 3(b) deviate from the fit above roughly \(150\,\mathrm{mV}\). We attribute this to qubit excitations outside the computational subspace owing to high readout power.
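A quick numerical check of the decay-limited error bound quoted above:

```python
# T1-limited readout error ~ 1 - exp(-t_RO / (2*T1)) [Ref. 35],
# evaluated with the parameter values from the text.
from math import exp

t_ro, T1 = 13.9e-6, 28e-6
print(f"{1 - exp(-t_ro / (2 * T1)):.2f}")  # 0.22, i.e. the ~22% quoted above
```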
Figure 3: **Single-shot readout.** (a) Probability distributions of the single-shot qubit readout signal \(S\) for the qubit prepared in the ground (blue dots) and excited (red dots) states. The solid lines are fits to a sum of two Gaussian distributions. The vertical dashed line indicates the threshold which yields the highest readout fidelity. The bolometer is tuned to its operation point and the readout power is \(P_{\mathrm{d}}=-108\,\mathrm{dBm}\) with a pulse length of \(t_{\mathrm{RO}}=20\,\mu\mathrm{s}\) and an averaging time of \(5\,\mu\mathrm{s}\) for \(\bar{V}\). (b) As (a) but for \(t_{\mathrm{RO}}=13.9\,\mu\mathrm{s}\) and an averaging time of \(10.6\,\mu\mathrm{s}\) for \(\bar{V}\) and long averaging for \(\bar{V}_{0}\) (see text). Here, the fit for the excited state is obtained using a model that takes into account the \(T_{1}\) decay of the qubit during the readout (see Methods). (c) Single-shot readout fidelity as a function of the readout pulse length and averaging time for \(\bar{V}\), both changed in post processing. The red boundary indicates the region where the fidelity is greater than 0.6. The highest fidelity is achieved with the parameters indicated by the red cross, where the data of panel (b) are obtained.

## Conclusions

In summary, we demonstrated that a state-of-the-art thermal detector is capable of achieving reasonable-fidelity single-shot readout of a superconducting qubit, i.e., no voltage amplification is required for the qubit readout signal. We were able to carry out qubit characterization measurements with significantly less ensemble averaging than in previous setups lacking millikelvin amplifiers. Using a \(13.9\,\mu\mathrm{s}\) readout pulse, we achieve a single-shot readout fidelity of 0.62, and a fidelity of 0.93 after removing errors arising from the finite qubit lifetime.

Several factors in our experiment can be improved to increase the single-shot readout fidelity. Most significantly, the SNR in the single-shot readout is fairly low, which mainly arises from the long effective thermal time constant of the bolometer, on the order of ten microseconds. This motivates the use of a long readout pulse, which degrades the fidelity due to the \(T_{1}\) decay of the qubit. A straightforward way to reduce the time constant is to use a material with a lower heat capacity in place of AuPd as the absorber. A promising candidate for this is graphene, which has been observed to lead to extremely high-sensitivity bolometers with time constants in the hundred-nanosecond range [27; 36].

Apart from the bolometer chip itself, minor modifications to our experimental setup may significantly improve the SNR. In the experiments presented here, we have a number of additional microwave components between the qubit and bolometer, which were used to independently characterize the qubit and the bolometer. Removing these components and directly connecting the output of the qubit readout feedline to the absorber of the bolometer removes the loss of signal between the two chips. Ultimately, the bolometer could be directly connected to the readout feedline, either on-chip or in a layered flip-chip architecture [37]. The losses between the qubit and bolometer are an important factor in degrading the fidelity especially in our case, since the maximum power we can apply to the resonator is on the low end of what the bolometer can detect.
It is also possible to optimize the qubit chip design to be better suited for photodetection-based readout. In our particular design, the readout resonator is placed in a notch-type configuration with symmetric coupling to the input and output feedlines. This is disadvantageous, since effectively half of the signal reflected from the resonator is lost as it escapes through the input port. Changing to a transmission configuration with a weak coupling to the input port is expected to significantly increase the power incident on the bolometer. The ratio \(|\chi|/\kappa_{\mathrm{r}}=0.3\) in our sample is close to the value of 1/2 which maximizes the SNR in typical dispersive readout, but it is sub-optimal for photodetection-based readout, where a larger ratio is desirable. Furthermore, our readout resonator lies below the qubit frequency, which leads to the onset of non-adiabatic and even chaotic resonator dynamics at a much lower photon number than for an elevated readout frequency [32; 12; 33]. Thus designing the readout circuit to resonate above the qubit frequency can lead to a greatly increased signal power at the bolometer input.

Finally, the readout fidelity can be improved by advanced pulsing schemes and improved data analysis methods. Instead of a simple rectangular pulse, the resonator can be driven by a two-step pulse [13], where the drive amplitude is initially high for a short amount of time. This may enable the use of a power level that the bolometer can detect with higher SNR, while still maintaining the quantum-non-demolition nature of the readout. Alternatively, the SNR of the signal reaching the bolometer may be increased by preparing the qubit in a higher excited state prior to readout [22], or by using a two-tone drive that induces an effective longitudinal coupling between the qubit and the bolometer [38; 39]. Data analysis can be improved by utilizing more sophisticated signal processing than the simple boxcar time-averaging employed here. For example, the time-averaging can be weighted by the separation between the average trajectories corresponding to \(|\mathrm{g}\rangle\) and \(|\mathrm{e}\rangle\) [13], or the single-shot trajectories can be classified using a machine learning algorithm [40]. Note that in our current analysis, we have neglected the falling edge of the voltage signal, which can be exploited in such advanced analysis methods to increase the SNR. Averaging over the falling edge does not increase the \(T_{1}\)-related infidelity since during this averaging time, the readout pulse is off and hence possible qubit decay will not lead to a change in the bolometer signal. In fact, using a graphene bolometer in the calorimetric mode seems promising for improving the readout fidelity (Methods).

By incorporating the improvements discussed above, we estimate that it may be possible to carry out single-shot qubit readout at the 99.9% level of fidelity using bolometers (see Methods). Thus, bolometers are a promising component for scalable high-fidelity readout of superconducting qubits owing to their low power consumption, large probe tone frequency offset from the qubits, resilience against quantum noise, small footprint, no need for microwave isolators, and the naturally introduced low-temperature bath for qubits that can be impedance matched over a very broad range of readout frequencies.
## Methods

### Bolometer fabrication

In the bolometer fabrication, we begin with a four-inch silicon wafer (100) of high resistivity (\(\rho>10\) k\(\Omega\)cm), covered by a 300-nm thermal oxide. Then, we sputter 200 nm of pure Nb onto the wafer. We use AZ5214E photoresist in positive mode with hard contact to define the waveguide in a Karl Suss MA-6 mask aligner. After development, the sample undergoes etching in a Plasmalab 80Plus Oxford Instruments RIE system. The plasma operates with a gas flow of SF\({}_{6}\)/O\({}_{2}\) at 40 sccm/20 sccm with an rf power of 100 W. We clean the resist residuals in acetone and IPA using ultrasonic excitation, and then dry the chip with a nitrogen gun. Next, we use an atomic-layer deposition (ALD) method to grow a 45-nm dielectric layer of Al\({}_{2}\)O\({}_{3}\) in a Beneq TFS-500 system. Subsequently, we protect the dielectric layer at the desired capacitor regions using AZ5214E resist, and wet-etch the rest of the ALD oxide with an ammonium fluoride-hydrofluoric acid mixture. We then cleave the four-inch wafer into a 2 \(\times\) 2 cm\({}^{2}\) chip using Disco DADdy. The nanowire is patterned with an EPBG5000pES electron beam lithography (EBL) system using a bilayer of MMA/PMMA resist on a single chip. We deposit a 30-nm-thick AuPd layer in an electron beam evaporator at a rate of 0.5 Å/s. After liftoff in acetone overnight, we pattern the superconducting leads galvanically connected to the nanowire by EBL and deposit them with 100 nm of Al at a rate of 5 Å/s. Finally, we cleave each pixel (5 \(\times\) 5 mm\({}^{2}\)) using a laser micromachining system and package the chosen chip. We employ Al bonding wires to connect the chip to the printed circuit board of the sample holder.

### Qubit fabrication

The qubit samples are fabricated using the following steps: First, a layer of 200 nm of Nb is sputtered onto a high-resistivity silicon substrate. Next, we define the transmission line, readout resonator, and transmon shunt capacitor using photolithography, followed by dry reactive ion etching (RIE). The Al/Al\({}_{2}\)O\({}_{3}\)/Al Josephson junctions are subsequently fabricated using standard electron beam lithography and the Dolan bridge method, where oxidation is used between the deposition of the two Al layers to form the Josephson junctions. To ensure galvanic contact between Al and Nb, the niobium oxide is removed by argon milling prior to the deposition of any Al. The excess metal is lifted off in acetone. The room-temperature resistance of the Al/Al\({}_{2}\)O\({}_{3}\)/Al junctions is measured to select a sample that is most likely to yield the desired qubit frequency. Finally, the chip is diced into individual samples and the selected sample is wire-bonded to a sample holder.

### Details of the experimental setup

A diagram of the full experimental setup is shown in Extended Data Fig. 1. A PXI control computer initiates a measurement by sending a software trigger to a National Instruments NI-5782 transceiver module connected to a NI PXIe-7962R analog-to-digital converter (ADC) running custom FPGA code. The transceiver sends further digital trigger signals to initiate pulses to the readout resonator and to excite the qubit. The clocks of all devices are synchronized using a 10 MHz Rb reference. The qubit excitation pulse is generated by an Active Technologies AT1212 digital-to-analog converter, upconverted to the gigahertz range, and directed into the cryostat via a series of attenuators and filters.
A microwave switch placed at the mixing-chamber plate is used to select which of the six qubits on the chip is being driven. The qubit frequency is adjusted by applying a magnetic field to the qubit-resonator chip using a hand-wound coil outside the sample holder. The readout pulse is generated by a microwave source using pulse modulation. The pulse is reflected off of the resonator on the qubit-resonator chip, after which it is directed through a circulator, a double pole double throw (DPDT) switch, a directional coupler and some filters, before reaching the bolometer absorber port. The components between the qubit-resonator and bolometer chips are needed only to individually characterize the qubit-resonator chip (by toggling the DPDT switch) and the bolometer (by applying a pulse through the directional coupler), and could be removed in future experiments.

To probe the bolometer, a continuous microwave tone is split into two paths, a reference and a signal. The signal is directed into the cryostat, reflected off of the probe port of the bolometer, and amplified and demodulated to an intermediate frequency (IF) using a local oscillator (LO), which is detuned from the probe tone by a fixed IF frequency of 70.3125 MHz. The reference is demodulated by the LO without passing through the cryostat, and is used as an amplitude and phase reference for the signal. Both IF signals are digitized by the NI-5782 at a sampling rate of 250 MS/s and digitally demodulated into in-phase (\(I\)) and quadrature (\(Q\)) components. The NI-5782 also handles ensemble-averaging the data where applicable, as well as further downsampling by boxcar averaging a variable number of adjacent points.

From time-domain data in the \(IQ\)-plane, we calculate the average signal before the pulse, \(\tilde{V}_{0}\), and the average signal during the pulse, \(\tilde{V}\), as determined by \(t_{0}\) and \(t_{\mathrm{RO}}\). The data are then rotated in the \(IQ\)-plane such that \(\tilde{V}-\tilde{V}_{0}\) lies on the \(I\) axis. We define \(\bar{V}\) and \(\bar{V}_{0}\) discussed in the main text as the \(I\)-components of \(\tilde{V}\) and \(\tilde{V}_{0}\), respectively, after applying this rotation. For the single-shot experiments, we apply a common rotation across all shots, with the rotation angle chosen such that the readout fidelity is maximized. Note that the phase of the signal in the \(IQ\) plane is due to the reflection off of the effective \(LC\) circuit of the bolometer, and is thus completely independent of the phase of the photons emitted by the readout resonator. The data acquisition and storage are managed by the QCoDeS [41] data acquisition framework. Data analysis and fitting are carried out using the NumPy [42], SciPy [43], xarray [44] and lmfit [45] libraries.
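The rotation step described above can be written compactly on complex demodulated samples; a minimal sketch with synthetic placeholder data:

```python
# Sketch of rotating demodulated IQ data so that Vtilde - Vtilde0 lies
# on the I axis; the complex samples below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
v0 = 0.01 + 0.02j                           # complex pre-pulse average
v = (0.05 + 0.11j) + 0.01 * (rng.standard_normal(1000)
                             + 1j * rng.standard_normal(1000))

phi = np.angle(v.mean() - v0)               # phase of Vtilde - Vtilde0
v_rot = (v - v0) * np.exp(-1j * phi)        # rotate onto the I axis
print(v_rot.mean())                         # imaginary part ~ 0
```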
### Sample characterization

Characterization measurements of the bolometer are shown in Extended Data Fig. 2. Extended Data Fig. 2(a) shows the signal coming out of the cryostat normalized by the reference, as discussed above. The data are averaged for 1 ms over 16 repetitions. For each pixel in Extended Data Fig. 2(b), a 2 ms pulse is applied to the bolometer absorber port via the readout resonator with an off-resonant tone of \(f_{\mathrm{d}}=5.4\) GHz and \(P_{\mathrm{d}}=-108\) dBm. From these data, we extract the time constant \(\tau_{\mathrm{b}}\) shown in Extended Data Fig. 2(c) by fitting the time-domain signal to an exponential model.

Extended Data Fig. 3 shows standard characterization experiments of the qubit-resonator system using the bolometer. Panels (a)-(b) show single-tone and two-tone spectroscopy, respectively, from which we extract an initial estimate for the resonator and qubit frequencies. In Extended Data Fig. 3(c), a \(\pi\) pulse is applied to the qubit, and the readout pulse is applied after different delays to extract the \(T_{1}\) decay time. Extended Data Fig. 3(d) shows the result of a Ramsey experiment, where the qubit is driven by two \(\pi/2\) pulses with a varying idle time between them, for different modulation frequencies of the pulse. These data are used to extract the \(T_{2}\) time, given by the exponentially decaying envelope of the oscillation. The signal level varies in Extended Data Fig. 3, since different values of \(P_{\mathrm{d}}\) and \(t_{\mathrm{RO}}\) were used during the characterization while optimal values for these parameters were not yet known. In addition, the cryostat was thermally cycled between the acquisition of the data of panels (a), (b) and those of (c), (d). This is why the qubit frequency has decreased in panel (d) compared with panels (b) and (c).
### Detector signal probability distribution with long averaging time

Following Ref. [35], we develop a model for the probability distribution of the detector signal for the qubit nominally prepared in \(|\mathrm{e}\rangle\) (with some preparation error) and for the averaging time being significant compared to the energy decay time \(T_{1}\). Let \(v_{\mathrm{g/e}}(t,t_{\mathrm{d}})\) be the time domain signal of the bolometer output when the qubit is in the state \(|\mathrm{g}\rangle\) or \(|\mathrm{e}\rangle\) at the beginning of the readout pulse, \(t=0\). Here, \(t_{\mathrm{d}}\) denotes the time at which the qubit instantaneously decays to \(|\mathrm{g}\rangle\) if it was initially in \(|\mathrm{e}\rangle\) (\(v_{\mathrm{g}}\) is independent of \(t_{\mathrm{d}}\)). We assume that no thermal excitation events occur, and that the noise in the signal is Gaussian. Formally, the signal is given by \[v_{\mathrm{g/e}}(t,t_{\mathrm{d}})\mathrm{d}t=u_{\mathrm{g/e}}(t,t_{\mathrm{d}})\mathrm{d}t+\sqrt{P_{\mathrm{N}}}\,\mathrm{d}W(t), \tag{2}\] where \(u_{\mathrm{g/e}}(t,t_{\mathrm{d}})\) is the evolution of the output in the absence of noise, \(P_{\mathrm{N}}\) is the noise power spectral density and \(\mathrm{d}W(t)\) is the Wiener increment [46]. To incorporate the bolometer dynamics, we model \(u_{\mathrm{g/e}}(t,t_{\mathrm{d}})\) as exponentially approaching some steady-state values \(c_{\mathrm{g}}\) and \(c_{\mathrm{e}}\), for the qubit in the state \(|\mathrm{g}\rangle\) or \(|\mathrm{e}\rangle\), respectively [26]. Namely, we assume that the time evolution is of the form \[u_{\mathrm{g}}(t)=c_{\mathrm{g}}(1-e^{-t/\tau_{\mathrm{b}}}) \tag{3}\] and \[u_{\mathrm{e}}(t,t_{\mathrm{d}})= \theta(t_{\mathrm{d}}-t)c_{\mathrm{e}}\left(1-\mathrm{e}^{-t/\tau_{\mathrm{b}}}\right)+\theta(t-t_{\mathrm{d}})\left[(c_{\mathrm{g}}-c_{\mathrm{e}}^{\prime})\left(1-\mathrm{e}^{-(t-t_{\mathrm{d}})/\tau_{\mathrm{b}}}\right)+c_{\mathrm{e}}^{\prime}\right], \tag{4}\] where \(\theta(x)\) is the step function and \(c_{\mathrm{e}}^{\prime}=c_{\mathrm{e}}(1-\mathrm{e}^{-t_{\mathrm{d}}/\tau_{\mathrm{b}}})\).
Averaging the bolometer output from \(t_{0}\) to \(t_{\mathrm{RO}}\) yields the averaged detector signal \[\bar{V}_{\mathrm{g/e}}(t_{\mathrm{d}})=\frac{1}{t_{\mathrm{RO}}-t_{0}}\int_{t_{0}}^{t_{\mathrm{RO}}}v_{\mathrm{g/e}}(t,t_{\mathrm{d}})\mathrm{d}t=\bar{U}_{\mathrm{g/e}}(t_{\mathrm{d}})+X[0,\sigma^{2}], \tag{5}\] where \(\bar{U}_{\mathrm{g/e}}(t_{\mathrm{d}})\equiv(t_{\mathrm{RO}}-t_{0})^{-1}\int_{t_{0}}^{t_{\mathrm{RO}}}u_{\mathrm{g/e}}(t,t_{\mathrm{d}})\,\mathrm{d}t\) and \(X[0,\sigma^{2}]\) is a normally distributed random variable with mean 0 and standard deviation \(\sigma=\sqrt{P_{\mathrm{N}}/(t_{\mathrm{RO}}-t_{0})}\). Above, we have neglected the constant \(\bar{V}_{0}\), which simply shifts \(\bar{V}_{\mathrm{g/e}}\). With a fixed \(t_{\mathrm{d}}\), the probability of obtaining a given detector voltage \(V\) thus obeys the probability distribution \[P_{\mathrm{g/e}}\left(V|t_{\mathrm{d}}\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left\{-\frac{\left[V-\bar{U}_{\mathrm{g/e}}(t_{\mathrm{d}})\right]^{2}}{2\sigma^{2}}\right\}, \tag{6}\] which is a Gaussian centered around \(\bar{U}_{\mathrm{g/e}}(t_{\mathrm{d}})\) having a standard deviation \(\sigma\) as defined above.

If the qubit is in the state \(|\mathrm{g}\rangle\) at \(t=0\), the probability distribution \(P_{\mathrm{g}}(V)\) is simply a Gaussian centered around the constant value \(\bar{U}_{\mathrm{g}}\). When the qubit is initially in \(|\mathrm{e}\rangle\), the probability distribution is obtained by calculating the average of \(P_{\rm e}(V|t_{\rm d})\) for all possible realizations of the qubit decay time \(t_{\rm d}\) weighted by the probability density \(P(t_{\rm d})\) of the decay occurring at \(t_{\rm d}\): \[P_{\rm e}(V)=\int_{0}^{\infty}P(V|t_{\rm d})P(t_{\rm d})\,{\rm d}t_{\rm d}, \tag{7}\] where \(P(t_{\rm d})\) is exponentially distributed: \(P(t_{\rm d})={\rm e}^{-t_{\rm d}/T_{1}}/T_{1}\). Thus the total probability distribution for the qubit nominally prepared in \(|{\rm e}\rangle\) is the following weighted sum of these distributions: \[P_{\rm e}^{\rm tot}(V)=P_{\rm x}P_{\rm g}(V)+(1-P_{\rm x})P_{\rm e}(V), \tag{8}\] where \(P_{\rm x}\) is the probability that the qubit was actually in the state \(|{\rm g}\rangle\) at \(t=0\). Here, \(P_{\rm x}\) includes state preparation errors, as well as the \(T_{1}\) decay of the qubit during the short delay between the application of the \(\pi\) pulse and the start of the readout pulse. In the case where \(u_{\rm g,e}(t,t_{\rm d})\) is step-function-like, Eqs. (7) and (8) have analytical expressions [35]. However, we do not obtain an analytical expression for \(P_{\rm e}(V)\) given the temporal evolution arising from Eq. (4), and hence, we calculate the integral of Eq. (7) numerically. Scaling this by the bin width used in Fig. 3(b) produces the prediction for the distribution, with \(c_{\rm g}\), \(c_{\rm e}\), \(T_{1}\), \(\sigma\), and \(P_{\rm x}\) as the fitting parameters. For the data of Fig. 3(b), we find \(c_{\rm g}=24.7\,{\rm mV}\), \(c_{\rm e}=182\,{\rm mV}\), \(T_{1}=25.8\,\mu{\rm s}\), \(\sigma=17.4\,{\rm mV}\) and \(P_{\rm x}=0.20\). The readout fidelity is given by \[F=1-\int_{-\infty}^{V_{\rm th}}P_{\rm e}^{\rm tot}(V)\,{\rm d}V-\int_{V_{\rm th}}^{\infty}P_{\rm g}(V)\,{\rm d}V, \tag{9}\] where \(V_{\rm th}\) is the threshold value for assigning the measurement outcome to \(|{\rm g}\rangle\) or \(|{\rm e}\rangle\), and we again assume that no thermal excitations occur.
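Since Eq. (7) has no closed form for the exponential-rise model of Eq. (4), it can be evaluated on a grid; a minimal numerical sketch (ours, not the authors' fitting code) using the fitted parameter values quoted above:

```python
# Numerical evaluation of Eqs. (4)-(9) with the fitted parameters from the
# text (mV and us units). Illustration only; decay later than 5*T1 is
# neglected in the integral over t_d.
import numpy as np

c_g, c_e, T1, sigma, P_x = 24.7, 182.0, 25.8, 17.4, 0.20
tau_b, t_ro = 9.4, 13.9
t0 = t_ro - 10.6                          # start of the averaging window
t = np.linspace(t0, t_ro, 400)            # averaging window grid

def u_e(t, t_d):
    """Eq. (4): rise toward c_e, relaxing toward c_g after decay at t_d."""
    c_e_p = c_e * (1 - np.exp(-t_d / tau_b))
    before = c_e * (1 - np.exp(-t / tau_b))
    after = (c_g - c_e_p) * (1 - np.exp(-(t - t_d) / tau_b)) + c_e_p
    return np.where(t < t_d, before, after)

def gauss(v, mu):
    return np.exp(-(v - mu) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

v = np.linspace(-50, 250, 600)
t_d = np.linspace(0, 5 * T1, 2000)
U_e = np.array([u_e(t, td).mean() for td in t_d])      # Ubar_e(t_d), Eq. (5)
w = np.exp(-t_d / T1) / T1                             # P(t_d)
P_e = np.trapz(w[None, :] * gauss(v[:, None], U_e[None, :]), t_d, axis=1)
P_g = gauss(v, c_g * (1 - np.exp(-t / tau_b)).mean())
P_e_tot = P_x * P_g + (1 - P_x) * P_e                  # Eq. (8)

# Eq. (9): sweep the threshold over the discretized distributions.
dv = v[1] - v[0]
F = 1 - np.cumsum(P_e_tot) * dv - (1 - np.cumsum(P_g) * dv)
print(f"max F ~ {F.max():.2f}")  # roughly 0.6, near the measured 0.618
```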
If we further assume that \(P_{\rm x}=0\) and that the qubit never relaxes, \(P_{\rm e}^{\rm tot}(V)\) reduces to a Gaussian distribution. In this case, the readout fidelity is maximized with \(V_{\rm th}=(\bar{U}_{\rm e}+\bar{U}_{\rm g})/2\), which yields \[F={\rm erf}\left(\frac{\bar{U}_{\rm e}-\bar{U}_{\rm g}}{2\sqrt{2\sigma^{2}}}\right), \tag{10}\] where \({\rm erf}(x)\) is the error function. ### Estimate of parameters needed for 99.9% fidelity Here, we quantitatively discuss how to achieve 99.9% fidelity with bolometric readout following the improvements to the experiment proposed in the main text. To be able to make a fair comparison between the performance of different bolometers presented in the literature [26; 27], we carry out this analysis by assuming that we operate in the calorimetric mode. Namely, we interpret the distributions of Fig. 3(b) to represent a measurement of the energy of the absorbed pulses. In such an experiment, the signal is directly proportional to the energy packet arriving at the bolometer whereas the width of the distribution is independent of the energy. Let us first assume that the length of the readout pulse is reduced to 200 ns without changing the readout power. This means that the bolometer-absorbed energy, and hence the SNR in Fig. 3(b), is reduced by a factor of 70. By changing the metallic bolometer to the graphene bolometer, the energy resolution has been observed to increase by a factor of 13 [27]. Thus, to obtain 99.9% fidelity, one needs to increase the amount of energy absorbed by the graphene bolometer in 200 ns by a factor of \(A_{99.9\%}\approx 1.8\times 70/13\approx 10\), where the factor of 1.8 arises from the fact that in order to reduce the overlap infidelity of Fig. 3(b) from 7% to 0.1%, one needs a factor of 1.8 improvement in the SNR in Eq. (10). Below, we aim to show that by making different improvements to the measurement scheme, we can arrive at several factors \(A_{\alpha}\) of increment to the energy absorbed by the bolometer, and taking all these factors into account, it is possible to exceed the required SNR, i.e., \(A_{99.9\%}=10<\prod_{\alpha}A_{\alpha}\). Since we have assumed a 200 ns readout time, having a qubit \(T_{1}\) of the order of \(100\,\mathrm{\mu s}\) is sufficient to reach 99.9% readout fidelity. Note that the bolometer is a power sensor, and thus the SNR is directly proportional to the absorbed power for the low powers considered here [27] [see also Fig. 2(d)]. Firstly, switching to a transmission-type setup, where all of the readout photons are directed into the bolometer instead of half of them escaping through the input, increases the power incident on the bolometer by a factor of \(A_{\rm t}=2\). Secondly, removing additional components between the qubit and bolometer chips reduces losses approximately by \(A_{\rm c}\approx 1\,{\rm dB}\approx 1.25\). Thirdly, by optimizing the dispersive shift \(\chi\) for photodetection-type readout, we expect that the photon number occupying the resonator during readout, and thus the incident power, can be increased by a factor \(A_{\chi}=2\) based on Ref. [29]. Fourthly, switching the qubit-resonator system to a configuration where the resonator resonance frequency lies above the qubit frequency (keeping the detuning \(|f_{\rm r,g}-f_{\rm q}|\) constant) allows driving with a larger number of photons [32], which we estimate to be at least \(A_{\rm a}=1.5\) times greater than with our current parameters.
Fifthly, increasing the admissible drive power by doubling the resonator drive frequency to 10 GHz may further double the photon energy \(hf_{\rm r,g}\), introducing a factor \(A_{2f}=2\). In total, the improvement to the SNR is given by \[A_{\rm t}A_{\rm c}A_{\rm a}A_{2f}A_{\chi}\approx 15>A_{99.9\%}, \tag{11}\] and thus a high readout fidelity appears achievable. ## Data availability The data that support the findings of this study are available at [https://doi.org/10.5281/zenodo.7773981](https://doi.org/10.5281/zenodo.7773981). ## Author contributions A.M.G. and S.N. conducted the experiments and analysed the data. W.L. designed and fabricated the bolometer chip. The qubit chip was designed by S.K. and fabricated by J.M. with help from V. Ve. G.C., P.S. and Q.C. assisted with characterizing and operating the bolometer. A.M.G. and V. Va. developed the model for the signal probability distribution. The manuscript was written by A.M.G. and M.M., with comments from all authors. The work was conceived and supervised by M.M. ## Acknowledgements The authors acknowledge funding from the Academy of Finland Centre of Excellence program (project nos. 352925, 336810, 336817, and 336819), European Research Council under Advanced Grant no. 101053801 (ConceptQ), the Future Makers Program of the Jane and Aatos Erkko Foundation, the Technology Industries of Finland Centennial Foundation and the Finnish Foundation for Technology Promotion. We thank Armen Alizadeh, Slawomir Simbierowicz, and Russell Lake for useful discussions. ## Competing interests M.M. declares that he is a Co-Founder of the quantum-computer company IQM Finland Oy. Other authors declare no competing interests.
2305.16855
Regular access to constantly renewed online content favors radicalization of opinions
Worry over polarization has grown alongside the digital information consumption revolution. Where most scientific work considered user-generated and user-disseminated (i.e., Web 2.0) content as the culprit, the potential of purely increased access to information (or Web 1.0) has been largely overlooked. Here, we suggest that the shift to Web 1.0 alone could include a powerful mechanism of belief extremization. We study an empirically calibrated persuasive argument model with confirmation bias. We compare an offline setting -- in which a limited number of arguments is broadcast by traditional media -- with an online setting -- in which the agent can choose to watch contents within a very wide set of possibilities. In both cases, we assume that positive and negative arguments are balanced. The simulations show that the online setting leads to significantly more extreme opinions and amplifies initial prejudice.
Guillaume Deffuant, Marijn A. Keijzer, Sven Banisch
2023-05-26T12:05:20Z
http://arxiv.org/abs/2305.16855v1
# Regular access to constantly renewed online content favors radicalization of opinions ###### Abstract Worry over polarization has grown alongside the digital information consumption revolution. Where most scientific work considered user-generated and user-disseminated (i.e., Web 2.0) content as the culprit, the potential of purely increased access to information (or Web 1.0) has been largely overlooked. Here, we suggest that the shift to Web 1.0 alone could include a powerful mechanism of belief extremization. We study an empirically calibrated persuasive argument model with confirmation bias. We compare an offline setting--in which a limited number of arguments is broadcast by traditional media--with an online setting--in which the agent can choose to watch contents within a very wide set of possibilities. In both cases, we assume that positive and negative arguments are balanced. The simulations show that the online setting leads to significantly more extreme opinions and amplifies initial prejudice. Keywords: Opinion dynamics, Online media, Confirmation bias, Web 1.0, Biased processing ## 1 Introduction Political polarization--whether affective, ideological or network polarization--seems to be growing in many countries over the last decades. For instance, the divide between liberals and conservatives in the U.S. or between globalists and populists in Europe appears deeper today than ever before. As a result, we have seen massive violent protests and increased institutional instability. It is thus important to better understand the causes of polarization. It is often suggested that the increased access to online media that took place during the same period could have contributed to polarization. The popularity of the Internet has revolutionized the way we consume and share information. Web 1.0 (i.e., cognition-oriented) technologies have liberated access to information, Web 2.0 (communication-oriented) has democratized information sharing, and Web 3.0 (cooperation-oriented) has made it possible to more easily coordinate actions [16]. While all three technological waves have impacted opinion formation processes, the majority of the scientific work linking political polarization to the popularity of the internet has focused on Web 2.0 aspects of, mainly, online social media--in particular, on communication in digital "echo chambers" created by algorithmic "filter bubbles" in which large groups of users sharing similar views about a topic reinforce each other in their views [4, 22]. Moreover, the unprecedented competition for attracting the attention of users also favors extreme and negative content, particularly in messages of limited size (like on Twitter), which may also lead to opinion radicalization and entrenchment, without algorithmic selection. In this paper, we focus on the Web 1.0 aspects of the information revolution. How did regular access to a wide diversity of constantly renewed contents about any topic affect the opinion formation of individual users? Before the emergence of online media, the means to access news were limited to a small set of newspapers or TV channels, which strongly framed debates. The situation changed radically, as the new media offer an almost infinite variety of comments and viewpoints, constantly changing, among which the users navigate. In order to address how Web 1.0 affects individual opinions, we consider a recent, empirically-grounded model of opinion dynamics under biased processing [8].
Here, an individual's opinion is the aggregate of attitudes towards a series of known arguments, in line with _Persuasive Argument Theory_ (PAT) [31]. Typically, computational models of social influence consider the state variable to be the opinion of the agents, and this opinion evolves according to assumed rules during interactions [14]. In contrast, in the model considered in this paper, the state of the agent is defined by its beliefs about different arguments, and these beliefs, in association with the valence of the related arguments (for or against a topic), define the agent's opinion [6, 28]. The assumption that individuals engage in biased processing is by no means a stretch, as it is well-grounded in the literature on confirmation bias [32]. Confirmation bias predicts that, given the same argument, two individuals may react differently as a function of their currently held belief. Specifically, arguments that are closer to the currently held belief are considered stronger than those farther away [21, 35]. Despite the prominence of confirmation bias, only very few opinion dynamics models take it into account [8, 10, 14, 36], the bounded confidence family of models [11, 19] being a prominent exception. Comparing the mechanisms of biased processing between online and offline contexts is particularly interesting for two main reasons. First, it might be the case that biased processing and the availability of more information due to Web 1.0 alone are enough to explain differences in polarization between online and offline contexts with the calibrated model from Banisch and Shannon [8]. Here, we investigate whether this is the case by looking at how opinions are affected as a function of low (offline) or high (online) availability of information. Second, many contributions in psychology from recent years have shown how consumption of information online, in particular on online social media, affects biased processing. There is ample evidence showing that, online, confirmation bias in the selection and processing of information is strong [25] and may be stronger than offline [33]. Some argue that motivated reasoning--here referring to explicitly seeking out information that confirms currently held beliefs [26]--could account for this effect [29]. But when it comes to actual attitude change (as opposed to mere selection of information), it appears that this effect obtains mostly because people tend to respond differently when overwhelmed with information, rather than through motivated reasoning [34]. Information overload thus strengthens individual tendencies to process new information in light of what they know, rather than in relation to the quality of this information [18]. This implies that, online, people are not only confronted with more options to find information that suits their beliefs best, but their way of considering content is also modified by the abundance of signals. Here, we investigate the effect of the strength of confirmation bias in both the online and offline context. In what follows, we define stylized situations that respectively correspond to offline and online information landscapes, and we compare the results obtained in computer simulations with the model for different levels of confirmation bias in these two different contexts. Our work speaks to seminal experimental work on confirmation bias [27] which has shown that individuals may "draw undue support for their initial positions from mixed or random empirical findings" (p. 2098).
The model leads to testable hypotheses about how individuals are expected to change their opinion in different information settings characterized by more (online) or less (offline) information diversity. ## 2 Biased dynamics of argument endorsement ### The general model In this simple setting, we consider a single agent that has a prior belief \(\alpha\in[-1,1]\) about a given topic. The agent has access to \(N_{a}\) arguments about the topic. We assume that each argument \(i\in\{1,...,N_{a}\}\) is either in favor of (then \(v_{i}=1\)) or against the topic (then \(v_{i}=-1\)). The state of the agent is characterized by two vectors: * The knowledge vector \((k_{i})_{i\in\{1,...,N_{a}\}}\). For each argument \(i\), if the agent knows the argument \(i\), then \(k_{i}=1\); otherwise, \(k_{i}=0\). * The belief vector \((b_{i})_{i\in\{1,...,N_{a}\}}\). For each argument \(i\), if the agent believes the argument \(i\), then \(b_{i}=1\); otherwise, \(b_{i}=0\). If the agent does not know the argument, then \(b_{i}=0\). Then, the balance of beliefs \(\overline{b}\) of the agent about the considered topic is the sum of its prior belief and the beliefs multiplied by the values of the arguments, for the arguments known by the agent: \[\overline{b}=\alpha+\sum_{i=1}^{N_{a}}b_{i}v_{i}. \tag{1}\] The dynamics governs the determination of beliefs about new arguments. Let \(\overline{b}(t)\), \(k_{i}(t)\) and \(b_{i}(t)\) be respectively the balance of beliefs, the knowledge and the beliefs of the agent at time \(t\). Assume that at time \(t+1\) the agent accesses \(2q\leq N_{a}\) arguments \(j_{1},..,j_{2q}\). Then, for \(j\in\{j_{1},...,j_{2q}\}\): * If \(k_{j}(t)=1\), the agent knows the argument and does not change its knowledge or belief. Thus \(k_{j}(t+1)=k_{j}(t)\) and \(b_{j}(t+1)=b_{j}(t)\). * If \(k_{j}(t)=0\): * The agent acquires the knowledge about the argument: \(k_{j}(t+1)=1\). * The agent determines its belief about the argument: \(b_{j}(t+1)=1\) with probability \[P(b_{j}(t+1)=1)=\frac{1}{1+\exp(-\beta\overline{b}(t)v_{j})}.\] (2) Otherwise, \(b_{j}(t+1)=0\). The probability to believe the item thus depends on parameters \(\alpha\) and \(\beta\). Parameter \(\alpha\) is a prior belief of the agent about the issue. Parameter \(\beta\) rules the confirmation bias in the beliefs. The higher \(\beta\), the higher the tendency to believe items in the direction of the current balance of beliefs \(\overline{b}\). Moreover, for \(i\notin\{j_{1},..,j_{2q}\}\), \(k_{i}(t+1)=k_{i}(t)\) and \(b_{i}(t+1)=b_{i}(t)\). Finally, the balance of beliefs of the agent about the topic is updated: \[\overline{b}(t+1)=\alpha+\sum_{i=1}^{N_{a}}b_{i}(t+1)v_{i}. \tag{3}\] We also define the attitude \(a(t)\in[-1,1]\) of the agent as follows: \[a(t)=2\left(\frac{1}{1+\exp(-\beta\overline{b}(t))}-\frac{1}{2}\right). \tag{4}\] The attitude aims at quantifying how favorable or unfavorable the agent is to the issue at stake. In the model this attitude is expressed by the propensity of the agent to accept arguments from one side or the other. When the attitude is positive, its value is the probability that the agent believes a positive argument. When it is negative, its opposite is the probability that the agent believes a negative argument. When the attitude is close to 1 or -1, this corresponds to an extreme attitude, as the agent is completely closed to the opposite side and systematically accepts any argument on their own side.
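A minimal Monte-Carlo sketch of one exposure step of this model (our own illustration, not the authors' code) is given below; it applies the acceptance probability of Eq. (2) to a balanced batch of arguments and reports the attitude of Eq. (4). For simplicity, every presented argument is assumed new, as in the online setting, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def attitude(b_bar, beta):
    # Eq. (4): attitude in [-1, 1] associated with the balance of beliefs
    return 2.0 * (1.0 / (1.0 + np.exp(-beta * b_bar)) - 0.5)

def exposure_step(b_bar, valences, beta):
    # Eq. (2): each previously unknown argument of valence v is believed
    # with a sigmoid probability that depends on the current balance b_bar
    for v in valences:
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * b_bar * v)):
            b_bar += v                     # believed arguments shift the balance
    return b_bar

b_bar = 0.5                                # prior belief alpha (assumed)
for _ in range(10):                        # ten balanced exposures with 2q = 4
    b_bar = exposure_step(b_bar, [+1, +1, -1, -1], beta=0.3)
print(b_bar, attitude(b_bar, beta=0.3))
```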
As a result, for the same balance of beliefs, two agents having different confirmation biases (ruled by parameter \(\beta\)) have different attitudes towards the issue. ### Offline setting The typical situation that we aim to represent here is an individual consuming TV or newspaper reports in which the pro and contra arguments about the topic are balanced, but limited. In this offline setting, we assume that, at each time step, the agent is exposed to \(2q\leq N_{a}\) arguments \(j_{1},..,j_{2q}\) such that the overall valence of the arguments is neutral: \[\sum_{i=1}^{2q}v_{j_{i}}=0 \tag{5}\] Hence, we have \(q\) arguments of each sign. Once the agent knows all the arguments, its attitude remains fixed. Let \(T\) be the time when this situation is reached. In the agent simulations, we start with all knowledge values \(k_{i}\) equal to 0. We run the model for a number of steps \(T\) and compute the final distribution of opinions for different values of the total number of arguments \(N_{a}\) and the number \(2q\) of arguments exposed at each time step. We do not have to repeat computer simulations in order to evaluate the average results, as the model can be represented as a Markov chain ruling the probabilities of being in states defined by four integer values. Here, a state of the Markov chain is indeed defined by the vector \((K_{+},B_{+},K_{-},B_{-})\) of, respectively, the number of known positive arguments, the number of believed positive arguments, the number of known negative arguments, and the number of believed negative arguments: \[K_{+}=\sum_{i=1}^{n_{a}}k_{i},\ K_{-}=\sum_{i=n_{a}+1}^{N_{a}}k_{i}, \tag{6}\] \[B_{+}=\sum_{i=1}^{n_{a}}b_{i},\ B_{-}=\sum_{i=n_{a}+1}^{N_{a}}b_{i}. \tag{7}\] As the number of believed arguments is lower than or equal to the number of known arguments, the total number of states is \(N_{S}=\frac{1}{4}(n_{a}+1)^{2}(n_{a}+2)^{2}\). Considering an agent in the state \(S=(K_{+},B_{+},K_{-},B_{-})\), we can express the probability \(P(S,S^{\prime})\) that the agent goes to the state \(S^{\prime}=(K^{\prime}_{+},B^{\prime}_{+},K^{\prime}_{-},B^{\prime}_{-})\) after being exposed to a random neutral set of \(2q\) arguments. Starting with a probability \(1\) of being in the state \(S_{0}=(0,0,0,0)\), we can evolve the probabilities of the different states after each step. In this case, as the number of arguments is limited, when the number of steps increases indefinitely, the model converges to a fixed probability distribution. Appendix 0.A.1 provides the details of this model. ### Online setting In the online setting, agents choose the arguments among a much wider set of signals than in the offline case. Again, we assume that the platform proposes a balanced set of pro and contra arguments. Consider an agent who selects to consider \(2q\) arguments about the topic of interest at each point in time. The choice of the agent is based on the valence (pro or against the considered topic) of the signals. We assume that the selection of arguments to engage with by the agent is biased--which is in line with previous empirical work [3, 29]--and governed by \(\gamma\). It can be understood as a parameter that governs a first confirmation bias, or possibly the degree of motivated reasoning agents engage in [26]. The probability \(P(c(t,j))\) that the agent chooses to watch content \(j\) of valence \(v_{j}\) at time \(t\) is: \[P(c(t,j))=\frac{1}{1+\exp(-\gamma\overline{b}(t)v_{j})}.
\tag{8}\] After choosing what content to engage with, the agent updates its beliefs with the same rules as before. In this case, the agent's opinion potentially changes all the time, as the agent always considers new arguments. Therefore, we set a time horizon \(T\) after which we stop the simulation. The main difference with the offline setting is that the agent gets more opportunities to choose the content it wants to engage with, and its initial choice is itself subject to confirmation bias. We will consider in particular the case of users who are careful to balance their selection of news (i.e. for whom \(\gamma=0\)) in order to clearly distinguish between the effects of selection and belief bias.6 Footnote 6: While motivated reasoning and confirmation bias can both apply to conscious or less-conscious processes, motivated reasoning is generally understood to be the former [26] and confirmation bias to be the latter [32]. Therefore, it is possible that the selection of information includes some motivated reasoning as well as being subject to confirmation bias. We can actually represent the state of the agent by the balance of its beliefs minus the prior belief: \(\overline{b}-\alpha=B_{+}-B_{-}\). For a time horizon \(T\), the state \(S\) is an integer in \(\{-2qT,\ldots,2qT\}\). For any state in \(\{-2q(T-1),\ldots,2q(T-1)\}\), we can compute the transition probabilities to all the states in \(\{-2qT,\ldots,2qT\}\). Then, starting with the system in the state \(S=0\) with probability 1, we can compute the probability distribution over all the states at any step \(t\leq T\). A detailed description of the Markov chain can be found in Appendix 0.A.2. ## 3 Simulation results ### Increased extremism in the online setting **Indicator.** The indicator of extremism \(E\) is defined at time \(t\) from the probability distribution \(\mu_{t}(\overline{b})\) of the agent holding the balance of beliefs \(\overline{b}\in\{-B,\ldots,B\}\) as the average of the absolute value of the attitude \(a(\overline{b})\) (see Eq. 4): \[E=\sum_{b=-B}^{B}\mu_{t}(\overline{b})\left|a(\overline{b})\right|, \tag{9}\] Indeed, if this indicator is close to 1, it means that the agent's attitude is very likely to be 1 or -1, in which case the agent accepts all arguments from their side and discards any argument from the opposite side, which fits the usual view of an extreme behaviour. **Examples of comparisons between online and offline settings.** Figure 1 provides examples comparing the evolution of normalized opinions between the offline and online settings. In these examples, the number of considered arguments at each iteration is 4 in both cases. In the offline setting, two arguments of each sign are chosen at random at each iteration. In the online setting, the agent chooses 4 arguments at random, with no selection bias (\(\gamma=0\)). In the offline model, the total number of arguments is \(N_{a}=12\). In the online model, the number of arguments increases by 4 at each iteration, as they are assumed to be new. The bias in the choice of the arguments to consider is \(\gamma=0\). In these examples, polarization is much stronger in the online setting. This is reflected by the extremism indicator, which is lower than 0.5 in the offline examples and higher than 0.7 in the online examples. Figure 2 shows the evolution of the logarithm of the distribution of balance of beliefs for the same examples as Figure 1. Indeed, after a few iterations, especially in the online case, without the logarithm, the values of the distribution are very small and difficult to distinguish.
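For illustration, the extremism indicator of Eq. (9) can be computed directly from any distribution \(\mu_{t}\) over integer balances of beliefs; the distribution used in the sketch below is a made-up example, not simulation output.

```python
import numpy as np

def extremism(mu, b, beta):
    a = 2.0 * (1.0 / (1.0 + np.exp(-beta * b)) - 0.5)   # attitude, Eq. (4)
    return np.sum(mu * np.abs(a))                        # Eq. (9)

b = np.arange(-20, 21)                                   # balances of beliefs
mu = np.exp(-np.abs(b) / 5.0); mu /= mu.sum()            # toy distribution
print(extremism(mu, b, beta=0.3))
```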
These figures show that, in the online case, the agent has a high probability to reach balances of beliefs which are higher than 20 or lower than -20, whereas in the offline case, the opinions are most likely between -3 and +3. In the online case, the distribution is steadily enlarged at each iteration, which increases the likelihood of extremism. As the number of arguments is fixed in the offline case, the tendency to become more extreme comes only from the discovery of unknown arguments. **Effects of different parameters on extremism.** Figure 3 shows how the extremism indicator evolves with the number of iterations \(t\), for different values of the belief confirmation bias parameter \(\beta\). For the considered values of the parameters, in the offline case, extremism remains almost constant after 4 or 5 iterations. On the contrary, in the online case, extremism continues to strongly increase until the last iterations for all values of \(\beta\). This increase gets smaller when the extremism reaches values that are close to 1, as for \(\beta=0.5\). This constant increase of the extremism in the online setting can be explained by the shape of the distribution of balances of beliefs over time, as shown in Figure 2 (left panels). Indeed, at each iteration, there is a high probability to get a stronger balance of beliefs by adding new positive or negative beliefs. In the offline setting, this increase is constrained by the limited number of items. Figure 4 shows the effect of \(\gamma\), the bias in the choice of the items (online setting), on the evolution of the extremism for different values of the bias in the belief of items \(\beta\), all other parameters being the same as in Figure 1. The result shows a substantial increase of the extremism for \(\gamma=0.2\) and even more for \(\gamma=0.4\). Figure 1: Evolution of the distribution of attitude in the online and offline Markov models. In both models, the number of arguments considered at each iteration is \(2q=4\) and the initial bias \(\alpha=0\). The total number of arguments in the offline model is \(N_{a}=12\). In the online model, the choice bias is \(\gamma=0\). In each case, the extremism indicator \(E\) is computed at the last iteration. Figure 2: Evolution of the logarithm of the distribution of the balance of beliefs \(\overline{b}(t)\) in the online and offline Markov models. The parameters are the same as in Figure 1. In both models, the number of arguments considered at each iteration is \(2q=4\) and the initial bias \(\alpha=0\). The total number of arguments in the offline model is \(N_{a}=12\). In the online model, the choice bias is \(\gamma=0\). In each case, the extremism indicator \(E\) is computed at the last iteration. The effect of parameter \(\gamma\) is actually very similar to the effect of a selection algorithm that could be implemented by the platform. It plays the role of a filter prior to a more careful attention devoted to the item. ### Amplification of prejudice in online setting **Indicator.** Let \(\mu_{t}(\overline{b})\) be the probability distribution of the balance of beliefs \(\overline{b}\), varying between \(-B+\alpha\) and \(B+\alpha\) at iteration \(t\). The prejudice \(P(t)\) is defined as the average of the attitude: \[P(t)=\sum_{b=-B+\alpha}^{B+\alpha}a(\overline{b})\mu_{t}(\overline{b}), \tag{10}\] where \(a(\overline{b})\) is the attitude associated with the balance of beliefs as specified by Eq. 4.
The value of this indicator is in \([-1,1]\) and expresses the average tendency to believe positive or negative items. It is \(0\) when the distribution is symmetric, that is, when the probabilities to believe negative or positive items are the same (as is always the case when the prior belief \(\alpha=0\)). It is positive if the agent has a higher probability to believe positive than negative items, and negative in the opposite case. Note that, at \(t=0\), \(P(0)\) is simply the attitude at \(t=0\), because the distribution is all concentrated (Dirac distribution) at the value \(\alpha\). Figure 3: Extremism indicator during iterations. Left panel: offline. Right panel: online. The parameters are the same as in Figure 1. Figure 4: Extremism indicator during iterations for the online setting. Effect of the bias in item choice parameter \(\gamma\). The other parameters are the same as in Figure 1. In the following, we measure the relative increase \(I_{P}(t)\) of the prejudice over time, which is: \[I_{P}(t)=1-\frac{P(0)}{P(t)} \tag{11}\] In other words, \(I_{P}(t)\) expresses to what extent the prejudice is amplified over time. **Examples.** Figure 5 shows examples of model runs with an initial bias \(\alpha=0.5\). In this case the distribution is asymmetric. In both cases the initial prejudice is \(P(0)=0.05\). However, at \(t=10\), we have \(P(10)=0.08\) for the offline setting and \(P(10)=0.18\) for the online setting. This example suggests that the initial prejudice tends to increase more with the interactions in the online setting. **Systematic simulations.** Figure 6 shows the relative increase of prejudice \(I_{P}(t)\) for different values of \(\beta\) and for a prior belief \(\alpha=0.5\). In the online setting, the figure shows the results for 3 values of the bias in the choice of the items, \(\gamma\in\{0,0.2,0.4\}\). When \(\gamma=0\), the relative increase of prejudice is higher in the online than in the offline setting for \(0.1\leq\beta\leq 0.25\). However, for \(\beta>0.25\), the final relative increase of prejudice is higher in the offline setting. Overall, the relative amplification of the prejudice is even higher when the bias on belief \(\beta\) is small (for \(\gamma=0.2\), and even more for \(\gamma=0.4\)), while the opposite is true in the offline case. Moreover, the amplification of the prejudice is higher online than offline for all values of \(\beta\), and the increase takes place more rapidly (after 6 or 7 iterations). ## 4 Discussion The Web 1.0 revolution brought regular access to a wide diversity of constantly renewed content about any topic to the billions of people connected to the internet. Our model suggests that this access to online content increases the effect of cognitive biases (i.e., confirmation bias and motivated reasoning), which amplifies bi-polarization, extremization and prejudice. Algorithmically-induced selective exposure (associated with Web 2.0) certainly increases these effects even more, but the role of the almost infinite choice of contents offered by online environments should not be neglected. This point resonates with effects of 'globalization' or network size in other modeling traditions [1, 11, 12, 19, 20, 23, 24] and invites us to refrain from putting the blame for polarization exclusively or even majorly on web personalization [22]. The model introduced in this paper is based on a recent contribution that experimentally calibrated the agents' influence-response function after exposure to a balanced set of arguments [8].
Figure 5: Initial bias \(\alpha=0.5\). Evolution of the distribution of normalized opinions in the online and offline Markov models. In both models, the bias on item belief is \(\beta=0.2\). In each case, the prejudice indicator \(P\) is computed at the last time step. The approach relies on several assumptions that deserve further investigation, in our view. First, all beliefs are considered dichotomous. The agents either completely believe or completely reject an argument. However, it is often the case that we believe an argument to a greater or lesser degree. It would be important to investigate if a model introducing several levels of beliefs would show the same effects. Second, all arguments hold the same weight when the aggregate opinion is constructed. How opinions are precisely constructed from a set of arguments is an open empirical question, but as some motivated reasoning scholars point to the active construction of the same known arguments to reach different conclusions [2, 13, 26], it is likely that there is some form of weighing of arguments involved. Our model showed that different opinions can emerge given the same information, if that information is evaluated based on some biased belief. When weighing gets involved, different opinions can also emerge between agents who believe the same information. Such a model could resonate well with the reality of radically different interpretations accepting the same facts, which we have seen during even the largest of societal challenges such as COVID-19 or in the climate change mitigation debate. While enriching the model with more complex processes and calibrating those processes to empirical micro and macro-level data could improve our understanding of cognition and information consumption in different contexts, we believe that the main conclusions from this paper are likely to hold up. The discussion framed by traditional media focuses on a limited set of arguments, because they should meet the interest of a wide audience. Introducing very specific points or details introduces a high risk to lose the average reader or spectator. With the freedom to browse a virtually infinite set of arguments, online information consumers can find large sets of items that fit their specific interest perfectly, dig deeper and deeper in their own direction, ultimately forging a robust, likely extreme, opinion. The work presented here fits into a broader tradition of social simulation models that have tried to tackle the question of globalization from different angles. Our findings resonate with those from the bounded confidence tradition [11, 19] where confirmation bias can produce the empirical macro-patterns observed online [12]. They echo results from nominal opinion models in the tradition of Axelrod [1], with globalization causing fragmentation, not consensus [24], and online communication creating large cleavages between those who think differently [5, 23]. Figure 6: Relative increase of prejudice \(I_{P}(t)=1-\frac{P(0)}{P(t)}\). The non-specified parameters are the same as in Figure 1. What is more, they fit the results from the Construct-model tradition [9] that, much like _PAT_, considers the adoption of facts based on information processing biases [30]. We propose a number of possible avenues for future theoretical and experimental research based on the model discussed here, that stand between this abstract model and the patterns we see at a macro-scale in reality [15].
Theoretically, the addition of information filtering systems can be an interesting extension to the model. Web personalization can be implemented in many ways--for instance, as network-based, popularity-based or cultural distance-based filters--and the choice of a particular filtering procedure may have profound impacts on the dynamics of opinions [22]. One could also imagine integrating this model with the model from Geschke et al. [17], who considered individual, social and technological filters. In order to capture aspects of Web 2.0, the model should be enriched by incorporating social networks and social influence components [7, 14]. Networks dictate how information spreads through a population. Both social structure and the way in which communication is structured vary between on and offline contexts and may impact model dynamics [23]. Finally, including multiple opinions that co-evolve [6] could create more complex agent profiles and possibly overcome amplification of initial bias. The research presented here also resulted in promising directions for experimental research. While this model is built upon empirical work by Banisch and Shannon [8], the previous work did not consider crowded (nor infinite) information environments. What is more, the model here is extended with a selection bias (modeled as \(\gamma\) in Eq. 8). An extension of the original experiment [35] could consider the information selection process introduced here and control the amount of information that individuals can choose to engage with. This would allow one to empirically calibrate the selection bias \(\gamma\) and validate the conjectures of this paper. The digital revolution has come with many challenges, of which increasing polarization is just one. Theoretical explanations like the one outlined in this paper help to understand the logic of assumed causal links, and point to where the culprits may be. We have shown that we need not assume that polarization arises from advanced Web 2.0 selective exposure, complex social network topologies or heterogeneous influence-response functions, but that biased cognitive processing suffices to generate extremization. Interpret these findings at your own risk. ## Acknowledgements Marijn Keijzer acknowledges IAST funding from the French National Research Agency (ANR) under the Investments for the Future (Investissements d'Avenir) program, grant ANR-17-EURE-0010. ## Appendix 0.A: Details of the Markov models ### Model with offline interactions The model includes \(N_{a}=2n_{a}\) arguments in total and, during \(T\) time steps, at each time step the agent receives information about \(q\) positive and \(q\) negative arguments, chosen at random among the \(N_{a}\) possible arguments. The different states of the agent are defined by four integers \((k_{p},k_{n},b_{p},b_{n})\), where \(k_{p}\in\{0,...,n_{a}\}\) is the number of positive arguments known by the agent, \(k_{n}\in\{0,...,n_{a}\}\) is the number of negative arguments known by the agent, \(b_{p}\in\{0,...,k_{p}\}\) is the number of positive arguments believed by the agent and \(b_{n}\in\{0,...,k_{n}\}\) is the number of negative arguments believed by the agent. Let \(\mathcal{S}\) be the total set of these possible states. Let \(\mu_{t}(k_{p},k_{n},b_{p},b_{n})\) be the probability that the agent is in state \((k_{p},k_{n},b_{p},b_{n})\). We assume \(\mu_{0}(0,0,0,0)=1\) and, for all the other states, \(\mu_{0}(k_{p},k_{n},b_{p},b_{n})=0\).
Then, at each step \(t\in\{1,...,T-1\}\), we compute \(\mu_{t+1}\) from \(\mu_{t}\). This is done as follows: 1. We initialise the difference distribution \(\delta(k_{p},k_{n},b_{p},b_{n})=0\), for all \((k_{p},k_{n},b_{p},b_{n})\in\mathcal{S}\). 2. For all states \((k_{p},k_{n},b_{p},b_{n})\in\mathcal{S}\) such that \(\mu_{t}(k_{p},k_{n},b_{p},b_{n})>0\): 1. There are \(n_{a}-k_{p}\) unknown positive arguments and \(n_{a}-k_{n}\) unknown negative arguments. The maximum number of new arguments received is thus \(m_{p}=\min(q,n_{a}-k_{p})\) positive and \(m_{n}=\min(q,n_{a}-k_{n})\) negative arguments. As the arguments are assumed randomly chosen, the probability that the agent gets a new positive argument is \(P_{kp}=\frac{m_{p}}{n_{a}}\) and the probability to get \(i_{p}\in\{0,...,m_{p}\}\) new arguments is (binomial formula): \[P(k_{p}^{\prime}=k_{p}+i_{p})=\binom{m_{p}}{i_{p}}P_{kp}^{i_{p}}(1-P_{kp})^{m_{p}-i_{p}}.\] (12) Similarly, the probability that the agent gets a new negative argument is \(P_{kn}=\frac{m_{n}}{n_{a}}\) and the probability to get \(i_{n}\in\{0,...,m_{n}\}\) new arguments is: \[P(k_{n}^{\prime}=k_{n}+i_{n})=\binom{m_{n}}{i_{n}}P_{kn}^{i_{n}}(1-P_{kn})^{m_{n}-i_{n}}.\] (13) 2. For each new positive argument among \(i_{p}\), the probability to believe this argument is: \[P_{bp}=\frac{1}{1+\exp(-\beta(b_{p}-b_{n}+\alpha))}.\] (14) Hence, the probability to believe \(j_{p}\in\{0,...,i_{p}\}\) positive arguments among the \(i_{p}\) new ones is: \[P(b_{p}^{\prime}=b_{p}+j_{p})=\binom{i_{p}}{j_{p}}P_{bp}^{j_{p}}(1-P_{bp})^{i_{p}-j_{p}}.\] (15) Similarly, for each new negative argument among \(i_{n}\), the probability to believe this argument is: \[P_{bn}=\frac{1}{1+\exp(\beta(b_{p}-b_{n}+\alpha))}.\] (16) Hence, the probability to believe \(j_{n}\in\{0,...,i_{n}\}\) negative arguments among the \(i_{n}\) new ones is: \[P(b_{n}^{\prime}=b_{n}+j_{n})=\binom{i_{n}}{j_{n}}P_{bn}^{j_{n}}(1-P_{bn})^{i_{n}-j_{n}}.\] (17) 3. For all considered values \(i_{p}\), \(i_{n}\), \(j_{p}\), \(j_{n}\), let: \[dP=\mu_{t}(k_{p},k_{n},b_{p},b_{n})\,P(k_{p}^{\prime}=k_{p}+i_{p})P(k_{n}^{\prime}=k_{n}+i_{n})P(b_{p}^{\prime}=b_{p}+j_{p})P(b_{n}^{\prime}=b_{n}+j_{n}).\] (18) We update \(\delta\) as follows: \[\delta(k_{p},k_{n},b_{p},b_{n}) :=\delta(k_{p},k_{n},b_{p},b_{n})-dP\] (19) \[\delta(k_{p}+i_{p},k_{n}+i_{n},b_{p}+j_{p},b_{n}+j_{n}) :=\delta(k_{p}+i_{p},k_{n}+i_{n},b_{p}+j_{p},b_{n}+j_{n})+dP.\] (20) 3. We update the distribution \(\mu_{t}\). For all states: \[\mu_{t+1}(k_{p},k_{n},b_{p},b_{n})=\mu_{t}(k_{p},k_{n},b_{p},b_{n})+\delta(k_{p},k_{n},b_{p},b_{n}).\] (21) Finally, we compute the distribution of probabilities to have a bias of \(2\left(\frac{1}{1+\exp(-\beta(b+\alpha))}-\frac{1}{2}\right)\), as the sum of \(\mu(k_{p},k_{n},b_{p},b_{n})\) over all the couples \((b_{p},b_{n})\) such that \(b_{p}-b_{n}=b\). ### Model with online interactions In this case, the set of states is simpler: the states correspond to all the possible values of the balance of beliefs \(b_{p}-b_{n}\), which are \(\mathcal{S}=\{-2qT,\ldots,2qT\}\), where \(T\) is the number of iterations in which the agent consults \(2q\) items on the platform. We define the probability distribution \(\mu_{t}(b)\), for \(b\in\mathcal{S}\). Initially, \(\mu_{0}(0)=1\), and \(\mu_{0}(b)=0\) for \(b\neq 0\). Then, at each step \(t\in\{1,...,T-1\}\), we compute \(\mu_{t+1}\) from \(\mu_{t}\). This is done as follows: 1. We initialise a difference distribution such that \(\delta(b)=0\) for all \(b\in\mathcal{S}\), 2. for each state \(b\) such that \(\mu_{t}(b)>0\): 1.
we compute the probability to choose a positive content: \[P_{c}=\frac{1}{1+\exp(-\gamma(b+\alpha))}.\] (22) Then, for \(c_{p}\in\{0,...,2q\}\), the probability to choose to consult \(c_{p}\) positive contents and \(2q-c_{p}\) negative contents is: \[P_{cp}=\binom{2q}{c_{p}}P_{c}^{c_{p}}(1-P_{c})^{2q-c_{p}}.\] (23) 2. Then, the probability to believe \(j_{p}\in\{0,...,c_{p}\}\) contents among the chosen \(c_{p}\) is: \[P(b_{p}^{\prime}=b_{p}+j_{p})=\binom{c_{p}}{j_{p}}P_{b}^{j_{p}}(1-P_{b})^{c_{p}-j_{p}},\] (24) with: \[P_{b}=\frac{1}{1+\exp(-\beta(b+\alpha))}.\] (25) Similarly, setting \(c_{n}=2q-c_{p}\), the probability to believe \(j_{n}\in\{0,...,c_{n}\}\) contents among the chosen \(c_{n}\) is: \[P(b_{n}^{\prime}=b_{n}+j_{n})=\binom{c_{n}}{j_{n}}P_{n}^{j_{n}}(1-P_{n})^{c_{n}-j_{n}},\] (26) with: \[P_{n}=\frac{1}{1+\exp(\beta(b+\alpha))}.\] (27) 3. For all the considered \((c_{p},c_{n},j_{p},j_{n})\), let: \[dP=\mu_{t}(b)P_{cp}P(b_{p}^{\prime}=b_{p}+j_{p})P(b_{n}^{\prime}=b_{n}+j_{n})\] (28) We update the distribution \(\delta\) as follows: \[\delta(b) :=\delta(b)-dP\] (29) \[\delta(b+j_{p}-j_{n}) :=\delta(b+j_{p}-j_{n})+dP.\] (30) 3. We update the distribution \(\mu_{t}\). For all states: \[\mu_{t+1}(b)=\mu_{t}(b)+\delta(b).\] (31) Finally, as with the offline model, the distribution of probabilities to have a bias of \(2\left(\frac{1}{1+\exp(-\beta(b+\alpha))}-\frac{1}{2}\right)\) is \(\mu(b)\).
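The following is a hedged Python sketch of the online Markov chain above (the offline chain of Appendix 0.A.1 follows the same pattern with the four-integer state); it is our own re-implementation for illustration, not the authors' code. Instead of the difference distribution \(\delta\), the sketch re-accumulates the full probability mass at each step, which is equivalent since every state redistributes all of its mass; the state index is offset so that the balance \(b=\mathrm{index}-2qT\) stays within array bounds.

```python
import numpy as np
from math import comb

def propagate_online(T, q, alpha, beta, gamma):
    size = 4 * q * T + 1
    off = 2 * q * T                            # balance b = index - off
    mu = np.zeros(size); mu[off] = 1.0         # initially, mu_0(0) = 1
    for _ in range(T):
        new = np.zeros(size)
        for idx, p_state in enumerate(mu):
            if p_state == 0.0:
                continue
            b = idx - off
            Pc = 1.0 / (1.0 + np.exp(-gamma * (b + alpha)))   # Eq. (22)
            Pb = 1.0 / (1.0 + np.exp(-beta * (b + alpha)))    # Eq. (25)
            Pn = 1.0 / (1.0 + np.exp(+beta * (b + alpha)))    # Eq. (27)
            for cp in range(2 * q + 1):                       # chosen positive items
                Pcp = comb(2 * q, cp) * Pc**cp * (1 - Pc)**(2 * q - cp)   # Eq. (23)
                cn = 2 * q - cp
                for jp in range(cp + 1):                      # believed positives, Eq. (24)
                    Pjp = comb(cp, jp) * Pb**jp * (1 - Pb)**(cp - jp)
                    for jn in range(cn + 1):                  # believed negatives, Eq. (26)
                        Pjn = comb(cn, jn) * Pn**jn * (1 - Pn)**(cn - jn)
                        new[idx + jp - jn] += p_state * Pcp * Pjp * Pjn   # Eqs. (28)-(31)
        mu = new
    return mu

mu = propagate_online(T=10, q=2, alpha=0.5, beta=0.2, gamma=0.0)
print(mu.sum())                                 # probability is conserved (~1.0)
```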
2309.01024
Spectral Libraries for Analyzing Spectra of Low-Metallicity Galaxies
We present a set of isochrone-tailored spectral libraries for analyzing composite spectra of low-metallicity massive star clusters or starburst galaxies. Specifically, we have computed non-LTE spectra for stars of all initial masses for isochrones at metallicities Z = 0.006, 0.002, and 0.0004, with and without considering rotation. These isochrones were constructed by the Geneva group (Ekström et al., 2011; Georgy et al., 2013; Groh et al., 2019; Eggenberger et al., 2020). We also present a Python program for integrating individual spectra for an arbitrary initial mass function.
Ivan Hubeny, Sara R. Heap
2023-09-02T21:24:54Z
http://arxiv.org/abs/2309.01024v2
# Spectral Libraries for Analyzing Spectra of Low-Metallicity Galaxies ###### Abstract We present a set of isochrone-tailored spectral libraries for analyzing composite spectra of low-metallicity galaxies. Specifically, we have computed synthetic spectra for stars of all initial masses for isochrones at metallicities \(Z=0.002\) and \(Z=0.0004\), with and without considering rotation, constructed by the Geneva group (Ekström et al., 2011; Georgy et al., 2013; Groh et al., 2019). We also present a Python program for integrating the individual spectra with a given initial mass function. ## 1 Introduction Predicting a composite spectrum of a stellar system (e.g. a stellar cluster, starburst, whole galaxy or part of it), composed of stars of essentially the same age and various initial masses, involves several basic ingredients: * evolutionary models and consequent isochrones for a set of ages of the system; * an initial mass function, i.e. a probability distribution of stars with their initial mass; and * synthetic spectra of individual stars. The isochrones for systems with various initial metallicities were presented by the Geneva group (Ekström et al., 2011; Georgy et al., 2013; Groh et al., 2019). For practical purposes, an isochrone describes a dependence of basic stellar parameters on the initial mass for a given age. These basic parameters are, among others, the effective temperature, surface gravity, radius (or the total luminosity), and chemical abundances of the most important chemical species. The spectra of the individual stars are usually taken from independently calculated, general-use spectral libraries, which typically contain spectra for a discrete set of effective temperatures, \(T_{\rm eff}\), surface gravities, \(\log g\), and metallicities,\({}^{1}\) \(Z\). However, there are several inconveniences and/or drawbacks in using such existing spectral libraries: Footnote 1: By a metallicity we understand here a single number that represents the ratio of chemical abundances of all species except H and He with respect to the solar abundance. The solar metallicity is taken as \(Z_{\odot}=0.014\). * one needs to perform a number of 2- or 3-dimensional interpolations to determine a stellar spectrum for the stellar parameters (effective temperature, surface gravity, metallicity) stipulated by the isochrone table; * the parameter space covered by the library may be insufficient; for instance, some \((T_{\rm eff},\log g)\) pairs required by the isochrone table may be beyond the range of the adopted spectral library; * the evolution of a star changes the chemical abundances of some species differently than others, an obvious example being an increase of nitrogen and a decrease of carbon abundance with age. The pre-constructed spectral libraries do not typically consider such selective abundance patterns. * for certain studies, existing spectral libraries may have insufficient spectral resolution for the intended study. These problems (or at least inconveniences) are readily lifted if a spectral library is specifically tailored for a given isochrone, so that there are no gaps in the parameter space and no interpolations are needed. The spectral resolution may also be chosen to be sufficiently high to allow even very detailed and accurate analysis. On the other hand, this may seem to be an overkill, or an unnecessary burden on computer memory, because in some instances the basic stellar parameters for several considered initial masses are very close, and thus the synthetic spectra may also be very similar.
However, computer memory, both internal and external, is readily available and becoming increasingly cheap, so this concern is no longer serious. ## 2 New isochrone-tailored spectral libraries In view of the reasons outlined above, we present here a set of synthetic spectral libraries specifically tailored to published isochrones (Ekström et al., 2011; Georgy et al., 2013 - hereafter referred to as Geneva isochrones). Specifically, we have constructed synthetic spectra for all initial masses, taking the basic stellar parameters (\(T_{\rm eff}\), \(\log g\), chemical abundances, and luminosity) from the isochrone tables. We leave a more detailed description of the modeling procedures (underlying model atmospheres; details of spectrum synthesis) to Appendix A. Here we only stress that the presented spectra are _photospheric_ spectra, i.e. a possible stellar wind is not taken into account. In the context of spectrum synthesis, the relevant quantities extracted from the isochrone table are: effective temperature, surface gravity, luminosity, radius, and current chemical abundances of the most important species, namely He, C, N, O, Ne, and Al, and possibly rotational velocity, if rotation is being considered in the stellar evolution. Our current libraries include spectra for two different initial metallicities, \(Z=0.002\) and \(Z=0.0004\), each with and without considering rotation. In all cases, the isochrones are presented for 5 ages, \(\log(\mbox{age})=6.0,6.3,6.5,6.7\), and 7.0 years. Therefore, there are altogether 20 sets of spectra, stored in 20 individual directories. The individual isochrones are summarized in Table 1, which also shows the number of individual initial masses considered for a given isochrone. The naming convention is very simple and self-explanatory. For instance, Z002t6.7R contains a set of spectra for the metallicity \(Z=0.002\), at age \(t=10^{6.7}\) years, and with rotation (label R). The directory is composed of a set of synthetic spectra for all initial masses considered in the corresponding Geneva table, together with an overview table with the name *.tab, in this example Z002t6.7R.tab, that lists a subset of values from the corresponding Geneva isochrone table which are directly relevant to the spectrum synthesis. The individual columns are: * initial mass [in \(M_{\odot}\)]; * effective temperature [K]; * gravity acceleration at the surface [cm s\({}^{-2}\)]; * logarithm of the total luminosity [in \(L_{\odot}\)]; * equatorial rotational velocity [km s\({}^{-1}\)]; * stellar radius [in \(R_{\odot}\)]; * logarithm of the mass loss rate [in \(M_{\odot}\)yr\({}^{-1}\)]; \begin{table} \begin{tabular}{l c c c c c} \hline Name/age & 6.0 & 6.3 & 6.5 & 6.7 & 7.0 \\ \hline Z002t[age] & 52 & 52 & 115 & 125 & 137 \\ Z002t[age]R & 52 & 52 & 52 & 120 & 127 \\ Z0004t[age] & 52 & 52 & 125 & 142 & 162 \\ Z0004t[age]R & 52 & 52 & 124 & 129 & 146 \\ \hline \end{tabular} Here, [age] is a 3-element string representing log(age) in years, e.g., 6.0, 6.3, etc. \end{table} Table 1: Number of considered initial masses in the individual isochrone sets * the last 5 columns are the abundances of He, C, N, O, and Ne relative to the solar abundance. A screenshot of the first several lines of one such file is shown in Fig. 1. Figure 1: A screenshot of the beginning of an example *.tab file. All spectra are constructed with a two-point resolution, \(R=10,000\), between 200 and 10,000 Å, which allows for a large variety of studies.
With this setup, each file contains the emergent absolute monochromatic luminosity at 78,324 wavelength points. The naming convention for the individual models is given by an example. For instance, the file Z002t6.7Rm27.583.spec contains the synthetic spectrum for a star with the initial mass \(M_{\rm ini}=27.583\,M_{\odot}\). Each such file contains three or four columns, depending on whether the isochrone is constructed without considering rotation (3 columns), or with rotation (4 columns). For the spectra without rotation, the individual columns are: * wavelength [Å]; * monochromatic luminosity [erg s\({}^{-1}\)Å\({}^{-1}\)]; * monochromatic luminosity in the continuum (same units). We chose to specify the continuum spectrum at all wavelengths. This is not exactly necessary because the continuum is very smooth compared to the true synthetic spectrum, but this arrangement is very convenient because, again, it avoids interpolating in wavelength. For the spectral sets corresponding to the isochrones constructed with rotation, the meaning is analogous but requires additional explanation, namely: * wavelength [Å]; * monochromatic luminosity [erg s\({}^{-1}\)Å\({}^{-1}\)], for a non-rotating star. Although the isochrone assumes that stars rotate, this luminosity is included in order to allow the user to consider some other treatments of rotation, or to adopt some more sophisticated treatment of the distribution of rotational velocities and rotational axes. * monochromatic luminosity in the continuum; * monochromatic luminosity, computed taking into account the rotation, with the rotational velocity taken as \(v\sin i\approx 0.6\,v_{\rm eq}\), which represents a mean of projected rotational velocities assuming the rotation axes are distributed randomly. As we will describe in § 3, and in Appendix B, we provide not only the individual initial-mass spectra, but also the IMF-integrated spectra, called imfinteg spectra. Some users may want first to examine the imfinteg files, which contain the co-added spectra from all initial masses of an isochrone with a Salpeter IMF applied. Altogether, there are 20 files for 2 metallicities (Z002 or Z0004), 5 ages (log age = 6.0, 6.3, 6.5, 6.7, and 7.0), and two rotation statuses (non-rotating or rapidly rotating). Inter-comparison of the spectral isochrones of the five different ages enables one to "watch" how aging affects lines of interest. Comparison of spectral isochrones of rotating stars with those of non-rotating stars of the same age can show surprising differences, because rapidly rotating stars have longer main-sequence lifetimes. The format of imfinteg spectra has 4 columns for non-rotating stars giving: wavelength, total flux, continuum flux, and flux normalized to the continuum. Imfinteg spectra for rapidly rotating stars have 5 columns, with the 5th (rightmost) column giving the total flux without rotational broadening applied. It is useful in untangling differences due to composition from differences in line strength. There is much to be learned from imfinteg spectra, but sooner or later, users will want to consult the individual initial-mass spectra from which the imfinteg spectra are formed. These spectra are invaluable in learning why a spectral line is strong or weak, or why the profile of a spectral line has the shape it does. ## 3 Creating Custom Integrated Spectra The individual spectra for stars with given initial masses are the basic ingredient of the present library.
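As a minimal sketch of reading these files (assuming only the column layout described above; the specific file paths are illustrative), one can use numpy:

```python
import numpy as np

# one spectrum from a set constructed with rotation (4 columns)
wave, lum_norot, cont, lum_rot = np.loadtxt("Z002t6.7R/Z002t6.7Rm27.583.spec",
                                            unpack=True)
norm = lum_rot / cont                  # spectrum normalized to the continuum

# overview table: initial mass, Teff, log g, log L, v_eq, R, log Mdot, abundances
tab = np.loadtxt("Z002t6.7R/Z002t6.7R.tab")   # add skiprows=... if a header line is present
m_ini, teff, logg = tab[:, 0], tab[:, 1], tab[:, 2]
```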
However, to obtain astronomically interesting results, we have to provide a mechanism to generate integrated spectra for the whole stellar system. To this end, we present a Python program, iso.py, that contains the routine integ. This routine integrates the individual spectra of a given isochrone, for a specified initial mass function (IMF), represented by a power law, and for specified lower and upper initial mass cutoffs. Some details of the integration procedure are presented in Appendix B. The program requires that all the *.spec files and the corresponding *.tab file are located in the same subdirectory, and that all the filenames are the same as in the standard distribution of the files; otherwise the program would not work. The program is called with several parameters:
* - string, optional; if set, specifies the directory where the spectral set is located. Default is './', i.e. the current directory.
* pref - string that specifies the prefix of the file names with the spectra, i.e. the part of the filename common for all initial mass spectra. In other words, it is the part of the filenames before the mass indication, e.g., the string before m43.756.spec.
* alpha - float, optional; sets the IMF power-law index. If not specified, the default is alpha=\(-1.35\), i.e. the Salpeter initial mass function.
* param - integer, optional; sets the index of the column of the given spectral file that contains the integrated quantity. The default is param=1, i.e. one integrates the full synthetic spectrum. For param=2, the program integrates the continuum luminosity.
* mmin - float, optional. If set, it specifies the minimum mass for integrations (in \(M_{\odot}\)). The default is the lowest mass included in the isochrone table (typically 0.8).
* mmax - float, optional. Analogous to mmin, but for the maximum mass. The default is mmax given by the maximum initial mass of the isochrone table.
* wmin - float, optional. Sets the lowest wavelength for the integrated spectrum. Default is wmin=0.
* wmax - float, optional. Analogous for the highest wavelength. Default 10001.
The program returns three arrays:
* wave - wavelengths [in Å];
* spec - integrated spectrum at these wavelengths [erg s\({}^{-1}\) Å\({}^{-1}\)];
* cont - normalized luminosity, i.e., the integrated luminosity divided by the integrated luminosity in the continuum.
The integrated spectra are normalized to 1 solar mass, so the total luminosity of a stellar system is obtained by multiplying spec or cont by the total mass, in units of the solar mass. The program also produces a file with the name composed of pref and .tot that contains the integrated spectrum. The file has four or five columns, depending on whether the isochrone is constructed for non-rotating or rotating stars, namely:
* wavelength [Å];
* integrated monochromatic luminosity [erg s\({}^{-1}\)Å\({}^{-1}\)]; for isochrones with rotation, it is an integral of _rotated_ spectra;
* integrated monochromatic luminosity in the continuum (same units);
* integrated monochromatic luminosity normalized to the continuum.
The files for isochrones with rotation contain in addition the fifth column,
* the integrated monochromatic luminosity evaluated without taking into account stellar rotation.
For example, a command (in plain Python) >>> import iso >>> wave,spec,cont = iso.integ(pref='Z002t6.0') produces an integrated spectrum for the Z002t6.0 isochrone for the Salpeter IMF, \(\alpha=-1.35\), and for all initial masses contained in the table.
Besides the returned parameters wave, spec, cont, the program also produces the file Z002t6.0.tot, whose content is described above. This example assumes that one runs Python in the directory that includes the spectral files for the appropriate isochrone. The spectra in this directory have filenames, for instance, Z002t6.0m0.8.spec, Z002t6.0Rm0.884.spec, etc. One may then simply plot the integrated spectrum as

>>> import matplotlib.pyplot as plt
>>> plt.plot(wave,spec)

or with a number of appropriate keyword parameters. For isochrones constructed with rotation, we stress that the non-rotated spectrum is given by the 2nd column, while the rotated spectra are given by the fourth column of the individual *.spec files. Therefore, the analogous integrated spectrum that takes into account rotation, and is computed only between 1100 and 1300 Å, is produced by

>>> iso.integ(pref='Z002t6.0R',param=3,wmin=1100,wmax=1300)

To integrate over the masses between 1 and 30 \(M_{\odot}\), and with the exponent \(\alpha=-1.6\), one issues the command

>>> w,s,c = iso.integ(pref='Z002t6.0R',alpha=-1.6,mmin=1,mmax=30)

and the relative spectrum can be plotted as

>>> plt.plot(w, s/c)

The program iso.py also contains a simple routine specplot, which plots several spectra contained in the package (or elsewhere). It is called with two parameters:

files - string that specifies the list of files to be plotted, in the syntax of the Linux command ls. For instance, for files = 'Z002t6.7Rm3[0-9]*.spec' one plots the content of all files that are selected by means of the command ls Z002t6.7Rm3[0-9]*.spec; that is, spectra for all initial masses between 30 and 40 \(M_{\odot}\) of the isochrone Z002t6.7R. Note: if the program is not called from the directory where the files are located, one has to specify the whole path.

param - integer, optional. Sets the index of the parameter to be plotted. Default param=1, i.e., the full spectrum. With param=2 one plots the relative spectrum.

## 4 Obtaining the spectral libraries and the codes

The files can be downloaded from: [https://www.as.arizona.edu/~hubeny/isochrones](https://www.as.arizona.edu/~hubeny/isochrones) At this site, there are 20 gzipped tar files, each containing a set of spectra for individual initial masses for a given isochrone. In addition, each set contains the corresponding *.tab file, and the *.tot file, which contains an integrated spectrum with the Salpeter IMF - the imfinteg spectra, mentioned at the end of § 2. The latter file is included just as an example of an integrated, IMF-weighted spectrum for the given age. However, using the accompanying Python routine iso.integ, also residing at this site, the user can easily construct different IMF-weighted spectra.

## Appendix A: Model atmospheres and synthetic spectra

We use NLTE metal line-blanketed model atmosphere grids OSTAR2003 (Lanz & Hubeny 2003) and BSTAR2007 (Lanz & Hubeny 2007) that contain a large number of models for various metallicities. The BSTAR grid covers effective temperatures between 15,000 and 30,000 K, and the OSTAR grid between 27,500 and 55,000 K. The highest \(\log g\) is 4.75, while the lowest \(\log g\) depends on effective temperature, and is essentially determined as the lowest gravity for which the model is stable (close to the Eddington limit). Since some basic stellar parameters, \(T_{\rm eff}\) and \(\log g\), stipulated in the isochrone tables are outside the range covered by the OSTAR and BSTAR grids, we have computed such models using the same procedure as used for the original OSTAR and BSTAR grids.
We have also extended the set of NLTE metal line-blanketed models to lower effective temperatures. The new model grid is called LBASTAR (for late-B and A stars), and will be described in a future paper (Hubeny et al., in prep.). The new models are computed using the program tlusty (Hubeny & Lanz 1995), in its newest version (Hubeny & Lanz 2017; Hubeny et al. 2021). The individual spectra are constructed as follows: First, we find the appropriate metallicity set from the OSTAR, BSTAR, and LBASTAR grids. In our case, these are T-models (\(Z/Z_{\odot}=0.1\)). We find the values of \(T_{\rm eff}^{\rm grid}\) and \(\log g^{\rm grid}\) that represent the closest lower and higher values for the given \(T_{\rm eff}\) and \(\log g\), interpolate the four grid model structures to a new one at the \(T_{\rm eff}\) and \(\log g\) given by the isochrone table, and compute a usual synthetic spectrum for this model. As shown by Lanz & Hubeny (2003), the most accurate way to obtain synthetic spectra for a model atmosphere with parameters between the values of \(T_{\rm eff}\) and \(\log g\) considered in the grid is first to interpolate the atmospheric structure (temperature, density, atomic level populations) to get a new model atmosphere, and with this interpolated model atmosphere to calculate a synthetic spectrum, possibly with modified abundances of chemical elements, provided that the modifications of abundances are relatively small (say, up to about 0.3 dex, which is the case here). We use the computer program synspec (Hubeny & Lanz 2011, 2017).

## Appendix B: IMF-Integrated spectra

The synthetic spectrum of the whole stellar system is obtained by a standard procedure, which we very briefly describe here. It is based on adopting a certain initial mass function (IMF). For simplicity, we illustrate the procedure on the simple case of a power-law IMF, such as, for instance, the Salpeter (1955) initial mass function. The IMF is defined by \[f(m)\,dm=w_{0}\,m^{\alpha}\,dm, \tag{1}\] where \(f(m)\,dm\) is the number of stars with masses in the range \((m,m+dm)\), and \(w_{0}\) is a normalization constant; \(\alpha\) is the power-law exponent; for the Salpeter IMF, \(\alpha=-1.35\). The set of individual spectra is integrated to obtain the spectrum of the whole system as \[F_{\lambda}^{\rm tot}=w_{0}\sum_{i}F_{\lambda}(m_{i})\,w(m_{i}), \tag{2}\] where \(w(m_{i})\) is the integration weight that combines the IMF value \(m_{i}^{\alpha}\) with the quadrature weight of the discrete mass grid. The normalization constant is determined by the condition on the total mass of the system, \[M=\int_{m_{\rm min}}^{m_{\rm max}}m\,f(m)\,dm=w_{0}\int_{m_{\rm min}}^{m_{\rm max}}m^{1+\alpha}\,dm=w_{0}\left(m_{\rm max}^{2+\alpha}-m_{\rm min}^{2+\alpha}\right)/(2+\alpha), \tag{3}\] and therefore \[w_{0}=(2+\alpha)\left(m_{\rm max}^{2+\alpha}-m_{\rm min}^{2+\alpha}\right)^{-1}M\equiv w_{00}\,M. \tag{4}\] This procedure is used in the accompanying Python program iso.integ, described in § 3.
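To make the normalization concrete, the following is a minimal sketch of the weighting implied by Eqs. (1)-(4) (a self-contained illustration, not the actual iso.integ source; the trapezoidal mass-grid weights are an assumption about the quadrature):

```python
import numpy as np

def imf_weights(masses, alpha=-1.35, total_mass=1.0):
    """IMF weights for co-adding per-mass spectra, following Eqs. (1)-(4).

    masses: sorted array of initial masses m_i [M_sun].
    Returns w0 * w(m_i), so that sum_i weight_i * F_lambda(m_i)
    approximates the integrated spectrum of a system of mass total_mass.
    """
    m = np.asarray(masses, dtype=float)
    # Normalization w0 from the total-mass condition, Eqs. (3)-(4)
    w0 = (2.0 + alpha) / (m[-1]**(2.0 + alpha) - m[0]**(2.0 + alpha)) * total_mass
    # Quadrature weights of the discrete mass grid (trapezoidal rule; assumed)
    dm = np.gradient(m)
    # w(m_i): IMF value times mass-grid weight
    return w0 * m**alpha * dm

# Example: co-add spectra F[i, :] given at masses m[i]
# F_tot = (imf_weights(m)[:, None] * F).sum(axis=0)
```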
2310.15132
Viability under Degraded Control Authority
In this work, we solve the problem of quantifying and mitigating control authority degradation in real time. Here, our target systems are controlled nonlinear affine-in-control evolution equations with finite control input and finite- or infinite-dimensional state. We consider two cases of control input degradation: finitely many affine maps acting on unknown disjoint subsets of the inputs and general Lipschitz continuous maps. These degradation modes are encountered in practice due to actuator wear and tear, hard locks on actuator ranges due to over-excitation, as well as more general changes in the control allocation dynamics. We derive sufficient conditions for identifiability of control authority degradation, and propose a novel real-time algorithm for identifying or approximating control degradation modes. We demonstrate our method on a nonlinear distributed parameter system, namely a one-dimensional heat equation with a velocity-controlled moveable heat source, motivated by autonomous energy-based surgery.
Hamza El-Kebir, Richard Berlin, Joseph Bentsman, Melkior Ornik
2023-10-23T17:38:34Z
http://arxiv.org/abs/2310.15132v1
# Viability under Degraded Control Authority

###### Abstract

In this work, we solve the problem of quantifying and mitigating control authority degradation in real time. Here, our target systems are controlled nonlinear affine-in-control evolution equations with finite control input and finite- or infinite-dimensional state. We consider two cases of control input degradation: finitely many affine maps acting on unknown disjoint subsets of the inputs and general Lipschitz continuous maps. These degradation modes are encountered in practice due to actuator wear and tear, hard locks on actuator ranges due to over-excitation, as well as more general changes in the control allocation dynamics. We derive sufficient conditions for identifiability of control authority degradation, and propose a novel real-time algorithm for identifying or approximating control degradation modes. We demonstrate our method on a nonlinear distributed parameter system, namely a one-dimensional heat equation with a velocity-controlled moveable heat source, motivated by autonomous energy-based surgery.

## I Introduction

In control systems, fault detection and mitigation are key in ensuring prolonged safe operation in safety-critical environments [1]. Any physical system undergoes gradual degradation during its operational life cycle, for instance due to interactions with the environment or from within as a result of actuator wear and tear. Gradual degradation or impairment, as the name suggests, often reduces the performance of a system in cases when potential degradation modes were not taken into account during control synthesis. Fault tolerance is a key property of systems that are capable of mitigating or withstanding system faults, including gradual degradation. A number of stochastic approaches to fault identification and mitigation have been developed in the past, with the main objective of estimating the _remaining useful life_ (RUL) of a system, and how this metric is influenced by the controller. Mo and Xie [2] developed an approach to approximate the loss in effectiveness caused by actuator component degradation using a reliability value. Their method relies on frequency-domain analysis using the Laplace transform, which is limited to linear systems; in turn, the proposed reliability improvements hinge on the use of a PID controller strategy and rely on a particle swarm optimization routine, which is highly restrictive with regard to runtime constraints and convergence guarantees. A similar approach was developed by Si _et al_. [3], where reliability was assessed using an event-based Monte Carlo simulation approach, wherein potential degradation modes are simulated _en masse_, further limiting the applicability of this method. This is due to the intractable number of potential failure modes that may be encountered in practice, which would demand a very large number of Monte Carlo simulations. In the deterministic setting, Wang _et al_. [4] considered control input map degradation and actuator saturation in discrete-time linear systems, where a fault-tolerant control is developed by solving a constrained optimization problem. Given the discrete-time linear system setting, [4] uses efficient linear matrix inequality (LMI) techniques for controller synthesis. However, the class of actuator degradations considered in [4] is limited to linear diagonal control authority degradation with input saturation. In the context of switching systems, Niu _et al_.
[5] considered the problem of active mode discrimination (AMD) with temporal logic-constrained switching, where a set of possible switching modes was known _a priori_. The AMD problem rests on a nonlinear optimization routine, which depends directly on temporal logic constraints and on switching modes that are often not known in advance. In the present work, we consider a class of faults which we refer to as _actuator degradation_. The latter may arise as a result of wear and tear, software errors, or even adversarial intervention. Considering the following nonlinear control-affine dynamics, \(\dot{x}(t)=f(x(t))+g(x(t))u(t)\), we define input degradation modes of the form \(\dot{\bar{x}}(t)=f(\bar{x}(t))+Rg(\bar{x}(t))Pu(t)\), where \(P\) and \(R\) are two unknown time-varying maps. We refer to \(P\) as a _control authority degradation map_ (CDM), whereas \(R\) is referred to as a _control effectiveness degradation map_ (CEM). Our focus in this work is on CDMs; a number of common CDMs are illustrated in Fig. 1. A CDM \(P\) effectively acts as a control input remapping, and can be thought of in the context of control systems with delegated control allocation, e.g., when an actuator with internal dynamics takes \(u(t)\) and remaps it based on its internal state. Such a setting includes common degradation modes such as deadband or saturation, or any other nonlinear transformation due to effects such as friction. In more extreme cases, it is possible that \(P\) maps a control signal \(u_{i}(t)\) to another control signal \(u_{j}(t)\) due to incorrect wiring or software design. The types of control authority degradation maps that we allow for in this work go beyond linear maps applied to discrete-time finite-dimensional linear systems, which have hitherto been the main focus in prior work. We develop an _efficient passive algorithm for detection and identification of CDMs_, with the quality of the reconstructed CDM monotonically increasing with system run time. Using this reconstruction of the CDM, we develop a _novel method for stabilizing control signals_, with tight approximation error bounds that decrease with system run time. We note that we do not consider external disturbances or other unmodeled dynamics in this work; robustness results regarding the effects of disturbances will be the subject of future work. The results of this work allow for _guaranteed approximation of arbitrary control degradation maps_ without the need for knowledge of possible degradation modes or handcrafted filters, addressing an open problem in the literature. The natural next step of this work, outside of the scope of this letter, is to approximate unviable control signals with their closest viable counterparts, with robustness bounds on the maximum trajectory deviation.

## II Preliminaries

### _Notation_

We use \(\|\cdot\|\) to denote the Euclidean norm. Given two sets \(A,B\subseteq\mathbb{R}^{n}\), we denote by \(A+B\) their Minkowski sum \(\{a+b:a\in A,b\in B\}\); the Minkowski difference is defined similarly. By \(2^{A}\) we refer to the power set of \(A\), i.e., the family of all subsets of \(A\). We denote a closed ball centered around the origin with radius \(r>0\) as \(B_{r}\). By \(B(x,r)\) we denote \(\{x\}+B_{r}\). We denote by \(\mathcal{L}(A,B)\) the set of bounded linear operators, and by \(\mathcal{C}(A,B)\) the set of closed linear operators between \(A\) and \(B\). We define \(\mathbb{R}_{+}:=[0,\infty)\).
For two points \(a,b\) in a Banach space \(\mathcal{B}\), let \([a,b]\) denote the convex hull of \(a\) and \(b\), i.e., \([a,b]:=\mathrm{conv}\{a,b\}\). Given a point \(x\in S\) and a set \(A\subseteq S\), we denote \(d(x,A):=\inf_{y\in A}d(x,y)\). We define the distance between two sets \(A,B\subseteq\mathbb{R}^{n}\) to be \[d(A,B):=\sup_{a\in A}\inf_{b\in B}\|a-b\|. \tag{1}\] We denote the Hausdorff distance as \[d_{\mathrm{H}}(A,B):=\max\{d(A,B),d(B,A)\}. \tag{2}\] An alternative characterization of the Hausdorff distance reads: \[d_{\mathrm{H}}(A,B)=\inf\{\rho\geq 0:\,A\subseteq B_{+\rho},B\subseteq A_{+\rho}\}, \tag{3}\] where \(X_{+\rho}\) denotes the \(\rho\)_-fattening_ of \(X\), i.e., \(X_{+\rho}:=\bigcup_{x\in X}\{y\in\mathbb{R}^{n}:\|x-y\|\leq\rho\}\). We denote by \(\partial A\) the boundary of \(A\) in the topology induced by the Euclidean norm. For a function \(g:A\to B\), we denote by \(g^{-1}\) the inverse of this function if an inverse exists, and otherwise the preimage. By \(\mathrm{dom}(g)\) we refer to the domain of the function (in this case \(A\)). We denote by \(g^{\dagger}\) the Moore-Penrose pseudo-inverse of a linear function \(g\). We use the Iverson bracket notation \([\![\cdot]\!]\), where the value is \(1\) if the expression between the brackets is true, and \(0\) otherwise. In this work, we shall consider star-shaped sets, which are defined as follows:

**Definition II.1** (Star-shaped Set and MGFs). We call a closed compact set \(K\subseteq\mathcal{B}\) _star-shaped_ if there exist (i) \(\zeta\in K\), and (ii) a unique function \(\rho:\,B_{1}\to\mathbb{R}_{+}\), such that \(K=\bigcup_{l\in B_{1}}[\zeta,\zeta+\rho(l)\,l]\), where \(B_{1}\) denotes the unit ball in \(\mathcal{B}\). We call \(\rho\) a _Minkowski gauge function_ (MGF), and \(\zeta\) the _star center_.

### _Problem Formulation_

Consider a known nonlinear control-affine system of the form \[\dot{x}(t)=f(x(t))+g(x(t))u(t), \tag{4}\] where \(x\in X\), \(u\in U\subseteq\mathcal{U}\), \(X\) and \(\mathcal{U}\) are Hilbert spaces, and \(f:X\to X\) and \(g:X\to\mathcal{L}(\mathcal{U},X)\). In this work, we assume \(\mathcal{U}=\mathbb{R}^{m}\). In addition, we assume that \(U\) is a _star-shaped_ subset of \(\mathbb{R}^{m}\) such that \(\mathrm{span}\,U=\mathbb{R}^{m}\). Finally, we assume that the full state of the degraded system, \[\dot{\bar{x}}(t)=f(\bar{x}(t))+Rg(\bar{x}(t))Pu(t), \tag{5}\] is known without error. In system (5), a control action degradation map \(R\) can model changes in the control allocation function \(g\), which may include actuator reconfiguration, such as a change in the trim angle on aircraft control surfaces, or misalignment of actuators due to manufacturing imperfections or wear and tear. Since \(R\) acts after \(g\), it does not directly remap the control signal \(u(t)\), but it changes the action of a control input on the system; we therefore talk about control effectiveness, as opposed to control authority in the case of \(P\), which acts before \(g\). Changes in the drift dynamics \(f(x(t))\) will not be treated in this work. In addition to identifying or approximating the CDM \(P\), we are interested in 'undoing' the effects of control authority degradation as much as possible. In particular, we are interested in the set of control signals of (4) that can still be replicated in (5) when the CDM is acting; we call this the set of _viable control inputs_, \(U_{\mathrm{v}}\). With knowledge of \(P\), we develop in this work a method to obtain, for \(u_{\rm cmd}\in U_{\rm v}\), a \(u_{\rm v}\) such that \(Pu_{\rm v}=u_{\rm cmd}\); here, \(u_{\rm cmd}\) and \(u_{\rm v}\) are called the commanded and viabilized control inputs, respectively.

Fig. 1: Comparison between various classes of control authority degradation maps.
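For finite point clouds, the set distances (1)-(2) used throughout this paper reduce to simple max/min computations; a minimal numpy sketch (illustrative only, with the sets represented by sampled points):

```python
import numpy as np

def directed_distance(A, B):
    """d(A, B) = sup_{a in A} inf_{b in B} ||a - b||, cf. Eq. (1),
    for finite point sets A, B given as (n_points, dim) arrays."""
    # Pairwise distances: entry (i, j) is ||A[i] - B[j]||
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return D.min(axis=1).max()

def hausdorff_distance(A, B):
    """d_H(A, B) = max{d(A, B), d(B, A)}, cf. Eq. (2)."""
    return max(directed_distance(A, B), directed_distance(B, A))

# Example: two sampled planar sets
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.5], [2.0, 0.5]])
print(hausdorff_distance(A, B))  # 1.118... (the point [2, 0.5] is far from A)
```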
This approach is closely related to a technique known in the literature as _fault hiding_ [8]. Fault hiding is achieved by introducing an output observer based on the output of the degraded system, and augmenting the nominal system model by introducing so-called virtual actuators, which requires a nonlinear reconfiguration block that is strongly dependent on the underlying problem structure and failure modes [8, §3.6, p. 42]. In the setting considered in this work, we show that we can adopt the fault hiding philosophy under much less stringent constraints for a general class of systems and degradation modes. In this work, we are interested in modeling unknown degraded system dynamics (5) for a time-invariant control authority degradation map (CDM) \(P:U\rightarrow\bar{U}\), and no control effectiveness degradation (i.e., \(R=I\)). This amounts to reconstructing, or _identifying_, \(P\):

**Problem 1** (Identifiability of Control Authority Degradation Maps). For a class of time-invariant CDMs \(P\in\mathcal{P}\), if possible, identify \(P\) based on a finite number of full state, velocity, and control input observations (\(\bar{x}(t)\), \(\dot{\bar{x}}(t)\), \(u(t)\)) of the degraded system.

Ideally, we would like to identify general nonlinear CDMs with known bounds on the approximation error. We illustrate the control authority degradation modes that are covered in this work in Fig. 1. We now proceed by solving Problem 1 for unknown multi-mode affine CDMs, which allows for approximating Lipschitz continuous nonlinear CDMs with bounded error.

## III Identifiability of Control Authority Degradation Maps

We now consider Problem 1. Let us assume that for \(U\), the Minkowski gauge function \(\varrho\) is known. Let \(P:U\rightarrow\bar{U}\) be an unknown control authority degradation map (CDM). We assume that \(\bar{U}\) is also a star-shaped set, providing conditions on \(P\) and \(U\) under which this holds. It bears mentioning that star-shaped sets are more general than convex sets; most results presented in this work will apply to star-shaped sets, which include polytopes, polynomial zonotopes, and ellipsoids. Before we provide any results on the identifiability of control authority degradation modes, we pose the following key assumption on the nominal system dynamics (4). We allow for an _infinite-dimensional_ state space \(X\), that is to say, \(X\) may be a set of _functions_, but \(X=\mathbb{R}^{n}\) is also captured:

**Assumption 1**. For system (5), assume that

1. \(g(x)\) has closed range for all \(x\in X\);
2. \(g(x)\) is injective for all \(x\in X\), i.e., \(\ker(g(x))=\{0\}\);
3. \(\dot{x}\) is known at some \(x\in X\) with \(u=0\).

_Remark 1_. In the case of finite-dimensional systems, i.e., \(X\subseteq\mathbb{R}^{n}\), the first two conditions of Assumption 1 can be stated as:

1. The system is not overactuated, i.e., \(m\leq n\);
2. \(g(x)\) is of full column rank for all \(x\in X\).

We shall consider the case of multiple control degradation modes acting throughout the space \(U\). The simplest of the so-called _conditional control authority degradation modes_ (c-CDMs) acts only on a compact subset of \(U\); we refer to these c-CDMs as _partial control authority degradation modes_ (p-CDMs).
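Since the affected sets considered below are star-shaped, membership tests reduce, via Definition II.1, to a single radial comparison against the MGF; a minimal sketch (the particular MGF is an illustrative placeholder):

```python
import numpy as np

def in_star_set(u, rho, zeta):
    """Membership test for a star-shaped set K with MGF rho and center zeta:
    u is in K iff ||u - zeta|| <= rho((u - zeta)/||u - zeta||)."""
    r = np.asarray(u, dtype=float) - zeta
    nr = np.linalg.norm(r)
    return True if nr == 0.0 else bool(nr <= rho(r / nr))

# Example: an ellipse-shaped star set around the origin (placeholder MGF)
rho = lambda l: 1.0 / np.sqrt((l[0] / 2.0)**2 + l[1]**2)
print(in_star_set([1.5, 0.0], rho, np.zeros(2)))  # True  (inside; semi-axis 2)
print(in_star_set([0.0, 1.5], rho, np.zeros(2)))  # False (outside; semi-axis 1)
```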
Consider two compact star-shaped sets \(\hat{U},\check{U}\subseteq U\), and two p-CDMs \[P_{\hat{U}}(u):=u+[\![u\in\hat{U}]\!](P-I)u, \tag{6}\] \[P_{\check{U}}(u):=u+[\![u\not\in\check{U}]\!](P-I)u, \tag{7}\] for some control degradation map \(P\). Here, \(P_{\hat{U}}\) is an _internally acting_ partial CDM (i.e., acting inside \(\hat{U}\)), whereas \(P_{\check{U}}\) is an _externally acting_ partial CDM (acting outside \(\check{U}\)); when this distinction is immaterial, we use the combined symbol \(\tilde{U}\), where \(\tilde{U}\) is simply called the _affected set_ of control inputs.

In reconstructing an \(N\)-mode c-CDM, we face the problem of discerning which control inputs belong to which conditional degradation mode. To make this problem tractable, we pose the following assumption:

**Assumption 2**. Let the internally acting \(N\)-mode c-CDM satisfy the following properties:

1. The number of modes \(N\) is known;
2. \(\{\tilde{U}_{i}\}_{i=1}^{N}\) is a family of convex sets;
3. \(\{Q_{i}\}_{i=1}^{N}\) is a family of affine maps denoted by \(Q_{i}u=p_{i}+P_{i}u\);
4. There exists a known \(\delta>0\), such that for all \(i\neq j\), \(d_{\rm H}\left((\tilde{U}_{i},Q_{i}\tilde{U}_{i}),(\tilde{U}_{j},Q_{j}\tilde{U}_{j})\right)\geq\delta\).

We are also interested in obtaining outer-approximations of \(\hat{U}\) and inner-approximations of \(\check{U}\) for each degradation mode, as illustrated in Fig. 2, so that we can restrict control inputs to regions that are guaranteed to be unaffected. Since we only have access to a finite number of control input samples, we pose the following assumption regarding the regularity of the MGFs associated with the affected sets.

**Assumption 3**. Assume that \(\tilde{U}\) has star center \(\zeta=0\), and assume that the MGF \(\tilde{\varrho}\) associated with \(\tilde{U}\) is Lipschitz continuous, i.e., there exists a known \(\tilde{L}\) such that \(|\tilde{\varrho}(l)-\tilde{\varrho}(l^{\prime})|\leq\tilde{L}\|l-l^{\prime}\|\) for all \(l,l^{\prime}\in B_{1}\).

We now proceed to show that Assumption 3 holds for the image of Lipschitz star-shaped sets under affine maps.

**Lemma 1**. _Given a star-shaped set \(U\) characterized by a Lipschitz MGF \(\varrho\) and star center \(\zeta\), the range of \(U\) under an affine map \(Qu:=p+Pu\) is also a star-shaped set with Lipschitz MGF._
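The Iverson-bracket definitions (6)-(7) translate directly into code; a minimal sketch for scalar inputs (the map \(P\) and the affected set are illustrative placeholders):

```python
# Illustrative degradation map P and affected-set membership (placeholders)
P = lambda u: 0.5 * u                    # some CDM acting on the input
in_U_hat = lambda u: 0.2 <= u <= 0.4     # compact affected set U_hat

def p_cdm_internal(u):
    """Internally acting p-CDM, Eq. (6): degraded only inside U_hat."""
    return P(u) if in_U_hat(u) else u

def p_cdm_external(u):
    """Externally acting p-CDM, Eq. (7): degraded only outside U_check."""
    return u if in_U_hat(u) else P(u)    # here U_check = U_hat for brevity

print(p_cdm_internal(0.3), p_cdm_internal(0.8))  # 0.15 0.8
```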
We can now pose a key result on the guaranteed approximation of Lipschitz MGFs from a finite set of samples.

**Proposition 1**. _Assume that Assumption 3 holds for the unknown MGFs \(\hat{\varrho}\) and \(\check{\varrho}\). Then, for some given \(\check{u},\check{u}^{\prime}\in\check{U}\) and \(\hat{u},\hat{u}^{\prime}\not\in\hat{U}\), we have for all \(\mu\in[0,1]\):_ \[\hat{\varrho}\left(\frac{\mu\hat{l}+(1-\mu)\hat{l}^{\prime}}{\|\mu\hat{l}+(1-\mu)\hat{l}^{\prime}\|}\right)\leq\min\left\{\|\hat{u}\|+(1-\mu)\hat{L}\|\hat{l}-\hat{l}^{\prime}\|,\;\|\hat{u}^{\prime}\|+\mu\hat{L}\|\hat{l}-\hat{l}^{\prime}\|\right\}, \tag{8}\] _and_ \[\check{\varrho}\left(\frac{\mu\check{l}+(1-\mu)\check{l}^{\prime}}{\|\mu\check{l}+(1-\mu)\check{l}^{\prime}\|}\right)\geq\max\left\{0,\;\|\check{u}\|-(1-\mu)\check{L}\|\check{l}-\check{l}^{\prime}\|,\;\|\check{u}^{\prime}\|-\mu\check{L}\|\check{l}-\check{l}^{\prime}\|\right\}, \tag{9}\] _where \(\hat{l}:=\hat{u}/\|\hat{u}\|\), \(\hat{l}^{\prime}:=\hat{u}^{\prime}/\|\hat{u}^{\prime}\|\), and \(\check{l},\check{l}^{\prime}\) are defined analogously._

Proof. This result follows directly from the non-negativity of the MGF and the mean value theorem, given the Lipschitz continuity of \(\hat{\varrho}\) and \(\check{\varrho}\) as assumed in Assumption 3.

The results given in Proposition 1 allow for a direct inner-approximation of \(\check{U}\) and outer-approximation of \(\hat{U}\); these results will allow us to restrict closed-loop control inputs to a subset of \(U\) that is _guaranteed to be unaffected by \(P\)_, as illustrated in Fig. 2. The method for approximating \(\tilde{U}\) will be made rigorous in the next theorem. We now pose the main result on the identifiability of \(N\)-mode conditional control authority degradation modes (c-CDMs), where multiple affine CDMs act on disjoint subsets of \(U\); this will allow us to approximate Lipschitz continuous CDMs, as shown at the end of the next section.

**Theorem 1** (Reconstructing \(N\)-mode Affine c-CDMs). _Consider system (5) and Assumptions 1-2. Assume that the c-CDM is represented by \(N\) unknown internally acting affine maps \(Q_{i}\), each acting on mutually disjoint unknown star-shaped sets \(\tilde{U}_{i}\subseteq U\), giving \(Q_{\tilde{U}^{\prime}}\) as the p-CDM. Let there be a given array of distinct state-input pairs \(\left[(\bar{x}[i],u[i])\right]_{i=1}^{N^{\prime}}\), and a corresponding array of degraded velocities \(\left[\dot{\bar{x}}[i]\right]_{i=1}^{N^{\prime}}\) obtained from system (5), with \(N^{\prime}\geq N(m+1)\). Let there also be a given array of undegraded state-input pairs \(\left[(x_{\ast}[i],u_{\ast}[i])\right]_{i=1}^{M}\), with \(M\geq m\). Assume that there exist \(m\) state-input pairs, indexed by \(J_{j}\) and \(J_{\ast,j}\), such that the arrays of input vectors \(\{u[J_{j}]\}_{j=1}^{m}\) and \(\{u_{\ast}[J_{\ast,j}]\}_{j=1}^{m}\) are linearly independent._

_Cluster the array \(\left[(u[i],\bar{u}[i])\right]_{i=1}^{N^{\prime}}\) into \(N\) clusters with a Hausdorff distance of at least \(\delta\) between each pair of clusters. If each cluster \(i\) contains at least \(m\) vectors \(u[i]\) that are linearly independent, then \(Q_{\tilde{U}^{\prime}}\) can be approximated as follows:_ \[\tilde{Q}_{\tilde{U}^{\prime}}u=\begin{cases}u&u\not\in\tilde{U}_{\text{outer}},\\ \sum_{i=1}^{N}[\![u\in\tilde{U}_{i,\text{inner}}]\!]\,Q_{i}u&u\in\tilde{U}_{\text{inner}},\\ \text{inconclusive}&u\in\tilde{U}_{\text{outer}}\setminus\tilde{U}_{\text{inner}},\end{cases} \tag{10}\] _where \(\tilde{U}_{\text{inner}}:=\bigcup_{i=1}^{N}\tilde{U}_{i,\text{inner}}\) and \(\tilde{U}_{\text{outer}}:=\bigcup_{i=1}^{N}\tilde{U}_{i,\text{outer}}\)._
_Each \(Q_{i}\) is obtained by considering for each cluster \(v_{i}:=g^{\dagger}(\bar{x}[j])(\dot{\bar{x}}[j]-f(\bar{x}[j]))\), where the index \(j\) is not part of the array of linearly independent inputs indexed by \(J_{j}\), \(\mathbf{u}:=\left[u[J_{j}]\right]_{j=1}^{m}\), and \(\Delta\mathbf{u}:=\left[g^{\dagger}(\bar{x}[J_{j}])(\dot{\bar{x}}[J_{j}]-f(\bar{x}[J_{j}]))-u[J_{j}]-v_{i}\right]_{j=1}^{m}\). The linear operator \(P_{i}\) is obtained as_ \[P_{i}=(\mathbf{u}+\Delta\mathbf{u})\mathbf{u}^{\mathsf{T}}(\mathbf{u}\mathbf{u}^{\mathsf{T}})^{-1}. \tag{11}\] _The translation \(p_{i}\) is obtained as \(p_{i}=v_{i}-P_{i}u[j]\), which yields the \(i\)'th mode affine CDM \(Q_{i}\):_ \[Q_{i}u:=p_{i}+P_{i}u. \tag{12}\] _Here, each affected set is approximated as follows: In case \(\tilde{U}_{i}\) is internally acting (i.e., \(\tilde{U}_{i}=\hat{U}_{i}\)), (8) yields an outer-approximation to \(\tilde{\varrho}_{i}\) by taking convex combinations of the \(m\) basis vectors \(\{u_{\ast}[J_{\ast,j}]/\|u_{\ast}[J_{\ast,j}]\|\}_{j=1}^{m}\) and their values. Similarly, for externally acting \(\tilde{U}_{i}\) (i.e., \(\tilde{U}_{i}=\check{U}_{i}\)), (9) yields an inner-approximation to \(\tilde{\varrho}_{i}\) using the \(m\) basis vectors \(\{u[J_{j}]/\|u[J_{j}]\|\}_{j=1}^{m}\). Inner- and outer-approximations satisfy the relation \(\tilde{U}_{i,\text{inner}}\subseteq\tilde{U}_{i}\subseteq\tilde{U}_{i,\text{outer}}\) (cf. Fig. 2)._

Proof. We first consider a globally acting affine CDM. We obtain the closed-form expression of \(P_{i}\), (11), by solving the quadratic program \(\min_{P\in\mathcal{C}(U,\tilde{U})}\|P\mathbf{u}-(\mathbf{u}+\Delta\mathbf{u})\|^{2}\), which yields a unique linear map \(P\) that maps \(\mathbf{u}\) to \(\mathbf{u}+\Delta\mathbf{u}\), as desired. The translation term \(p_{i}\) can be verified by direct substitution in (12), yielding the affine map \(Q_{i}\). In (11), since the inverse of \(\mathbf{u}\mathbf{u}^{\mathsf{T}}\) must be taken, we require both that \(\mathbf{u}\) is a square matrix, and that \(\mathbf{u}\mathbf{u}^{\mathsf{T}}\) is invertible. This is achieved by considering \(\mathbf{u}\in\mathbb{R}^{m\times m}\) of full column rank, as guaranteed by the linear independence hypothesis. Regarding \(g^{\dagger}(x)\), the Moore-Penrose pseudo-inverse is defined for a general Hilbert space \(X\), provided that \(\mathrm{range}(g(x))\) is closed for all \(x\in X\) [10, §4.2, p. 47]. For \(g^{\dagger}(x)\) to be a left-inverse, a necessary condition is that \(g(x)\) be injective, i.e., \(\ker(g(x))=\{0\}\) for all \(x\in X\) [10, Cor. 2.13, p. 36]. Finally, the translation term \(p_{i}\) is accounted for in (12). To approximate the \(i\)'th affected set, \(\tilde{U}_{i}\), we require a spanning set of basis vectors that lie within \(\tilde{U}_{i}\), as provided for in the hypotheses. The unknown MGF associated with \(\tilde{U}_{i}\) can be obtained according to Proposition 1 using (8)-(9), where an outer-approximation is desired for internally acting p-CDMs, and an inner-approximation for externally acting p-CDMs. These approximations are obtained through repeated convex combinations and the corresponding inequality given in (8)-(9), for a total of \(m\) times; an explicit expansion of the resulting expression is omitted here for the sake of space.

Fig. 2: Comparison between inner- and outer-approximations of the affected set \(\tilde{U}\), based on Proposition 1 and Theorem 1 for a 1-mode c-CDM. The region with top-right-pointing hatching indicates the set in which the control input is unaffected; the red-colored region indicates the affected set. The respective approximations of \(\tilde{U}\) allow one to find regions in which control inputs are guaranteed to be unaffected. In the left image, the set indicated by top-left-pointing hatching is an inner-approximation of \(\tilde{U}\), and in the right image this set is an outer-approximation of \(\tilde{U}\).
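The least-squares step (11)-(12) is straightforward to reproduce numerically. The following minimal sketch (with a synthetic ground-truth affine map standing in for the unknown mode, and the zero-input sample of Assumption 1 supplying the offset) recovers the affine map exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
P_true = rng.normal(size=(m, m))        # unknown linear part (synthetic)
p_true = rng.normal(size=m)             # unknown translation (synthetic)
Q_true = lambda u: p_true + P_true @ u  # affine mode to be identified

# m linearly independent input samples (as columns), plus the zero-input
# sample of Assumption 1 that isolates the translation term
U = rng.normal(size=(m, m))
u0 = np.zeros(m)

# In the paper these degraded inputs come from g^dagger(x)(xdot - f(x));
# here we observe Q_true directly for illustration.
v_i = Q_true(u0)                                    # v_i (here equal to p)
W = np.stack([Q_true(U[:, j]) for j in range(m)], axis=1)

# Eq. (11): P_i = (u + Delta u) u^T (u u^T)^{-1}, with u + Delta u = W - v_i
P_hat = (W - v_i[:, None]) @ U.T @ np.linalg.inv(U @ U.T)
p_hat = v_i - P_hat @ u0                            # p_i = v_i - P_i u[j]

u_test = rng.normal(size=m)
assert np.allclose(Q_true(u_test), p_hat + P_hat @ u_test)
```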
_Remark 2_. This result incorporates p-CDMs that map a set \(\tilde{U}\) to a constant, e.g., \(Q_{\tilde{U}^{\prime}}\tilde{U}=p\). To highlight the utility of this result, it should be noted that the _hypotheses given here allow for commonly encountered degradation modes such as deadzones and saturation to be modeled_ (see Fig. 1(4)). Additionally, Theorem 1 allows for _discontinuous_ control authority degradation modes, a property that is rarely present in prior work.

We can now consider the case in which \(P\) is a Lipschitz continuous CDM. We consider an approximation of \(P\) by an \(N\)-mode affine c-CDM \(\tilde{P}\), for which we derive an explicit error bound, given that the Lipschitz constant of \(P\), \(L_{P}\), is known.

**Theorem 2** (Approximating Lipschitz continuous CDMs by \(N\)-mode Affine c-CDMs). _Let the hypotheses of Theorem 1 hold, with the exception that \(P:=Q_{\tilde{U}^{\prime}}\) is now a Lipschitz continuous CDM with Lipschitz constant \(L_{P}\), and Assumption 2 is now dropped. If \(N\) clusters that satisfy the linear independence requirements of Theorem 1 are identified, then the resulting \(N\)-mode affine c-CDM approximation \(\tilde{P}\) has the following error: for all \(u\in\tilde{U}_{i,\mathrm{inner}}\) and all \(i=1,\ldots,N\),_ \[\|Pu-\tilde{P}u\|\leq\min_{j=1,\ldots,m}\left(\epsilon_{i,j}+L_{P}\|u[i,j]-u\|\right), \tag{13}\] _where \(\epsilon_{i,j}:=\|Pu_{i}[j]-\tilde{P}_{i}u_{i}[j]\|\), and \(u[i,j]:=\mathbf{u}_{i}[j]\), where \(\mathbf{u}_{i}\) is an array composed of all control inputs in the \(i\)'th cluster._

Proof. The proof is similar to that of Theorem 1, with the error bound (13) following from an application of the triangle inequality in combination with the Lipschitz continuity of \(P\), the properties of the affine maps \(\tilde{P}_{i}\), and the known samples of \((u,Pu)\).

We can now pose a convergence result on the \(N\)-mode affine c-CDM approximation \(\tilde{P}\) of a Lipschitz continuous CDM \(P\).

_Corollary 1_. Error bound (13) is monotonically decreasing in the number of samples \(N^{\prime}\) and the number of c-CDM modes \(N\). In the limit \(N^{\prime},N\rightarrow\infty\), error bound (13) converges to zero.

Proof. In (13), \(\epsilon_{i,j}\) monotonically converges to zero, because the operator norm \(\|P-\tilde{P}_{i}\|\) restricted to the \(i\)'th cluster converges monotonically to zero; this fact follows by considering that the diameter of each cluster converges to zero for a greater number of samples and clusters, similarly to the proof of Lemma 2, as well as the fact that \(P\) is Lipschitz continuous, meaning that the total variation of \(P\) on this restriction decreases monotonically as well. Another consequence of the diminishing cluster diameter is that \(\|u[i,j]-u\|\) converges monotonically to zero.

In the results given above, we find that it is in general impossible to uniquely determine each \(\tilde{U}\) from finitely many samples.
Intuitively, given a greater number of distinct points inside \(\tilde{U}\) and \(U\setminus\tilde{U}\), it should be possible to more tightly approximate \(\tilde{U}\). This idea is illustrated in Fig. 3. We now state a lemma on the convergence of inner- and outer-approximations of the affected set \(\tilde{U}\).

**Lemma 2**. _Consider \(\epsilon>0\), such that a given set of \(N_{\epsilon}\geq m\) distinct pairs \((u,P_{\tilde{U}}u)\), denoted by \(\mathcal{GU}_{N,\epsilon}\), satisfies Assumptions 1-3, where \(P_{\tilde{U}}\) is (i) an \(N\)-mode affine c-CDM, or (ii) a Lipschitz continuous CDM. Let \(\mathcal{GU}_{N,\epsilon}\) be such that, for the points \(u_{i}\) in \(\mathcal{GU}_{N,\epsilon}\), \(\bigcup_{i=1}^{N_{\epsilon}}B(u_{i},\epsilon)\supseteq\tilde{U}\); i.e., \(\epsilon\)-balls centered at the sampled control inputs form a cover of \(\tilde{U}\). Let \(\tilde{U}_{\mathrm{inner}}^{N_{\epsilon}}\) and \(\tilde{U}_{\mathrm{outer}}^{N_{\epsilon}}\) denote the corresponding inner- and outer-approximations of \(\tilde{U}\) using the procedure given in Theorem 1 from \(\mathcal{GU}_{N,\epsilon}\). Then, we have \(\tilde{U}_{\mathrm{inner}}^{N_{\epsilon}}\subseteq\tilde{U}_{\mathrm{inner}}^{N_{\epsilon^{\prime}}}\subseteq\tilde{U}\) and \(\tilde{U}\subseteq\tilde{U}_{\mathrm{outer}}^{N_{\epsilon^{\prime}}}\subseteq\tilde{U}_{\mathrm{outer}}^{N_{\epsilon}}\) for all \(\epsilon^{\prime}<\epsilon\). In addition, we have \(\lim_{\epsilon\to 0}\tilde{U}_{\mathrm{inner}}^{N_{\epsilon}}=\lim_{\epsilon\to 0}\tilde{U}_{\mathrm{outer}}^{N_{\epsilon}}=\tilde{U}\)._

Proof. Since it is assumed that the pairs in \(\mathcal{GU}_{N,\epsilon}\) are distinct, the approximations of \(\hat{\varrho}\) and \(\check{\varrho}\) obtained in Theorem 1 become increasingly tight for decreasing \(\epsilon\), since the expressions derived in Theorem 1 rely increasingly less on the Lipschitz bound assumption. Since \(d_{\mathrm{H}}(\tilde{U}_{\mathrm{inner}}^{N_{\epsilon}},\tilde{U}_{\mathrm{outer}}^{N_{\epsilon}})\) is monotonically decreasing for decreasing \(\epsilon\), in the limit of \(\epsilon\to 0\), both sequences converge to \(\tilde{U}\) in the Hausdorff distance. This follows from the fact that the Hausdorff distance between the boundary of \(\tilde{U}\) and the sampled points \(u\) decreases monotonically with decreasing \(\epsilon\), leading to tighter approximations of \(\hat{\varrho}\) and \(\check{\varrho}\), as per Proposition 1.

_Remark 3_. In Lemma 2, note that the \(\epsilon\)-covering argument is required to ensure that the distinct points are sufficiently dispersed; simply considering \(N\rightarrow\infty\) does not ensure convergence of the Hausdorff distance between the inner- and outer-approximations to zero. This fact can also be observed when looking at Fig. 3.

Fig. 3: Comparison between inner- and outer-approximations of the affected set \(\tilde{U}\), based on Proposition 1 and Theorem 1, for an increasing number of samples for a 1-mode c-CDM. Clearly, for a larger number of sufficiently dispersed points, increasingly tight approximations are obtained, as formalized in Lemma 2.

## IV Application

We consider an infinite-dimensional system based on a 3D model of tissue thermodynamics during electrosurgery [1]: \[\dot{z}(t,\xi)=a\nabla^{2}z(t,\xi)+q(\xi)u_{1},\qquad\dot{d}(t)=u_{2}, \tag{14}\] where \(u\in[0,10]\times[0,1]\). The unit heat source is modeled as \(q(\xi)=\frac{1}{\varepsilon}[\![\xi\in[0,\varepsilon]]\!]\), for some known \(\varepsilon>0\). This model approximates a slab of tissue, with the state representing the surface temperature; \(u_{1}\) denotes the input power and \(u_{2}\) denotes the needle depth. For simplicity, we set the input power \(u_{1}=1\), and consider only the needle depth \(u_{2}\) as the free control input.
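A minimal explicit finite-difference sketch of (14) is given below (the grid, diffusivity, time step, boundary treatment, and constant depth command are illustrative assumptions, not the values used in the paper):

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
a, eps = 0.1, 0.05          # diffusivity and source width
nx, dx = 101, 0.01          # spatial grid on [0, 1]
dt = 0.4 * dx**2 / a        # time step satisfying the explicit-Euler limit
xi = np.linspace(0.0, 1.0, nx)
q = (xi <= eps).astype(float) / eps   # unit heat source q(xi)

def step(z, d, u1, u2):
    """One explicit Euler step of (14): zdot = a*lap(z) + q*u1, ddot = u2."""
    lap = np.zeros_like(z)
    lap[1:-1] = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
    # Laplacian set to zero at the ends (crude boundary treatment, for
    # illustration only)
    z_new = z + dt * (a * lap + q * u1)
    d_new = d + dt * u2                  # needle depth integrates its velocity
    return z_new, d_new

# Simulate with the paper's fixed input power u1 = 1 and a constant command
z, d, t = np.zeros(nx), 0.0, 0.0
while t < 0.1:
    z, d = step(z, d, 1.0, 0.5)
    t += dt
```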
We can express system (14) as affected by a CDM \(P\) as: \[\dot{x}(t,\xi)=\begin{bmatrix}\dot{z}(t,\xi)\\ \dot{d}(t)\end{bmatrix}=\begin{bmatrix}a\nabla^{2}x_{1}(t,\xi)\\ 0\end{bmatrix}+\begin{bmatrix}q(\xi)&0\\ 0&1\end{bmatrix}P\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix}.\] We consider a CDM of the form \(Pu=\begin{bmatrix}I&P_{12}\\ 0&I\end{bmatrix}u\), where \(P_{12}\) is the map to be identified. We are interested in a 3-mode piecewise linear CDM \(P_{12}\), with \(\tilde{U}_{1}=[0,0.25]\), \(\tilde{U}_{2}=[0.25,0.75]\), and \(\tilde{U}_{3}=[0.75,1]\); these regions are illustrated in Fig. 4. Region 1 corresponds to a charred region at the top of the tissue, where the needle does not fully contact the tissue. Region 2 is a layer of pristine tissue, where the original dynamics act. Region 3 is a layer of highly vascularized tissue, in which a large fraction of the heat that is added to the system gets transported away. We consider the piecewise linear function \[P_{12}(p):=(0.25+3p)\,[\![p<0.25]\!]+[\![0.25\leq p\leq 0.75]\!]+(2.5-2p)\,[\![p>0.75]\!].\] We consider a sinusoidal control signal for the probe depth with a period of 0.3 seconds, \(u_{2}(t)=(1-\cos(20\pi t/3))/2\), and a state-input sampling frequency of 20 Hz. We assume stochastic sampling periods, where the sampling time is perturbed with a uniform 0.01 second error to model signal processing delays. The underlying goal of this application is to perform passive probing of the affected tissue layers and to reconfigure the thermodynamics model to account for tissue damage, as is commonly encountered in electrosurgery. Fig. 5 shows on the left the Hausdorff distance between the approximations of each affected region over time, showing that the approximations become tighter with time, in accordance with the decreasing minimal covering radius \(\epsilon\) (right), as shown in Lemma 2. After three samples in each region, we uniquely identify the appropriate affine map, but the inner-approximation of the affected region is refined passively over time.

## V Conclusion

In this work, we have introduced the concept of a _control authority degradation map_ (CDM). We have proved conditions on the identifiability of a broad class of CDMs, including \(N\)-mode affine CDMs and Lipschitz continuous CDMs, for a class of affine-in-control nonlinear systems. Based on the identifiability results, we have formulated a constructive method for reconstructing or approximating CDMs, with explicit bounds on the approximation error. Our CDM identification method is executable in real time, and is guaranteed to monotonically decrease in error as more full-state observations become available. We apply our methods of CDM identification and stabilization of control signals to a controlled partial differential equation motivated by an electrosurgical process, showing how our guaranteed CDM reconstruction quality improves over time.
2302.05130
Nanocrystalline CaWO$_4$ and ZnWO$_4$ Tungstates for Hybrid Organic-Inorganic X-ray Detectors
Hybrid materials combining an organic matrix and high-Z nanomaterials show potential for applications in radiation detection, allowing unprecedented device architectures and functionality. Herein, novel hybrid organic-inorganic systems were produced using a mixture of tungstate (CaWO$_4$ or ZnWO$_4$) nanoparticles with a P3HT:PCBM blend. The nano-tungstates with a crystallite size of 43 nm for CaWO$_4$ and 30 nm for ZnWO$_4$ were synthesized by the hydrothermal method. Their structure and morphology were characterized by X-ray diffraction and scanning electron microscopy. The hybrid systems were used to fabricate direct conversion X-ray detectors able to operate with zero bias voltage. The detector performance was tested in a wide energy range using monochromatic synchrotron radiation. The addition of nanoparticles with high-Z elements improved the detector response to X-ray radiation compared with that of a pure organic P3HT:PCBM bulk heterojunction cell. The high dynamic range of our detector allows for recording X-ray absorption spectra, including the fine X-ray absorption structure located beyond the absorption edge. The obtained results suggest that nanocrystalline tungstates are promising candidates for application in direct organic-inorganic X-ray detectors.
Inga Pudza, Kaspars Pudzs, Andrejs Tokmakovs, Normunds Ralfs Strautnieks, Aleksandr Kalinko, Alexei Kuzmin
2023-02-10T09:27:30Z
http://arxiv.org/abs/2302.05130v1
Nanocrystalline CaWO\({}_{4}\) and ZnWO\({}_{4}\) Tungstates for Hybrid Organic-Inorganic X-ray Detectors ###### Abstract Hybrid materials combining an organic matrix and high-Z nanomaterials show potential for applications in radiation detection, allowing unprecedented device architectures and functionality. Herein, novel hybrid organic-inorganic systems were produced using a mixture of tungstate (CaWO\({}_{4}\) or ZnWO\({}_{4}\)) nanoparticles with a P3HT:PCBM blend. The nano-tungstates with a crystallite size of 43 nm for CaWO\({}_{4}\) and 30 nm for ZnWO\({}_{4}\) were synthesized by the hydrothermal method. Their structure and morphology were characterized by X-ray diffraction and scanning electron microscopy. The hybrid systems were used to fabricate direct conversion X-ray detectors able to operate with zero bias voltage. The detector performance was tested in a wide energy range using monochromatic synchrotron radiation. The addition of nanoparticles with high-Z elements improved the detector response to X-ray radiation compared with that of a pure organic P3HT:PCBM bulk heterojunction cell. The high dynamic range of our detector allows for recording X-ray absorption spectra, including the fine X-ray absorption structure located beyond the absorption edge. The obtained results suggest that nanocrystalline tungstates are promising candidates for application in direct organic-inorganic X-ray detectors. tungstates; hybrid organic-inorganic X-ray detectors; X-ray sensing ## 1 Introduction Nowadays, developing new radiation detectors based on nanomaterials is an active field of research [1]. Among the different types of detectors, hybrid organic-inorganic systems for X-ray detection have attracted considerable attention during the last ten years [2]. The strong advantages of such systems are the combination of relatively inexpensive, easy-to-manufacture, flexible, and low-bias voltage (\(<\)10 V) organic semiconductors with high-Z inorganic (nano-)compounds [1; 3; 4; 5]. The latter provide a large X-ray cross-section and control of spectral selectivity, which improve the absorption efficiency and sensitivity while maintaining the beneficial physical properties of the host organic matrix [2]. Various inorganic materials have been proposed for use in hybrid organic-inorganic systems for X-ray detection [2]. However, the search for the best system(s) that can be used in everyday applications remains one of the most important and challenging tasks in the field [6; 7]. Here, we propose tungstates with a general chemical formula of AWO\({}_{4}\)[8] (where A is a divalent ion, for example, Ca, Sr, Ba, Pb, Ni, Zn, or Cd) as a new class of materials for use in hybrid organic-inorganic systems for direct-conversion X-ray detection. Using tungstates opens up a wide range of possibilities for solving a specific problem by optimizing their chemical composition and degree of crystallinity. This is convenient for producing hybrid systems but has not been exploited until now. Another advantage of tungstates for use in X-ray detectors is the high Z of tungsten (Z = 74) and the possibility of selecting the Z number of the second metal ion in a wide range. This allows for optimizing the absorption efficiency in a certain range of X-ray energies.
In this study, two possible candidates for use as the X-ray absorber in hybrid organic-inorganic direct X-ray detectors, i.e., nanocrystalline tungstates CaWO\({}_{4}\) and ZnWO\({}_{4}\) with the scheelite and wolframite crystallographic structures [8], respectively, were synthesized and characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The X-ray detectors were fabricated based on a mixture of nanotungstates with a P3HT:PCBM blend, and their ability to detect X-rays was demonstrated using tunable synchrotron radiation.

## 2 Materials and Methods

### Nanoparticle Synthesis and Characterization

CaWO\({}_{4}\) and ZnWO\({}_{4}\) nanoparticles (NPs) were produced by the hydrothermal method [9]. Citric acid C\({}_{6}\)H\({}_{8}\)O\({}_{7}\) was used as a surfactant/capping agent and provided kinetic grain size control [10]. First, 3 mmol of CaCl\({}_{2}\) (97%, Alfa Aesar, Haverhill, MA, USA) and Na\({}_{2}\)WO\({}_{4}\cdot\)2H\({}_{2}\)O (\(\geq\)99%, Alfa Aesar, Haverhill, MA, USA) were separately dissolved in deionized water (Figure 1). Next, citric acid (1.5 mmol) was added to the CaCl\({}_{2}\) solution, and the obtained mixed solution was subsequently added to the Na\({}_{2}\)WO\({}_{4}\cdot\)2H\({}_{2}\)O solution. The solution pH was adjusted to 9 by adding an appropriate amount of NaOH (\(\geq\)98%, Sigma-Aldrich, St. Louis, MO, USA) solution in water. The obtained solution was mixed under constant magnetic stirring for 30 min. One part (16 mL) of the resulting solution was sealed in a Teflon-lined stainless-steel autoclave (25 mL) and was allowed to react at \(\sim\)160 \({}^{\circ}\)C for 24 h, followed by natural cooling to room temperature. The rest of the solution was left at room temperature (RT) for 24 h for comparison. The synthesis of ZnWO\({}_{4}\) NPs was analogous, except that we used Zn(NO\({}_{3}\))\({}_{2}\cdot\)6H\({}_{2}\)O (98%, Sigma-Aldrich, St. Louis, MO, USA) as the zinc source. Note that in this case, a lower pH value (pH = 8) was used, because an increase in pH to 9 resulted in ZnO impurities. The precipitates of nanotungstates obtained after hydrothermal treatment were washed and centrifuged several times in the following sequence: distilled water, isopropanol, acetone, and chlorobenzene, and finally dried in air at 75 \({}^{\circ}\)C. The phase composition and crystallinity of all samples were controlled by powder X-ray diffraction (XRD) at room temperature using a benchtop Rigaku MiniFlex 600 diffractometer (Rigaku, Tokyo, Japan) with Bragg-Brentano geometry (Cu K\({}_{\alpha}\) radiation), operated at 40 kV and 15 mA. The crystallite size of the NPs was estimated using the Rietveld refinement method as implemented in the Profex software [11].

Figure 1: Schematics of CaWO\({}_{4}\) nanoparticle synthesis. See text for details.

The sample morphology was studied by scanning electron microscopy (SEM) in immersion mode using a Helios 5 UX microscope (Thermo Fisher Scientific, Waltham, MA, USA) (Elstar in-lens SE TLD detector) operated at 2.00 kV. The particle-size distribution was evaluated considering the statistics of 200 NP measurements in the SEM micrographs.

### Hybrid Organic-Inorganic X-ray Detector Fabrication and Measurements

Hybrid organic-inorganic X-ray detectors were fabricated on top of 25 \(\times\) 25 mm ITO (In\({}_{2}\)O\({}_{3}\):Sn)-coated glass with a sheet resistance of 5 \(\Omega\)/_sq_ (Präzisions Glas & Optik GmbH).
A poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS; Heraeus Al4083) layer with a thickness of 40 nm was used as the hole transport and electron blocking layer. It was spin-coated in air (2500 rpm for 40 s with an acceleration of 2500 rpm/s) and annealed at 150 \({}^{\circ}\)C for 10 min. A suspension of tungstate NPs and P3HT:PCBM was prepared by mixing the tungstate powder with a solution of the P3HT:PCBM mixture (weight ratio 1:1) in chlorobenzene (99.8% anhydrous, Sigma-Aldrich, St. Louis, MO, USA), followed by sonication of the premix for 1 h. The weight ratio of NPs:P3HT:PCBM in the suspension was 2:1:1. Thin films were fabricated from the suspension by the blade-casting method on a substrate heated to 75 \({}^{\circ}\)C. The P3HT:PCBM mixture was crystallized by annealing at 140 \({}^{\circ}\)C for 15 min. We deposited a 5 nm thick hole-blocking layer of 4,7-diphenyl-1,10-phenanthroline (BPhen, Sigma-Aldrich, St. Louis, MO, USA) and a 100 nm thick Al electrode on top of the hybrid layer by thermal evaporation in a vacuum at a pressure of less than \(7\times 10^{-6}\) mbar. The Al electrodes were deposited in such a way that six separate "active pixels" with a size of 4 \(\times\) 4 mm were formed and could be independently tested. Thus, the final hybrid organic-inorganic X-ray detector was composed of five layers: ITO/PEDOT:PSS/NPs:P3HT:PCBM/BPhen/Al. The detectors were additionally encapsulated with glass to reduce their possible degradation in air. No significant degradation of the detectors was observed during the experiments. X-ray measurements were conducted at the DESY PETRA-III P64 Advanced X-ray Absorption Spectroscopy undulator beamline [12]. The PETRA-III storage ring operated at \(E\) = 6 GeV and current \(I\) = 100 mA in top-up 40-bunch mode. A fixed-exit double-crystal Si(111) monochromator was used to select the required X-ray energy from the undulator photon source. The X-ray intensity \(I_{0}\) before the sample was monitored with an ionization chamber. The beam size on the sample was about \(1\times 1\) mm. The hybrid organic-inorganic X-ray detector was placed inside the vacuum chamber and connected to a Keithley 428 current amplifier. The signal \(I_{detector}\) was measured with the P64 beamline ionization chamber monitoring setup. The absolute values of the detected photocurrent were in the range of 0.1-1.0 nA. A passivated implanted planar silicon (PIPS) detector (Canberra) was used for the simultaneous detection of X-ray fluorescence. All measurements were recorded in a dark environment to exclude possible photoelectric effects induced by light in the experimental hutch.

## 3 Results and Discussion

Bulk CaWO\({}_{4}\) and ZnWO\({}_{4}\) tungstates have different crystallographic structures (scheelite and wolframite [8]) composed of WO\({}_{4}\) tetrahedral and WO\({}_{6}\) octahedral units (see the insets in Figure 2), respectively. Both calcium and zinc ions are coordinated with oxygen ions; however, calcium ions have eight-fold coordination, while zinc ions have six-fold coordination. In scheelite CaWO\({}_{4}\), Ca\({}^{2+}\) cations are located between slightly distorted WO\({}_{4}\) tetrahedra [13], whereas in wolframite ZnWO\({}_{4}\), distorted WO\({}_{6}\) and ZnO\({}_{6}\) octahedra are connected by edges and form infinite zigzag chains [14]. Both tungstates can be prepared in the nanocrystalline form [15; 16].
The XRD patterns of the CaWO\({}_{4}\) and ZnWO\({}_{4}\) nanocrystalline samples synthesized at RT and using hydrothermal treatment at \(\sim\)160 \({}^{\circ}\)C for 24 h in the autoclave are shown in Figure 2. The CaWO\({}_{4}\) nanoparticles prepared at RT demonstrated weak crystallinity, while the ZnWO\({}_{4}\) nanoparticles were amorphous. At the same time, the XRD patterns of the hydrothermally treated tungstate samples contained many Bragg peaks, which could be indexed to the pure tungstate phases (PDF Card 00-041-1431 for CaWO\({}_{4}\) and PDF Card 04-009-8448 for ZnWO\({}_{4}\)): the tetragonal phase (space group \(I4_{1}/a\)) with the lattice constants \(a=b=5.246\) Å and \(c=11.380\) Å for CaWO\({}_{4}\), and the monoclinic phase (space group \(P2/c\)) with the lattice constants \(a=4.690\) Å, \(b=5.734\) Å, and \(c=4.940\) Å for ZnWO\({}_{4}\). The average crystallite sizes of the hydrothermally grown CaWO\({}_{4}\) and ZnWO\({}_{4}\) nanotungstates were determined using Rietveld refinement and were equal to \(\sim\)43 nm and \(\sim\)30 nm, respectively. The CaWO\({}_{4}\) seeds for nanoparticle synthesis left at RT for 24 h had a size of \(\sim\)5 nm. The two tungstates have different morphology, as shown in the SEM micrographs (Figure 3). Products synthesized at RT formed fine powders with agglomerated particles (Figure 3a,b). The crystallinity was significantly improved after hydrothermal treatment at \(\sim\)160 \({}^{\circ}\)C, as evidenced by the well-defined facets of the particles (Figure 3c,d). CaWO\({}_{4}\) formed microspheres composed of irregular polyhedral NPs having an approximate average diameter of 45 nm with a standard deviation of 10 nm (Figure 3c). This estimate is in agreement with the average crystallite size found from the XRD data. For ZnWO\({}_{4}\), rod-like morphology and higher uniformity were evident (Figure 3d). Individual ZnWO\({}_{4}\) nanorods with a length of 62 \(\pm\) 14 nm and a diameter of 25 \(\pm\) 3 nm were resolved. The average crystallite size estimated by XRD was between the determined length and diameter values of the particles from the SEM micrographs.

Figure 2: X-ray diffraction patterns of CaWO\({}_{4}\) (**a**) and ZnWO\({}_{4}\) (**b**) nanoparticles left at RT (**bottom**) and treated at \(\sim\)160 \({}^{\circ}\)C for 24 h (**middle**). The standard PDF cards of the CaWO\({}_{4}\) (PDF Card 00-041-1431) and ZnWO\({}_{4}\) (PDF Card 04-009-8448) phases are shown for comparison at the **top**. The crystal structures of the tungstates are shown in the insets.

Hydrothermally treated tungstate NPs were incorporated into a P3HT:PCBM matrix to fabricate a hybrid organic-inorganic direct-conversion X-ray detector. The concentration of the NPs:P3HT:PCBM suspension used for the active layer deposition in this study was fixed at a weight ratio of 2:1:1. It can be seen in the cross-sectional SEM images (Figure 3e,f) that, after mixing, the NPs were distributed fairly uniformly in the hybrid layer. The thickness of the active layer with CaWO\({}_{4}\):P3HT:PCBM (ZnWO\({}_{4}\):P3HT:PCBM) was \(\sim\)17 \(\upmu\)m (\(\sim\)16 \(\upmu\)m). The X-ray detectors were fabricated with a sandwich-type architecture and multilayer stacking, as depicted in Figure 4a. The detectors were realized on top of the ITO-coated glass substrate. A polymer mixture of two ionomers, PEDOT:PSS, was used as the hole transport and electron blocking layer. The active layer was composed of a NPs:P3HT:PCBM mixture.
Figure 3: SEM micrographs of CaWO\({}_{4}\) and ZnWO\({}_{4}\) nanoparticles at RT (**a**,**b**) and treated at \(\sim\)160 \({}^{\circ}\)C for 24 h (**c**,**d**). Cross-section SEM images of the fabricated hybrid detectors (with part of the tungstate NPs incorporated in a P3HT:PCBM matrix) are also shown in (**e**,**f**). The particle-size distributions of CaWO\({}_{4}\) (diameter of nanoparticles) and ZnWO\({}_{4}\) (diameter and length of nanorods) estimated from the corresponding SEM micrographs are shown in the insets in (**c**,**d**), respectively.

Bathophenanthroline (BPhen) played the role of a hole-blocking layer on top of the active layer due to its wide energy gap and high ionization potential [17; 18]. Finally, aluminium film was used as a top electrode, and the whole structure was encapsulated under a glass. A detector without NPs (a pure P3HT:PCBM \(\sim\)3.3 \(\upmu\)m thick layer) was also fabricated for comparison. The experiment scheme and a photo of one of the detectors are shown in Figure 4b. The P3HT:PCBM blend is one of the most studied and used active materials for bulk heterojunction organic solar cells [19], which show good efficiency in the visible and UV ranges. In the X-ray range, the absorption probability strongly depends on the atomic number Z, the density of the material, and the incident X-ray photon energy. Therefore, organic polymers exhibit a low X-ray attenuation coefficient (especially in the hard X-ray range) that can be improved by the incorporation of inorganic particles containing elements with high atomic number (high-Z) values [2]. Note that a sharp rise in the X-ray absorption coefficient, called the X-ray absorption edge, is observed for elements at particular X-ray energies, when the incident X-ray energy is equal to the binding energy of a core-level electron. For instance, Ca (\(Z\) = 20) and Zn (\(Z\) = 30) have K edges at 4038.5 eV and 9659 eV, respectively, whereas the heavier W (\(Z\) = 74) has its L\({}_{3}\) edge at 10,207 eV [20]. A strong increase in absorption beyond the edge can be used for the optimization of the detector spectral sensitivity in a required energy range. In some compounds, intense absorption, the so-called "white line" (WL), is observed just above the edge. Its existence is common in X-ray absorption spectra of tungsten oxide compounds at the W L\({}_{2,3}\) edges and is caused by quasilocalized 5d(W) states [21; 22]. In this study, we employed the presence of the WL, located at 10,212 eV at the W L\({}_{3}\) edge, to amplify the signal of the detectors containing tungstates. The fabricated hybrid detectors were exposed to monochromatic synchrotron radiation at three selected X-ray photon energies (9500 eV, 10,212 eV, and 20,000 eV) to study the effect of the tungstate NPs present in the active layer on the detector response. The first energy value (9500 eV) was selected between the K edges of Ca and Zn, the second energy value (10,212 eV) was located at the WL maximum of the tungsten L\({}_{3}\) edge, and the third energy value (20,000 eV) was chosen well above all absorption edges in the two tungstates.

Figure 4: A schematic representation of the fabricated hybrid organic-inorganic direct-conversion X-ray detectors (**a**). A scheme of the X-ray response measurements at the synchrotron, and a photo of a fabricated X-ray detector (**b**). X-ray-induced response signals (\(I_{detector}/I_{0}\)) for the pure P3HT:PCBM (**c**), hybrid CaWO\({}_{4}\)+P3HT:PCBM (**d**), and ZnWO\({}_{4}\)+P3HT:PCBM (**e**) systems under repeated X-ray exposures (on/off cycles).
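Because the detector response at a given photon energy is governed by which absorption edges lie below it, the energy choices above can be made concrete with a small lookup sketch (edge energies as quoted in the text; the helper function itself is illustrative):

```python
# Absorption edges quoted in the text [eV]
EDGES = {
    'Ca K': 4038.5,
    'Zn K': 9659.0,
    'W L3': 10207.0,
    'W L2': 11544.0,
    'W L1': 12100.0,
}

def edges_below(energy_ev):
    """Edges that contribute to absorption at a given photon energy."""
    return [name for name, e in EDGES.items() if e <= energy_ev]

# The three measurement energies used in the experiment:
for e in (9500.0, 10212.0, 20000.0):
    print(e, '->', edges_below(e))
# 9500 eV: only Ca K; 10212 eV: adds Zn K and W L3 (white-line region);
# 20000 eV: all W L edges contribute.
```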
When X-ray photons are absorbed in matter, electron-hole pairs are generated via the internal photoelectric effect, followed by an avalanche of secondary generated electrons. In the detector, the charge carriers are transported to the electrodes and can be detected as an electric current. Note that, similar to polymer solar cells [23], our hybrid P3HT:PCBM-based detectors can operate without any external bias voltage. Time-dependent X-ray experiments for the three detectors were performed at the above-mentioned incident X-ray energies by periodically turning the incident X-rays on and off with a 120 s period. Note that only two X-ray energies (9500 eV and 10,212 eV) were used in the case of the CaWO\({}_{4}\)-based detector. The normalized X-ray response on/off signals, i.e., the intensity ratio \(I_{detector}(E)/I_{0}(E)\), where \(I_{0}(E)\) corresponds to the incident X-ray intensity measured by the ionization chamber and \(I_{detector}(E)\) is the signal detected by the detector, are shown in Figure 4c-e. As one can see, all three detectors demonstrated sensitivity to X-rays in a form close to a square-shaped response, which differs from the saw-tooth-shaped photocurrent response observed in some organic [5] and hybrid [24] X-ray detectors. Note the different \(y\)-axis scale in Figure 4d. It is difficult to compare the absolute values of the photocurrent for different detectors because they are strongly affected by the fabrication quality. Nevertheless, we can compare relative responses at different energies. The response of the pure organic P3HT:PCBM detector to X-ray exposure is shown in Figure 4c. Because organic polymers are composed of light elements, they exhibit a low X-ray attenuation coefficient and are less sensitive at larger energies. As a result, the photogenerated current is close for X-ray photons at 9500 eV and 10,212 eV but is smaller at 20,000 eV. After X-ray illumination begins, the top of the current pulse relaxes and drops by about 30-50% during the first seconds. When the shutter blocks the X-rays, the dark current becomes slightly negative and relaxes back to the initial value. The sharp signal at the X-ray shutter switching points is presumably related to charge trapping/detrapping at the material/electrode interfaces and to space-charge effects [25; 26]. The X-ray-induced responses of the two hybrid detectors with nanoparticles (Figure 4d,e) were similar after a sufficient stabilization time and resemble those found in other hybrid detectors [27; 28]. The presence of tungstate nanoparticles significantly (3-4 times) enhanced the response of the detector at the energy of 10,212 eV, just above the W L\({}_{3}\) edge, due to the presence of the strong absorption resonance ("white line") [21; 22]. In the case of the ZnWO\({}_{4}\)-based detector (Figure 4e), an additional contribution came from the absorption caused by the Zn K edge located at 9659 eV. Note that the X-ray absorption caused by the tungsten L\({}_{3,2,1}\) edges, located at 10,207 eV, 11,544 eV, and 12,100 eV, respectively, continued to contribute all the way up to 20,000 eV. As a result, the response of the ZnWO\({}_{4}\)-based detector was about two times larger at 20,000 eV than at 9500 eV.
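The square-shaped on/off trace with its initial \(\sim\)30–50% relaxation can be mimicked by a purely phenomenological toy model, sketched below. The 120 s period matches the switching used in the experiment, but the relaxation time constant, the relaxation depth, and the small negative off-state tail are assumed illustrative numbers, not fitted detector parameters.

```python
import numpy as np

# Toy model of the normalized on/off response: a square wave whose "on" plateau
# relaxes by ~40% with an assumed time constant, mimicking charge trapping, and
# whose "off" state shows a small negative transient that decays back to zero.
period, n_cycles, dt = 120.0, 4, 0.5       # seconds (period from the experiment)
tau_relax, drop = 15.0, 0.4                # assumed relaxation constant and depth

t = np.arange(0.0, n_cycles * period, dt)
phase = t % period
beam_on = phase < period / 2.0
signal = np.where(
    beam_on,
    (1.0 - drop) + drop * np.exp(-phase / tau_relax),      # decaying "on" plateau
    -0.05 * np.exp(-(phase - period / 2.0) / tau_relax),   # small negative tail
)
print(signal[:5])   # first samples of the synthetic I_detector/I_0 trace
```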
Finally, we used a hybrid detector with ZnWO\({}_{4}\) nanoparticles to record the Ni K-edge X-ray absorption spectrum of a Ni foil (Figure 5). A PIPS detector was simultaneously used in fluorescence mode for comparison. As one can see, the X-ray absorption near-edge structure (XANES) \(\mu(E)\) measured in continuous scan mode by the hybrid detector exhibited broadened oscillations; nevertheless, all the main spectral features were present. The observed broadening was caused by the slow response of the hybrid detector compared with PIPS. The broadening effect decreased in the range of the extended X-ray absorption fine structure (EXAFS) \(\chi(k)k^{2}\) (\(k=\sqrt{(2m_{e}/\hbar^{2})(E-E_{0})}\), where \(m_{e}\) is the electron mass, \(\hbar\) is the reduced Planck constant, and \(E_{0}\) is the threshold energy, i.e., the energy of a free electron with zero momentum), located beyond the absorption edge, because the frequency of the oscillations of the absorption coefficient becomes gradually lower with increasing energy [29]. Note the good agreement between the EXAFS spectra measured by the two detectors at large wave numbers \(k\), where the signal is multiplied by the \(k^{2}\) factor. This finding suggests a good dynamic range of the ZnWO\({}_{4}\)-based hybrid detector.
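The energy-to-wavenumber conversion used for the EXAFS axis in Figure 5b follows directly from the definition of \(k\) above; a minimal sketch is given below. The Ni K-edge energy of 8333 eV is standard, but the toy \(\chi(k)\) oscillation (a single sine for a 2.49 Å Ni–Ni path) is only a placeholder to show how the \(k^{2}\) weighting emphasizes the high-\(k\) part of the signal.

```python
import numpy as np

M_E = 9.1093837015e-31        # electron mass, kg
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
EV = 1.602176634e-19          # J per eV

def energy_to_k(E_eV, E0_eV):
    """Photoelectron wavenumber k (1/angstrom) from photon energy above the edge."""
    dE = np.clip(E_eV - E0_eV, 0.0, None) * EV
    return np.sqrt(2.0 * M_E * dE) / HBAR * 1e-10   # 1/m -> 1/angstrom

E = np.linspace(8350.0, 9300.0, 5)      # Ni K edge: E0 = 8333 eV
k = energy_to_k(E, E0_eV=8333.0)
chi = 0.01 * np.sin(2.0 * 2.49 * k)     # toy chi(k): single Ni-Ni path at 2.49 angstrom
print(np.round(k, 2))                   # ~2.1 ... 15.9 1/angstrom
print(np.round(chi * k**2, 3))          # k^2 weighting boosts the high-k signal
```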
## 4 Conclusions

Nanocrystalline CaWO\({}_{4}\) and ZnWO\({}_{4}\) tungstates were studied as promising candidates for use in hybrid organic-inorganic direct-conversion X-ray detectors operating without a bias voltage. The tungstate nanoparticles (NPs) with a crystallite size of \(\sim\)43 nm for CaWO\({}_{4}\) and \(\sim\)30 nm for ZnWO\({}_{4}\) were synthesized by the hydrothermal method with different morphologies. Hybrid organic-inorganic X-ray detectors were fabricated on top of ITO-covered glass and had a sandwich-type structure composed of five layers: ITO/PEDOT:PSS/NPs:P3HT:PCBM/BPhen/Al. A pure organic detector without nanoparticles was used for comparison. Feasibility experiments were performed using monochromatic synchrotron radiation, allowing us to conduct the measurements in a wide X-ray energy range (9000-20,000 eV). We showed that the presence of tungstate nanoparticles with high-\(Z\) elements increases the X-ray attenuation efficiency and, thus, improves the response of the hybrid detector to X-rays compared with that of a pure organic one based on a P3HT:PCBM bulk heterojunction cell. The use of the developed detector for spectroscopic applications was also demonstrated by recording the Ni K-edge X-ray absorption spectrum of nickel foil. Its well-resolved and extended X-ray absorption fine structure indicates the high dynamic range of the detector. Such hybrid detectors with different AWO\({}_{4}\) tungstate nanoparticles can provide a cost-effective solution that can be optimized for a particular energy range by selecting the A-cation type and that operates without external voltage.

Conceptualization, I.P. and K.P.; Data Curation, I.P. and K.P.; Funding Acquisition, A.K. (Aleksandr Kalinko); Investigation, I.P., K.P., A.T., N.R.S., A.K. (Aleksandr Kalinko) and A.K. (Alexei Kuzmin); Methodology, I.P., K.P., A.K. (Aleksandr Kalinko) and A.K. (Alexei Kuzmin); Project Administration, A.K. (Aleksandr Kalinko); Resources, A.K. (Aleksandr Kalinko); Supervision, A.K. (Alexei Kuzmin); Writing--Original Draft, I.P. and A.K. (Alexei Kuzmin); Writing--Review and Editing, I.P., K.P., A.K. (Aleksandr Kalinko) and A.K. (Alexei Kuzmin). All authors have read and agreed to the published version of the manuscript.

The authors are thankful for the financial support from the Latvian Council of Science project No. lzp-2019/1-0071. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because of ongoing research. The experiment at the DESY PETRA-III synchrotron was performed within project No. I-20211105 EC at the Institute of Solid State Physics, University of Latvia, as the Center of Excellence has received funding from the European Union's Horizon 2020 Framework Programme H2020-WIDESPREAD-01-2016-2017-TeamingPhase2 under grant agreement No. 739508, project CAMART2. The authors declare no conflict of interest.

Figure 5: Comparison of the Ni K-edge (**a**) XANES \(\mu(E)\) and (**b**) EXAFS spectra \(\chi(k)k^{2}\) of nickel foil measured using the novel hybrid and PIPS detectors. \(k\) is the wavenumber of the excited photoelectron.

The following abbreviations are used in this manuscript: \begin{tabular}{l l} EXAFS & Extended X-ray absorption fine structure \\ NP & Nanoparticle \\ PEDOT:PSS & Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) \\ P3HT:PCBM & Poly(3-hexylthiophene-2,5-diyl):Phenyl-C61-butyric acid methyl ester \\ RT & Room temperature \\ SEM & Scanning electron microscopy \\ WL & White line \\ XANES & X-ray absorption near edge structure \\ XRD & X-ray diffraction \\ \end{tabular}
2308.10200
Broadband multi-wavelength study of LHAASO detected AGN
Recently, the Large High Altitude Air Shower Observatory (LHAASO) collaboration presented the first catalog of $\gamma$-ray sources using 508 days of LHAASO data, from March 2021 to September 2022. This catalog contains four blazars and a possible LINER-type AGN counterpart. In this work, we establish averaged multi-wavelength SEDs by combining data from the \textsl{Fermi}-Large Area Telescope, \textsl{Swift}, ZTF, and WISE covering the same period as the LHAASO detection. In general, these five AGNs are found in low states at all wavelengths. To study the multi-wavelength properties of these AGNs, several jet emission models, including the one-zone leptonic model, the one-zone leptonic and hadronuclear ($pp$) model, the one-zone proton-synchrotron model, and the spine-layer model are applied to reproduce their averaged SEDs, respectively. We find that the one-zone leptonic model can reproduce most of the SEDs, except for the high-energy tail of the LHAASO spectra of Mrk 421 and Mrk 501. To improve the fitting, emission from $pp$ interactions is favoured in the framework of a one-zone model. The spine-layer model, which can be treated as a multi-zone scenario, can also provide good spectral fits. The influence of different extragalactic background light models on fitting the LHAASO energy spectrum is also discussed.
Ze-Rui Wang, Rui Xue, Dingrong Xiong, Hai-Qin Wang, Lu-Ming Sun, Fang-Kun Peng, Jirong Mao
2023-08-20T08:18:28Z
http://arxiv.org/abs/2308.10200v2
# Broadband multi-wavelength study of LHAASO detected AGN

###### Abstract

Recently, the Large High Altitude Air Shower Observatory (LHAASO) collaboration presented the first catalog of \(\gamma\)-ray sources using 508 days of LHAASO data from March 2021 to September 2022. This catalog contains five active galactic nuclei (AGNs), of which four are blazars and one is a LINER-type AGN. In this work, we establish averaged multi-wavelength SEDs by combining data from the _Fermi_-Large Area Telescope, \(Swift\), \(ZTF\), and \(WISE\) covering the same period as the LHAASO detection. In general, these five AGNs are found in low states at all wavelengths. To study the multi-wavelength properties of these AGNs, several jet emission models, including the one-zone leptonic model, the one-zone leptonic and hadronuclear (\(pp\)) model, the one-zone proton-synchrotron model, and the spine-layer model, are applied to reproduce their averaged SEDs, respectively. We find that the one-zone leptonic model can reproduce most of the SEDs, except for the high-energy tail of the LHAASO spectra. To improve the fitting, emission from \(pp\) interactions is favoured in the framework of a one-zone model. The spine-layer model, which can be treated as a multi-zone scenario, can also provide good spectral fits. The influence of different extragalactic background light models on fitting the LHAASO energy spectrum is also discussed.

Gamma-ray sources (633); High energy astrophysics (739); Relativistic jets (1390)

## 1 Introduction

Very high energy (VHE, 0.1 \(\sim\) 100 TeV) \(\gamma\)-rays are one of the most important messengers for the investigation of the most extreme phenomena in the Universe. More than 90 extragalactic sources have been detected in the VHE band, the majority of which are jetted active galactic nuclei (AGNs; de Naurois, 2021). These VHE AGNs include blazars, with powerful jets pointing toward the observer, and radio galaxies, which are considered the misaligned counterparts of blazars (Blandford & Rees, 1978; Urry & Padovani, 1995). In addition, other subclasses with GeV detections or jets (e.g., narrow-line Seyfert 1 galaxies; Laushvili et al., 2023) are potential VHE emitters as well. Recently, the Large High Altitude Air Shower Observatory (LHAASO) collaboration presented the first LHAASO catalog of VHE \(\gamma\)-ray sources, five of which are AGNs (Cao et al., 2023). These five AGNs include four high-synchrotron-peaked (HSP; Abdo et al., 2010) blazars, which are Mrk 421, Mrk 501, 1ES 1727+502 and 1ES 2344+514, and a LINER-type AGN named NGC 4278. Although the _Fermi_ telescope does not detect GeV photons from NGC 4278, the parsec-scale jet discovered by radio observations is still a possible site for the acceleration of relativistic particles (Ly et al., 2004; Giroletti et al., 2005). The new LHAASO observations shed light on the VHE radiation mechanism of AGNs. The physical origin of the VHE emission of jetted AGNs is complex and under debate. Due to the lack of strong external photon fields in HSP blazars, the most commonly used interpretation is the synchrotron self-Compton process (SSC; Bloom & Marscher, 1996; Marscher & Travis, 1996; Abdo et al., 2011). Since the Klein-Nishina effect softens the SSC spectrum, the SSC model naturally predicts a soft VHE spectrum.
However, the intrinsic hard VHE spectra of some AGNs imply a different physical interpretation. Several models have been proposed, such as the spine-layer model (Ghisellini et al., 2005; Acciari et al., 2020), the proton-synchrotron model (Aharonian, 2000; Mucke and Protheroe, 2001; Mucke et al., 2003; Cerruti et al., 2015; Xue et al., 2023), and the ultra-high-energy cosmic-ray propagation model (Essey et al., 2011; Prosekin et al., 2012; Das et al., 2020, 2022). The reported minute-scale variability in the VHE band also implies a multi-zone origin of the jet's multi-wavelength emission (e.g., Begelman et al., 2008). On the other hand, many associations between high-energy neutrinos and AGNs have been reported (IceCube Collaboration et al., 2018, 2018; Rodrigues et al., 2021; Aartsen et al., 2020; Giommi et al., 2020; Padovani et al., 2022; Sahakyan et al., 2023), which suggest that hadronic interactions, including photohadronic (\(p\gamma\)) and hadronuclear (\(pp\)) interactions, in the jet cannot simply be ignored. Multi-wavelength modeling finds that the emission of the secondary electrons/positrons could contribute in the VHE band (Cerruti et al., 2019; Gao et al., 2019; Liu et al., 2019; Xue et al., 2019; Wang and Xue, 2021). Our preliminary work suggested that \(pp\) interactions in the jet can generate detectable VHE emission and successfully predicted that Mrk 421, Mrk 501 and 1ES 2344+514 would be detected by LHAASO (Xue et al., 2022). In this work, to understand the radiation mechanism of these LHAASO detected AGNs comprehensively, we build averaged multi-wavelength spectral energy distributions (SEDs) by combining observations of \(WISE\) in the infrared band, the Zwicky Transient Facility (ZTF) in the optical band, \(Swift\) in the X-ray band, and \(Fermi\)-Large Area Telescope (LAT) data in the \(\gamma\)-ray band. Several models are applied to fit the SEDs, especially the VHE spectra. This paper is organized as follows: In Sect. 2, we describe the multi-wavelength observations and data reduction. The model description and fitting results are presented in Sect. 3. Finally, we end with discussions and conclusions in Sect. 4. The cosmological parameters \(H_{0}\) = 70 km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{0}\) = 0.3, and \(\Omega_{\Lambda}\) = 0.7 are adopted.

## 2 Observations and Data Reduction

In this section, we present the multi-wavelength observations of Mrk 421, Mrk 501, 1ES 1727+502, 1ES 2344+514, and NGC 4278 from 2021 March 5 to 2022 September 30, during the operation of the Water Cherenkov Detector Array (WCDA) of LHAASO, and the processes of data reduction. The detailed information on these five LHAASO AGNs is given in Table 1.

### Fermi-LAT

The LAT on board the Fermi mission is a pair-conversion instrument that is sensitive to GeV emission (Atwood et al., 2009). Data are analyzed with the fermitools version 2.2.0. A binned maximum likelihood analysis is performed on a region of interest (ROI) with a radius of 10\({}^{\circ}\) centered on the "R.A." and "decl." of each source. Recommended event selections for data analysis are "FRONT+BACK" (evtype=3) and evclass=128. We apply a maximum zenith angle cut of \(z_{\rm max}=90^{\circ}\) to reduce the effect of the Earth albedo background. The standard gtmktime filter selection with an expression of (DATA_QUAL > 0 && LAT_CONFIG == 1) is set.
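For readers who script such analyses, the selection just described could be reproduced, for example, with the fermipy wrapper around the fermitools. The configuration below is an illustrative sketch only: the authors drive the fermitools directly, and the file names and exact keys here are assumptions, not taken from the paper.

```python
# Illustrative fermipy-based equivalent of the event selection described above.
from fermipy.gtanalysis import GTAnalysis

config = {
    'data':      {'evfile': 'ft1_files.lst', 'scfile': 'spacecraft.fits'},  # assumed paths
    'selection': {'emin': 100, 'emax': 800000,              # MeV, i.e. 0.1-800 GeV
                  'zmax': 90, 'evclass': 128, 'evtype': 3,
                  'target': 'Mrk 421',
                  'filter': 'DATA_QUAL>0 && LAT_CONFIG==1'},
    'binning':   {'roiwidth': 20.0, 'binsz': 0.1, 'binsperdec': 10},
    'model':     {'galdiff': 'gll_iem_v07.fits',
                  'isodiff': 'iso_P8R3_SOURCE_V3_v1.txt',
                  'catalogs': ['4FGL']},                     # or a 4FGL-DR3 FITS file
}

gta = GTAnalysis(config)
gta.setup()                          # event selection, livetime, exposure, source maps
gta.fit()                            # binned maximum-likelihood fit of the ROI
sed = gta.sed('4FGL J1104.4+3812')   # SED of Mrk 421 in logarithmic energy bins
```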
A source model is generated containing the position and spectral definition for all the point sources and diffuse emission from the 4FGL (Abdollahi et al., 2022) within 15\({}^{\circ}\) of the ROI center. The analysis includes the standard Galactic diffuse emission model (gll_iem_v07.fits) and the isotropic component (iso_P8R3_SOURCE_V3_v1.txt), respectively. We bin the data in counts maps with a scale of 0.1\({}^{\circ}\) per pixel and set ten logarithmically spaced bins per decade in energy. The energy dispersion correction is made when event energies extending down to 100 MeV are taken into consideration. Because the 4FGL-DR3 catalog covers a full 12 years of data in a broad energy range, for the analysis of the spectral energy distributions and short-time-period light curves, the spectral parameters of weak sources located within 10\({}^{\circ}\) of the center of the ROI are fixed during the maximum likelihood fitting. In a few cases, we fix or delete some sources to obtain a convergent fit. We divide the SED into six equal logarithmic energy bins in the 0.1-100 GeV range, and an additional bin in the 100-800 GeV range for these LHAASO sources. We built GeV light curves using about 8-day intervals for 0.1-100 GeV photons, shown in Fig. 1. The data points with low detection significance are given upper limits at the 95% confidence level. The test statistic and spectral index can be found in Table 1.

### Swift-XRT

We make use of the Swift-XRT data products generator (xrt_prods) to obtain 0.3-10 keV X-ray light curves and spectra. Version 1.10 of the xrt_prods module is released as part of swifttools v3.0. This facility allows the creation of publication-ready X-ray light curves and spectra. Processing is performed using HEASOFT v6.29. Instrumental artifacts such as pile-up and the bad columns on the CCD are corrected for following Evans et al. (2007, 2009). These spectra and X-ray light curves are produced by specifying the same covering times as the optical band data. Other settings adopt the default values of xrt_prods. The obtained spectra are not single observed spectra but the average spectra during the selected times, observed in photon-counting (PC) mode and windowed timing (WT) mode.

\begin{table} \begin{tabular}{r c c c c c c c} \hline \hline Source name & R.A. (J2000) & Decl. (J2000) & \(z\) & \(M_{\rm BH}\) & TS(500 d) & \(\Gamma_{\rm index}\)(500 d) & Type \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline Mrk 421 & 11 04 19 & +38 11 41 & 0.031 & \(1.35\times 10^{9}\) (Wu et al., 2002) & 16652 & 1.83\(\pm\)0.01 & HSP blazar \\ Mrk 501 & 16 53 52.2 & +39 45 37 & 0.034 & \(1.00\times 10^{9}\) (Katarzyński et al., 2001) & 4644 & 1.78\(\pm\)0.02 & HSP blazar \\ 1ES 1727+502 & 17 28 18.6 & +50 13 10 & 0.055 & \(5.62\times 10^{7}\) (Wu et al., 2002) & 287 & 1.72\(\pm\)0.01 & HSP blazar \\ 1ES 2344+514 & 23 47 04 & +51 42 49 & 0.044 & \(6.31\times 10^{8}\) (Wu et al., 2002) & 390 & 1.82\(\pm\)0.01 & HSP blazar \\ NGC 4278 & 12 20 06.8 & +29 16 50.7 & 0.002 & \(3.10\times 10^{8}\) (Wang \& Zhang, 2003) & 9 & - & LINER \\ \hline \end{tabular} \end{table} Table 1: The sample. Columns from left to right: (1) the source name. (2) right ascension (R.A.). (3) declination (Decl.). (4) the redshift of the source. (5) the SMBH mass in units of the solar mass, \(M_{\odot}\). (6) test statistic (_Fermi_-LAT). (7) spectral index (_Fermi_-LAT). (8) the type of AGN.

Figure 1: Multiwavelength light curves (LCs) of the LHAASO sources.
Panels from top to bottom in these five figures: LCs of \(WISE\), \(ZTF\), \(Swift\)-XRT, and \(Fermi\)-LAT. The meaning of the symbols is given in the legend of Mrk 421.

After downloading the average spectra, we chose XSPEC (version 12.9) to fit them, fitting the spectra of the two modes separately. The specific fitting process is as follows. In order to obtain smaller flux errors, we apply the grppha command to rebin the channels, setting the minimum number of counts per group to be greater than 29. The group minimum for NGC 4278 is equal to 4 due to insufficient photon counts. A model of TBabs*cflux*powerlaw is considered for fitting these spectra. The Galactic absorption \(N_{H}\) is taken from the HEASARC tool (HI4PI Collaboration et al., 2016) and is frozen during fitting. The reduced chi-squared or C-statistic values are used to measure the quality of the fitting. As an example, the fitting results of the PC-mode spectra are shown in Fig. 2. For the two sources Mrk 421 and Mrk 501 in WT mode, the logarithmic parabolic (logpar) model with \(E_{\rm min}\) = 1 keV replaces the power-law model because it provides a better fit. For NGC 4278, a model of TBabs*cflux*(powerlaw+bbody) is also applied to fit the spectrum. We find that the spectrum can be well fitted by a single power law with an index of 1.157\({}^{+0.01}_{-0.01}\); the model of TBabs*cflux*(powerlaw+bbody) does not significantly improve the fit. After the fitting is completed, we use the eeufspec command to convert the fits into unfolded spectra, whose \(y\) values can be transformed into the \(\nu F_{\nu}\) of the SED.

Footnote 4: [https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl)

### ZTF

The optical magnitudes in the g, r and i bands are collected from the 17th ZTF public data release (Masci et al., 2019). If catflags for a ZTF image has a value < 32768 (i.e., does not contain bit 15), the photometry at that epoch is probably usable (Masci et al., 2019). Thus, in order to obtain good observational data, we require catflags score=0 for all sources apart from Mrk 421, for which there are no data with catflags score=0. It should be noted that data with catflags score=4096 are chosen for Mrk 421. We convert the g, r, i magnitudes into fluxes following Xiong et al. (2020). In addition, the Galactic extinctions in the g, r and i bands are corrected, with the extinction values provided by the NASA/IPAC Extragalactic Database. The average magnitudes and the errors on the average values during the selected period are used as the optical band data in the SED fitting.

Footnote 5: [https://irsa.ipac.caltech.edu/applications/Gator/](https://irsa.ipac.caltech.edu/applications/Gator/)

### WISE

The WISE (Wright et al., 2010) telescope has been operating a repetitive all-sky survey since 2010, except for a gap between 2011 and 2013. The WISE telescope visits each location every half a year and takes \(>\) 10 exposures during \(\sim\) 1 day. Although four filters were used initially, most of the time only two filters, named W1 and W2, are used. The central wavelengths of the two filters are 3.4 \(\mu\)m and 4.6 \(\mu\)m. We collected the magnitudes of the five sources by point spread function (PSF) fitting from the NASA/IPAC InfRared Science Archive (IRSA). Following Jiang et al. (2021), we selected magnitudes with good image quality (qi_fact>0) and unaffected by charged particle hits (saa_sep>0), scattered moon light (moon_masked<1) or artifacts (cc_flags=0), and then binned the magnitudes every half a year since we did not detect any intraday variabilities.

Footnote 6: [https://irsa.ipac.caltech.edu/applications/Gator/](https://irsa.ipac.caltech.edu/applications/Gator/)
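The magnitude-to-flux conversion used for the optical points can be sketched as below for AB magnitudes (the ZTF system). This is a generic illustration rather than the exact recipe of Xiong et al. (2020): the effective wavelength is approximate, and the WISE W1/W2 magnitudes are on the Vega system, so they need zero-point offsets before the same formula applies.

```python
import numpy as np

def ab_mag_to_nuFnu(mag_ab, wavelength_angstrom):
    """Convert an AB magnitude to nu*F_nu in erg/s/cm^2 (the usual SED y-axis)."""
    f_nu_jy = 3631.0 * 10.0 ** (-0.4 * mag_ab)    # AB zero point: 3631 Jy
    f_nu_cgs = f_nu_jy * 1.0e-23                  # erg/s/cm^2/Hz
    nu = 2.998e18 / wavelength_angstrom           # c / lambda, in Hz
    return nu * f_nu_cgs

# ZTF g band (effective wavelength ~4800 angstrom, approximate) at an
# illustrative extinction-corrected magnitude:
print(f"{ab_mag_to_nuFnu(13.0, 4800.0):.2e} erg/s/cm^2")
```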
## 3 SED Modelling

These five LHAASO AGNs do not show significant flares during the LHAASO observation period in any band, as shown in Fig. 1. Therefore, we use the averaged flux in each band to construct the SEDs. As mentioned above, the first LHAASO catalog reports five AGNs, including four HSP blazars and one LINER-type AGN. In the case of the blazars, the multi-wavelength emission, apart from the obvious thermal peaks in the infrared and optical bands, is undoubtedly from the jet. For the LINER-type AGN NGC 4278, the jet/RIAF and the thin accretion disk may alternately dominate as the origin of the radiation, depending on the strength of its X-ray emission (Younes et al., 2010). In our data analysis, the discovered hard X-ray power-law spectrum favours a jet origin. In order to better understand the radiation mechanisms of these LHAASO detected AGNs, we apply four popular jet models to reproduce the SEDs: the one-zone SSC model, the one-zone SSC+\(pp\) model, the one-zone proton-synchrotron model and the spine-layer model. The synchrotron radiation, the inverse Compton (IC) radiation and the radiation from \(pp\) interactions are calculated using the naima Python package (Zabalza, 2015). We also consider the absorption of \(\gamma\)-ray photons by the soft photons in the radiation zone (Xue et al., 2022) and by the extragalactic background light (EBL; Dominguez et al., 2011) during propagation in intergalactic space. In addition, the energy of the absorbed \(\gamma\)-ray photons in the radiation region is converted to lower energies through the cascade process. The cascade spectrum is calculated as proposed in Bottcher et al. (2013). In the infrared and optical bands of the SEDs, all sources exhibit a clear hump, which differs significantly from the trend in the other bands. This is normally attributed to emission from the host galaxy. We assume that the host galaxy is a 13 Gyr old elliptical galaxy for all of the AGNs (Raiteri et al., 2014) and use the SWIRE template (Polletta et al., 2007) to generate the spectrum of the host galaxy in the fitting.
Footnote 7: [https://irsa.ipac.caltech.edu/applications/Gator/](https://irsa.ipac.caltech.edu/applications/Gator/)

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{11}{c}{SSC model and SSC+\(pp\) model} \\ Source name & \(\theta\) (\({}^{\circ}\)) & \(\Gamma\) & \(L_{\rm e}^{\rm inj}\) (erg s\({}^{-1}\)) & \(\gamma_{\rm e,b}\) & \(\gamma_{\rm e,max}\) & \(s_{\rm e,1}\) & \(s_{\rm e,2}\) & \(B\) (G) & \(R\) (cm) & \(L_{\rm p}^{\rm inj}/L_{\rm Edd}\) \\ \hline Mrk 421 & 2 & 23 & 9.00E+43 & 3.00E+05 & 1.00E+07 & 2.20 & 4.90 & 0.07 & 9.30E+15 & 2.50E-01 \\ Mrk 501 & 2 & 23 & 4.80E+43 & 5.30E+05 & 1.00E+07 & 2.23 & 4.90 & 0.09 & 4.00E+15 & 4.17E-01 \\ 1ES 1727+502 & 2 & 23 & 1.80E+43 & 1.80E+05 & 1.00E+07 & 2.10 & 3.10 & 0.12 & 1.50E+15 & 2.00E+01 \\ 1ES 2344+514 & 2 & 23 & 6.50E+43 & 8.00E+05 & 1.00E+07 & 2.50 & 4.90 & 0.08 & 1.80E+15 & 4.17E-03 \\ NGC 4278\({}^{\rm a}\) (SSC) & 2 & 5 & 8.50E+40 & 9.00E+05 & 1.00E+07 & 1.00 & 3.00 & 0.04 & 8.50E+13 & - \\ NGC 4278\({}^{\rm a}\) (SSC+\(pp\)) & 2 & 5 & 2.15E+43 & 4.00E+03 & 1.00E+07 & 1.50 & 4.90 & 0.01 & 1.00E+15 & 1.22E-04 \\ NGC 4278\({}^{\rm b}\) (SSC) & 30 & & 3.10E+42 & 3.00E+06 & 5.00E+07 & 1.00 & 2.30 & 0.20 & 1.50E+14 & - \\ NGC 4278\({}^{\rm b}\) (SSC+\(pp\)) & 30 & & 3.90E+43 & 5.00E+03 & 1.00E+07 & 1.50 & 4.90 & 0.05 & 3.00E+15 & 1.67E-02 \\ \hline \hline \multicolumn{11}{c}{Spine-layer model} \\ Source name & \(\theta\) (\({}^{\circ}\)) & \(\Gamma\) & \(L_{\rm e}^{\rm inj}\) (erg s\({}^{-1}\)) & \(\gamma_{\rm e,b}\) & \(\gamma_{\rm e,max}\) & \(s_{\rm e,1}\) & \(s_{\rm e,2}\) & \(B\) (G) & \(R_{\rm c}\) (cm) & \(L\) (cm) \\ \hline Mrk 421 (spine) & 2 & 21 & 3.30E+43 & 4.00E+05 & 1.00E+07 & 1.80 & 4.50 & 0.02 & 7.00E+16 & 1.00E+17 \\ Mrk 421 (layer) & 2 & 4 & 2.00E+42 & 5.00E+03 & 1.00E+07 & 1.80 & 4.50 & 0.02 & 8.40E+16 & 5.00E+17 \\ Mrk 501 (spine) & 2 & 18 & 6.00E+43 & 1.50E+06 & 1.00E+07 & 2.10 & 4.90 & 0.01 & 7.00E+16 & 1.00E+17 \\ Mrk 501 (layer) & 2 & 3 & 1.30E+42 & 6.00E+03 & 1.00E+07 & 2.10 & 4.90 & 0.01 & 8.40E+16 & 5.00E+17 \\ 1ES 1727+502 (spine) & 2 & 21 & 6.50E+42 & 5.00E+05 & 1.00E+07 & 1.90 & 4.20 & 0.02 & 7.00E+16 & 1.00E+17 \\ 1ES 1727+502 (layer) & 2 & 4 & 1.50E+42 & 3.00E+04 & 1.00E+07 & 1.90 & 4.20 & 0.02 & 8.40E+16 & 5.00E+17 \\ 1ES 2344+514 (spine) & 2 & 23 & 7.00E+42 & 5.00E+05 & 1.00E+07 & 2.00 & 4.50 & 0.02 & 7.00E+16 & 1.00E+17 \\ 1ES 2344+514 (layer) & 2 & 4 & 2.20E+42 & 1.20E+04 & 1.00E+07 & 2.00 & 4.50 & 0.02 & 8.40E+16 & 5.00E+17 \\ NGC 4278\({}^{\rm a}\) (spine) & 2 & 18 & 4.70E+38 & 8.00E+05 & 1.00E+07 & 1.00 & 3.20 & 0.15 & 5.20E+15 & 1.00E+17 \\ NGC 4278\({}^{\rm a}\) (layer) & 2 & & 9.00E+39 & & 1.00E+07 & 1.00 & 3.20 & 0.15 & 6.24E+15 & 5.00E+17 \\ NGC 4278\({}^{\rm b}\) (spine) & 30 & 25 & 3.00E+41 & 8.00E+02 & 1.00E+07 & 1.00 & 3.20 & 0.30 & 7.00E+16 & 1.00E+17 \\ NGC 4278\({}^{\rm b}\) (layer) & 30 & 2 & 4.60E+40 & 5.00E+06 & 1.00E+07 & 1.00 & 3.20 & 0.03 & 8.40E+16 & 5.00E+17 \\ \hline \hline \end{tabular} \end{table} Table 2: The fitting parameters. The minimum electron Lorentz factor \(\gamma_{\rm e,min}\) is set to \(1\times 10^{2}\) because it is insensitive in the fitting. The ‘-’ sign indicates that the parameter does not exist in the one-zone SSC model.

Figure 2: The fitting results of the \(Swift\)-XRT PC-mode spectra.

Figure 3: One-zone SSC modeling. The meanings of the line styles are given in the legend of Mrk 421. The light blue data points are infrared data from WISE, the dark blue data points are optical data from ZTF, the orange and green data points are X-ray data from Swift-XRT’s PC mode and WT mode, respectively, the purple data points are \(\gamma\)-ray data from \(Fermi\)-LAT, the red strap shows the observation of WCDA, and the red upper limit point is from KM2A. The gray data points are historical data from the Space Science Data Center.
### One-zone SSC model

The one-zone SSC model is the simplest and most commonly used model in the study of jet emission. In this paper, we assume a broken power-law injection electron density distribution. By taking into account the radiative cooling and the escape of the electrons, the steady-state electron density distribution can be calculated with (Xue et al., 2019) \[N_{\rm e}(\gamma_{\rm e})=\frac{3L_{\rm e}^{\rm inj}n_{\rm e}^{\rm inj}(\gamma_{\rm e})}{4\pi R^{3}m_{\rm e}c^{2}\int\gamma_{\rm e}n_{\rm e}^{\rm inj}(\gamma_{\rm e}){\rm d}\gamma_{\rm e}}\,{\rm min}\{t_{\rm cool}(\gamma_{\rm e}),t_{\rm esc}\}, \tag{1}\] where \(n_{\rm e}^{\rm inj}(\gamma_{\rm e})\propto\left\{\begin{array}{ll}\gamma_{\rm e}^{-s_{\rm e,1}},&\gamma_{\rm e,min}\leq\gamma_{\rm e}\leq\gamma_{\rm e,b}\\ \gamma_{\rm e,b}^{s_{\rm e,2}-s_{\rm e,1}}\gamma_{\rm e}^{-s_{\rm e,2}},&\gamma_{\rm e,b}<\gamma_{\rm e}\leq\gamma_{\rm e,max}\end{array}\right.\) is the injection electron density distribution, \(\gamma_{\rm e}\) is the electron Lorentz factor, \(\gamma_{\rm e,min}\) and \(\gamma_{\rm e,max}\) are the minimum and maximum electron Lorentz factors of the distribution, \(\gamma_{\rm e,b}\) is the break electron Lorentz factor, \(s_{\rm e,1}\) and \(s_{\rm e,2}\) are the low-energy and high-energy indexes of the broken power-law spectrum, \(L_{\rm e}^{\rm inj}\) is the electron injection luminosity, \(R\) is the radius of the radiation zone, \(m_{\rm e}\) is the rest mass of the electron, \(c\) is the speed of light, \(t_{\rm esc}=R/c\) is the escape timescale, \(t_{\rm cool}=3m_{\rm e}c/(4\sigma_{\rm T}\gamma_{\rm e}(u_{\rm B}+f_{\rm KN}u_{\rm ph}))\) is the electron cooling timescale, \(\sigma_{\rm T}\) is the Thomson scattering cross section, \(f_{\rm KN}\) is the factor accounting for Klein-Nishina (KN) effects (Moderski et al., 2005), \(u_{\rm B}=B^{2}/(8\pi)\) is the energy density of the magnetic field, \(B\) is the magnetic field strength, and \(u_{\rm ph}\) is the energy density of the soft photons. The observed emission will be Doppler boosted by a factor \(\delta^{4}\), where \(\delta=[\Gamma(1-\beta_{\Gamma}{\rm cos}\theta)]^{-1}\) is the Doppler factor, \(\Gamma\) is the bulk Lorentz factor, \(\beta_{\Gamma}c\) is the velocity of the jet, and \(\theta\) is the viewing angle of the jet.

Footnote 8: We use the iterative approach to calculate \(u_{\rm ph,\,syn}\) in the SSC process.
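As an illustration of Eq. (1), a direct numerical transcription is sketched below, with default parameters taken from the Mrk 421 column of Table 2. It is a simplified sketch: the inverse-Compton term \(f_{\rm KN}u_{\rm ph}\) is set to zero (pure synchrotron cooling plus escape), whereas the actual modeling computes \(u_{\rm ph,syn}\) iteratively.

```python
import numpy as np

SIGMA_T, M_E_C2, C = 6.6524e-25, 8.187e-7, 2.998e10   # cgs: cm^2, erg, cm/s

def steady_state_Ne(gamma, L_inj=9.0e43, R=9.3e15, B=0.07,
                    gamma_b=3.0e5, s1=2.2, s2=4.9, u_ph=0.0):
    """Numerical transcription of Eq. (1) for a broken power-law injection.
    Defaults follow the Mrk 421 column of Table 2; u_ph = 0 means the
    f_KN * u_ph term is dropped (pure synchrotron cooling plus escape)."""
    n_inj = np.where(gamma <= gamma_b,
                     gamma ** -s1,
                     gamma_b ** (s2 - s1) * gamma ** -s2)
    u_B = B**2 / (8.0 * np.pi)
    t_cool = 3.0 * M_E_C2 / (4.0 * SIGMA_T * C * gamma * (u_B + u_ph))
    t_esc = R / C
    norm = 3.0 * L_inj / (4.0 * np.pi * R**3 * M_E_C2 *
                          np.trapz(gamma * n_inj, gamma))
    return norm * n_inj * np.minimum(t_cool, t_esc)    # cm^-3 per unit gamma

gamma = np.logspace(2, 7, 300)
Ne = steady_state_Ne(gamma)
print(f"N_e(1e2) ~ {Ne[0]:.2e}, N_e(1e7) ~ {Ne[-1]:.2e} cm^-3")
```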
Current observational data do not provide good constraints on all the parameters in the modeling. Therefore, we fix some of the less sensitive parameters in the fitting to reduce the number of free parameters. All the fitting parameters can be found in Table 2. The relativistic jets of the blazars are close to the line of sight of the observer, so we set the viewing angles for Mrk 421, Mrk 501, 1ES 1727+502, and 1ES 2344+514 to 2\({}^{\circ}\) uniformly. Giroletti et al. (2005) suggest that the viewing angle of NGC 4278 is uncertain: it could have a small viewing angle (\(2^{\circ}<\theta<4^{\circ}\)) or, alternatively, a large one. We therefore divide it into two cases, one with the same viewing angle as the blazar sources (\(\theta=2^{\circ}\), hereafter NGC 4278\({}^{\rm a}\)) and the other with a larger viewing angle (\(\theta=30^{\circ}\), hereafter NGC 4278\({}^{\rm b}\)).

The fitting results of the one-zone SSC model are shown in Fig. 3. In the case of the four blazars, it can be found that the X-ray and GeV \(\gamma\)-ray data can be fitted very well. However, the high-energy tail of the VHE spectrum measured by LHAASO is poorly reproduced. This is caused by the KN effect, which steepens the spectrum naturally. When \(\gamma_{\rm e}E_{0}/\left(m_{\rm e}c^{2}\right)>1\), the IC scattering passes from the Thomson regime into the KN regime, where \(E_{0}\) is the energy of the soft photon in the comoving frame. Then we can obtain the critical electron Lorentz factor \[\gamma_{\rm KN}=\frac{m_{\rm e}c^{2}}{E_{0}}, \tag{2}\] and the corresponding critical energy of the IC radiation can be estimated by \(E_{\rm KN}\approx\gamma_{\rm KN}^{2}E_{0}=m_{\rm e}^{2}c^{4}/E_{0}\). In the observer frame, \[E_{\rm KN}^{\rm obs}\approx\frac{\delta^{2}m_{\rm e}^{2}c^{4}}{(1+z)^{2}}\frac{1}{E_{0}^{\rm obs}}. \tag{3}\] The soft photon energy can be approximately replaced by the peak energy of the low-energy hump. For the four blazars, we obtain \(E_{0}^{\rm obs}\sim 1\,{\rm keV}\). Then, substituting \(E_{0}^{\rm obs}\) and \(\delta\) into Eq. (3), we get the critical energy \(E_{\rm KN}^{\rm obs}\approx 0.2\,{\rm TeV}\). This means that the IC radiation spectrum steepens above \(\sim 0.2\,{\rm TeV}\) because of the KN effect, as shown in Fig. 3. Therefore, the high-energy tail of the LHAASO spectra cannot be fitted with the one-zone SSC model, unless very extreme parameters are considered.
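The \(\sim 0.2\) TeV estimate of Eq. (3) is easy to verify numerically, as in the sketch below, which takes \(\Gamma=23\) and \(\theta=2^{\circ}\) from Table 2 (giving \(\delta\approx 28\)) and the \(\sim\)1 keV synchrotron peak as inputs.

```python
import numpy as np

M_E_C2_EV = 511.0e3   # electron rest energy, eV

def doppler(Gamma, theta_deg):
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

def E_KN_obs_eV(E0_obs_eV, Gamma, theta_deg, z):
    """Eq. (3): observed energy above which IC scattering is deep in the KN regime."""
    d = doppler(Gamma, theta_deg)
    return d**2 * M_E_C2_EV**2 / ((1.0 + z)**2 * E0_obs_eV)

# Mrk 421-like numbers: Gamma = 23, theta = 2 deg, z = 0.031, E0_obs ~ 1 keV:
print(f"E_KN ~ {E_KN_obs_eV(1.0e3, 23.0, 2.0, 0.031) / 1e12:.2f} TeV")  # ~0.19 TeV
```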
In the case of NGC 4278, due to the lack of GeV \(\gamma\)-ray data, the fitting parameters have a larger space to choose from. Nevertheless, in order to explain the spectra in both the X-ray and VHE bands simultaneously, extreme parameters are required. For example, the model requires a very hard low-energy slope (\(s_{\rm e,1}=1\)) or a very large minimum electron Lorentz factor (approaching \(\gamma_{\rm e,b}\)) to explain the very hard X-ray spectra. If we consider that its X-ray radiation is produced by the electron-synchrotron process, a large \(\gamma_{\rm e,b}\) (around \(10^{6}\)) is required. In addition, the critical energy \(E_{\rm KN}^{\rm obs}\) is about \(0.7\,{\rm GeV}\) for the case of NGC 4278\({}^{\rm a}\) and \(7\,{\rm MeV}\) for the case of NGC 4278\({}^{\rm b}\), which requires the low-energy slope \(s_{\rm e,1}\) to be close to 1 and the high-energy slope \(s_{\rm e,2}\leq 3\) in order to counteract the impact of the KN limit and fit the VHE spectra. For NGC 4278\({}^{\rm b}\), it is also necessary to set \(\gamma_{\rm e,max}\) to \(5\times 10^{7}\).

### One-zone SSC+\(pp\) model

The \(pp\) model has the potential to produce the observed TeV spectra of blazars without exceeding the Eddington luminosity, which is difficult to avoid in the \(p\gamma\) model (Xue et al., 2019). Our recent study (Xue et al., 2022) shows that \(pp\) interactions in the jet have the potential to generate VHE emission that can be detected by LHAASO. Therefore, we incorporate the \(pp\) interactions into the one-zone SSC model to reproduce the SEDs. In the \(pp\) modeling, we assume a power-law injection proton density distribution. By taking into account the radiative cooling and the escape of the protons, the steady-state proton density distribution can be calculated with (Xue et al., 2022) \[N_{\rm p}(\gamma_{\rm p})=\frac{3L_{\rm p}^{\rm inj}n_{\rm p}^{\rm inj}(\gamma_{\rm p})}{4\pi R^{3}m_{\rm p}c^{2}\int\gamma_{\rm p}n_{\rm p}^{\rm inj}(\gamma_{\rm p}){\rm d}\gamma_{\rm p}}\,{\rm min}\{t_{\rm cool}(\gamma_{\rm p}),t_{\rm esc}\}, \tag{4}\] where \(n_{\rm p}^{\rm inj}\big{(}\gamma_{\rm p}\big{)}\propto\gamma_{\rm p}^{-s_{\rm p}}\) is the injection proton density distribution, \(\gamma_{\rm p}\) is the proton Lorentz factor in the range \(\gamma_{\rm p,min}\) to \(\gamma_{\rm p,max}\), \(s_{\rm p}\) is the slope of the power-law spectrum, \(L_{\rm p}^{\rm inj}\) is the proton injection luminosity, \(m_{\rm p}\) is the rest mass of the proton, and \(t_{\rm cool}(\gamma_{\rm p})\) is the cooling timescale of the proton. More specifically, \(t_{\rm cool}(\gamma_{\rm p})\) is dominated by the \(pp\) interactions in the SSC+\(pp\) scenario and can be approximated by \(t_{\rm cool}^{pp}\left(\gamma_{\rm p}\right)=1/\left(K_{pp}\sigma_{pp}n_{\rm H}c\right)\), where \(K_{pp}\approx 0.5\) is the inelasticity coefficient, \(n_{\rm H}\) is the number density of cold protons in the jet, \(\sigma_{pp}=\left(34.3+1.88\mathcal{L}+0.25\mathcal{L}^{2}\right)\left[1-\left(\frac{E_{\rm th}^{pp}}{\gamma_{\rm p}m_{\rm p}c^{2}}\right)^{4}\right]^{2}\) mb is the cross section for inelastic \(pp\) interactions (Kelner et al., 2006), \(E_{\rm th}^{pp}=1.22\times 10^{-3}\,\)TeV is the threshold energy for the production of \(\pi^{0}\), and \(\mathcal{L}=\ln\left(\frac{\gamma_{\rm p}m_{\rm p}c^{2}}{1\,{\rm TeV}}\right)\). To maximise the efficiency of the \(pp\) interactions within a reasonable parameter range, analytical calculations suggest that the power of the cold protons in the jet should be set to half the Eddington luminosity (Li et al., 2022; Xue et al., 2022). Then we can get the number density of cold protons \(n_{\rm H}=L_{\rm Edd}/\left(2\pi R^{2}\Gamma^{2}m_{\rm p}c^{3}\right)\), where \(L_{\rm Edd}=1.26\times 10^{38}M_{\rm BH}/M_{\odot}\,{\rm erg\,s^{-1}}\) is the Eddington luminosity and \(M_{\rm BH}\) is the SMBH mass. We may estimate the maximum proton energy by equating the acceleration timescale with the escape timescale (Xue et al., 2019). Then the maximum proton energy can be calculated by \[\gamma_{\rm p,max}=\frac{eBR}{\alpha m_{\rm p}c^{2}}, \tag{5}\] where \(e\) is the elementary charge and \(\alpha\) is a factor representing the deviation from the highest acceleration rate. We employ \(\alpha=1100\), which corresponds to a shock speed measured in the upstream frame of \(\sim 0.07c\) in the situation of shock acceleration (Rieger et al., 2007). We set the minimum proton energy \(\gamma_{\rm p,min}=1\) and the slope of the power-law spectrum \(s_{\rm p}=1.5\). These two parameters have no effect on the fitting results, only on the required proton injection luminosity. Finally, the only remaining free parameter of the \(pp\) model is the proton injection luminosity \(L_{\rm p}^{\rm inj}\), which is shown in Table 2.
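The key \(pp\) quantities above can be evaluated in a few lines, as sketched below for the Mrk 421 parameters of Table 2. For simplicity the cross section is frozen at an illustrative \(\sim\)40 mb instead of the full energy-dependent Kelner et al. (2006) expression, so the timescale ratio is indicative only.

```python
import numpy as np

C, M_P, M_P_C2 = 2.998e10, 1.673e-24, 1.503e-3   # cgs: c, proton mass, m_p c^2 (erg)

def pp_quantities(M_BH=1.35e9, R=9.3e15, Gamma=23.0, B=0.07, alpha=1100.0):
    """Cold-proton density, pp cooling vs. escape timescales, and Eq. (5).
    Defaults follow the Mrk 421 column of Table 2; sigma_pp is frozen at an
    illustrative ~40 mb rather than the full Kelner et al. (2006) expression."""
    L_edd = 1.26e38 * M_BH                                   # erg/s, M_BH in M_sun
    n_H = L_edd / (2.0 * np.pi * R**2 * Gamma**2 * M_P * C**3)
    K_pp, sigma_pp = 0.5, 4.0e-26                            # inelasticity, cm^2 (assumed)
    t_pp = 1.0 / (K_pp * sigma_pp * n_H * C)
    t_esc = R / C
    gamma_p_max = 4.803e-10 * B * R / (alpha * M_P_C2)       # Eq. (5), e in esu
    return n_H, t_pp, t_esc, gamma_p_max

n_H, t_pp, t_esc, g_max = pp_quantities()
print(f"n_H ~ {n_H:.1e} cm^-3; pp efficiency t_esc/t_pp ~ {t_esc / t_pp:.1e}; "
      f"gamma_p,max ~ {g_max:.1e}")
```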
Figure 4: One-zone SSC+\(pp\) modeling. The meanings of the symbols and line styles are given in the legend of Mrk 421.

The fitting results can be found in Fig. 4. The radiation produced by the \(pp\) interactions fits the high-energy tail of the LHAASO spectrum, which previously could not be explained by the one-zone SSC model, very well. The dotted curve in Fig. 4 shows the neutrino flux produced by the \(pp\) interactions, which should be comparable to the photon flux produced at the same time (as shown in the two cases of NGC 4278). The sudden drop of the photon flux in the four blazar cases is due to photon-photon absorption occurring in the jet and during intergalactic propagation. The fitting parameters in Table 2 show that Mrk 421, Mrk 501, 1ES 2344+514, NGC 4278\({}^{\rm a}\) and NGC 4278\({}^{\rm b}\) can be fitted without exceeding the Eddington luminosity, and a quite low proton injection luminosity is required in the cases of 1ES 2344+514, NGC 4278\({}^{\rm a}\) and NGC 4278\({}^{\rm b}\). The proton injection luminosity used in the fitting of 1ES 1727+502 is 20 times the Eddington luminosity, because of its low SMBH mass (\(5.62\times 10^{7}\)\(M_{\odot}\)). For comparison, the fiducial SMBH mass of BL Lacs is \(10^{8.5-9}\)\(M_{\odot}\) (Shaw et al., 2013; Xiao et al., 2022). Moreover, based on the fitting results, it can be found that the cascade process makes a negligible contribution to the energy spectrum in the one-zone SSC+\(pp\) model.

### One-zone proton-synchrotron model

The proton-synchrotron emission in the framework of the one-zone model is often suggested as a possible interpretation of the high-energy component of HSP AGNs (e.g., Aharonian, 2000; Bottcher et al., 2013; Acciari et al., 2020), although extreme physical parameters, such as a super-Eddington jet power and a strong magnetic field, are usually introduced (cf., Cerruti et al., 2015; Petropoulou and Dermer, 2016; Xue et al., 2023). In this subsection, before fitting the SEDs of these five LHAASO AGNs, we search the proton-synchrotron modeling parameter space with an analytical method proposed in our recent work (Xue et al., 2023). In this method, the parameter space is limited by three constraints: (1) the total jet power (dominated by the injection power of relativistic protons and the power carried by the magnetic field) does not exceed the Eddington luminosity; (2) relativistic protons can be accelerated to the required maximum energy; and (3) the emitting region is transparent to VHE photons. In addition, observational results suggest that the magnetic field in the inner jet of AGNs is typically lower than 10 G (O'Sullivan and Gabuzda, 2009; Pushkarev et al., 2012; Hodgson et al., 2017; Kim et al., 2022), and the bulk Lorentz factor of the jet is lower than 30 (e.g., Hovatta et al., 2009). If a reasonable parameter space that satisfies these observational constraints can be found (i.e., \(B\lesssim 10\) G and \(\Gamma\lesssim 30\)), we fit the SEDs. There are two strategies to fit the LHAASO spectra with proton-synchrotron emission. The first one is to use proton-synchrotron emission to account for the entire high-energy component. The second one is to use proton-synchrotron emission to fit the high-energy tail of the LHAASO spectra, while the rest of the high-energy component is still attributed to the leptonic SSC emission. For the first strategy, the index \(s_{\rm p}\) of the injected proton energy distribution can be obtained from the photon index \(\Gamma_{\rm index}\) of the _Fermi_-LAT spectrum, i.e., \(s_{\rm p}=2\Gamma_{\rm index}-1\). We thus derive \(s_{\rm p}\) values for Mrk 421, Mrk 501, 1ES 1727+502, and 1ES 2344+514 of 2.66, 2.56, 2.44, and 2.64, respectively. For NGC 4278, since only upper limits are given by _Fermi_-LAT, we adopt \(s_{\rm p}=2\) by default.
Referring to the leptonic modeling in Sect. 3.1, we set the peak energies \(E_{\rm peak}^{\rm syn}\) of the proton-synchrotron emission of Mrk 421, Mrk 501, 1ES 1727+502, and 1ES 2344+514 to 100 GeV, and \(E_{\rm peak}^{\rm syn}=500\) GeV for NGC 4278. Xue et al. (2023) find that a reasonable parameter space might be found if a relatively large blob radius is considered (\(R\gtrsim 10^{16}\) cm), and the size of the parameter space is inversely proportional to \(R\). So here we only check if the parameter space can be found when \(R=10^{16}\) cm. For the second strategy, we adopt \(E_{\rm peak}^{\rm syn}=14\) TeV and \(s_{\rm p}=2\) by default for all five AGNs, since there is no constraint on \(s_{\rm p}\). Since the default peak energy is quite large, it is only necessary to check if the parameter space exists for a large emitting region. Here, we set \(R=10^{17}\) cm. The results of the parameter-space search are shown in Fig. 5. In the first strategy (the upper six panels), it can be seen that no valid parameter space is found for any of the five AGNs (SED fitting with strong magnetic fields is given in Appendix A). Among them, a super-Eddington jet power is needed for the four blazars because soft proton indexes are suggested by the _Fermi_-LAT spectra. For NGC 4278, there is a large parameter space that yields a sub-Eddington jet power; however, this space is in conflict with the Hillas condition (black curves with arrows).

Figure 5: The ratio \(L_{\rm jet}/L_{\rm Edd}\) in the \(\Gamma-B\) diagram for the one-zone proton-synchrotron model. The upper six panels show the results of applying proton-synchrotron emission to explain the entire high-energy component, and the lower six panels show the results of applying proton-synchrotron emission to explain the high-energy tail of the LHAASO spectra. The black curves with arrows represent the parameter space that satisfies the Hillas condition. The vertical blue curves with arrows show the lower limit of \(\Gamma\) that allows the escape of the maximum-energy \(\gamma\)-ray photons. The vertical and horizontal purple curves show the space where \(B\lesssim 10\) G and \(\Gamma\lesssim 30\). The white dashed contours denote specific values of \(\log(L_{\rm jet}/L_{\rm Edd})\) associated with the color bar.

In the second strategy (the lower six panels), it can be seen that only 1ES 2344+514 yields a reasonable parameter space. However, with \(R=10^{17}\) cm, if we set \(B=7\) G and \(\Gamma=10\), the energy density of the low-energy component \(U_{\rm syn}\approx 8.2\times 10^{-7}\) erg cm\({}^{-3}\) would be much lower than that of the magnetic field \(U_{\rm B}\approx 1.9\) erg cm\({}^{-3}\). Therefore, in the framework of the one-zone model, it is impossible to fit the GeV data of 1ES 2344+514 with SSC emission when using the proton-synchrotron emission to explain the LHAASO spectrum. A second emitting region has to be introduced. In Fig. 6, we show that the SED of 1ES 2344+514, including the LHAASO spectrum, can be explained as a superposition of leptonic emission from the first emitting zone and proton-synchrotron emission from the second emitting zone. The leptonic emission from the first emitting zone is the same as that obtained in Sect. 3.1. For the second emitting zone, we set \(R=10^{17}\) cm, \(B=9\) G, and \(\Gamma=12\), as indicated by the obtained parameter space. Overall, proton-synchrotron emission appears to provide a poor interpretation of the LHAASO spectra. Even considering a second emitting zone, the proton-synchrotron emission is only applicable to 1ES 2344+514.
### Spine-layer model

The observed limb-brightening at parsec (e.g., Giroletti et al., 2004, 2006; Piner et al., 2010) and kiloparsec scales (e.g., Owen et al., 1989; Laing et al., 2011) suggests that the jet could be structured, with a fast spine surrounded by a slower layer. Based on this observation, the spine-layer (or structured) jet model was proposed and has been applied to account for rapidly variable VHE emission (e.g., Ghisellini et al., 2005) and to reproduce quiescent-state SEDs (Tavecchio & Ghisellini, 2016). In this subsection, we apply the spine-layer model to fit the SEDs. This model consists of two components: a relatively small cylinder that is the spine (denoted by the subscript 's') and a hollow cylinder wrapped around the spine as the layer (denoted by the subscript 'l'). Similar to a conventional two-zone model, this model also requires two sets of parameters. The difference is that the spine and the layer influence each other, and there is a relationship between these two sets of parameters. We basically use the same settings as the one-zone SSC model for each component. There are three differences from before: 1. The radiation zone changes from a sphere to a cylinder (spine) or hollow cylinder (layer), so the radius of the radiation zone \(R\) is changed to the cross-section radius \(R_{\rm c}\), and we add a parameter for the length of the cylinder, \(L\). In addition, all calculations related to the shape of the radiation zone must also be replaced. For example, the volume of a sphere in Eq. (1) must be replaced by that of a cylinder or hollow cylinder. 2. We set \(R_{\rm l}=1.2R_{\rm s}\) to reduce the number of free parameters, following Ghisellini et al. (2005). The spectral indexes of the electron spectrum in the layer are set to be the same as those in the spine, because they cannot be constrained in the fitting. 3. Photons produced in one component can enter the other component and, as soft photons, enhance the IC emission in both components. This process of scattering the soft photons coming from the other component is commonly known as the external Compton (EC) process. The energy density of the soft photons from the other component is calculated as suggested in Ghisellini et al. (2005). Finally, there are fifteen free parameters, which can be found in Table 2. In this spine-layer model, we consider three strategies for the modeling: 1. We consider using the EC process to fit the high-energy hump independently. As discussed in Sect. 3.1, the one-zone SSC model can hardly reproduce the entire VHE data set due to the KN effect. In the spine-layer model, if the soft photons come from the other component, Eq. 3 is rewritten as \[E_{\rm KN}^{\rm obs}\approx\frac{\delta_{\rm s}\delta_{\rm l}m_{\rm e}^{2}c^{4}}{\Gamma^{\prime}\left(1+z\right)^{2}}\frac{1}{E_{0}^{\rm obs}},\tag{6}\] where \(\Gamma^{\prime}=\Gamma_{\rm s}\Gamma_{\rm l}(1-\beta_{\rm s}\beta_{\rm l})\) is the relative Lorentz factor between the spine and the layer, and \(\beta_{\rm s}c\) and \(\beta_{\rm l}c\) are the velocities of the spine and the layer, respectively.

Figure 6: Two-zone proton-synchrotron modeling. The meanings of the symbols and line styles are given in the legend.

In the EC process, the soft photons provided by the other component are not constrained by the fitting of the low-energy hump, as they are in the SSC process. Therefore, the KN effect can be weakened as long as the other component provides low-energy soft photons, as shown in Eq. 6.
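A quick numerical reading of Eq. (6) is sketched below for the Mrk 421 spine-layer parameters of Table 2. The soft-photon energy is an assumption made here: the layer synchrotron photons are taken to peak in the infrared (\(\sim\)0.07 eV, estimated from the layer parameters rather than quoted in the text), which pushes the KN break far above the LHAASO band.

```python
import numpy as np

M_E_C2_EV = 511.0e3

def beta(G):
    return np.sqrt(1.0 - 1.0 / G**2)

def doppler(G, theta_deg):
    return 1.0 / (G * (1.0 - beta(G) * np.cos(np.radians(theta_deg))))

def E_KN_EC_eV(E0_obs_eV, G_s, G_l, theta_deg, z):
    """Eq. (6): KN break for EC scattering of the other component's photons."""
    G_rel = G_s * G_l * (1.0 - beta(G_s) * beta(G_l))   # relative Lorentz factor
    return (doppler(G_s, theta_deg) * doppler(G_l, theta_deg) * M_E_C2_EV**2
            / (G_rel * (1.0 + z)**2 * E0_obs_eV))

# Mrk 421 spine-layer values from Table 2: Gamma_s = 21, Gamma_l = 4, theta = 2 deg;
# the soft photons are assumed to be layer synchrotron photons at ~0.07 eV:
e_kn = E_KN_EC_eV(0.07, 21.0, 4.0, 2.0, 0.031)
print(f"E_KN(EC) ~ {e_kn / 1e12:.0f} TeV")   # far above the LHAASO band
```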
This indicates that the EC process might improve on the SSC fit of the VHE observations. We then fit the SEDs in a configuration where all the observed radiation is produced in one component and the other component provides only soft photons. The fitting results are shown in Fig. 7. It can be seen that the EC process, even with a slight or no KN effect, still cannot effectively reproduce the high-energy tail of the LHAASO spectra of the blazars. This is because the radiation spectrum produced by the IC process is not of standard power-law form. The morphology of the spectrum produced by the IC process approaches a smooth curve, because it is influenced by both the electron and soft-photon spectra, the KN effect, and the absorption by photon-photon interactions. This is evident in the fitting result of Mrk 421, where the EC radiation spectrum is curved compared to the power-law spectrum observed by LHAASO. In the case of NGC 4278\({}^{\rm a}\), the multi-wavelength radiation is explained by emission produced in the spine, while for NGC 4278\({}^{\rm b}\) it is explained by emission from the layer. This is because the relativistic beaming effect reduces the flux in the case with a larger viewing angle and a larger bulk Lorentz factor: the Doppler factor \(\delta=0.30\) is small for the spine of NGC 4278\({}^{\rm b}\).

Figure 7: The first fitting strategy of the spine-layer model. The multi-wavelength data are explained by the emission from one component. The meanings of the symbols and line styles are given in the legend of Mrk 421.

Figure 8: The second fitting strategy (two zone) of the spine-layer model; the SEDs are reproduced by the superposition of emissions from the two components. The meanings of the symbols and line styles are given in the legend of Mrk 421.

2. We consider fitting the SED with a superposition of multiple radiation processes from one emitting region. To be specific, the low-energy hump is fitted by synchrotron radiation, and the high-energy hump is fitted by the superposition of the SSC and EC processes, in a scenario similar to that shown in Fig. 6. In this strategy, the peak energy of the SSC or EC radiation is required to reach \(\sim 14\,\mathrm{TeV}\). If the high-energy hump originates from the IC process, the peak energy is bounded by \[E^{\mathrm{obs}}_{\mathrm{IC,peak}}<\gamma_{\mathrm{e,b}}m_{\mathrm{e}}c^{2}\frac{\delta}{1+z}.\tag{7}\] Substituting \(E^{\mathrm{obs}}_{\mathrm{IC,peak}}=14\,\mathrm{TeV}\) into Eq. 7, we obtain \[\gamma_{\mathrm{e,b}}>2.74\times 10^{7}\frac{1+z}{\delta}.\tag{8}\] The characteristic photon energy in the observer frame produced by the electron-synchrotron process can be calculated by \[E^{\mathrm{syn}}_{\mathrm{e,c}}=\frac{3heB\gamma_{\mathrm{e}}^{2}}{4\pi m_{\mathrm{e}}c}\frac{\delta}{1+z}\approx 1.74\times 10^{-8}\gamma_{\mathrm{e}}^{2}\frac{B}{1\,\mathrm{G}}\frac{\delta}{1+z}\,\mathrm{eV},\tag{9}\] where \(h\) is the Planck constant. Based on the observed peak energy of the low-energy hump, we can then derive parameter constraints on the magnetic field strength, the break electron Lorentz factor and the Doppler factor.
Substituting the peak energy of the low-energy hump, \(E^{\mathrm{syn}}_{\mathrm{e,peak}}\sim 1\,\mathrm{keV}\) for the four blazars, and Eq. 8 into Eq. 9, we obtain \[B<7.66\times 10^{-5}\frac{\delta}{1+z}\,\mathrm{G},\tag{10}\] which deviates strongly from the median (\(\sim 0.4\,\mathrm{G}\)) of the magnetic field strengths estimated from VLBI core-shift measurements for BL Lacs (Pushkarev et al., 2012). 3. The superposition of radiation from the two components seems to be a plausible strategy to explain the VHE spectra, as it has minimal parameter constraints. The fitting results are shown in Fig. 8. In the cases of the four blazars, the X-ray spectra are explained by synchrotron emission produced in the spine, and the \(\gamma\)-ray radiation comes from the superposition of emission from the spine and the layer. As shown in Fig. 8, the EC emission spectrum produced in the spine decreases rapidly near TeV or sub-TeV energies due to the KN effect. The EC radiation spectrum produced in the layer can be extended to higher energies, because a larger break electron Lorentz factor is set in the layer. Although the spectral index of the radiation spectrum of each component differs from the observed spectral index in the VHE band due to the influence of the KN effect, the VHE spectrum can still be reproduced by superimposing the radiation of the two regions. In the two cases of NGC 4278, a very hard electron spectrum is still required in the fitting. The low-energy and high-energy humps are both explained by the superposition of radiation from the two components.

## 4 Discussion and Conclusion

### Can emission from \(p\gamma\) interactions interpret the LHAASO spectra?

As shown in Sect. 3.1, the one-zone SSC model can fit most of the multi-wavelength spectra, except for the high-energy tail of the LHAASO spectra. To obtain a better fit, we comprehensively test the contributions from \(pp\) interactions, proton-synchrotron emission, and the spine-layer model. On the other hand, since various soft photon fields exist in the AGN environment, many works have been dedicated to studying the electromagnetic and neutrino emission from \(p\gamma\) interactions. As suggested by many recent studies (e.g., Sahu et al., 2021, 2022; Alfaro et al., 2022), here we analytically discuss whether emission from \(\pi^{0}\) decay in the \(p\gamma\) interactions can improve the fitting of the LHAASO spectra. Using the \(\delta\)-approximation, the relation between the energy of the \(\pi^{0}\)-decay VHE photons \(E^{\mathrm{obs}}_{\mathrm{VHE}}\) and the energy of the target photons \(E^{\mathrm{obs}}_{\mathrm{tar}}\), both in the observer's frame, can be obtained, \[E^{\mathrm{obs}}_{\mathrm{tar}}\simeq 0.9\,\,\mathrm{MeV}\big{(}\frac{\delta}{20}\big{)}^{2}\big{(}\frac{14\,\,\mathrm{TeV}}{E^{\mathrm{obs}}_{\mathrm{VHE}}}\big{)}, \tag{11}\] when considering the peak cross section of photopion interactions due to the \(\triangle^{+}(1232)\) resonance.
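Eq. (11), together with the flux floor of Eq. (12) given next, can be evaluated directly, as in the sketch below; the inputs are the fiducial scalings that appear in the two equations.

```python
def pgamma_targets(E_vhe_TeV=14.0, delta=20.0, Gamma=10.0,
                   R_cm=1.0e16, M_BH=1.0e9, nuFnu_vhe=1.0e-13):
    """Evaluate Eq. (11) and Eq. (12): the target-photon energy and the minimum
    target-photon flux needed for p-gamma to supply the observed VHE flux."""
    E_tar_MeV = 0.9 * (delta / 20.0) ** 2 * (14.0 / E_vhe_TeV)
    nuFnu_tar = (2.2e-10 * (R_cm / 1e16) * (Gamma / 10.0) ** 2 * (delta / 20.0)
                 * (14.0 / E_vhe_TeV) * (nuFnu_vhe / 1e-13) * (1e9 / M_BH))
    return E_tar_MeV, nuFnu_tar

E_tar, flux_floor = pgamma_targets()
print(f"target photons at ~{E_tar:.1f} MeV; "
      f"required nuF_nu >~ {flux_floor:.1e} erg/s/cm^2")
# Fig. 3 predicts only ~1e-12 erg/s/cm^2 at ~1 MeV -- about two orders of magnitude short.
```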
By taking \(L_{\mathrm{Edd}}\) as the maximum proton injection luminosity and \(10^{-28}\,\,\mathrm{cm}^{2}\) as the photopion cross section weighted by inelasticity, the lower limit on the flux of the target photons can be estimated as \[\begin{split}\nu F_{\nu_{\mathrm{tar}}^{\mathrm{obs}}}\simeq 2.2\times 10^{-10}\,\,\mathrm{erg}\,\,\mathrm{s}^{-1}\,\,\mathrm{cm}^{-2}\,\,\big{(}\frac{R}{10^{16}\,\,\mathrm{cm}}\big{)}\big{(}\frac{\Gamma}{10}\big{)}^{2}\\ \big{(}\frac{\delta}{20}\big{)}\big{(}\frac{14\,\,\mathrm{TeV}}{E^{\mathrm{obs}}_{\mathrm{VHE}}}\big{)}\big{(}\frac{\nu F_{\nu_{14\,\,\mathrm{TeV}}}}{10^{-13}\,\,\mathrm{erg}\,\,\mathrm{s}^{-1}\,\,\mathrm{cm}^{-2}}\big{)}\big{(}\frac{10^{9}\,\,M_{\odot}}{M_{\mathrm{BH}}}\big{)}.\end{split} \tag{12}\] As shown in Fig. 3, the model-predicted fluxes at \(\sim 1\,\mathrm{MeV}\) for these LHAASO AGNs are \(\sim 10^{-12}\,\,\mathrm{erg}\,\,\mathrm{s}^{-1}\,\,\mathrm{cm}^{-2}\). If all emission processes occur in one single region, it can be seen that, even when the emission from \(p\gamma\) interactions is only used to account for the high-energy tail of the LHAASO spectra, the model-predicted flux is still two orders of magnitude lower than the flux required. If one attempts to use the emission from \(p\gamma\) interactions to account for the whole LHAASO spectrum, the required flux of target photons would be more than three orders of magnitude higher than the model-predicted flux. Even taking into account that the \(p\gamma\) interactions and the leptonic processes may occur in different regions, i.e., a multi-zone case, the required flux in the MeV band (\(\gtrsim 10^{-10}\) erg s\({}^{-1}\) cm\({}^{-2}\)) still far exceeds all existing AGN observations. Therefore, using the emission from \(p\gamma\) interactions to interpret the LHAASO spectra can be confidently ruled out.

### The influence of different EBL models

The observed \(\gamma\)-ray spectra of extragalactic sources, especially in the VHE band, are softened by the interactions of the \(\gamma\)-ray photons with the EBL. The energy spectrum of the EBL is difficult to obtain by direct observation, so many researchers have used various methods to estimate it. To evaluate the influence of different EBL models on the fitting results, we apply five EBL models to Mrk 421, showing the optical depths (left panel in Fig. 9) and the intrinsic VHE spectra (middle panel in Fig. 9), respectively. The energy at which the \(\gamma\)-ray opacity equals unity varies between the EBL models; within these models, it lies in the 7-10 TeV range for Mrk 421. The EBL model (green line in Fig. 9) used for the calculations in Sect. 3 shows a relatively moderate optical depth in the energy range of the WCDA. Furthermore, the various EBL models are also applied to calculate the corresponding intrinsic VHE spectra, based on the observed spectrum given by the WCDA. The middle panel of Fig. 9 shows that all models indicate the presence of a new component beyond \(\sim 10\) TeV, which is consistent with our fitting results. From Fig. 9, it can be seen that all EBL models have an equivalent effect on the VHE spectra below 4 TeV. Thus, the intrinsic VHE spectrum can be estimated by extrapolating the 1-4 TeV spectrum up to higher energies, if we assume that the VHE radiation comes from one single component. The right panel of Fig. 9 displays the hypothetical extended intrinsic spectrum (represented by the black line) and the expected observed spectrum after absorption.
It appears challenging to reproduce the entire VHE spectrum from the radiation of a single component, even without considering the KN effect in the one-zone SSC model. The above discussion indicates that applying different EBL models does not affect our conclusions. In addition, observational data from \(\gamma\)-ray telescopes can help to constrain the absorption optical depth induced by the EBL and thereby the EBL model itself (e.g., Aharonian et al., 2007; Fermi-LAT Collaboration et al., 2018; Abeysekara et al., 2019; Acciari et al., 2019). However, this would require a richer or simultaneous set of observational data. As it is beyond the scope of this paper, we will not discuss it further.

### Conclusion

In this work, we collect multi-wavelength data from five LHAASO AGNs during the same observation period as LHAASO. Based on theoretical and fitting analysis, we suggest that the one-zone SSC model is capable of reproducing most of the SED, with the exception of the VHE tail in the blazar cases. The failure to reproduce the VHE tail is mainly due to the combined effect of the KN suppression, the EBL absorption, and the parameter constraints imposed by observations in other bands. This is well demonstrated in the case of NGC 4278, which is very close to us and suffers almost no EBL absorption. In addition, its multi-wavelength data impose only weak parameter constraints. Therefore, when we consider more extreme parameters, the one-zone SSC model can reproduce its SED, especially the LHAASO spectrum. We suggest that the high-energy tail of the LHAASO data of blazars cannot be fitted with the one-zone SSC model unless very extreme parameters are considered. This is similar to the conclusion of Katarzynski et al. (2005), which suggests that Thomson scattering into VHE photon energies requires unacceptably large Doppler factors. To reproduce the SEDs of the LHAASO AGNs, we apply the \(pp\) model, the proton-synchrotron model and the spine-layer model. The results suggest that the one-zone model, upon incorporating \(pp\) interactions, effectively accounts for all observations in the SEDs, especially the tail of the VHE observations. In addition, a multi-zone model is also feasible if we consider the superposition of radiation generated by different regions to explain the VHE observations, as demonstrated in the spine-layer model presented in Sect. 3.4. Our analysis indicates that the proton-synchrotron model and the \(p\gamma\) model can hardly explain the SEDs without invoking very extreme parameters. Of all the sources, only the SEDs of 1ES 2344+514 can be reproduced using the two-zone proton-synchrotron model. A very large magnetic field (\(>10\) G) must be introduced to fit the SEDs of the other LHAASO AGNs, whether in the one-zone or two-zone proton-synchrotron models. The low interaction efficiency of the \(p\gamma\) model, brought about by the lack of suitable soft photon fields, prevents it from reproducing the SEDs within a reasonable parameter space.

### Outlook

Our results suggest that VHE observations are crucial to constrain jet models. As mentioned in Sect. 3.1, the SSC process of HBLs enters the KN regime in the VHE band. Detailed VHE observations can therefore confirm or rule out a one-zone SSC origin more precisely. Furthermore, Fig. 9 shows that different EBL absorption models have significantly different influences on the VHE observations beyond 7 TeV. Therefore, by conducting further observations of extragalactic sources above 7 TeV, we can constrain the EBL models better.
This method has already been applied extensively with other \(\gamma\)-ray telescopes (e.g., Fermi-LAT Collaboration et al., 2018; Abeysekara et al., 2019; Acciari et al., 2019). Multi-wavelength variability can provide a different perspective to study the emission origin. For example, long-term monitoring is carried out for Mrk 421, as it is one of the closest BL Lac objects. Its VHE variability displays highly complicated behaviour. Most observations have found a strong correlation between flares in the VHE band and the X-ray band (Fossati et al., 2008; MAGIC Collaboration et al., 2021; Arbet-Engels et al., 2021; Acciari et al., 2021). Some observations have reported variations in the VHE band that are correlated with the X-rays, but not with the optical (Giebels et al., 2007) or the other bands (Aleksic et al., 2015). Some variability studies indicate that the correlation between the X-ray band and the VHE band shows different behaviour over time (Acciari et al., 2020), and Abeysekara et al. (2020) find that the flux relationship changes from linear to quadratic, to no correlation, and to anti-correlation over the decline epochs. Blazejowski et al. (2005) report inconsistent flare times in the X-ray and VHE bands. Taken together, these phenomena are difficult to explain using the one-zone SSC model. Therefore, observations of variability in the VHE band and the corresponding simultaneous SEDs are very important to investigate the radiation mechanisms and the physical properties of blazars. We thank the anonymous referee for insightful comments and constructive suggestions. Z.R.W. acknowledges the support by the NSFC under Grant No. 12203024 and the support by the Department of Science & Technology of Shandong Province under Grant No. ZR2022QA071. R.X. acknowledges the support by the NSFC under Grant No. 12203043. D.R.X. acknowledges the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A06, the Yunnan Province Youth Top Talent Project (YNWR-QNBJ-2020-116) and the CAS "Light of West China" Program. F.K.P. acknowledges support by the National Key Research and Development Program of China (grant numbers 2022SKA0130100 and 2022SKA0130102), the National Natural Science Foundation of China (Grants No. 12003002), the University Annual Scientific Research Plan of Anhui Province 2023, and the Doctoral Starting up Foundation of Anhui Normal University 2020 (903/752022). L.M.S. acknowledges the support from NSFC grant No. 12103002 and Anhui Provincial Natural Science Foundation grant No. 2108085QA43. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

Figure 9: The optical depths (left panel), the intrinsic VHE spectra (middle panel) and the hypothetical extended intrinsic spectrum and the expected observed spectrum (right panel) of Mrk 421 (z=0.031) for different EBL models. The optical depths are taken from Finke et al. (2010); Dominguez et al. (2011); Gilmore et al. (2012); Saldana-Lopez et al. (2021); Finke et al. (2022).

## Appendix A Proton-Synchrotron Modeling with Strong Magnetic Fields

Some studies show that the high-energy hump of SEDs can be fitted by the proton-synchrotron process with a strong magnetic field (Cerruti et al., 2015; Wang et al., 2023), although such a large magnetic field strength contradicts current observations. In the following, we present the fitting results of the proton-synchrotron model with a strong magnetic field. In this scenario, the leptonic modeling follows that given in Sect. 3.1, and the hadronic modeling basically follows that given in Sect. 3.2. There are five differences from before:

1. The power-law proton spectrum cannot fit the LHAASO observations and therefore the injected proton density distribution is changed to a broken power-law spectrum, i.e., \[n_{\rm p}^{\rm inj}(\gamma_{\rm p})\propto\left\{\begin{array}{ll}\gamma_{ \rm p}^{-s_{\rm p,1}},&\gamma_{\rm p,min}\leq\gamma_{\rm p}\leq\gamma_{\rm p, b}\\ \gamma_{\rm p,b}^{s_{\rm p,2}-s_{\rm p,1}}\gamma_{\rm p}^{-s_{\rm p,2}},& \gamma_{\rm p,b}<\gamma_{\rm p}\leq\gamma_{\rm p,max}\end{array}\right..\] (A1)
2. In the proton-synchrotron modeling, a higher maximum proton Lorentz factor is required to produce the TeV emission than in the \(pp\) interactions. Here we set \(\alpha=1\), which implies an extreme acceleration efficiency.
3. In the proton-synchrotron model, a large magnetic field is needed to accelerate protons to higher energies and produce higher-energy emission. We boldly fix the magnetic field \(B\) to 35 G for all five AGNs.
4. To maximise the efficiency of the proton-synchrotron process within a reasonable parameter space, we assume that the power of the magnetic field equals half the Eddington luminosity. The radius of the radiation zone can then be written as \[R=\sqrt{L_{\rm Edd}/\left(2\pi\Gamma^{2}cU_{\rm B}\right)}.\] (A2)
5. During fitting, we find a significant degeneracy between \(\gamma_{\rm p,max}\) and \(\gamma_{\rm p,b}\). In order to reduce the number of free parameters, we set \(\gamma_{\rm p,b}=\gamma_{\rm p,max}/10\).

Finally, there are nine free parameters left, which can be found in Table 3. The fitting results are shown in Fig. 10. It can be seen that the LHAASO observations are well reproduced for Mrk 421, Mrk 501, 1ES 2344+514, NGC 4278\({}^{\rm a}\) and NGC 4278\({}^{\rm b}\). In the case of 1ES 1727+502, however, the model deviates significantly from the observations, which is likely limited by the maximum energy that the protons can reach.
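Since the radius fixed by Eq. (A2) directly limits the maximum attainable proton energy discussed next, a minimal numeric sketch of Eq. (A2) may be useful; the black-hole mass used here is an illustrative assumption, not a fitted value:

```python
import numpy as np

# Minimal sketch of Eq. (A2): blob radius when the magnetic power is set to
# half the Eddington luminosity. M_BH = 1e9 M_sun is assumed for illustration.
c = 2.998e10                              # speed of light [cm/s]
L_Edd = 1.26e38 * 1e9                     # Eddington luminosity [erg/s]
B, Gamma = 35.0, 23.0                     # fixed field [G], bulk Lorentz factor
U_B = B**2 / (8.0 * np.pi)                # magnetic energy density [erg/cm^3]
R = np.sqrt(L_Edd / (2.0 * np.pi * Gamma**2 * c * U_B))
print(f"R = {R:.2e} cm")                  # ~5e15 cm for these values
```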
The characteristic photon energy in the observer's frame produced by the proton-synchrotron process can be calculated by \[E_{\rm p,c}^{\rm syn}=\frac{3heB\gamma_{\rm p}^{2}}{4\pi m_{\rm p}c}\frac{ \delta}{1+z}\approx 9.46\times 10^{-12}\gamma_{\rm p}^{2}\frac{B}{1\,{\rm G }}\frac{\delta}{1+z}\,{\rm eV}.\] (A3)

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{proton-synchrotron model} & \\ Source name & \(\theta\) (\({}^{\circ}\)) & \(\Gamma\) & \(L_{\rm p}^{\rm inj}\left({\rm erg\,s^{-1}}\right)\) & \(\gamma_{\rm p,b}\) & \(\gamma_{\rm e,max}\) & \(s_{\rm e,1}\) & \(s_{\rm e,2}\) & \(s_{\rm p,1}\) & \(s_{\rm p,2}\) & \(L_{\rm p}^{\rm inj}/L_{\rm Edd}\) \\ \hline Mrk 421 & 2 & 23 & 2.60E+42 & 1.30E+04 & 1.00E+07 & 1.40 & 4.90 & 2.41 & 5.40 & 4.00E-01 \\ Mrk 501 & 2 & 23 & 1.20E+42 & 2.30E+04 & 1.00E+07 & 1.40 & 4.90 & 2.41 & 4.20 & 2.78E-01 \\ 1ES 1727+502 & 2 & 23 & 4.00E+41 & 3.00E+04 & 1.00E+07 & 1.50 & 4.20 & 2.20 & 3.00 & 2.86E-01 \\ 1ES 2344+514 & 2 & 23 & 2.20E+41 & 3.00E+04 & 1.00E+07 & 1.60 & 4.90 & 2.60 & 4.20 & 5.00E-01 \\ NGC 4278\({}^{\rm a}\) & 2 & 5 & 1.30E+39 & 3.00E+05 & 1.00E+07 & 1.00 & 4.90 & 2.10 & 4.00 & 5.00E-07 \\ NGC 4278\({}^{\rm b}\) & 30 & 3 & 3.00E+41 & 5.00E+05 & 1.00E+07 & 1.00 & 4.90 & 1.50 & 3.60 & 2.86E-06 \\ \hline \end{tabular} \end{table} Table 3: The fitting parameters of the proton-synchrotron model with strong magnetic fields.

Figure 10: One-zone proton-synchrotron modeling. The meanings of symbols and line styles are given in the legend of Mrk 421.

To reproduce the VHE spectra, protons at the maximum energy should emit photons of at least 20 TeV (the energy range of the WCDA data is 1-25 TeV). Substituting Eq. (5) and Eq. (A2) into Eq. (A3) yields \[E_{\rm p,max}^{\rm syn}=\frac{3he^{3}}{\pi m_{\rm p}^{3}c^{6}}\frac{B\delta L_{ \rm Edd}}{\alpha^{2}\Gamma^{2}(1+z)}\approx 0.16\left(\frac{10}{\alpha} \right)^{2}\frac{B}{1\,{\rm G}}\frac{M_{\rm BH}}{10^{9}M_{\odot}}\frac{\delta }{\Gamma^{2}(1+z)}\,{\rm TeV}. \tag{A4}\] It is clear that \(\alpha\), \(B\) and \(\Gamma\) are the three parameters that affect the value of \(E_{\rm p,max}^{\rm syn}\). To increase \(E_{\rm p,max}^{\rm syn}\), \(\alpha\) must be lowered, but \(\alpha=1\) is already the theoretical minimum value; therefore, alternative acceleration mechanisms with higher efficiency than first-order Fermi acceleration would be needed. Similarly, \(\Gamma\) would have to be reduced to obtain a larger \(E_{\rm p,max}^{\rm syn}\). However, reducing \(\Gamma\) also decreases the observed flux because of the weakened beaming effect, unless we increase \(L_{\rm p}^{\rm inj}\) at the same time; that, in turn, would cause the jet power to exceed the Eddington luminosity. The proton injection luminosity in the fits of the four blazars (shown in Table 3) is close to half of the Eddington luminosity, and the power of the magnetic field was previously assumed to be half of the Eddington luminosity, so the sum of the two is very close to the Eddington luminosity. Finally, \(B\) is the only parameter that can be adjusted to obtain a larger \(E_{\rm p,max}^{\rm syn}\). Substituting \(E_{\rm p,max}^{\rm syn}=20\,\rm TeV\), \(\alpha=1\), and the \(\Gamma\) and \(\theta\) used in the fitting into Eq. (A4), we obtain the minimum required magnetic field strength \(B_{\rm min}=421\,\rm G\) for 1ES 1727+502. The jet is unlikely to have such a strong magnetic field.
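To see how Eq. (A4) translates into a field-strength requirement, a minimal sketch inverting it for \(B\) follows; the source parameters passed in are illustrative placeholders rather than the fitted values from Table 3:

```python
# Minimal sketch inverting Eq. (A4): the magnetic field [G] needed for
# proton-synchrotron photons to reach E_max_TeV in the observer's frame.
# E_max = 0.16 (10/alpha)^2 (B/1G) (M_BH/1e9 Msun) delta / (Gamma^2 (1+z)) TeV
def B_min_G(E_max_TeV, alpha, M_BH_1e9, delta, Gamma, z):
    return E_max_TeV / (
        0.16 * (10.0 / alpha) ** 2 * M_BH_1e9 * delta / (Gamma**2 * (1.0 + z))
    )

# Illustrative call (hypothetical parameter values, not the paper's):
print(B_min_G(E_max_TeV=20.0, alpha=1.0, M_BH_1e9=0.1, delta=28.0, Gamma=23.0, z=0.055))
```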
2303.12765
A simulation framework of procurement operations in the container logistics industry
This study proposes a simulation framework of procurement operations in the container logistics industry that can support the development of dynamic procurement strategies. The idea is inspired by the success of Passenger Origin-Destination Simulator (PODS) in the field of airline revenue management. By and large, research in procurement has focused on the optimisation of purchasing decisions, i.e., when-to-order and supplier selection, but a principled approach to procurement operations is lacking. We fill this gap by developing a probabilistic model of a procurement system. A discrete-event simulation logic is used to drive the evolution of the system. In a small case study, we use the simulation to deliver insights by comparing different supplier selection policies in a dynamic spot market environment. Policies based on contextual multi-armed bandits are seen to be robust to limited access to the information that determines the distribution of the outcome. This paper provides a pool of modelling ideas for simulation and observational studies. Moreover, the probabilistic formulation paves the way for advanced machine learning techniques and data-driven optimisation in procurement.
George Vassos, Klaus K. Holst, Pierre Pinson, Richard M. Lusby
2023-03-22T17:23:49Z
http://arxiv.org/abs/2303.12765v2
# A simulation framework of procurement operations in the container logistics industry

###### Abstract

This study proposes a simulation framework of procurement operations in the container logistics industry that can support the development of dynamic procurement strategies. The idea is inspired by the success of Passenger Origin-Destination Simulator (PODS) in the field of airline revenue management. By and large, research in procurement has focused on the optimisation of purchasing decisions, i.e., when-to-order and supplier selection, but a principled approach to procurement operations is lacking. We fill this gap by developing a probabilistic model of a procurement system. A discrete-event simulation logic is used to drive the evolution of the system. In a small case study, we use the simulation to deliver insights by comparing different supplier selection policies in a dynamic spot market environment. Policies based on contextual multi-armed bandits are seen to be robust to limited access to the information that determines the distribution of the outcome. This paper provides a pool of modelling ideas for simulation and observational studies. Moreover, the probabilistic formulation paves the way for advanced machine learning techniques and data-driven optimisation in procurement.

## 1 Introduction

The procurement business is important to companies with widespread or global supply chains, where, naturally, demand for products and services is generated daily from business operations. Therefore, a procurement organisation has to be in place to ensure such demand is met in a structured fashion. The procurement organisation is a three-layer hierarchical entity within a company, with the purpose of carrying out the procurement enterprise. On the top layer, category management identifies categories of products and services and develops strategies for establishing framework agreements (contracts) with suppliers. On the middle layer, sourcing must implement the strategies to consolidate a supplier portfolio with contract and spot market options. On the bottom layer, purchase managers execute the strategy by allocating demand to suppliers in daily operations. This generic structure of procurement is depicted on the right side of Figure 1. In the container logistics industry, there is customer demand for container transportation. A container logistics company utilises a fleet of transportation units that can be owned by the company or by service providers in a contract-spot market. Procurement maintains a contract-spot portfolio of container transportation suppliers as well as suppliers that can satisfy demand for the operation of company-owned assets. Figure 1 illustrates the structure of the procurement business in container logistics. Knowledge and insights about the nature of operations are core determinants of a procurement strategy. In a typical procurement process, strategic and tactical planning is carried out in a static fashion, similar to airlines before the development and success of the Passenger Origin-Destination Simulator (PODS), in the sense that they rely on a long-term forecast of the demand, which is then fed into optimisation models to develop operational policies over annual or quarterly horizons (Fry, 2015, Chapter 3). PODS is a tool for revenue management developed by Boeing with a simulation model of customer demand and a few options for the number of airlines and the network competition.
Similar to PODS, the proposed framework can enable a transition towards more flexible solutions where the effect of strategic and tactical decisions is continuously informed and policies are dynamically updated in light of new data. This study develops a probabilistic formulation of the mechanism underlying the evolution of a procurement system where, in the ambient scheme, demand, decisions, and outcome (welfare measures) are interdependent, as shown in Figure 2. We believe this study can spark future research in procurement, as the idea behind PODS has led to a series of concomitant studies (Fry, 2015; Wittman and Belobaba, 2016; Weatherford, 2017; Wang et al., 2021; Szymanski et al., 2021), leading to a valuable product for revenue optimisation. Our goal is to advance a joint probabilistic model of operations and demand in procurement, fostering the use of advanced machine learning techniques in optimisation, which is currently trending in the Operations Research (OR) field (Lodi et al., 2020). Chilmon and Tipi (2020) and Ha et al. (2018) argue for the importance of a unified framework for procurement operations in which demand and operations are interconnected.

Figure 1: Procurement in the container logistics industry: circles and arrow heads, on the left side, represent operational sites and transportation units or vice versa. The sites generate demand, i.e., product and service requests, that is sent to the operational layer of the procurement organisation where purchase orders are executed. The higher layers, that is, category management and sourcing, are responsible for developing and implementing the business policies to guide the decisions of purchase management.

The simultaneous optimisation of all model parameters is very likely to be intractable even for simple simulation configurations; however, extensive control over the evolution of the system can help formulate more concise optimisation problems than those under the usual untestable exogeneity assumptions on demand (Zheng et al., 2021; Zhang and Chen, 2013) or the spot market (Nie et al., 2017; Hu et al., 2012). The state of the procurement system is determined by historical (demand, decision, outcome) records, exogenous effects, and the list of unresolved requests. On a given day, purchase management will decide whether to send purchase orders and to whom, given the present state of the system, which is in turn affected by the decisions taken. A case study is carried out to illustrate how the simulation can be used to compare different supplier selection policies in a dynamic spot market environment. Supplier selection is the primary operational task in procurement where the business strategy is executed to realise the expectations of category management. The utility of the spot market is twofold: first, it is a safety buffer against disruptions in demand and helps to avoid supplier lock-in (Hamdi et al., 2018), and, second, it enables the exploration of new collaboration opportunities. Contextual multi-armed bandits are known to have robust performance in non-stationary environments (Sutton and Barto, 2018). Our insights indicate that in a spot market situation contextual multi-armed bandits outperform static supplier selection policies with the same knowledge of the outcome distribution. Hence, such models can be employed to perform efficient supplier selection or exploration of new opportunities in the spot market and continuously inform a more dynamic business strategy. This paper is structured as follows.
Section 2 is a review of state-of-the-art literature on modern approaches to procurement and the need for a new paradigm. Section 3 provides a unified probabilistic model and the set of assumptions under which it describes the data generating mechanism in the procurement business. An outline of a discrete-event simulation (Law, 2015) design is included to inform software implementations. Section 4 proposes specific expressions and algorithms for the components of the probabilistic model, and we showcase the usage of our proposed framework in an example in Section 5. Finally, Section 6 discusses future research perspectives.

Figure 2: Generic architecture of the simulation logic including the three core components, namely, the demand, decision, and outcome mechanisms, the auxiliary component where the pending requests are maintained, and the interactions between them represented by the arrows. Notice how the demand may only be impacted by the decisions through the outcome.

## 2 Literature review

The significance of the procurement organisation to modern multinational corporations has been re-evaluated in recent years to accommodate the complexity of supply chain planning in the global environment (Hamdi et al., 2018). Strategic collaborations with suppliers and the establishment of strong supplier relationships are critical to the efficiency of daily operations. The work by Martinez-de Albenziz and Simchi-Levi (2005) shows the link between the choice of a supply contract portfolio and the optimisation of procurement operations. Hamdi et al. (2018) explain the important role of supplier selection in supply chain risk mitigation. By and large, research in the field of procurement and supply chain planning has focused on stochastic and robust optimisation within the field of OR (Govindan and Cheng, 2018). These traditional OR approaches often rely on long-term forecasts of the demand and on assumptions about the market environment that are postulated rather than derived from data modelling. Limitations of such approaches have been outlined by Abbasi et al. (2020), Powell (2019), and Shahriari et al. (2016). Dai et al. (2020) criticised the use of long-term forecasts in the development of procurement strategies. The situation is similar to that of the airline revenue management business, which led to the development of PODS, a key lever in the transition from static to dynamic revenue management in the airline industry (Fry, 2015, Chapter 3). The recent advances in machine learning and data-driven optimisation promote a new paradigm in procurement research to enable the transition to dynamic strategies. A trend towards introducing machine learning in supply chain management research within OR has been highlighted by Lodi et al. (2020), Larsen et al. (2018), and Nguyen et al. (2018). Policy learning under uncertainty in a Markov decision process (MDP) framework has been attempted on similar applications (Powell, 2019; Haehl and Spinler, 2018). Moreover, concerns about the causal interpretation of OR models have been raised (Bertsimas and Kallus, 2020, 2017). In a systematic literature review, Chilmon and Tipi (2020) pointed out the great utility of simulation methods in modelling complex systems such as an end-to-end supply chain (E2ESC).
They highlighted the absence of a principled framework, i.e., of generic rules, concepts, and processes, which can provide regularity for future efforts in modelling complex systems, and emphasised the importance of having a theoretical model for real-world problems, a point also made by Ha et al. (2018), as well as the scarcity of such efforts (Chilmon and Tipi, 2020). Supplier or vendor selection is one of the operational problems that falls within the scope of supply chain planning. Supplier selection can be understood as the execution of a procurement strategy in daily operations. Govindan and Cheng (2018) stress the potential of multidisciplinary research to improve supply chain planning in uncertain environments. Nguyen et al. (2018) found a growing interest in data-driven decision-making in supply chain management (SCM) in recent years. They suggested that data-driven approaches have attracted great interest among researchers in confronting the challenges of SCM. Important problems of procurement such as the identification of optimal supplier selection policies have been studied in isolation taking approaches such as fuzzy multi-criteria decision-making (MCDM), game theory, swarm-based heuristics, robust optimisation, or fuzzy goal programming (Khemiri et al., 2017; Chen, 2015; Yoon et al., 2020; Prince et al., 2013; Pal et al., 2011; Kanyalkar and Adil, 2010; Torabi and Hassini, 2009). Such studies focused on obtaining insights from very specific study designs and do not provide a portable logic that can be readily utilised by new researchers. Interest has also been shown in procurement policies under uncertainty, usually in price or demand (Dai et al., 2020; Bollapragada et al., 2015; Xie et al., 2013; Hegedus and Hopp, 2001); however, the standard practice is to postulate a probability distribution on the price or demand without any insights into the underlying generating mechanism. Data-driven approaches to procurement are few and fairly recent (Zhang et al., 2022; Mogale et al., 2020; Akcay and Corlu, 2017) and have also been anticipated due to the rise of e-procurement systems (Chibani et al., 2018; Oh et al., 2014; Chan et al., 2011; Talluri et al., 2007). Case-specific simulation models have been developed for the investigation of particular research inquiries (Tao et al., 2020; He et al., 2016; Johnson et al., 2013) but they are not generalisable to confront a wider variety of procurement problems. Interestingly, simulation alternatives have been employed to enable the comparison of procurement policies (Mula et al., 2013). Procurement has also been considered jointly with other processes in a broader supply chain planning context (Karabag and Gokgur, 2022; Yaghin and Goh, 2021; Reiner et al., 2014). Joint optimisation at such a scale could easily become intractable. The complexity burden could be mitigated with the use of a simulation model of procurement. Buyer-supplier interaction is a core topic in procurement (Ghadimi et al., 2016) and also a focal point of the proposed simulation model. Other cases that are of interest in the recent literature on procurement are concerned with green or sustainable procurement (Niu et al., 2021; Ghadge et al., 2019; Dey et al., 2019) and can also benefit from using a simulation. This study proposes a simulation framework of procurement operations that is founded on a probabilistic model that captures the dynamics between operations and demand.
At the highest level, the model consists of demand, decision, and outcome components with an interdependence structure. Furthermore, a set of configurable specifications allows great flexibility when simulating a variety of scenarios, and enables both simulation-based optimisation and data-driven approaches. Demand is considered in the form of purchase requisitions, which are common in the procurement business and flexible to configure. The important operational decisions in procurement are: (1) when to place an order, (2) supplier selection, and (3) demand allocation (Sun et al., 2022). Attempts to model (1) as an optimal stopping problem are made by Xie and Zheng (2013) and Fontes (2008). Research on (2) over a relatively long period has pointed out cost, lead or delivery time, and quality as the main drivers of the business (Rao et al., 2017; Weber et al., 1991; Dickson, 1966). The problem of demand allocation in procurement is linked to compliance with existing supply contracts. The proposed simulation framework can be configured to generate insights in this situation. The novelty of this study is a formulation of the procurement system that avoids any independence assumptions between operations and demand or spot prices and makes no assumptions on the functional form of the random variables that describe the procurement system. Previous studies have only treated the case of independent demand, spot market, and procurement operations under regularity assumptions that allowed for convenient solutions at the expense of generality.

## 3 Mathematical framework and simulation logic

In this section, we propose a joint probabilistic model of procurement operations, decision-making and outcome mechanisms, and demand. We introduce a model of the evolution of procurement operations which enables a systematic representation of the problem. Let \(B_{\ell}\) denote the demand information available at calendar day \(\ell\), with \(\ell=1,\ldots,\tau\) enumerating \(\tau\) calendar days, and \(m=1,\ldots,M_{\ell}\), where \(M_{\ell}\) is the total number of entries in all requisitions in \(B_{\ell}\). We use the term line item to refer to a single entry, i.e., a request for a product, in a requisition. Let \(\mathcal{D}_{\ell}\) denote the set of suppliers who are relevant at time \(\ell\), and \(D_{\ell m}\in\{(0,\emptyset),(1,1),\ldots,(1,|\mathcal{D}_{\ell}|)\}\) the relative decision variable. That is, at any calendar day \(\ell\), the purchase management needs to decide whether to place a purchase order for line item \(m\) or not. If the decision is to order it, then one out of \(|\mathcal{D}_{\ell}|\) distinct supplier firms must be selected to send the order to. We let \(Y_{\ell m}\) denote the response variable, i.e., cost, delivery time, and quality, relative to line item \(m\) at calendar day \(\ell\). **Example 3.1**.: _Suppose we observe, for \(2\) days, a procurement system with three products \(\{1,2,3\}\) that belong to a single category and two suppliers \(\{1,2\}\) that can provide all three products. Following the logic of the proposed data generating mechanism, we illustrate below a potential observation from this system.
Notice that \(b_{1}\) is a purchase requisition that contains two line items, \(b_{11}\) and \(b_{13}\), and similarly for \(b_{2}\)._

\begin{tabular}{c|c c c c c c} _Product id_ & \(b_{1}\) & \(d_{1}\) & \(y_{1}\) & \(b_{2}\) & \(d_{2}\) & \(y_{2}\) \\ \hline \(1\) & \(b_{11}\) & \((0,\emptyset)\) & \(\emptyset\) & \(b_{21}\) & \((1,1)\) & \(y_{21}\) \\ \(2\) & \(\emptyset\) & \(\emptyset\) & \(\emptyset\) & \(b_{22}\) & \((1,1)\) & \(y_{22}\) \\ \(3\) & \(b_{13}\) & \((1,2)\) & \(y_{13}\) & \(\emptyset\) & \(\emptyset\) & \(\emptyset\) \\ \end{tabular}

A realisation of the system over a period of \(\tau\) calendar days will have the structure \[(b_{1},d_{1},y_{1}),\ldots,(b_{\tau},d_{\tau},y_{\tau}) \tag{1}\] which is a draw from the probability distribution, also called mechanism, with density function that can be factorised as \[\prod_{\ell=1}^{\tau}p(b_{\ell}\,|\,\overline{y}_{\ell-1},\overline{d}_{\ell -1},\overline{b}_{\ell-1})\,p(d_{\ell}\,|\,\overline{y}_{\ell-1},\overline{d }_{\ell-1},\overline{b}_{\ell})\,p(y_{\ell}\,|\,\overline{y}_{\ell-1}, \overline{d}_{\ell},\overline{b}_{\ell}) \tag{2}\] where \(\overline{b}_{\ell}\doteq(b_{1},\ldots,b_{\ell})\), \(\overline{d}_{\ell}\doteq(d_{1},\ldots,d_{\ell})\) and \(\overline{y}_{\ell}\doteq(y_{1},\ldots,y_{\ell})\), and we use the convention that \(\overline{b}_{0}=\overline{d}_{0}=\overline{y}_{0}=\emptyset\). Letting \(b_{\ell}^{-}\doteq(\overline{y}_{\ell-1},\overline{d}_{\ell-1},\overline{b} _{\ell-1})\), \(d_{\ell}^{-}\doteq(\overline{y}_{\ell-1},\overline{d}_{\ell-1},\overline{b} _{\ell})\), and \(y_{\ell}^{-}\doteq(\overline{y}_{\ell-1},\overline{d}_{\ell},\overline{b}_{ \ell})\) we can express the probability density function more compactly as \[\prod_{\ell=1}^{\tau}p(b_{\ell}\,|\,b_{\ell}^{-})\,p(d_{\ell}\,|\,d_{\ell}^{-} )\,p(y_{\ell}\,|\,y_{\ell}^{-}) \tag{3}\] A mechanism encapsulates the entirety of cause and effect relationships that govern the evolution of a system. Directed acyclic graphs (DAGs) are common devices, in the theory coarsely termed Causality or Causal Inference (Hernan and Robins, 2020; Peters et al., 2017), used to represent structural causal models; in the context of this study, a structural causal model can be seen as a collection of probability distributions having a density of the sort of (3). An arrow in a DAG indicates a cause-effect relationship between the connected nodes and is always directed from left to right. Figure 3 attempts an economical illustration of the longitudinal observation structure (1), aiming to highlight the cause-effect relationships at work at time \(\ell\). Figure 3 identifies the model, which includes a conditional independence assumption.
**Assumption 3.1** (Conditional independence).: _Demand at time \(\ell\) is independent of past decisions given the historical demand and outcome information, i.e.,_ \[B_{\ell}\perp\!\!\!\perp\overline{D}_{\ell-1}\,\,|\,\,\overline{B}_{\ell-1}, \overline{Y}_{\ell-1} \tag{4}\] Graphically, Assumption 3.1 amounts to removing the arrows from \(\overline{D}_{\ell-1}^{(0)}\) and \(\overline{D}_{\ell-1}^{(1)}\) to \(B_{\ell}\) in Figure 3; notice the flexibility of the proposed model in terms of capturing cause-effect relationships. Under the simplifying Assumption 3.1, Equation (3) remains a valid expression of the probability density function with the difference that \(b_{\ell}^{-}\doteq(\overline{y}_{\ell-1},\overline{b}_{\ell-1})\). In the general case, we would like to generate a number of features from historical data, that is, to construct a design using transformations \(B_{\ell}^{-}\mapsto f_{b,\ell}(B_{\ell}^{-})\doteq\widetilde{B}_{\ell}\), \(D_{\ell}^{-}\mapsto f_{d,\ell}(D_{\ell}^{-})\doteq\widetilde{D}_{\ell}\), and \(Y_{\ell}^{-}\mapsto f_{y,\ell}(Y_{\ell}^{-})\doteq\widetilde{Y}_{\ell}\), and, potentially, introduce additional exogenous covariates \[\left(\overline{X}_{\ell}^{(b)},\overline{X}_{\ell}^{(d)},\overline{X}_{\ell }^{(y)}\right)\mapsto\left(f_{x,\ell}^{(b)}(\overline{X}_{\ell}^{(b)}),f_{x, \ell}^{(d)}(\overline{X}_{\ell}^{(d)}),f_{x,\ell}^{(y)}(\overline{X}_{\ell}^ {(y)})\right)\doteq\left(\widetilde{X}_{\ell}^{(b)},\widetilde{X}_{\ell}^{(d) },\widetilde{X}_{\ell}^{(y)}\right)\] so that \((\widetilde{B}_{\ell},\widetilde{X}_{\ell}^{(b)})\) can serve as input to the demand model for generating the next demand instance \(B_{\ell}\), \((\widetilde{D}_{\ell},\widetilde{X}_{\ell}^{(d)})\) can be used in the decision model to generate \(D_{\ell}\), and \((\widetilde{Y}_{\ell},\widetilde{X}_{\ell}^{(y)})\) would be the input to the outcome model for obtaining \(Y_{\ell}\).
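To make the factorisation in (3) and the feature maps above concrete, a minimal generative-loop sketch follows; the three placeholder mechanisms are illustrative assumptions, not the specifications developed in Sections 3.1-3.3:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 5  # illustrative horizon in calendar days

# Placeholder mechanisms: demand size, per-item order decisions, and a
# cost-like outcome, each conditioned on (features of) the history.
def sample_demand(hist):
    return rng.poisson(2.0 + 0.1 * len(hist))            # p(b_l | b_l^-)

def sample_decision(hist, b):
    return rng.binomial(1, 0.8, size=b)                  # p(d_l | d_l^-)

def sample_outcome(hist, b, d):
    return rng.normal(100.0, 10.0, size=b) * d           # p(y_l | y_l^-)

history = []                                             # one draw of (1)
for ell in range(1, tau + 1):
    b = sample_demand(history)
    d = sample_decision(history, b)
    y = sample_outcome(history, b, d)
    history.append((b, d, y))
```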
Figure 4 demonstrates the logic in three simple diagrams. In the following sections, we describe in more detail the demand, decision, and outcome models that we have introduced.

Figure 4: Graphical input/output illustration of the (a) demand model, (b) decision model, and (c) the outcome model. Each model takes as input a number of features generated from the endogenous variables of the relative mechanism as reported in (3), as well as additional features constructed from exogenous covariates, and generates the corresponding output variable.

### 3.1 Demand model

We propose a two-stage model for the generation of purchase requisitions. First, a recurrent event process, modelled with an intensity function, triggers the generation of a requisition; we may refer to the exact time when a generation is triggered as an event time. Furthermore, we argue that the nature of a procurement organisation allows for an independence assumption on the demand between categories so that we can focus on one category without loss of generality. Second, once a requisition generation has been triggered, we describe the selection of line items with a sequential decision model.

Figure 3: A directed acyclic graph (DAG) that illustrates the dynamics in the model under the assumption that there is no unobserved confounding so that any arrow represents a direct cause-effect relationship. A DAG is useful by way of communicating the set of cause-effect relationships that are at work in a particular simulation scenario.

#### 3.1.1 Time-to-request model

A procurement setting involves operational sites that generate recurrent purchase requisitions for different categories of products. One of the strategic tasks of a procurement organisation, as mentioned in Section 1, is to segregate the products into categories so that each category can be treated independently. Thus, we can naturally make an independence assumption which allows us to consider a single category as our point of reference and readily generalise our analysis to all categories. **Assumption 3.2** (Independent categories).: _Demand is independent across categories of products._ Suppose at time \(t\), \(N_{t}\) requisitions are sent to procurement from all operations so that, in continuous time, there is a counting process \((N_{t}:t\geq 0)\) at work. Under the temporal ordering, it is possible to introduce the demand process \((B_{t}:t\geq 0)\) as well as additional exogenous processes associated with the counting process. The sequence of event times, at which requisitions are generated, is guided by a recurrent event process which, in line with the literature on event history analysis (Cook and Lawless, 2007; Aalen, 2008), is modelled using an intensity function \(\lambda(\,\cdot\,|\,H_{t})\), where \(H_{t}\) contains historical counts \((N_{s}:0\leq s<t)\) and information exogenous to the counting process, namely, the covariate process \((X_{t}^{(b)}:t\geq 0)\) and the historical process \(B_{t}^{-}\doteq((B_{s},Y_{s}):0\leq s<t)\). Event times take values in the event time scale induced by \(\lambda(\,\cdot\,|\,H_{t})\), which is, in our case, indexed by \(t\) and coincides with the calendar scale but can be finer than days, e.g., hours. Cook and Lawless (2007) define the intensity function, that is, the instantaneous rate at which a requisition is generated exactly at time \(t\), as \[\lambda(t\,|\,H_{t})\doteq\lim_{\Delta t\to 0}\frac{P(\Delta N_{t}=1\,|\,H_{t} )}{\Delta t} \tag{5}\]
and can be used to model the generation of requisitions for some category of products at an operational site. When modelling a recurrent event process with an intensity function it is customary to assume that no more than one requisition can be generated at a particular time instance (Cook and Lawless, 2007; Aalen, 2008). In practice, this assumption is easy to enforce in the rare situation that two events are generated simultaneously, by assigning them a random order and adding a very small increment to the later-ordered event(s). In this study, an event time can always be identified with an operational site \(i=1,\ldots,n\) and the number of requisitions generated at that site \(n_{i}\); in the notation, the dependence of \(j\) on \(i\) is suppressed. Thus, the individual unit of observation is an operational site, and an event history instance has the form \[(t_{i1},\ldots,t_{in_{i}}:i=1,\ldots,n)\] where event times are grouped by operational site. Individual-specific historical information \(H_{it}\), for \(i=1,\ldots,n\), can capture the observed heterogeneity in the event times of different operational sites. Furthermore, we introduce an individual-specific, potentially time-varying, random effect \(\zeta_{it}\) to induce the correlation structure between individuals. Hence, we can understand an operational site \(i\) as sending \(n_{i}\) requisitions, over a period of interest \((0,\tau]\), at times \(0<t_{i1}<\cdots<t_{in_{i}}\leq\tau\) with intensity \(\lambda_{i}(\,\cdot\,|\,H_{it},\zeta_{it})\).

#### 3.1.2 Requisitions generation model

The previous section describes an intensity model of the event times at which requisition generations are triggered. This section takes the next step to propose a mechanism of how a requisition's line items are filled. In more detail, the procedure of filling a requisition is seen as a sequential decision mechanism with a joint decision about whether to keep adding line items or stop, which product to introduce, and at what quantity. We explain the logic of the proposed mechanism in a simple example and provide a formal expression. **Example 3.2**.: _Suppose an agent at some operational site \(i\) is preparing the \(j\)-th requisition to be sent from that particular site to procurement. Let \(e_{k-1}\doteq e_{ij,k-1}\) represent the vector of product ids and relative quantities already included in the requisition at step \(k\), \(a^{(0)}_{k}\doteq a^{(0)}_{ijk}\in\{0,1\}\) is the decision about whether to introduce a \(k\)-th line item or not, \(a^{(1)}_{k}\doteq a^{(1)}_{ijk}\) is the \(k\)-th product id, and \(q_{k}\doteq q_{ijk}\) the relative quantity in the case that \(a^{(0)}_{k}=1\); if \(a^{(0)}_{k}=0\), the agent stops adding line items to the \(j\)-th requisition. Initially, it is clear that \(e_{0}=\emptyset\) and \(a^{(0)}_{1}=1\) since no item has been added yet. Next, the agent chooses the first line item and, hence, \(e_{1}\doteq(a^{(1)}_{1},q_{1})\). Now, if \(a^{(0)}_{2}=1\), then \(e_{2}\doteq(a^{(1)}_{1},q_{1},a^{(1)}_{2},q_{2})\) and the process continues, for instance, until \(K\doteq K_{ij}\) line items are added and the final requisition is \(e_{ij}\doteq(a_{1},q_{1},\ldots,a_{K},q_{K})\), where, clearly, \(a_{k}\doteq(1,a^{(1)}_{k})\)._ The logic that is illustrated in the example above can be formalised as follows.
Consider the probability density function of an instance \((q_{ijk},a_{ijk})\) factorised as \[p(q_{ijk},a_{ijk}\,|\,e_{ij,k-1},H_{it_{ij}})=p(a^{(0)}_{ijk}\,|\,e_{ij,k-1},H _{it_{ij}})\,p(q_{ijk},a^{(1)}_{ijk}\,|\,e_{ij,k-1},H_{it_{ij}})^{a^{(0)}_{ijk}} \tag{6}\] where \(e_{ijk}\doteq\big{(}t_{ij},(a_{ij1},q_{ij1}),\ldots,(a_{ijk},q_{ijk})\big{)}\) denotes part of a requisition with \(k\leq K_{ij}\), \(H_{it_{ij}}\) is the historical process explained in the previous section, and \((a_{ij},q_{ij})\doteq((a_{ijk},q_{ijk}):k=1,\ldots,K_{ij})\) the line items relative to \(e_{ij}\). Hence, a realisation of the form \[(e_{i1},\ldots,e_{in_{i}}:i=1,\ldots,n)\] is sampled from \[\prod_{i=1}^{n}\prod_{j=1}^{n_{i}}\prod_{k=1}^{K_{ij}}p(q_{ijk},a_{ijk}\,|\,e_{ij, k-1},H_{it_{ij}},\zeta_{it_{ij}}) \tag{7}\] where \(H_{it_{ij}}\) contains historical data to adjust for observed heterogeneity in the content of requisitions, while the random vector \(\zeta_{it_{ij}}\) controls for correlation patterns in the structure of requisitions generated at different operational sites, e.g., products often requested together. Conversion of the demand from its own time scale, guided by the time-to-request model, to calendar days or, in general, to a discrete time scale, is important when the decisions are carried out at discrete stages such as calendar days. Requisitions can be grouped so that demand occurs on a daily basis, \[b_{i\ell}\doteq(e_{ij}:24(\ell-1)\leq t_{ij}<24\ell:j=1,\ldots,n_{i})\] so that \(b_{\ell}\doteq(b_{i\ell}:i=1,\ldots,n)\). In this study, the aim is to simulate real-world procurement operations and, thus, it is natural to think that decision-making takes place on a discrete, calendar-day-based time scale.

### 3.2 Decision model

In this section, we describe a model of the operational (purchasing) decisions in procurement, i.e., when to order and supplier selection. Consider a \(\tau\)-step realisation from the system given by \[(b_{1},d_{1},y_{1}),\ldots,(b_{\tau},d_{\tau},y_{\tau})\] with probability density \[p(b_{1},d_{1},y_{1},\ldots,b_{\tau},d_{\tau},y_{\tau}\,|\,\overline{X}_{\tau}) =\prod_{\ell=1}^{\tau}p(b_{\ell}\,|\,b_{\ell}^{-},\overline{X}_{\ell}^{(b)}) \,p(d_{\ell}\,|\,d_{\ell}^{-},\overline{X}_{\ell}^{(d)})\,p(y_{\ell}\,|\,y_{ \ell}^{-},\overline{X}_{\ell}^{(y)}) \tag{8}\] where \(\overline{X}_{\ell}\doteq(\overline{X}_{\ell}^{(b)},\overline{X}_{\ell}^{(d) },\overline{X}_{\ell}^{(y)})\) is the exogenous covariate history, \(y_{\ell}^{-}\doteq(\overline{b}_{\ell},\overline{d}_{\ell},\overline{y}_{ \ell-1})\), \(d_{\ell}^{-}\doteq(\overline{b}_{\ell},\overline{d}_{\ell-1},\overline{y}_{ \ell-1})\), and \(b_{\ell}^{-}\doteq(\overline{b}_{\ell-1},\overline{y}_{\ell-1})\). The following assumption eliminates unnecessary layers of complexity. For instance, suppose the aim is to develop an optimal policy for processing the unresolved and new requests. The approach could be to model the daily buyer-supplier interaction by harvesting bits of information from placing many small orders on the same day. This might sound appealing, but it goes far beyond what is realistic and, to the best of our knowledge, has never been attempted.
**Assumption 3.3**.: _Every day \(\ell\), all line items in the array of unresolved requests, as well as the ones arriving during the day, are considered only once, in the chronological order in which they arrived, and all orders are placed at once at the end of the day._ A direct consequence of the above assumption is that outcomes, or outcome beliefs, about the orders placed on a given day are available the next day. Let \[\mathcal{I}(\ell)\doteq\{(l,m)\in\{(1,1),\ldots,(1,M_{1}),\ldots,(\ell,1), \ldots,(\ell,M_{\ell})\}:(d_{lm}^{(0)}=0)\vee(l=\ell)\}\] index the set of unresolved requests on day \(\ell\). For a single instance \(\ell\), we may isolate the decision mechanism \[\begin{split}p(d_{\ell}\,|\,d_{\ell}^{-},\overline{X}_{\ell}^{(d)})&=\prod_{(l,m)\in\mathcal{I}(\ell)}p(d_{lm}\,|\,d_{lm}^{-}(\ell),\overline{X}_{\ell}^{(d)})\\ &=\prod_{(l,m)\in\mathcal{I}(\ell)}p(d_{lm}^{(0)},d_{lm}^{(1)}\,|\,d_{lm}^{-}(\ell),\overline{X}_{\ell}^{(d)})\\ &=\prod_{(l,m)\in\mathcal{I}(\ell)}p(d_{lm}^{(0)}\,|\,d_{lm}^{-}(\ell),\overline{X}_{\ell}^{(d)})\,p(d_{lm}^{(1)}\,|\,d_{lm}^{-}(\ell),\overline{X}_{\ell}^{(d)})^{d_{lm}^{(0)}}\end{split} \tag{9}\] where \(d^{-}_{lm}(\ell)\doteq\left(d^{-}_{\ell},d_{l^{\prime}m^{\prime}}:(l^{\prime}, m^{\prime})\in\mathcal{I}(\ell):(l^{\prime}<l)\vee[(l^{\prime}=l)\wedge(m^{ \prime}<m)]\right)\), under Assumption 3.3, contains information about the line items processed at day \(\ell\) before line item \((l,m)\). Clearly, if \(d^{(0)}_{lm}=0\), then \(d^{(1)}_{lm}=\emptyset\), which means it is not observed, and the line item index \((l,m)\) should enter the set \(\mathcal{I}(\ell+1)\), i.e., the next day's list of unresolved requests (including the ones generated on day \(\ell+1\)).
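A minimal sketch of this two-stage mechanism over the unresolved set follows; the propensities below are placeholders, not fitted decision models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_suppliers = 3
unresolved = [(1, 1), (1, 2), (2, 1)]           # indices (l, m) in I(ell)

# Each unresolved line item receives a Bernoulli order/defer decision d0 and,
# if ordered, a supplier selection d1, mirroring the factorisation in Eq. (9).
decisions, carry_over = {}, []
for (l, m) in unresolved:                        # chronological order (Assumption 3.3)
    d0 = rng.binomial(1, 0.7)                    # p(d0 | history): order today?
    if d0 == 1:
        d1 = int(rng.integers(1, n_suppliers + 1))  # p(d1 | history): supplier
        decisions[(l, m)] = (1, d1)
    else:
        decisions[(l, m)] = (0, None)            # d1 unobserved
        carry_over.append((l, m))                # enters I(ell + 1)
```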
### Discrete-event simulation logic The theory of discrete-event simulation (DES) modelling has been described extensively by Law (2015) who suggests the collection of state variables, a simulation clock, and an event list as the necessary components of any DES model. The simulation clock is a variable in the simulation model which registers the current value of simulated time, and the event list contains the next time each type of event occurs. In this section, we provide all the details of the proposed DES logic for the procurement operations system. The evolution of a procurement system is the sequence of transitions between states at discrete points in time; the state of the system at time \(t\) is determined by historical demand, decisions, and outcomes. An operational site \(i\) generates its \(j\)-th requisition \(e_{ij}\) at time \(t_{ij}\) adding an increment to the total count of unresolved requisitions \(|\mathcal{I}(\ell)|\), before the upcoming decision point \(t=\ell\), with \(\ell\in\{1,2,\dots\}\). By the end of day \(\ell\) a number of requisitions has been processed and orders are placed at the end of day \(\ell\) to generate the decision and outcome vectors, \(d_{\ell}\) and \(y_{\ell}\), respectively. Five event types, that is, discrete events that take place in procurement operations are summarised in Table 1. The first three event types are the crucial ones, whereas the latter two are included as potentially useful for case-specific designs, more precisely, the event types 1 and 2 drive the evolution, whereas event type 3 signals the termination of the simulation. Event type 4 is an evaluation routine that simply produces summary measures to monitor the performance of different suppliers every quarter of a calendar year, and event type 5 represents the time an order has been delivered. A more technical presentation of the proposed DES model can be guided by the event graph (Law, 2015). The purpose of an event graph is to illustrate the scheduling of the event types. In our DES model, the first requisition generation for each operational site, the first decision point, the first supplier evaluation, and the termination of the simulation are all scheduled at initialisation. A requisition generation, a decision point, or a supplier evaluation event schedule the time of the next event of the same sort. Moreover, a decision point event can schedule the time of the next order delivery event too as well as the next requisition generation event. Figure 5 depicts the event graph. \begin{table} \begin{tabular}{l c} \hline \hline Event description & Event type \\ \hline The generation of a requisition & 1 \\ Decision point every calendar day & 2 \\ Termination of the simulation after \(\tau\) calendar days & 3 \\ Supplier evaluation (honesty assessment) every 90 days & 4 \\ Order delivery & 5 \\ \hline \hline \end{tabular} \end{table} Table 1: Events taking place in procurement operations. Figure 5: Event graph outline of the proposed procurement simulation model. A single jagged arrow pointing to an event means that an event of the sort is scheduled when the simulation is initiated, whereas multiple jagged arrows mean that an event is scheduled initially for each operational site. A thick arrow indicates that the event type at the start schedules the next event of the type at the tip of the arrow. 
## 4 Proposed simulation model

In this section, we provide detailed specifications and algorithmic implementations of the different components of the probabilistic model formalised in the previous section. Each section contains a mathematical expression, specifications, and a simulation algorithm of the corresponding model.

### 4.1 Generation of request times

The first stage of demand generation involves an intensity model to trigger the generation of requisitions. Recall that products are grouped into categories which are known a priori, so that a demand model is specific to a category of products. Consider the intensity function having the multiplicative form, parameterised by the Euclidean vector of potentially time-varying parameters \(\beta_{t}\), \[\lambda_{i}(t\,|\,H_{it},\zeta_{i};\beta_{t})\doteq\zeta_{i}^{(1)}\lambda_{i0 }(t)\exp(\widetilde{H}_{it}^{\top}\beta_{it}) \tag{11}\] where \(\lambda_{i0}(t)\) is a positive-valued function called the baseline intensity, \(\widetilde{H}_{it}\) are features, i.e., main effects, higher-order interactions, or basis expansions, constructed by applying transformations to \(H_{it}\), and correlation patterns between operational sites are introduced through the unobserved effects \(\zeta_{i}^{(1)}\), for \(i=1,\ldots,n\), assumed to be i.i.d. draws from the same distribution with \(E\zeta_{i}^{(1)}=1\).

#### Specification

Features can be constructed from historical data \(H_{t}\) by applying a mapping \[H_{t}\mapsto f_{b,t}^{(1)}(H_{t})\doteq\widetilde{H}_{t}\] to obtain a vector of main effects as well as interactions and basis expansions. We propose a total of six main effects plus baseline characteristics \(\widetilde{H}_{0}\) that influence the time a requisition is generated, see Table 2. Efficient computation of such features requires operational site-specific records of event times, lead time, and quality to be stored as the simulation is performed. We can then simply extract the latest recorded delivery day, for every operational site \(i\), and subtract it from the present day to obtain the first main effect, while the second would equal the last quality record for each \(i\). Clearly, these two effects do not vary within a given day. However, computation of the third feature requires the information collected during day \(\ell\). Business policy shocks and market disruptions are represented with sinc functions, as we expect they would cause spikes in the demand over narrow intervals, whereas seasonal trends are captured with calendar time harmonics. There is no computational burden associated with the exogenous features since they are computed prior to the simulation run. Finally, the unobserved effects \(\zeta_{i}^{(1)}\) are taken to be Gamma distributed, for \(i=1,\ldots,n\), with mean equal to 1, and we suggest each baseline intensity \(\lambda_{i0}(t)\) be either parameterised as a Weibull distribution or as a piecewise-constant function (Cook and Lawless, 2007).

#### Simulation

To simulate a recurrent event process with intensity function (11) and time-varying covariates, we propose an extension of the thinning algorithm of Lewis and Shedler (1979) to accommodate observed heterogeneity that is captured by the time-varying covariate vector \(\widetilde{H}_{t}\). Algorithm 1 simulates a single event time \(t_{ij}\) at which a requisition is to be generated. The algorithm requires the norm of \(\widetilde{H}_{t}\) to be bounded. The function \(\lambda^{*}\) is an upper bound on the intensity function.
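Before the formal pseudocode (Algorithm 1 below), a minimal sketch of one evaluation of the multiplicative intensity in Eq. (11); the Weibull baseline parameters, coefficients, and feature values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

shape, scale = 1.3, 30.0                     # assumed Weibull baseline parameters
beta = np.array([0.02, -0.5, 0.01])          # assumed effects of the features
zeta = rng.gamma(shape=2.0, scale=0.5)       # Gamma frailty with mean 1

def baseline(t):
    """Weibull-type baseline intensity lambda_i0(t)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def intensity(t, H_tilde):
    """Eq. (11): lambda_i(t | H, zeta) = zeta * lambda_i0(t) * exp(H^T beta)."""
    return zeta * baseline(t) * np.exp(H_tilde @ beta)

# e.g., days since latest delivery, its quality, requisitions over the past year
H_tilde = np.array([12.0, 0.9, 40.0])
print(intensity(5.0, H_tilde))
```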
```
Input: \(t_{i,j-1}\), \(\widetilde{H}_{it}\), \(\lambda_{i}\), \(\lambda_{i}^{*}\), \(\zeta_{i}^{(1)}\), \(\tau\)
Output: event time \(t_{ij}\)
1 Set \(t_{ij}\) to the previous event time \(t_{i,j-1}\);
2 while \(t_{ij}\leq\tau\) do
3   Draw \(u\) from \(\text{Uniform}(0,1)\) and set \(t_{ij}\gets t_{ij}-\log(u)/\lambda_{i}^{*}(t_{ij})\);
4   Draw \(v\) from \(\text{Uniform}(0,1)\);
5   if \(v<\zeta_{i}^{(1)}\frac{\lambda_{i}}{\lambda_{i}^{*}}(t_{ij},\widetilde{H}_{it})\) then
6     break;
7   end if
8 end while
```
**Algorithm 1** Time-to-request mechanism. \begin{table} \begin{tabular}{c l l l} \hline \hline & Short description & Symbol & Level \\ \hline 1 & Days since latest delivery & \(\widetilde{H}_{t}^{(1)}\) & Category \\ 2 & Quality of the latest delivery & \(\widetilde{H}_{t}^{(2)}\) & Category \\ 3 & Number of requisitions sent over the past year & \(\widetilde{H}_{t}^{(3)}\) & Category \\ 4 & Seasonal trends & \(\widetilde{H}_{t}^{(4)}\) & Category \\ 5 & Business policy shock & \(\widetilde{H}_{t}^{(5)}\) & Category \\ 6 & Market disruption & \(\widetilde{H}_{t}^{(6)}\) & Category \\ 7 & Baseline information & \(\widetilde{H}_{0}\) & Operational site \\ \hline \hline \end{tabular} \end{table} Table 2: Main effects in the intensity model constructed from historical data. ### Generation of requisitions The second component of the demand model is concerned with the generation of requisitions. Consider a requisition instance generated, according to (7), by \[\prod_{k=1}^{K}p(q_{k},a_{k}\,|\,e_{k-1},H_{it},\zeta_{i})\] where we dispense with the subscripts \(i,j\) for brevity. The value of \(K\) is implicitly set by a sequential two-stage decision \(a_{k}\doteq(a_{k}^{(0)},a_{k}^{(1)})\); \(a_{k}^{(0)}\in\{0,1\}\) indicates whether to add another line item to the current requisition or stop, and \(a_{k}^{(1)}\in\mathcal{A}\) the product identifier. By \(\mathcal{A}\) we denote the finite set of all discrete product identifiers for a given category. Hence, we can factorise the mechanism further as \[p(q_{k},a_{k}\,|\,e_{k-1},H_{it},\zeta_{i}^{(2)}) =p(a_{k}^{(0)}\,|\,e_{k-1},H_{it})\,p(a_{k}^{(1)}\,|\,e_{k-1},H_{it })^{a_{k}^{(0)}}\] \[\quad\times p(q_{k}\,|\,a_{k}^{(1)},e_{k-1},H_{it},\zeta_{i}^{(2)})^{a_ {k}^{(0)}}\] A natural modelling choice for the first-stage decision-to-stop variable \(a_{k}^{(0)}\) is the logistic regression model with propensity score given by \[\text{logit}\left(U_{k}^{(e,0)}\right)\doteq\widetilde{e}_{k}^{(0)\top}\gamma_{ \ell}^{(0)} \tag{12}\] where \((e_{k-1},H_{it})\mapsto f_{e,tk}^{(0)}(e_{k-1},H_{it})\doteq\widetilde{e}_{k}^{ (0)}\) are features derived from \((e_{k-1},H_{it})\), and \(\gamma_{\ell}^{(0)}\) is a Euclidean parameter vector. Another intuitive modelling option is to let the second-stage product selection variable \(a_{k}^{(1)}\) follow a mixed multinomial distribution with utility model \[U_{k}^{(e,1)}(a;w)\doteq a_{k}^{(0)}\left(\widetilde{e}_{k}^{(1)}(a)^{\top} \gamma_{\ell}^{(1)}+G_{a}^{\top}w\right) \tag{13}\] where \((a,e_{k-1},H_{it})\mapsto f_{e,tk}^{(1)}(a,e_{k-1},H_{it})\doteq\widetilde{e}_{ k}^{(1)}(a)\) are features specific to product index \(a\) computed from \((a,e_{k-1},H_{it})\), \(\gamma_{\ell}^{(1)}\) is a Euclidean parameter vector, \(w\) are random intercepts aiming to capture co-occurrence patterns between products, and \(G\) is a design matrix defining the correlation groups; \(G_{a}\) denotes the column of \(G\) that corresponds to product index \(a\).
The propensity score of selecting product \(a\) is given as \[p(a\,|\,e_{k-1},H_{it};\gamma_{\ell}^{(1)},w)\doteq\frac{\exp\left(U_{k}^{(e,1)}(a;w)\right)}{\sum_{a^{\prime}\in\mathcal{A}}\exp\left(U_{k}^{(e,1)}(a^{ \prime};w)\right)} \tag{14}\] Finally, suppose \(Q_{ijk}\) takes non-negative integer values, for \(k=1,\ldots,K_{ij}\), and the quantities of different products and services in the same requisition are independent given the historical data, that is, \(Q_{ijk}\,\mbox{$\perp\!\!\!\!\perp$}\,Q_{ijk^{\prime}},A_{ijk^{\prime}}\,|\,A _{ijk},H_{it}\) for all \(k^{\prime}\neq k\). In this case, the quantity mechanism factorises as \[p(q\,|\,a^{(1)},e_{k-1},H_{it},\zeta_{i})\doteq\prod_{k=1}^{K}p(q_{k}\,|\,a_{ k}^{(1)},H_{it},\zeta_{i})\] Let \((a_{k}^{(1)},H_{it})\mapsto f_{e,tk}^{(2)}(a_{k}^{(1)},H_{it})\doteq\widetilde{e }_{k}^{(2)}(a_{k}^{(1)})\) denote summary features constructed from the subset of \(H_{it}\) relevant for product index \(a_{k}^{(1)}\), \(\varpi\) a function associated with the mapping \(\widetilde{e}_{k}^{(2)}(a_{k}^{(1)})\mapsto\varpi\left(\widetilde{e}_{k}^{(2)}(a_ {k}^{(1)})\right)\doteq\varpi_{k}\), and \(\zeta_{i}^{(2)}\) a random effect that induces correlation in the quantity mechanism between different operational sites. The quantity values can then be sampled from a distribution with density function \[p(q\,|\,a^{(1)},e_{k-1},H_{it},\zeta_{i})=\prod_{k=1}^{K}\frac{\left(\zeta_{i} ^{(2)}\varpi_{k}\right)^{q_{k}}}{q_{k}!}\exp\left(-\zeta_{i}^{(2)}\varpi_{k}\right) \tag{15}\] #### Specification The requisition generation model consists of three sub-models, presented above, each of which may require its own set of features and parameter specifications. Table 3 summarises the main effects to be considered in each model; the row numbering convention is such that the first digit indicates a sub-model, while the second enumerates the effects. The letter 'x' is used in place of the first digit to indicate that the corresponding effect may be used by more than one model. Two covariates are suggested to affect the decision-to-stop variable. Computation of \(\widetilde{e}_{k}^{(01)}\) requires storing the current number of line items in a requisition, while \(\widetilde{e}_{k}^{(02)}\) needs an array of quantities and product identifiers of the line items currently in the requisition as well as unit cost estimates of the corresponding products, e.g., a rolling average of the most recent unit cost realisations. The main effects \(\widetilde{e}_{k}^{(11)},\ldots,\widetilde{e}_{k}^{(15)}\) are used in Expression (13) of the product utility model. Computation of \(\widetilde{e}_{k}^{(11)}\) requires that the number of requests for each product and operational site combination over the past year is recorded. For \(\widetilde{e}_{k}^{(12)}\) a rolling sum of the annual quantity of each product delivered at each operational site must be maintained. Effect \(\widetilde{e}_{k}^{(13)}\) can be computed by keeping track of the latest day each product was requested from each operational site. The value can be simply computed by extracting the corresponding record and subtracting it from the present day. Similarly, tracing the day of the latest delivery for each product and operational site pair would enable the computation of \(\widetilde{e}_{k}^{(14)}\) by subtracting the desired record from the present day.
Finally, records of the observed quality of the latest delivery on all product and operational site pairs can be kept to be able to derive \(\widetilde{e}_{k}^{(15)}\) by extracting the latest record for the desired combination. Computation of main effects \(\widetilde{e}_{k}^{(21)}\) and \(\widetilde{e}_{k}^{(22)}\), which could influence the quantity mechanism, may require records of the total quantity requested and ordered for each pair of product and operational site over the past year. Additional exogenous effects can be considered in any model to incorporate the effect of seasonal trends, business policies, market disruptions, or baseline characteristics of products and operational sites on the corresponding decision variable. Finally, it is worth noting that Table 3 illustrates only main effects, but higher-order interactions and basis expansions could also enter the modelling expression. \begin{table} \begin{tabular}{l l l l} \hline \hline & Short description & Symbol & Level \\ \hline 01 & Number of line items in requisition & \(\widetilde{e}_{k}^{(01)}\) & Requisition \\ 02 & Cost estimate of requisition & \(\widetilde{e}_{k}^{(02)}\) & Requisition \\ 11 & Number of requests over the past year & \(\widetilde{e}_{k}^{(11)}\) & Product \\ 12 & Quantity delivered over the past year & \(\widetilde{e}_{k}^{(12)}\) & Product \\ 13 & Number of days since last request & \(\widetilde{e}_{k}^{(13)}\) & Product \\ 14 & Number of days since last delivery & \(\widetilde{e}_{k}^{(14)}\) & Product \\ 15 & Quality of last delivery & \(\widetilde{e}_{k}^{(15)}\) & Product \\ 21 & Quantity requested over the past year & \(\widetilde{e}_{k}^{(21)}\) & Product \\ 22 & Quantity delivered over the past year & \(\widetilde{e}_{k}^{(22)}\) & Product \\ x1 & Seasonal trends & \(\widetilde{e}_{k}^{(03)},\widetilde{e}_{k}^{(16)},\widetilde{e}_{k}^{(23)}\) & Product \\ x2 & Business policy shock & \(\widetilde{e}_{k}^{(04)},\widetilde{e}_{k}^{(17)},\widetilde{e}_{k}^{(24)}\) & Product \\ x3 & Market disruption & \(\widetilde{e}_{k}^{(05)},\widetilde{e}_{k}^{(18)},\widetilde{e}_{k}^{(25)}\) & Product \\ x4 & Baseline information & \(\widetilde{e}_{k}^{(06)}\) & Operational site \\ x5 & Baseline information & \(\widetilde{e}_{k}^{(19)},\widetilde{e}_{k}^{(26)}\) & Product \\ \hline \hline \end{tabular} \end{table} Table 3: Main effects in the requisition generation model constructed from historical data. The proposed choices for the functional form of \(\varpi\) and the distribution of the random effect \(\zeta_{i}^{(2)}\) are to use a simple additive model structure parameterised with a Euclidean vector \(\gamma_{\ell}^{(2)}\) for the former and a Gamma distribution with mean equal to \(1\) for the latter. #### Simulation The logic of requisition generation is summarised in Algorithm 2; notice that the elements \(\mathcal{A}\), \(G\), \(\Sigma_{\mathcal{A}}\), \(\gamma_{\ell}^{(0)}\), \(\gamma_{\ell}^{(1)}\), \(\varpi\), \(\phi_{2}\), and transformations \(f_{e,t}^{(0)}\), \(f_{e,t}^{(1)}\), \(f_{e,t}^{(2)}\), where \(f_{e,t}^{(.)}\) contains all \(f_{e,tk}^{(.)}\), are configuration parameters and must be provided as input along with historical data \(H_{it}\). The symbol \(\mathcal{N}(\,\cdot\,,\cdot\,)\) is used to denote a normal distribution.
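For readers who prefer executable code, the following Python sketch mirrors the two-stage logic of Algorithm 2 shown next. The callables stand in for the \(f_{e,t}\) feature maps combined with their parameters, and a plain Poisson replaces the Gamma-mixed (negative binomial) quantity draw for brevity; all names are illustrative assumptions.

```python
import numpy as np

def generate_requisition(products, utility, quantity_mean, stop_features,
                         gamma0, rng=np.random.default_rng()):
    """Two-stage requisition generation mirroring Algorithm 2.

    utility(a, items)    -> utility of product a given current line items;
    quantity_mean(a)     -> mean of the quantity draw for product a;
    stop_features(items) -> feature vector for the decision-to-stop model.
    """
    items, available = [], list(products)
    while available:
        # product choice: softmax over the utilities of Eq. (13)
        u = np.array([utility(a, items) for a in available])
        p = np.exp(u - u.max()); p /= p.sum()
        a = available.pop(int(rng.choice(len(available), p=p)))
        q = int(rng.poisson(quantity_mean(a)))    # quantity draw, cf. Eq. (15)
        items.append((a, q))
        # decision to stop: Bernoulli with logistic propensity, Eq. (12)
        prop = 1.0 / (1.0 + np.exp(-np.dot(stop_features(items), gamma0)))
        if rng.random() > prop:
            break
    return items
```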
```
Input: \(\mathcal{A}\), \(G\), \(\Sigma_{\mathcal{A}}\), \(\gamma_{\ell}^{(0)}\), \(\gamma_{\ell}^{(1)}\), \(\varpi\), \(\phi_{2}\), \(f_{e,t}^{(0)}\), \(f_{e,t}^{(1)}\), \(f_{e,t}^{(2)}\), \(H_{it}\)
Output: requisition \(e\)
1  Initialisation: \(e\leftarrow\emptyset\), \(\bar{\mathcal{A}}\leftarrow\mathcal{A}\), \(a_{0}\gets k\gets 1\), draw \(w\) from \(\mathcal{N}(0,\Sigma_{\mathcal{A}})\) and draw \(\zeta^{(2)}\) from Gamma(\(\phi_{2}^{-1}\));
2  while true do
3    foreach \(a\in\bar{\mathcal{A}}\) do
4      \(U^{(e,1)}(a)\gets f_{e,tk}^{(1)}(a,e_{k-1},H_{it})^{\top}\gamma_{\ell}^{(1)}+G_{a}^{\top}w\);
5      \(p^{(e,1)}(a)\leftarrow\exp\left(U^{(e,1)}(a)\right)/\sum_{a^{\prime}\in\bar{\mathcal{A}}}\exp\left(U^{(e,1)}(a^{\prime})\right)\);
6    end foreach
7    Draw \(a_{k}^{(1)}\) from a multinomial distribution over \(\bar{\mathcal{A}}\) with propensity vector \(p^{(e,1)}\), push it in \(e\), and remove \(a_{k}^{(1)}\) from \(\bar{\mathcal{A}}\);
8    Sample the quantity \(q_{k}\) of \(a_{k}^{(1)}\) from a negative binomial distribution with intensity \(\zeta^{(2)}\varpi\left(f_{e,tk}^{(2)}(a_{k}^{(1)},H_{it})\right)\);
9    \(U^{(e,0)}\leftarrow\left(1+\exp\left(-\widetilde{e}_{k}^{(0)\top}\gamma_{\ell}^{(0)}\right)\right)^{-1}\);
10   Draw \(a^{(0)}\) from Bernoulli\(\left(U^{(e,0)}\right)\);
11   if \(a^{(0)}=0\) or \(|\bar{\mathcal{A}}|=0\) then
12     break;
13   else
14     \(k\gets k+1\);
15   end if
16 end while
```
**Algorithm 2** Requisition generation mechanism. ### Decision and outcome models #### 4.3.1 Decision model The decision mechanism is given in Equation (9), where we show that on a given day indexed by \(\ell\) decisions are made for each line item in the array of unresolved requests. More precisely, a decision is performed by some buyer and involves two stages: first, whether to order the line item or not, and, second, the selection of a supplier. The formula is repeated below: \[\prod_{(l,m)\in\mathcal{I}(\ell)}p(d_{lm}^{(0)}\,|\,d_{lm}^{-}(\ell),\overline {X}_{\ell}^{(d)})\,p(d_{lm}^{(1)}\,|\,d_{lm}^{-}(\ell),\overline{X}_{\ell}^{(d )})^{d_{lm}^{(0)}}\] A natural choice of a model for the first stage decision variable \(d_{lm}^{(0)}\) is a logistic regression with propensity score \[\text{logit}\left(U_{lm}^{(d,0)}\right)=\widetilde{d}_{lm}^{(0)\top}\theta_{ \ell}^{(0)} \tag{16}\] where \((d^{-}_{lm}(\ell),\overline{X}^{(d)}_{\ell})\mapsto\left(f^{(0)}_{d,lm}(d^{-}_{lm} (\ell)),f^{(d,0)}_{x,\ell}(\overline{X}^{(d)}_{\ell})\right)\doteq\widetilde{d}^ {(0)}_{lm}\) denotes the feature vector obtained by transforming the historical data. Notice the implicit dependence of an index \((l,m)\) on the calendar day index \(\ell\), since for any index \((l,m)\) there exists a calendar index \(\ell\) such that \((l,m)\in\mathcal{I}(\ell)\).
For supplier selection we propose the multinomial model with utility function \[U^{(d,1)}_{lm}(d)\doteq d^{(0)}_{lm}\left(\widetilde{d}^{(1)\top}_{lm}\theta^ {(1)}_{\ell}\right) \tag{17}\] where \((d,d^{-}_{lm}(\ell),\overline{X}^{(d)}_{\ell})\mapsto\left(f^{(1)}_{d,lm}(d, d^{-}_{lm}(\ell)),f^{(d,1)}_{x,\ell}(d,\overline{X}^{(d)}_{\ell})\right) \doteq\widetilde{d}^{(1)}_{lm}(d)\) contains features specific to supplier option \(d\), \(\theta^{(1)}_{\ell}\) is a Euclidean parameter vector, and the propensity score for choosing supplier \(d\) is given by \[p(d\,|\,d^{-}_{lm}(\ell),\overline{X}^{(d)}_{\ell};\theta^{(1)}_{\ell})=\frac {\exp(U^{(d,1)}_{lm}(d))}{\sum_{d^{\prime}}\exp(U^{(d,1)}_{lm}(d^{\prime}))} \tag{18}\] #### 4.3.2 Outcome model The outcome vector is modelled as a multivariate normal distribution of the sort \[Y_{lm}=\widetilde{y}^{\top}_{lm}\delta_{\ell}+\epsilon^{(y)}_{lm} \tag{19}\] where the mapping \((y^{-}_{\ell},\overline{X}^{(y)}_{\ell})\mapsto f_{y,\ell}(y^{-}_{\ell}, \overline{X}^{(y)}_{\ell})\doteq\widetilde{y}_{\ell}\) can be used to construct the features \(\widetilde{y}_{lm}\), \(\delta_{\ell}\) is a Euclidean vector of parameters, and \(\epsilon^{(y)}_{lm}\) follows a 3-dimensional normal distribution centred at 0 with covariance matrix \(\Sigma\); notice that \(\widetilde{y}_{\ell}\doteq(\widetilde{y}_{lm}:(l,m)\in\mathcal{I}(\ell))\). #### Specification We elaborate on the efficient computation of the main effects that determine the decision-outcome dynamics; a detailed description of these effects is given in Table 4. The effect \(\widetilde{d}^{(01)}_{lm}\) can be computed using event time records for each requisition by extracting the relevant value and subtracting it from the current day. The value of \(\widetilde{d}^{(02)}_{lm}\) can be obtained by continuously monitoring the number of requisitions in the list of unresolved requests. The mode of delivery and urgency status are disclosed in a requisition and can be retrieved with the event time of the corresponding requisition. The latest outcome records as well as an average outcome can be traced for each product and supplier combination. Computation of \(\widetilde{d}^{(13)}_{lm}\) and \(\widetilde{d}^{(14)}_{lm}\) requires maintaining an array with the total quantity of each product allocated to each supplier, so that the former can be obtained using the desired product and supplier combination, whereas the latter is obtained by summing over the quantities of all products allocated to the supplier in question. Once more, exogenous effects would be seasonal trends, business policy shocks, or market disruptions, but also supplier or product baseline information. The main effects considered in the proposed outcome model are the recent and total quantity of products, also termed volume, allocated to the different suppliers. Computation of such effects requires maintaining volume information for each supplier as the simulation is running. Historical realisations of the outcome as well as noise terms are recorded and can be used in the outcome model expression, e.g., as autoregressive or moving average terms. The exogenous effects, similar to previous sections, can be seasonal trends and shocks either on product level or on product-supplier level. All the suggested main effects for the outcome model are illustrated in Table 4. Finally, the additive noise terms are in all cases normally distributed random variables.
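Before turning to the pseudocode of the next section, Eqs. (16)–(19) can be prototyped compactly as follows. This is a sketch under our own naming assumptions: the feature callables `feat0`, `feat1`, and `feat_y` stand in for the transformations of the historical data, and `feat_y` is assumed to return a \(3\times p\) design matrix so that the outcome is 3-dimensional.

```python
import numpy as np

def daily_decisions(unresolved, suppliers, feat0, feat1, feat_y,
                    theta0, theta1, delta, Sigma, rng=np.random.default_rng()):
    """One decision point: order/no-order, supplier choice, outcome draw."""
    decisions, outcomes = [], []
    for item in unresolved:
        # first stage: logistic propensity to order, Eq. (16)
        p_order = 1.0 / (1.0 + np.exp(-np.dot(feat0(item), theta0)))
        if rng.random() >= p_order:
            decisions.append((0, None)); outcomes.append(None)
            continue
        # second stage: softmax supplier selection, Eqs. (17)-(18)
        u = np.array([np.dot(feat1(item, d), theta1) for d in suppliers])
        p = np.exp(u - u.max()); p /= p.sum()
        d = suppliers[int(rng.choice(len(suppliers), p=p))]
        # outcome: linear predictor plus 3-dimensional Gaussian noise, Eq. (19)
        y = feat_y(item, d) @ delta \
            + rng.multivariate_normal(np.zeros(3), Sigma)
        decisions.append((1, d)); outcomes.append(y)
    return decisions, outcomes
```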
#### Simulation Operational decisions and the corresponding outcomes on a given day \(\ell\) can be simulated using Algorithm 3. The elements \(\mathcal{I}(\ell),d_{\ell}^{-},\widetilde{y}_{\ell}\), \(\widetilde{X}_{\ell}^{(d)}\), \(\widetilde{X}_{\ell}^{(y)}\), \(\mathcal{D}_{\ell}\), \(\theta_{\ell}^{(0)}\), \(\theta_{\ell}^{(1)}\), \(\delta_{\ell}\), \(\Sigma\), \(f_{d,\ell}^{(0)}\), \(f_{d,\ell}^{(1)}\), described in the previous sections, are needed as input to generate an instance \((d_{\ell},y_{\ell})\) of decisions and outcomes for day \(\ell\). ## 5 Numerical experiment One strength of the proposed framework is that it can simulate many possible scenarios simply by changing the configuration. A component of the system does not have knowledge of the working models of other components but only has access to historical records, which are used to construct the features in its formal expression(s). We illustrate a use case of the proposed framework to compare supplier selection policies in the spot market. Our open-sourced GitHub repository1 can be used to reproduce all the results presented in this section. Footnote 1: [https://github.com/georgios-vassos1/procurement-ops](https://github.com/georgios-vassos1/procurement-ops) Consider the problem of comparing the performance of different supplier selection policies over a year of operations. The user must specify the time-horizon of interest, the number of operational sites, and expressions of the time-to-request, requisition generation, decision-to-order, and outcome models. Moreover, a list of supplier utility models needs to be provided to induce the corresponding supplier selection policies. For instance, suppose that stakeholders in a maritime procurement business speculate over the amount of savings that can result from different supplier selection policies. The stakeholders may wish to support their claims with simulation-based evidence.
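In code, such a study could be driven by a plain configuration mapping from model components to parameters and callables. The dictionary below is purely hypothetical (it is not the repository's actual interface); its keys simply mirror the inputs listed above, with the values taken from the case study described in this section.

```python
# Hypothetical configuration for the spot-market case study; every key name
# is illustrative and does not reflect the actual repository interface.
config = {
    "horizon_days": 365,            # length of the study period
    "n_sites": 50,                  # operational sites (ships)
    "products": ["P1", "P2", "P3"],
    "suppliers": ["S1", "S2"],
    "time_to_request": {"kind": "homogeneous", "lambda_": 90},
    "requisition_model": {"stop_gamma": [...], "utility_gamma": [...]},
    "decision_model": {"order_propensity": "uniform(0.1, 0.9)"},
    "outcome_model": {"kind": "VAR", "order": 2, "seasonal_phase": "pi/6"},
    "policies": ["Supplier.1", "Supplier.2", "Random", "Utility", "Bandit"],
    "n_monte_carlo": 1000,
}
```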
\begin{table} \begin{tabular}{l l l l} \hline \hline & Short description & Symbol & Level \\ \hline 01 & Number of days since generation & \(\widetilde{d}_{lm}^{(01)}\) & Requisition \\ 02 & Number of unresolved requisitions & \(\widetilde{d}_{lm}^{(02)}\) & Total \\ 03 & Mode of delivery (local or stores) & \(\widetilde{d}_{lm}^{(03)}\) & Requisition \\ 04 & Urgency status & \(\widetilde{d}_{lm}^{(04)}\) & Requisition \\ 11 & Outcome of previous order from supplier & \(\widetilde{d}_{lm}^{(11)}\) & Supplier and Product \\ 12 & Average outcome of supplier & \(\widetilde{d}_{lm}^{(12)}\) & Supplier and Product \\ 13 & Product volume allocated to supplier & \(\widetilde{d}_{lm}^{(13)}\) & Supplier and Product \\ 14 & Total volume allocated to supplier & \(\widetilde{d}_{lm}^{(14)}\) & Supplier \\ 21 & Autoregressive terms & \(\widetilde{y}_{lm}^{(1)}\) & Supplier or Product \\ 22 & Moving average terms & \(\widetilde{y}_{lm}^{(2)}\) & Supplier or Product \\ 23 & Recent volume ordered from supplier & \(\widetilde{y}_{lm}^{(3)}\) & Supplier \\ 24 & Total volume ordered from supplier & \(\widetilde{y}_{lm}^{(4)}\) & Supplier \\ x1 & Seasonal trends & \(\widetilde{d}_{lm}^{(05)},\widetilde{d}_{lm}^{(15)},\widetilde{y}_{lm}^{(5)}\) & Supplier or Product \\ x2 & Business policy shock & \(\widetilde{d}_{lm}^{(06)},\widetilde{d}_{lm}^{(16)},\widetilde{y}_{lm}^{(6)}\) & Supplier or Product \\ x3 & Market disruption & \(\widetilde{d}_{lm}^{(07)},\widetilde{d}_{lm}^{(17)},\widetilde{y}_{lm}^{(7)}\) & Supplier or Product \\ x4 & Baseline information & \(\widetilde{d}_{lm}^{(08)},\widetilde{d}_{lm}^{(18)},\widetilde{y}_{lm}^{(8)}\) & Supplier or Product \\ \hline \hline \end{tabular} \end{table} Table 4: Main effects in the decision and outcome models constructed from historical data. In this example, we consider a maritime procurement business in a container logistics company that operates 50 company-owned container ships for ocean transportation. The ships generate demand over a period of 365 days and may choose among 3 products when making a requisition. Purchase management can order the products from a spot market consisting of 2 suppliers. The stakeholders are concerned with fluctuations in the spot market and want to investigate whether there is a benefit to adopting dynamic supplier selection policies instead of issuing a static policy to be executed over the year. In the following, we describe the system configuration in detail and compare the performance of five different supplier selection policies in terms of regret, that is, a measure of divergence from the optimal policy. ### Demand model #### 5.1.1 Time-to-request model specifications The purpose of the intensity process model is to capture the heterogeneity patterns in requisition generation times. Heterogeneity can be due to temporal phenomena, e.g., seasonal trends, market shocks, etc., or due to differences between individual ships, e.g., number of crew members, model of the ship, size of the ship, scheduled port calls, etc. In this simple numerical study, we assume that requisition generation times are homogeneous both in time and between individuals. In particular, we set \(\lambda_{i}(t)=\lambda=90\) days for all ship indices \(i=1,\ldots,50\). Requisition generation times are thus sampled from an exponential distribution with mean \(1/\lambda\). In the proposed discrete-event simulation design, the first requisition generation time is sampled for each ship during initialisation.
Every time the requisition generation routine is invoked to create a requisition for a given ship, it also samples the next requisition generation time for that ship. #### 5.1.2 Requisition generation model specifications We have seen in previous sections that upon triggering a request a requisition must be sampled from the generation model described in Sections 3.1.2 and 4.2. In Section 3.1.2, we also mentioned a first-stage propensity-to-stop model which effectively controls the size of the requisitions. For \(\widetilde{e}^{(01)}_{ijk}\) and \(\widetilde{e}^{(02)}_{ijk}\) (see Table 3), we consider the nonlinear effect of the requisition size, \(-0.1(k-1)^{2}\), and the baseline cost estimate of the requisition before adding a \(k\)-th line item. The latter effect is set to \(0\) if the current cost estimate is less than $20 and falls, at a sublinear rate, to \(-120\) at $500. The baseline cost estimates for the 3 products are fixed to $100, $50, and $50, respectively. A linear expression of these two features and a random intercept is passed through the logistic function to compute the propensity-to-stop score. The random intercept is sampled once at the start of the simulation from a normal distribution with mean 5 and variance 1 and is fixed thereafter. The decision-to-stop is a draw from a Bernoulli distribution with success rate equal to the propensity-to-stop score. The product utility and quantity intensity models are, respectively, a normal distribution with mean 0 and variance 1 and Poisson distributions with means 0.1, 0.5, and 0.5. It is emphasised that the proposed simulation framework can readily accommodate heterogeneity patterns in the product utility and quantity intensity models. However, to keep this numerical example simple, we disregard such heterogeneity to avoid unnecessary complexity. The focus of this numerical study is to compare supplier selection policies in a two-supplier dynamic spot market. An alternative scenario could be to investigate the effect of heterogeneous demand on the total savings of the business under a particular supplier selection policy. The proposed simulation framework can be used to address a plethora of business inquiries. ### Decision and outcome models #### 5.2.1 Decision-to-order model specifications The propensity-to-order model described in Sections 3.2 and 4.3 is also kept simple to avoid complicating the interpretation of results in this numerical example. Each day, for each item in the list of unresolved requests, we sample the propensity-to-order score from a uniform distribution on \([0.1,0.9]\). This decision could be particularly interesting in a combined contract-spot market environment where additional costs might be incurred if the quantity commitment is not allocated to the contracted suppliers within the agreed time frame. #### 5.2.2 Outcome generation model specifications We consider a 3-dimensional outcome variable containing the unit price, the delivery-time cost equivalent, and the quality cost equivalent, all expressed in USD; the cost equivalent of delivery time is an increasing function, whereas that of quality is a decreasing function. The latter is due to the fact that at a given price higher quality entails a lower quality-cost. In this example, the outcome model has a VAR(2) (vector autoregressive of order 2) structure, a short-term (past 90 days) allocation effect, a long-term (past 365 days) allocation effect, and an annual harmonic with initial phase set to \(\pi/6\).
The allocation effects correspond to the total quantity ordered from the selected supplier over the corresponding period of time. #### 5.2.3 Supplier utility model specifications At this point, having specified all the above components, we can simulate the evolution of the procurement system under different supplier selection policies. In particular, we compare five different policies, namely, always selecting supplier index 1 (Supplier.1), always selecting supplier index 2 (Supplier.2), selecting at random (Random), selecting based on a random utility model (Utility), and selecting using a contextual multi-armed bandit model (Bandit). The random utility model resembles a static supplier selection policy in that it is parameterised by a Euclidean vector that is fixed throughout the simulation. It is worth noting that the random utility and bandit models have strongly misspecified knowledge of the outcome generation mechanism, as they observe only the seasonal effect out of the many effects that influence the outcome dynamics. The success measure used to compare the different policies is the regret, defined as the difference between the optimal outcome and the outcome observed under the policy in effect. The optimal outcome is computed by a working oracle that has knowledge of the true outcome distribution and, thus, always makes the optimal selection. The parameters of the bandit model are updated using the well-known Bayesian regression rule (Gelman et al., 2013, Part IV). The exploration-exploitation behavior of the bandit is regulated using Thompson Sampling (Russo et al., 2017). This is essential for the bandit to keep adapting to the changing dynamics of the spot market environment. The bandit maintains a utility model parameterised by a Euclidean vector \(\theta_{\ell}^{(cmab)}\in\mathbb{R}^{36}\) which stacks the parameters of all product-supplier outcome models. In more detail, for each of the 3 products there are two arms (supplier alternatives), each with an expected cost, lead time, and quality outcome. Each of the 18 stacked models is associated with 2 parameters, one for the intercept and one for the observed seasonal effect. The information that must be available to compute the supplier utility with the bandit model is termed the context. Generally, the context can be understood as the summary measures \[\widetilde{d}_{lm}^{(1)}=\left(f_{d,lm}^{(1)}(d,d_{\ell},d_{\ell}^{-})^{\top },\widetilde{X}_{\ell}^{(d,1)}(d)^{\top}\right)^{\top}\] The distribution of \(\theta_{\ell}^{(cmab)}\) is initialised to the multivariate normal distribution with mean \(0^{(36\times 1)}\) and variance equal to \(70\cdot I^{(36\times 36)}\), where \(0^{(36\times 1)}\) is a vector of 36 zeros and \(I^{(36\times 36)}\) is the \(36\times 36\) identity matrix. Below we summarise the logic of Thompson Sampling for a given line item \((l,m)\). It requires as input the context variable \(\widetilde{d}_{lm}^{(1)}\) and the current distribution of \(\theta_{\ell}^{(cmab)}\), denoted \(P_{\theta}\), and can readily replace lines 11 and 12 in Algorithm 3. The steps are: 1. Draw \(\theta_{\ell}^{(cmab)}\) from \(P_{\theta}\) 2. For each \(d\in\{1,2\}\) set \(U_{lm}^{(d,1)}(d)\) to \(\widetilde{d}_{lm}^{(1)}(d)^{\top}\theta_{\ell}^{(cmab)}\) 3. Set \(d_{lm}\) to the value \(d\) that maximises \(U_{lm}^{(d,1)}(d)\) 4. Update \(P_{\theta}\,|\,u_{0}(y_{lm},d_{lm})\) The regret corresponding to \((l,m)\) is easily computed by subtracting \(U_{lm}^{(d,1)}(d)\) from \(u_{0}(y_{lm},d_{lm})\).
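A minimal Python sketch of these four steps, using the standard conjugate Gaussian update for Bayesian linear regression, could look as follows. The function and parameter names are illustrative assumptions, and a known noise variance is assumed for simplicity.

```python
import numpy as np

def thompson_step(context, mean, cov, y_utility, noise_var,
                  rng=np.random.default_rng()):
    """One Thompson-Sampling decision for a line item (steps 1-4 above).

    context[d] is the feature vector for supplier arm d; (mean, cov) is the
    current Gaussian posterior over the stacked parameter vector theta;
    y_utility(d) returns the realised scalar utility u_0 after ordering from d.
    """
    theta = rng.multivariate_normal(mean, cov)            # step 1: sample theta
    utilities = {d: context[d] @ theta for d in context}  # step 2: arm utilities
    d_star = max(utilities, key=utilities.get)            # step 3: greedy on sample
    u = y_utility(d_star)                                 # observe realised utility
    # step 4: conjugate Bayesian linear-regression update of (mean, cov)
    x = context[d_star]
    prec = np.linalg.inv(cov) + np.outer(x, x) / noise_var
    cov_new = np.linalg.inv(prec)
    mean_new = cov_new @ (np.linalg.inv(cov) @ mean + x * u / noise_var)
    return d_star, mean_new, cov_new
```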
The utility is set to the inner product of the true outcome and a weighting rule; in this example, the weights are fixed to \((0.5,0.25,0.25)\). This combination is needed to convert the 3-dimensional outcome to a single welfare measure. The weight vector can be seen as a hyperparameter and can either be tuned with data or chosen based on domain expert knowledge. We perform 1000 Monte Carlo simulations of the system dynamics, under the configuration described in this section, for every one of the five competing supplier selection policies. As expected, the random utility and bandit models had better performance than the three naive benchmarks. By and large, contextual multi-armed bandits have outperformed the rational utility maximisers, thus confirming that data-driven decision-makers are beneficial in dynamic environments compared to their static alternatives. Figure 6 (a) illustrates the terminal regret values incurred under the five different policies, whereas Figure 6 (b) depicts the cumulative regret values incurred daily during the study period. Clearly, contextual multi-armed bandits are quite robust both to a severe lack of knowledge of the true outcome distribution and to the dynamics of the environment. We believe that many such advantages are to be expected from shifting to advanced machine learning methods. ## 6 Conclusion and future works This study develops a flexible framework for the simulation of procurement operations with a probabilistic model of operations and demand generation. Given the lack of such a unified modelling approach in procurement and the success of PODS in airline revenue management, we tailor the logic of PODS to the procurement business. We have distinguished three main components of the evolutionary mechanism, namely, demand, decision, and outcome models, and described how data are generated by providing modelling choices and their assumptions. The nature of demand is given as a two-stage mechanism including a time-to-request and a requisition generation component. Models are proposed for the two most important operational decisions and three outcomes of interest in procurement. A discrete-event simulation logic is suggested to inform a software implementation of our simulation framework. Specific suggestions are made about input effects, modelling expressions, and algorithms to enable the implementation of our models in software. Finally, the simulation is used in a numerical experiment to investigate the performance of different supplier selection policies. Our results show that data-driven decision-makers, in particular, contextual multi-armed bandits, outperform rational utility maximisers in a dynamic spot market. Figure 6: (a) Box plots of the total regret accumulated under five different supplier selection policies. Random policy means choosing a supplier at random, Supplier.1 and Supplier.2 correspond to always choosing the supplier with id 1 or 2, respectively, Utility is the random utility maximiser and Bandit is the contextual multi-armed bandit with access to the same information as the utility maximiser. (b) Spaghetti plots of the cumulative regret trajectory of the random utility maximising and contextual bandit policies. We believe that our effort can serve as the starting point for more statistical-learning-oriented research in the optimisation of decision-making in procurement systems, similar to the research that PODS has spurred in the field of airline revenue management.
In particular, many case studies can be built around tuning the parameters of our proposed models with observational data. For example, one can learn the parameters of the demand model to understand the demand generation mechanism; the nature of demand can be of interest by itself for optimising inventory policies in warehouses used by the procurement business and, as argued repeatedly in this study, for optimising the two key operational decisions, namely, when to place an order and supplier selection. However, to optimise the operational decisions a working model of the outcome is needed; thus, another interesting case study could be based on learning the parameters of the outcome model from observational data with continuous or categorical outcomes, potentially with some additional correlation structure. Learning the nature of the outcome mechanism is essential to the optimisation of when-to-order and supplier selection policies; we also touched upon this topic in the case study. Furthermore, optimal policy learning and decision optimisation studies can be made by using postulated or estimated configurations of the demand and outcome models. Notice that, as with PODS, the decision-making component does not have access to the outcome model that is actually at work in the system but has access to a cost or reward model which induces a decision policy. For instance, one can start with an initial configuration of the decision-maker's cost model and apply online learning to obtain an optimal policy. Moreover, the estimation of an initial policy from observational data is also a case of definite interest. We conclude by stressing once more that with this work it is our hope to spark future research in procurement towards more flexible solutions, similar to PODS in airline revenue management. ## Notation In this section we explain some notation conventions that we have used to obtain more compact expressions. Because we are concerned with an evolutionary system, our observations have a panel structure of the sort \[Z_{1},Z_{2},\ldots,Z_{\tau}\] When factorising the probability density of such an observation, we use the following convention \[p(Z_{1},\ldots,Z_{\tau}) =\prod_{i=1}^{\tau}p(Z_{i}\,|\,Z_{1},\ldots,Z_{i-1})\] \[=\prod_{i=1}^{\tau}p(Z_{i}\,|\,\overline{Z}_{i-1})\] with \(\overline{Z}_{0}=\emptyset\). When there are more than two components of interest in \(Z\), e.g., \(Z\doteq(X,A,Y)\), in temporal order, then we write \[\prod_{i=1}^{\tau}p(Z_{i}\,|\,\overline{Z}_{i-1}) =\prod_{i=1}^{\tau}p(X_{i},A_{i},Y_{i}\,|\,\overline{X}_{i-1},\overline{A}_{i-1},\overline{Y}_{i-1})\] \[=\prod_{i=1}^{\tau}p(Y_{i}\,|\,\overline{X}_{i},\overline{A}_{i}, \overline{Y}_{i-1})\,p(A_{i}\,|\,\overline{X}_{i},\overline{A}_{i-1}, \overline{Y}_{i-1})\,p(X_{i}\,|\,\overline{X}_{i-1},\overline{A}_{i-1}, \overline{Y}_{i-1})\] \[=\prod_{i=1}^{\tau}p(Y_{i}\,|\,Y_{i}^{-})\,p(A_{i}\,|\,A_{i}^{-} )\,p(X_{i}\,|\,X_{i}^{-})\] where \(Y_{i}^{-}\doteq(\overline{X}_{i},\overline{A}_{i},\overline{Y}_{i-1})\), \(A_{i}^{-}\doteq(\overline{X}_{i},\overline{A}_{i-1},\overline{Y}_{i-1})\), and \(X_{i}^{-}\doteq(\overline{X}_{i-1},\overline{A}_{i-1},\overline{Y}_{i-1})\). Furthermore, when we report a modelling expression we might want to construct a design matrix that could be enriched with interaction terms, basis expansions, and transformations of the main effects included, for instance, in \(Y_{i}^{-}\).
In such cases, we use a transformation mapping \(f_{y,i}\) indexed both by the corresponding variable and by time, the latter because we may transform an expanding window of the past, and denote by \(\widetilde{Y}_{it}\) the variable obtained from the mapping \(Y_{i}^{-}\mapsto f_{y,i}(Y_{i}^{-})\). Uppercase Latin letters are used to denote matrices, random variables, and simulated realisations in the pseudocode. Uppercase calligraphic Latin letters are used to denote sets. Lowercase Greek letters are used for unobserved effects and their simulated realisations in the pseudocode, as well as for configurable parameters. For features which are constructed from functions of observed and random variables, we use lowercase Latin characters with a tilde on top. In general, we use lowercase Latin characters for observed realisations of a random variable when the focus is on the generation mechanism.
2307.11943
Minimal alternating current injection into carbon nanotubes
We study theoretically the effect of electronic interactions in 1d systems on electron injection using periodic Lorentzian pulses, known as Levitons. We consider specifically a system composed of a metallic single-wall carbon nanotube, described with the Luttinger liquid formalism, a scanning tunneling microscope (STM) tip, and metallic leads. Using the out-of-equilibrium Keldysh Green function formalism, we compute the current and current noise in the system. We prove that the excess noise vanishes when each Leviton injects an integer number of electrons from the STM tip into the nanotube. This extends the concept of minimal injection with Levitons to strongly correlated, uni-dimensional non-chiral systems. We also study the time-dependent current profile, and show how it is the result of interferences between pulses non-trivially reflected at the nanotube-lead interface.
Kota Fukuzawa, Takeo Kato, Thibaut Jonckheere, Jérôme Rech, Thierry Martin
2023-07-21T23:58:14Z
http://arxiv.org/abs/2307.11943v4
# Minimal alternating current injection into Carbon Nanotubes ###### Abstract We study theoretically the effect of electronic interactions in 1d systems on electron injection using periodic Lorentzian pulses, known as Levitons. We consider specifically a system composed of a metallic single-wall carbon nanotube, described with the Luttinger liquid formalism, a scanning tunneling microscope (STM) tip, and metallic leads. Using the out-of-equilibrium Keldysh Green function formalism, we compute the current and current noise in the system. We prove that the excess noise vanishes when each Leviton injects an integer number of electrons from the STM tip into the nanotube. This extends the concept of minimal injection with Levitons to strongly correlated, uni-dimensional non-chiral systems. We also study the time-dependent current profile, and show how it is the result of interferences between pulses non-trivially reflected at the nanotube-lead interface. ## I Introduction Controlled electron injection in an electronic system is an important issue, both for potential applications to electron quantum optics and for the fundamental study of the many-body properties of the system [1; 2; 3; 4; 5; 6; 7]. More than twenty years ago, Levitov and co-workers showed that applying a generic time-dependent voltage to inject a charge creates a fundamental disturbance to the system, akin to the Anderson orthogonality catastrophe, with the creation of a divergent number of electron-hole pairs in the Fermi sea, in addition to the injected charge [8; 9; 10]. Importantly, they also showed that, by applying a specific quantized time-dependent voltage to an electronic conductor, it is possible to excite a single electron above the Fermi sea, without creating any additional perturbation of the system. These peculiar excitations have been called _Levitons_, and have been realized experimentally in 2d electron gases [6; 11; 12]. Such injection of a single charge, without any spurious excitation of the system, has been called _minimal injection_, and can be characterized by studying the excess noise. The properties of Levitons have been studied intensively in various systems [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Coulomb interactions can have a major impact on the many-body state of electronic systems. This is particularly true for 1d electronic systems, where interactions lead to a very specific behavior, which can be described at low excitation energies by the Luttinger liquid theory [24]. This applies, for example, to conducting carbon nanotubes [25; 26; 27], semiconductor nanowires [28; 29], edge states of the fractional quantum Hall effect [30; 31], etc. Important physical quantities of these systems, such as the tunneling density of states and the current-voltage characteristics, typically obey power-law behavior, with an exponent which depends explicitly on the interaction parameter. Other remarkable behaviors of 1d interacting systems are charge fractionalization, due to the existence of collective modes [32; 33; 34], spin-charge separation [35], etc. When connected to standard, non-interacting electrodes, non-trivial processes at the boundaries between the interacting system and the electrodes, similar to Andreev reflection, do occur, creating a complex behavior for the time-dependent current [36; 37; 38; 39; 40; 41; 42]. It is thus natural to ask whether the concept of _Levitons_ can be extended to 1d interacting electronic systems.
This question has already been given a positive answer in the case of chiral edge states of the FQHE [43]. However, in a non-chiral system, the behavior at the interface between the interacting system and a normal lead is much more complex, which may strongly modify the physical response of the system to an AC charge injection. To answer this question in the case of a non-chiral system, in this work we consider explicitly AC electron injection into a carbon nanotube (CNT) from a STM tip [44; 45; 46; 47; 48] (see Fig. 1), and calculate the excess noise induced by the external AC driving. While the STM-CNT coupling is treated perturbatively as it operates in the weak tunneling regime, we treat the Coulomb interactions and the coupling between the CNT and the leads non-perturbatively using the bosonization method. We show that, although interactions have a deep impact on the behavior of the system, it is still possible to inject a single electronic charge using Leviton pulses. Figure 1: Schematic of the system composed of a carbon nanotube, metallic leads, and a STM tip. (a) Two-lead setup. (b) One-lead setup. (c) A model for general theoretical description. Furthermore, we provide an explicit general proof that the excess noise is zero for Levitons which inject an integer number of electrons. We also discuss the formulas for the noise and current, which contain the complex physics of Andreev-like reflection at the interface with the leads. The rest of this work is organized as follows. In Sec. II, we briefly summarize our theoretical model for the CNT, metallic leads, and STM tip. We also formulate the external voltage driving through the STM-CNT coupling. In Sec. III, we show the analytical formulas for the tunneling current, noise and excess noise (details of the derivation of these formulas are given in the two Appendices). In Sec. IV, we show the numerical results for the excess noise for different values of the relevant parameters, and discuss its properties. We also provide an analytic proof that the excess noise is zero for Levitons with integer charge, thus answering rigorously the initial question motivating this work. Finally, to clarify the importance of the Andreev-like reflection at the CNT-lead junction, we discuss the current profile as a function of time in Sec. V. We summarize our results in Sec. VI. ## II Description of the system We consider a metallic CNT connected at both ends to two semi-infinite Fermi liquid leads (see Fig. 1 (a)) or at one end to a single semi-infinite Fermi liquid lead (see Fig. 1 (b)). A STM tip is placed close to the CNT, and a time-dependent voltage \(V(t)\) is applied between the tip and the CNT to allow the tunneling of electrons. These two setups can be described generally by an infinite one-dimensional system with inhomogeneous interaction parameters as shown in Fig. 1 (c), whose Hamiltonian is given with the bosonization technique as [44; 45; 46] \[\hat{H}=\sum_{j\delta}\int dx\left[\frac{v_{j\delta}(x)g_{j\delta}(x)}{2}( \partial_{x}\phi_{j\delta})^{2}+\frac{v_{j\delta}(x)}{2g_{j\delta}(x)}( \partial_{x}\theta_{j\delta})^{2}\right]. 
\tag{1}\] Here, \(\phi_{j\delta}(x)\) and \(\theta_{j\delta}(x)\) are non-chiral bosonic fields which satisfy the commutation relation \[[\phi_{j\delta}(x),\theta_{j^{\prime}\delta^{\prime}}(x^{\prime})]=-(i/2) \delta_{jj^{\prime}}\delta_{\delta\delta^{\prime}}\text{sgn}(x-x^{\prime}), \tag{2}\] \(g_{j\delta}(x)\) is the interaction parameter, \(v_{j\delta}(x)=v_{\text{F}}/g_{j\delta}(x)\) is the renormalized velocity, \(v_{\text{F}}\) is the Fermi velocity in the absence of Coulomb interaction, \(j\in(\text{c},\text{s})\) specifies charge and spin sectors, and \(\delta\in(+,-)\) specifies the symmetric and anti-symmetric sectors with respect to the branch (the valley) in the CNT. The origin of the coordinate \(x\) is set to the position of the STM tip. Assuming that the screened Coulomb interaction modifies only the \((j,\delta)=(c,+)\) sector [27; 25], the interaction parameters are set as \(g_{c-}=g_{s+}=g_{s-}=1\) and \[g_{c+}=\left\{\begin{array}{ll}g_{1},&(x<-L_{1}),\\ g_{\text{N}},&(-L_{1}<x<L_{2}),\\ g_{2},&(L_{2}<x),\end{array}\right. \tag{3}\] where \(L_{1}\) and \(L_{2}\) are lengths of the CNT separated by the STM tip (see Fig. 1 (c)). We also define the total length of the CNT as \(L=L_{1}+L_{2}\). The two-lead case shown in Fig. 1 (a) can be described by setting \(g_{1}=g_{2}=1\). On the other hand, the one-lead case shown in Fig. 1 (b) can be described by setting \(g_{1}=0\) to express an open boundary at \(x=-L_{1}\) while \(g_{2}\) (\(=1\)) is left unchanged. We note that for repulsive Coulomb interaction \(g_{\text{N}}\) becomes smaller than unity. The electron operator in the CNT is given by \[\Psi_{r\alpha\sigma}(x)=\frac{\eta_{r\alpha\sigma}}{\sqrt{2\pi a}}e^{iq_{\text{F}}rx+ ik_{\text{F}}\alpha x+i\varphi_{r\alpha\sigma}(x)}, \tag{4}\] where \(\eta_{r\alpha\sigma}\) is a Klein factor, \(a\) is a short-length cutoff, \(k_{\text{F}}\) is the Fermi wavenumber, and \(q_{\text{F}}\) (\(\ll k_{\text{F}}\)) is the momentum mismatch associated with the two modes. Hereafter, we neglect the Klein factor as it does not affect the results. The bosonic field \(\varphi_{r\alpha\sigma}\) is described with \(\phi_{j\delta}(x)\) and \(\theta_{j\delta}(x)\) as \[\varphi_{r\alpha\sigma}(x)=\frac{\sqrt{\pi}}{2}\sum_{j\delta}h_{\alpha\sigma j \delta}[\phi_{j\delta}(x)+r\theta_{j\delta}(x)], \tag{5}\] where \(h_{\alpha\sigma c+}=1\), \(h_{\alpha\sigma c-}=\alpha\), \(h_{\alpha\sigma s+}=\sigma\), and \(h_{\alpha\sigma s-}=\alpha\sigma\). For convenience, the STM is also modeled as a one-dimensional non-interacting system in a bosonized form \[c_{\sigma}(t)=\frac{1}{\sqrt{2\pi a}}e^{i\tilde{\phi}_{\sigma}(t)}. \tag{6}\] The electron tunneling between the STM tip and the CNT is described by the Hamiltonian \[\hat{H}_{T}(t) =\sum_{r\alpha\sigma\epsilon}\epsilon\Gamma_{\epsilon}(t)\Psi_{r \alpha\sigma}^{(-\epsilon)}(0,t)c_{\sigma}^{(\epsilon)}(t), \tag{7}\] \[\Gamma_{\epsilon}(t) =\Gamma\exp\left[\frac{i\epsilon e}{\hbar}\int_{-\infty}^{t}V(t^{ \prime})\,dt^{\prime}\right], \tag{8}\] where \(\Gamma\) and \(\Gamma_{\epsilon}(t)\) are the tunneling amplitudes without and with the effect of the time-dependent voltage \(V(t)\), and the superscript \(\epsilon\) leaves either operator unchanged (\(\epsilon=+\)) or transforms it into its Hermitian conjugate (\(\epsilon=-\)). The applied voltage \(V(t)\) is divided into DC and AC parts as \(V(t)=V_{\text{dc}}+V_{\text{ac}}(t)\), where by definition \(V_{\text{ac}}(t)\) averages to zero over one drive period \(T\). 
Here, we consider three types of voltage pulses: \[\text{Leviton}:V(t) =\frac{V_{\text{dc}}}{\pi}\sum_{k}\frac{\eta}{\eta^{2}+(t/T-k)^{2}}, \tag{9}\] \[\text{Cosine}:V(t) =V_{\text{dc}}(1-\cos\Omega t),\] (10) \[\text{Square}:V(t) =2V_{\text{dc}}\sum_{k}\text{rect}(2t/T-2k), \tag{11}\] where \(\Omega\) is the driving frequency, \(T=2\pi/\Omega\) is the period, \(\text{rect}(x)=1\) for \(|x|<1/2\) (\(=0\), otherwise) is the rectangular function, and \(\eta=W/T\) (\(W\): the half-width at half-maximum of the Lorentzian pulse). We define the Fourier components of \(\Gamma_{\epsilon}(t)\) as \[\Gamma_{-}(t)=\Gamma\sum_{l=-\infty}^{\infty}p_{l}e^{-i(\omega_{0}+l\Omega)t}, \tag{12}\] and \(\Gamma_{+}(t)=(\Gamma_{-}(t))^{*}\), where \(\omega_{0}\equiv eV_{\rm dc}/\hbar\) and \[p_{l}\equiv\int_{-T/2}^{T/2}\frac{dt}{T}e^{il\Omega t}\exp\left[-\frac{ie}{ \hbar}\int_{-\infty}^{t}dt^{\prime}V_{\rm ac}(t^{\prime})\right]. \tag{13}\] For the pulse shapes given in Eqs. (9)-(11), the Fourier components are given by \[{\rm Leviton}:p_{l}=q\sum_{s=0}^{\infty}\frac{\Gamma(q+l+s)}{ \Gamma(q+1-s)}\frac{(-1)^{s}e^{-2\pi\eta(2s+l)}}{(l+s)!s!}, \tag{14}\] \[{\rm Cosine}:p_{l}=J_{l}(-q),\] (15) \[{\rm Square}:p_{l}=\frac{2}{\pi}\frac{q}{l^{2}-q^{2}}\sin\left[ \frac{\pi}{2}(l-q)\right], \tag{16}\] where \(J_{l}(z)\) is the Bessel function. Here, we introduced a dimensionless quantity \(q\) defined as \[q\equiv\frac{eV_{\rm dc}}{\hbar\Omega}=\frac{\omega_{0}}{\Omega}. \tag{17}\] This quantity means that the charge injected per period is \(qe\). ## III Formulation of the excess noise The current operator is expressed by the bosonic field as [44] \[\hat{I}(x,t)=2ev_{\rm F}\frac{\partial_{x}\phi_{c+}(x,t)}{\sqrt{\pi}}. \tag{18}\] In the Keldysh formalism, the average current is written in the form: \[I(x,t) =\frac{1}{2}\sum_{\eta}\left\langle T_{K}\hat{I}(x,t^{\eta})e^{i \int_{K}H_{T}(t_{1})dt_{1}}\right\rangle\] \[=\left\langle T_{K}\hat{I}(x,t^{-})e^{i\int_{K}H_{T}(t_{1})dt_{1 }}\right\rangle, \tag{19}\] where \(T_{K}\) indicates the time-ordering operator along the Keldysh contour, \(t^{\eta}\) indicates a time on the forward (\(\eta=+\)) or backward (\(\eta=-\)) contour, and \(\int_{K}\) indicates an integral over the Keldysh contour K. In the second line, we fixed the time on the backward contour because the current average is independent of \(\eta\). The second-order perturbation with respect to \(H_{T}\) gives \[I(x,t) =-\frac{1}{2}\sum_{\eta_{1}\eta_{2}}\eta_{1}\eta_{2}\int dt_{1}dt_ {2}\] \[\times\langle T_{K}\{\hat{I}(x,t^{-})H_{T}(t_{1}^{\eta_{1}})H_{T }(t_{2}^{\eta_{2}})\}\rangle. \tag{20}\] In the Keldysh formalism, the current fluctuations can be written in the form: \[S(x,t,t^{\prime})=\langle\hat{I}(x,t)\hat{I}(x,t^{\prime})\rangle\] \[=\left\langle T_{K}\hat{I}(x,t^{-})\hat{I}(x,t^{+})e^{i\int_{K}H_ {T}(t_{1})dt_{1}}\right\rangle. \tag{21}\] The second-order perturbation gives \[S(x,t,t^{\prime}) =-\frac{1}{4}\sum_{\eta_{1}\eta_{2}}\eta_{1}\eta_{2}\int dt_{1}dt _{2}\] \[\times\langle T_{K}\{\hat{I}(x,t^{\eta})\hat{I}(x,t^{\prime\,-\eta})H_{T }(t_{1}^{\eta_{1}})H_{T}(t_{2}^{\eta_{2}})\}\rangle. \tag{22}\] The current noise \(S(x)\) at the position \(x\) is obtained by \[S(x) =\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}dt\int_{-T/2}^{T/2}dt^{ \prime}\,S(x,t,t^{\prime})\] \[=\int_{-\infty}^{\infty}dt\,S(x,t,0). 
\tag{23}\] The excess noise, which is the noise induced by the ac driving, is defined as \[S^{\rm ex}(x)=S(x)-e\bar{I}(x), \tag{24}\] where \(\bar{I}(x)\) is the time-averaged current and the second term represents the Poisson noise due to nonequilibrium currents injected from the STM tip. Hereafter, we focus on the average current and excess noise in the right lead (\(x>L_{2}\)). The results of the second-order perturbation are obtained after a somewhat lengthy calculation, which goes beyond the calculations found in Ref. [46]. As we are interested in the low-temperature behavior of the system, we perform the calculations at zero temperature only. We only show the final results for \(x>L_{2}\) as follows: \[\bar{I}(x) =\frac{8e\Gamma^{2}}{N_{\rm lead}\pi^{2}a^{2}}\sum_{l=-\infty}^{ \infty}\left|p_{l}\right|^{2}\int_{0}^{\infty}\!\!\!dt\;\sin((q+l)\Omega t) \frac{D(t)}{(1+(v_{F}t/a)^{2})^{\frac{1}{2}}}\sin(F(t)+{\rm ArcTan}(v_{F}t/a)), \tag{25}\] \[S(x) =\frac{8e^{2}\Gamma^{2}}{N_{\rm lead}\pi^{2}a^{2}}\sum_{l=-\infty} ^{\infty}\left|p_{l}\right|^{2}\int_{0}^{\infty}\!\!\!dt\;\cos((q+l)\Omega t) \frac{D(t)}{(1+(v_{F}t/a)^{2})^{\frac{1}{2}}}\cos(F(t)+{\rm ArcTan}(v_{F}t/a)),\] (26) \[S^{\rm ex}(x) =\frac{8e^{2}\Gamma^{2}}{N_{\rm lead}\pi^{2}a^{2}}\sum_{l=-\infty }^{\infty}\left|p_{l}\right|^{2}\int_{0}^{\infty}\!\!\!dt\,\frac{D(t)}{(1+(v_ {F}t/a)^{2})^{\frac{1}{2}}}\cos((q+l)\Omega t+F(t)+{\rm ArcTan}(v_{F}t/a)), \tag{27}\] \[F(t) =\nu{\rm ArcTan}\left[\frac{v_{F}t}{a}\right]+\sum_{k=1}^{\infty }\frac{(b_{1}b_{2})^{k}}{8}\left(g_{\rm N}+\frac{1}{g_{\rm N}}\right){\rm ArcTan }\left[\frac{2av_{F}t}{a^{2}+(2kLg_{\rm N})^{2}-(v_{F}t)^{2}}\right]\] \[\quad+\sum_{k=0}^{\infty}\frac{b_{1}^{k+1}b_{2}^{k}}{16}\left(-g_ {\rm N}+\frac{1}{g_{\rm N}}\right){\rm ArcTan}\left[\frac{2av_{F}t}{a^{2}+((2 kL+2L_{1})g_{\rm N})^{2}-(v_{F}t)^{2}}\right]\] \[\quad+\sum_{k=0}^{\infty}\frac{b_{1}^{k}b_{2}^{k+1}}{16}\left(-g_ {\rm N}+\frac{1}{g_{\rm N}}\right){\rm ArcTan}\left[\frac{2av_{F}t}{a^{2}+((2 kL+2L_{2})g_{\rm N})^{2}-(v_{F}t)^{2}}\right], \tag{28}\] \[D(t) =\left(\frac{a^{2}+(v_{F}t)^{2}}{a^{2}}\right)^{-\nu/2}\] \[\times\prod_{k=1}^{\infty}\left[\left(\frac{a^{2}+(2kLg_{\rm N}) ^{2}-(v_{F}t)^{2}}{a^{2}+(2kLg_{\rm N})^{2}}\right)^{2}+\left(\frac{2av_{F}t}{ a^{2}+(2kLg_{\rm N})^{2}}\right)^{2}\right]^{-\frac{(b_{1}b_{2})^{k}}{16}\left(g_{\rm N}+ \frac{1}{g_{\rm N}}\right)}\] \[\times\prod_{k=0}^{\infty}\left[\left(\frac{a^{2}+((2kL+2L_{1})g_ {\rm N})^{2}-(v_{F}t)^{2}}{a^{2}+((2kL+2L_{1})g_{\rm N})^{2}}\right)^{2}+ \left(\frac{2av_{F}t}{a^{2}+((2kL+2L_{1})g_{\rm N})^{2}}\right)^{2}\right]^{- \frac{b_{1}^{k+1}b_{2}^{k}}{32}\left(-g_{\rm N}+\frac{1}{g_{\rm N}}\right)}\] \[\times\prod_{k=0}^{\infty}\left[\left(\frac{a^{2}+((2kL+2L_{2})g_ {\rm N})^{2}-(v_{F}t)^{2}}{a^{2}+((2kL+2L_{2})g_{\rm N})^{2}}\right)^{2}+ \left(\frac{2av_{F}t}{a^{2}+((2kL+2L_{2})g_{\rm N})^{2}}\right)^{2}\right]^{- \frac{b_{1}^{k}b_{2}^{k+1}}{32}\left(-g_{\rm N}+\frac{1}{g_{\rm N}}\right)}, \tag{29}\] where \(\nu=(6+g_{\rm N}+g_{\rm N}^{-1})/8\), \(N_{\rm lead}\) is the number of metallic leads, and \(b_{1,2}=(g_{\rm N}-g_{1,2})/(g_{\rm N}+g_{1,2})\) are the reflection coefficients. We note that the calculated current and current noise are independent of \(x\) as long as \(x>L_{2}\). Details of the derivation are given in Appendix A. We also note that the current, the noise and the excess noise include a prefactor \(1/N_{\rm lead}\), which reflects the fact that the total injected current is partitioned into the two leads in the two-lead case, while it flows fully into the single lead in the one-lead case. The current and noise are thus roughly two times larger in the one-lead case compared to the two-lead one (see also Figs. 2 and 3 in the next section). 
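As a numerical companion to Eqs. (9)-(16), the following Python sketch (our own illustration, not the authors' code) evaluates the photoassisted coefficients \(p_{l}\) of the Lorentzian drive directly from the definition of Eq. (13), then checks the sum rule \(\sum_{l}|p_{l}|^{2}=1\) and the one-sided vanishing of the \(p_{l}\) that underlies the zero-excess-noise property discussed in Sec. IV. The truncation parameters are assumptions chosen for accuracy at the stated \(q\) and \(\eta\).

```python
import numpy as np

def photoassisted_coeffs(q, eta, lmax=40, nt=4096, kmax=200):
    """Numerically evaluate p_l of Eq. (13) for the Lorentzian drive, Eq. (9).

    Times are measured in units of the period T; the pulse train is truncated
    to 2*kmax+1 image pulses, which is accurate enough for a sketch.
    """
    t = np.linspace(-0.5, 0.5, nt, endpoint=False)
    # V(t)/V_dc for the Lorentzian train of Eq. (9); its period average is 1
    v = sum(eta / np.pi / (eta**2 + (t - k)**2) for k in range(-kmax, kmax + 1))
    dt = t[1] - t[0]
    # AC phase: (e/hbar) * int V_ac dt' = 2*pi*q * int (V/V_dc - 1) d(t/T)
    phase = 2.0 * np.pi * q * np.cumsum(v - 1.0) * dt
    ls = np.arange(-lmax, lmax + 1)
    pl = np.array([np.sum(np.exp(2j*np.pi*l*t - 1j*phase)) * dt for l in ls])
    return ls, pl

ls, pl = photoassisted_coeffs(q=2, eta=0.1)
print(np.sum(np.abs(pl)**2))          # sum rule: should be ~1
print(np.abs(pl[ls < -2]).max())      # ~0: the weights vanish for l < -q
print(np.abs(pl[ls == -2])[0])        # p_{-q} itself remains small but finite
```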
While the formulas for the \(F\) and \(D\) factors are somewhat heavy, the physical meaning of the different terms is quite simple. For each term in the sum defining \(F\), there is a corresponding term in the product defining \(D\). The first term corresponds to the contribution for an infinite nanotube, with direct propagation from the injection point located at the origin to the measurement point \(x\). All the other terms correspond to the propagation after a given number of reflections on the CNT-lead interfaces: \(b_{1}\) (resp. \(b_{2}\)) is the reflection coefficient at the left (resp. right) interface, so for example a term with the factor \((b_{1}b_{2})^{k}\) corresponds to \(k\) reflections at both interfaces, and one can see that this term also contains a factor \(2kLg_{\rm N}\) which stands for \(k\) times the length of one round trip along the CNT. Note that \(b_{1}<0\) and \(b_{2}<0\) for the two-lead case, which means that the reflections are similar to an Andreev reflection, with an electron-like excitation being reflected as a hole-like excitation. ## IV Excess noise In this section, we discuss the properties of the excess noise as a function of \(q=eV_{dc}/\hbar\Omega\). We mainly focus on the excess noise when Leviton pulses are injected. We define the charge velocity in the CNT as \(v_{\rm N}=v_{\rm F}/g_{\rm N}\) and the unit of time and frequency as \(T_{0}=2L_{2}/v_{\rm N}\) (the time needed for an electron to travel from the tip to the right lead and back) and \(\Omega_{0}=2\pi/T_{0}\), respectively. ### Two-lead case Let us first discuss the two-lead setup shown in Fig. 1 (a). In Fig. 2 (a), we show the excess noise as a function of \(q\) when Lorentzian voltage pulses (i.e. Levitons) are applied. The interaction parameter of the CNT is set as \(g_{\rm N}=1,0.7,0.5\) and \(0.2\) and the other parameters are given in the caption. The most striking feature is the fact that the excess noise is zero when \(q\) is an integer, independently of the value of the interaction parameter. This is a well-known property of a Leviton drive for a non-interacting electronic system [8; 10; 49]. Fig. 2 (a) shows that this property remains valid even for strongly interacting non-chiral 1d electronic systems. An analytical proof of this remarkable property is given in Sec. IV.4. For non-integer values of \(q\), we observe that the excess noise is reduced a little as the Luttinger parameter \(g_{\rm N}\) is reduced from unity. Fig. 2 (b) shows the pulse-shape dependence of the excess noise, for \(g_{\rm N}=0.2\). The excess noise vanishes only for Lorentzian pulses, while it does not for the cosine and square pulses. These observations confirm that the exceptional features of the Levitons compared to all other voltage drives survive even for this interacting electron system. We note that, contrary to the case of quasiparticle tunneling into edge states of the fractional quantum Hall effect [43], the excess noise does not show a singular behavior near integer values of \(q\). This is because we are considering here injection of electrons (rather than fractional quasi-particles) from the STM tip. Fig. 2 (c) shows the excess noise for \(g_{\rm N}=0.2\) as a function of \(q\) for \(\Omega/\Omega_{0}=0.5,1.0,1.5,2.0\) and \(2.5\) and \(\omega_{c}/\Omega_{0}=100\), where \(\Omega_{0}\) is the unit of frequency. Although the excess noise for a non-integer value of \(q\) is almost proportional to \(\Omega\), it grows a little faster than expected from a linear dependence. 
### One-lead case Next, we discuss the one-lead setup shown in Fig. 1 (b). We show the excess noise as a function of \(q\) in Fig. 3 (a) for \(g_{\rm N}=1,0.7,0.5\) and \(0.2\), where the other parameters are the same as in Fig. 2. We find that the qualitative features of the excess noise are similar to the two-lead case; for non-integer \(q\) the excess noise is reduced a little as the interaction parameter \(g_{\rm N}\) is reduced, while it vanishes when \(q\) is an integer. This means that the exceptional features of Levitons also hold for interacting electron systems in this setup. Fig. 3 (b) and (c) show the pulse-shape and ac-frequency dependence, respectively. By comparing with Fig. 2 (b) and (c), we find that the qualitative features are common to the two-lead case. Compared to the two-lead case, the one-lead setup produces only small quantitative changes in the excess noise for non-integer values of \(q\). ### Interference effects As will be discussed in Sec. V, the Leviton pulses reflect at the boundaries between the CNT and the leads in a complex manner and can interfere with subsequent Leviton pulses injected from the STM tip. To see this interference effect, we show the excess noise in Fig. 4 for four geometries, i.e., four sets of positions for the STM tip and the two boundaries of the CNT. Fig. 4 (a) and (b) show the two- and one-lead cases, respectively. Although the geometry of the STM tip and the CNT boundaries controls the interference between Lorentzian pulses, the excess noise keeps its main features; it always vanishes when \(q\) is an integer. We find that the interference effect causes small changes in the excess noise for non-integer \(q\) in both the two- and one-lead cases. We conclude that the interference between the pulses only has a minor effect on the excess noise. ### Proof that the excess noise is zero for integer Levitons As observed in Figs. 2, 3 and 4, the excess noise is always zero for a periodic drive of Levitons when \(q\) is an integer, independently of the other parameters of the system (number of leads, position of the STM tip, frequency of the drive, etc.). We give here an analytical proof of this property. The excess noise \(S^{\rm ex}(x)=S(x)-e\bar{I}(x)\) is given explicitly by Eq. (27). Using \(D(t)=D(-t)\) and \(F(t)=-F(-t)\), we can recast Eq. (27) as: \[S^{\rm ex}(x) =\frac{4e^{2}\Gamma^{2}}{N_{\rm lead}\pi^{2}a^{2}}\Re\sum_{l=-\infty}^{\infty}\left|p_{l}\right|^{2}\int_{-\infty}^{\infty}\!\!\!dt\,\frac{D(t)}{(1+(v_{F}t/a)^{2})^{\frac{1}{2}}}\] \[\times\exp\left[i((q+l)\Omega t+F(t)+{\rm ArcTan}(v_{F}t/a))\right], \tag{30}\] where \(F(t)\) and \(D(t)\) are given by Eqs. (28)-(29), and \(\Re\) denotes the real part. \(F(t)\) is an infinite sum of terms, and each term has a counterpart in the infinite product defining \(D(t)\). A typical term in \(F(t)\) can be written as: \[A\times{\rm ArcTan}(\alpha(t)/\beta(t)) \tag{31}\] and the corresponding term in \(D(t)\) is: \[\left[(\beta(t)/\gamma)^{2}+(\alpha(t)/\gamma)^{2}\right]^{-A/2}, \tag{32}\] with \(\alpha(t)=2av_{F}t\) and \(\beta(t)=a^{2}+(2kLg_{\rm N})^{2}-(v_{F}t)^{2}\) (or a similar expression with \(2kL\) replaced by \(2kL+2L_{1}\) or \(2kL+2L_{2}\)), where \(A>0\) and \(\gamma>0\) are time-independent constants. In the integral of Eq. (30), this combination of terms appears as: \[\frac{\exp(iA\times\text{ArcTan}(\alpha(t)/\beta(t)))}{\left[(\beta(t)/\gamma)^{2}+(\alpha(t)/\gamma)^{2}\right]^{A/2}}=\frac{\gamma^{A}}{(\beta(t)-i\alpha(t))^{A}}. 
\tag{33}\] From the expressions of \(\alpha(t)\) and \(\beta(t)\), one can check that the right-hand side of Eq. (33) has poles in the complex \(t\) plane at: \[t=-\frac{ia}{v_{F}}\pm\frac{2kLg_{\text{N}}}{v_{F}}. \tag{34}\] Importantly, both poles lie in the lower half-plane. The integrand of Eq. (30) thus contains an infinite product of terms like Eq. (33), which have poles and branch cuts in the lower half-plane only. This product is multiplied by \(\exp(i(q+l)\Omega t)\); this factor allows one to perform the integration by closing the contour in the upper (lower) half-plane for \(q+l>0\) (\(q+l<0\)). As a Leviton drive for an integer charge \(q\) is characterized by[8; 9; 10; 43; 49] \[p_{l}=0,\quad\text{ for }\quad l\leq-q, \tag{35}\] one can see that only terms with \(q+l>0\) survive for a Leviton drive with integer charge \(q\), which leads to a vanishing integral for the excess noise, as the integrand has no poles in the upper half-plane. The excess noise for a Leviton drive with an integer charge \(q\) is thus always zero, independently of the value of the other parameters, including the interaction parameter \(g_{N}\) in the nanotube. We note that suppression of the excess noise has recently been discussed for a different setup based on a non-chiral Luttinger liquid [40]. ## V Current Profiles Since the curves for the excess noise are qualitatively similar to the ones for a non-interacting system, one may think that the system behaves overall as a non-interacting one. However, this is not the case, as becomes clear from the current profiles, which are fundamentally different from those of a non-interacting system. It is thus quite remarkable that the excess noise is robust with respect to interactions and, in particular, that it vanishes at integer values of \(q\) for a Leviton drive. An essential element for understanding the transport in the system is the reflection at each boundary between the nanotube and the leads. These reflections are characterized by the coefficients \(b_{1,2}=(g_{N}-g_{1,2})/(g_{N}+g_{1,2})\). For the two-lead case, we have \(b_{1}=b_{2}=(g_{N}-1)/(g_{N}+1)<0\), which means that an 'electron'-like excitation is converted into a 'hole'-like excitation by the reflection. This peculiar reflection, which is inherent to one-dimensional interacting electron systems, is called Andreev reflection in analogy with that at a superconductor/normal-metal junction [37; 44]. For the one-lead case, one has \(b_{1}=1\); the reflection at the right boundary is still an Andreev reflection, while at the left boundary it is a simple reflection. The current profile is the result of the interference of the injected pulses and their multiple reflections at the boundaries. We first consider the two-lead case. To visualize the Andreev reflection and the interference between different Leviton pulses, we show the time-dependent current profile for \(L_{1}=L/3\) and \(L_{2}=2L/3\) in Fig. 5 (a)-(c). The expressions used to compute the time-dependent current profile are detailed in Appendix B. The period of the ac voltage is taken as \(T=4L_{1}/v_{\rm N}=2L_{2}/v_{\rm N}\), where \(v_{\rm N}=v_{F}/g_{\rm N}\) is the charge velocity in the CNT. The three curves correspond to \(t=0,T/9,2T/9\) in (a), to \(t=3T/9,4T/9,5T/9\) in (b), and to \(t=6T/9,7T/9,8T/9\) in (c), respectively. Fig. 5 (a) shows that the applied voltage pulse at \(t=0\) creates two pulses propagating in opposite directions from the STM tip at \(x=0\), as indicated by the blue and red arrows. 
The heights of these two pulses are different because of the resonance condition \(T=4L_{1}/v_{\rm N}=2L_{2}/v_{\rm N}\), which means that pulses reflected one or several times at the left and right boundaries can interfere with the new pulses created periodically at the tip position. In Fig. 5 (b), the pulses are divided into a reflected part (in the CNT) and a transmitted part (in the leads) at the two boundaries of the CNT. We note that the current carried by the reflected pulse has the same sign as the incident pulse despite the reversal of the propagation direction. This is because the electron pulses are converted into hole pulses at the boundary due to the 'Andreev' reflection characteristic of the Luttinger liquid [44; 46]. In Fig. 5 (c), the reflected pulses continue to propagate in the CNT. Here, because the distances \(L_{1}\) and \(L_{2}\) between the STM tip and leads 1 and 2 are different, the arrival times of the two reflected pulses at the origin also differ. For our choice of parameters, the pulse reflected at \(x=L_{2}\) returns to the origin precisely after the period \(T\) (see the blue arrow). The fact that there is a strong interference between the pulse created at the STM tip and the pulses reflected at the right lead, while there is no such interference for the pulse reflected at the left lead, explains the different current profiles in the two metallic leads; the time-dependent current is alternately positive and negative in the right lead, while it is always negative in the left lead. The time-dependent current profile for the one-lead case is shown in Fig. 5 (d)-(f) for the same geometry. In contrast with the two-lead case, the system has one open boundary at \(x=-L_{1}\), where the reflection is a standard one which keeps the sign of the charge excitation unchanged. Therefore, the current carried by the pulse changes its sign after reflection at \(x=-L_{1}\) (see Fig. 5 (e)), as in a usual reflection. This change in the reflection properties affects the sign of the current in the right lead; the current is always positive there, in contrast with the two-lead case (see Fig. 5 (a)-(c)). We further show the current profile for a different geometry with \(L_{1}=2L/3\) and \(L_{2}=L/3\) in Fig. 6. Although the reflection properties at the boundaries of the CNT are the same as in the previous case, the geometry affects the current profile in the metallic leads due to the modified interference conditions between pulses. For the two-lead case (Fig. 6 (a)-(c)), the current is now always positive in the right lead, while it can be both negative and positive in the left one. This result is the inverse of what was observed with the previous geometry (Fig. 5 (a)-(c)). Also for the one-lead case (Fig. 6 (d)-(f)), the current can now be negative in the right lead, in contrast with the previous geometry (Fig. 5 (d)-(f)). 

Figure 5: Time-dependent current profile as a function of position, inside the CNT (yellow background) and inside the normal leads (light gray background), for the two-lead case (left panels) and the one-lead case (right panels) in the case of \(L_{1}=L/3\). Each panel shows the current at three different times as indicated. The unit of the current is given by \(I_{0}=4e\Gamma^{2}/\pi^{2}a^{2}\). The period of the ac voltage is given by \(T=4L_{1}/v_{\rm N}=2L_{2}/v_{\rm N}\). Other experimental parameters are taken as \(\eta=0.01\), \(\omega_{c}T/2\pi=100\), and \(g_{\rm N}=0.2\).

Figure 6: Time-dependent current profile as a function of position, inside the CNT (yellow background) and inside the normal leads (light gray background), for the two-lead case (left panels) and the one-lead case (right panels) in the case of \(L_{1}=2L/3\). Each panel shows the current at three different times as indicated. The unit of the current is given by \(I_{0}=4e\Gamma^{2}/\pi^{2}a^{2}\). The period of the ac voltage is given by \(T=2L_{1}/v_{\rm N}=4L_{2}/v_{\rm N}\). Other experimental parameters are taken as \(\eta=0.01\), \(\omega_{c}T/2\pi=100\), and \(g_{\rm N}=0.2\).
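The interference conditions underlying Figs. 5 and 6 reduce to simple arithmetic on the reflection coefficients and the round-trip times; a minimal sketch (parameter values are illustrative, and the limit \(g_{1}\to 0\) stands in for the open boundary of the one-lead case):

```python
# Round-trip times of the two counter-propagating pulses, for the geometry of
# Fig. 5 (L1 = L/3, L2 = 2L/3); the drive period is T = 4 L1 / v_N = 2 L2 / v_N.
def interference_table(g_N, g1, g2, L1, L2, v_F=1.0):
    v_N = v_F / g_N                        # charge velocity in the CNT
    b1 = (g_N - g1) / (g_N + g1)           # reflection coefficient, left boundary
    b2 = (g_N - g2) / (g_N + g2)           # reflection coefficient, right boundary
    T = 2 * L2 / v_N                       # drive period chosen in the text
    print(f"b1 = {b1:+.3f}, b2 = {b2:+.3f}, T = {T:.2f}")
    print(f"  left-reflected pulse back at the tip after {2 * L1 / v_N / T:.2f} T")
    print(f"  right-reflected pulse back at the tip after {2 * L2 / v_N / T:.2f} T")

interference_table(g_N=0.2, g1=1.0, g2=1.0, L1=1.0, L2=2.0)    # two-lead case
interference_table(g_N=0.2, g1=1e-9, g2=1.0, L1=1.0, L2=2.0)   # one-lead limit
```

The right-reflected pulse returns exactly one period after its creation, so it systematically overlaps with the next injected pulse, while the left-reflected pulse arrives at \(T/2\), out of step; this is the asymmetry discussed above.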
## VI Summary

We theoretically studied the effect of electron correlations on single-electron injection into a carbon nanotube (CNT) coupled to metallic leads. We formulated the current and the excess noise induced by an ac voltage drive by describing the CNT in terms of a Luttinger liquid, and studied how electron correlations modified (or not) the property of minimal excess noise for Lorentzian-pulse (Leviton) injection. We showed both analytically and numerically that the excess noise vanishes when each Leviton pulse includes an integer number of electrons, as observed in non-interacting electron systems. This indicates that the electron correlations in the CNT do not change the physics of minimum-noise pulses, i.e. Levitons, at all. For a non-integer electron injection, the excess noise depends on both the geometry of the system and the ac driving frequency. We also showed that the time-dependent current profile depends on the geometry and the ac frequency through the interference conditions between injected pulses. We demonstrated that injected pulses undergo Andreev-like (resp. normal) scattering at the junction with a metallic lead (resp. at the open boundary). This affects the excess noise for non-integer electron injection. Our findings show the universality of the minimum-noise properties of Leviton pulses, which holds even in interacting electron systems. A detailed setup and estimates for an experimental verification of our results are left for future work. Our calculations could also apply to artificially created non-chiral Luttinger liquid systems in the quantum Hall effect.[50] ###### Acknowledgements. This French-Japanese collaboration is supported by the CNRS International Research Project "Excitations in Correlated Electron Systems driven in the Giga-Hertz range" (ESEC). This work received support from the French government under the France 2030 investment plan, as part of the Initiative d'Excellence d'Aix-Marseille Universite - A*MIDEX, through the institutes IPhU (AMX-19-IET-008) and AMUtech (AMX-19-IET-01X). T. K. acknowledges support from the Japan Society for the Promotion of Science (JSPS KAKENHI Grant No. JP20K03831). ## Appendix A Green functions In this appendix, we show the main steps of the calculation to obtain the Green functions of the bosonic fields for an inhomogeneous infinite one-dimensional system, with an interaction parameter being a piecewise-constant function defining three different regions, without any assumption of spatial symmetry.[46] Let us first consider the imaginary-time Green functions defined by \[G^{XY}_{j\delta}(x,x^{\prime},\tau)=\left\langle X_{j\delta}(x,\tau)Y_{j\delta}(x^{\prime},0)\right\rangle, \tag{10}\] for \(0<\tau<\hbar\beta\), where \(X,Y=\theta\) or \(\phi\) and \(X_{j\delta}(x,\tau)=e^{H\tau/\hbar}X_{j\delta}(x)e^{-H\tau/\hbar}\) indicates the imaginary-time evolution. 
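As a consistency check, the coefficients of the piecewise solution quoted below in Eqs. (18)-(23) can be verified mechanically: the Green function must be continuous at the injection point \(x=x^{\prime}\), and its spatial derivative must jump there by \(-1/(g_{\rm N}v_{\rm N})\), as dictated by the delta-function source in Eq. (15) below. A minimal symbolic sketch (variable names are ours):

```python
import sympy as sp

# Check continuity and the derivative jump of the piecewise Green function at
# x = x', using the coefficients of Eqs. (18)-(23) with u = exp(|omega| x'/v_N).
u, ga1, ga2, gN, w, vN = sp.symbols('u gamma_1 gamma_2 g_N omega v_N')
kappa = w / vN                                   # |omega| / v_N
Z = 2 * gN * w * (1 - ga1 * ga2)
B = (1 / u + ga2 * u) / Z                        # Eq. (19)
C = ga1 * B                                      # Eq. (20)
E = (u + ga1 / u) / Z                            # Eq. (22)
D = ga2 * E                                      # Eq. (21)

# Continuity of G at x = x':  B u + C/u  =  D u + E/u
print(sp.simplify((B * u + C / u) - (D * u + E / u)))                 # -> 0

# Derivative jump:  kappa * [(D u - E/u) - (B u - C/u)]  =  -1/(g_N v_N)
print(sp.simplify(kappa * ((D * u - E / u) - (B * u - C / u))
                  + 1 / (gN * vN)))                                   # -> 0
```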
From the Hamiltonian (1), the Green functions, \(G^{\phi\phi}_{j\delta}(x,x^{\prime},\tau)\) and \(G^{\theta\theta}_{j\delta}(x,x^{\prime},\tau)\), obey the following equations: \[-\left(\frac{g_{j\delta}(x)}{v_{j\delta}(x)}\partial_{\tau}^{2}+\partial_{x}v_{j\delta}(x)g_{j\delta}(x)\partial_{x}\right)G^{\phi\phi}_{j\delta}(x,x^{\prime},\tau)=\delta(x-x^{\prime})\delta(\tau), \tag{11}\] \[-\left(\frac{\partial_{\tau}^{2}}{v_{j\delta}(x)g_{j\delta}(x)}+\partial_{x}\frac{v_{j\delta}(x)}{g_{j\delta}(x)}\partial_{x}\right)G^{\theta\theta}_{j\delta}(x,x^{\prime},\tau)=\delta(x-x^{\prime})\delta(\tau), \tag{12}\] where \(\delta(x)\) is the Dirac delta function. The mixed Green functions, \(G^{\phi\theta}_{j\delta}\) and \(G^{\theta\phi}_{j\delta}\), can be obtained from \(G^{\theta\theta}_{j\delta}\) and \(G^{\phi\phi}_{j\delta}\) as \[i\partial_{\tau}G^{\phi\theta}_{j\delta}(x,x^{\prime},\tau)=\frac{v(x)}{g(x)}\partial_{x}G^{\theta\theta}_{j\delta}(x,x^{\prime},\tau), \tag{13}\] \[i\partial_{\tau}G^{\theta\phi}_{j\delta}(x,x^{\prime},\tau)=v(x)g(x)\partial_{x}G^{\phi\phi}_{j\delta}(x,x^{\prime},\tau). \tag{14}\] Using the fact that the interaction parameters \(g_{j\delta}(x)\) and the velocities \(v_{j\delta}(x)\) are piecewise constant functions, with three different domains: \(x<-L_{1}\) (left lead), \(-L_{1}<x<L_{2}\) (nanotube) and \(x>L_{2}\) (right lead), one can solve for the Green functions \(G^{\phi\phi}_{j\delta}\) and \(G^{\theta\theta}_{j\delta}\) in Matsubara frequency space. For the calculation of the current and current noise, the Green functions at \(x=x^{\prime}=0\) (the injection point) and their spatial derivatives with \(x\) or \(x^{\prime}\) set to zero are required. Since the full derivation is rather lengthy, we only explain here the calculation of \(G^{\phi\phi}_{c+}(x,x^{\prime},\tau)\). The other types of Green functions can be calculated in a similar way. By the Fourier transformation of Eq. (11), we obtain \[\left(\frac{g_{c+}(x)}{v_{c+}(x)}\omega^{2}-\partial_{x}v_{c+}(x)g_{c+}(x)\partial_{x}\right)G^{\phi\phi}_{c+}(x,x^{\prime},\omega)=\delta(x-x^{\prime}), \tag{15}\] where \(\omega\) is a Matsubara frequency, which can be regarded as a continuous real variable at zero temperature, and \[G^{\phi\phi}_{c+}(x,x^{\prime},\omega)=\int d\tau\,G^{\phi\phi}_{c+}(x,x^{\prime},\tau)e^{i\omega\tau}. \tag{16}\] It is straightforward to solve this differential equation with respect to \(x\). For example, the Green function in the range \(-L_{1}<x^{\prime}<L_{2}\) is obtained as \[G^{\phi\phi}_{c+}(x,x^{\prime},\omega)=\left\{\begin{array}{ll}A(x^{\prime})e^{\frac{|\omega|x}{v_{1}}},&(x<-L_{1}),\\ B(x^{\prime})e^{\frac{|\omega|x}{v_{\rm N}}}+C(x^{\prime})e^{-\frac{|\omega|x}{v_{\rm N}}},&(-L_{1}<x<x^{\prime}),\\ D(x^{\prime})e^{\frac{|\omega|x}{v_{\rm N}}}+E(x^{\prime})e^{-\frac{|\omega|x}{v_{\rm N}}},&(x^{\prime}<x<L_{2}),\\ F(x^{\prime})e^{-\frac{|\omega|x}{v_{2}}},&(L_{2}<x),\end{array}\right. 
\tag{17}\] where \[A(x^{\prime}) =\frac{2g_{\rm N}}{g_{1}+g_{\rm N}}e^{\frac{|\omega|L_{1}}{v_{1}}-\frac{|\omega|L_{1}}{v_{\rm N}}}B(x^{\prime}), \tag{18}\] \[B(x^{\prime}) =\frac{1}{2g_{\rm N}|\omega|}\frac{e^{-\frac{|\omega|x^{\prime}}{v_{\rm N}}}+\gamma_{2}e^{\frac{|\omega|x^{\prime}}{v_{\rm N}}}}{1-\gamma_{1}\gamma_{2}}, \tag{19}\] \[C(x^{\prime}) =\gamma_{1}B(x^{\prime}), \tag{20}\] \[D(x^{\prime}) =\gamma_{2}E(x^{\prime}), \tag{21}\] \[E(x^{\prime}) =\frac{1}{2g_{\rm N}|\omega|}\frac{e^{\frac{|\omega|x^{\prime}}{v_{\rm N}}}+\gamma_{1}e^{-\frac{|\omega|x^{\prime}}{v_{\rm N}}}}{1-\gamma_{1}\gamma_{2}}, \tag{22}\] \[F(x^{\prime}) =\frac{2g_{\rm N}}{g_{2}+g_{\rm N}}e^{\frac{|\omega|L_{2}}{v_{2}}-\frac{|\omega|L_{2}}{v_{\rm N}}}E(x^{\prime}). \tag{23}\] Here, the coefficients \(\gamma_{1}\) and \(\gamma_{2}\) are given as \[\gamma_{1} =\frac{g_{\rm N}-g_{1}}{g_{\rm N}+g_{1}}e^{-\frac{2|\omega|L_{1}}{v_{\rm N}}}\equiv b_{1}e^{-\frac{2|\omega|L_{1}}{v_{\rm N}}}, \tag{24}\] \[\gamma_{2} =\frac{g_{\rm N}-g_{2}}{g_{\rm N}+g_{2}}e^{-\frac{2|\omega|L_{2}}{v_{\rm N}}}\equiv b_{2}e^{-\frac{2|\omega|L_{2}}{v_{\rm N}}}. \tag{25}\] By setting \(x,x^{\prime}\to 0\), we obtain \[G^{\phi\phi}_{c+}(0,0,\omega)=\frac{1}{2|\omega|g_{\rm N}}\frac{(1+\gamma_{1})(1+\gamma_{2})}{1-\gamma_{1}\gamma_{2}}. \tag{26}\] Here, we further use the expansion \[\frac{1}{1-\gamma_{1}\gamma_{2}}=\sum_{k=0}^{\infty}(b_{1}b_{2})^{k}e^{-\frac{2kL|\omega|}{v_{\rm N}}}, \tag{27}\] where \(k\) can be regarded as the number of round trips of a pulse in the CNT. The inverse Fourier transformation can easily be performed by using the formula \[\partial_{\tau}\int_{0}^{\infty}\frac{d\omega}{\omega}e^{\omega(x\pm i\tau)}=\frac{\mp i}{x\pm i\tau}, \tag{108}\] and by the analytic continuation \(\tau=it+a/v_{\rm F}\) the Green function is calculated as \[\tilde{G}_{c+}^{\phi\phi}(0,0,t)=-\frac{1}{2\pi g_{\rm N}}\Bigg{\{}\log(1+iv_{\rm F}t/a)+\sum_{k=1}^{\infty}(b_{1}b_{2})^{k}I_{+}(2kL)\] \[\qquad\qquad+\frac{1}{2}\sum_{k=0}^{\infty}b_{1}(b_{1}b_{2})^{k}I_{+}(2kL+2L_{1})+\frac{1}{2}\sum_{k=0}^{\infty}b_{2}(b_{1}b_{2})^{k}I_{+}(2kL+2L_{2})\Bigg{\}}, \tag{109}\] where \[I_{\pm}(x)=\pm\log\left(1+\frac{iv_{\rm F}t}{a+ixg_{\rm N}}\right)+\log\left(1+\frac{iv_{\rm F}t}{a-ixg_{\rm N}}\right). \tag{110}\] Here, the real-time Green's function is defined as \[\tilde{G}_{j\delta}^{XY}(x,x^{\prime},t)=\left\langle X_{j\delta}(x,t)Y_{j\delta}(x^{\prime},0)\right\rangle-\frac{1}{2}\left\langle X_{j\delta}(x,t)^{2}\right\rangle-\frac{1}{2}\left\langle Y_{j\delta}(x^{\prime},0)^{2}\right\rangle, \tag{111}\] where \(X,Y=\theta\) or \(\phi\) and \(X_{j\delta}(x,t)=e^{iHt/\hbar}X_{j\delta}(x)e^{-iHt/\hbar}\) indicates the real-time evolution. In a similar way, the other Green functions can be calculated from Eqs. 
(107)-(109) as \[\tilde{G}_{c+}^{\theta\theta}(0,0,t)=-\frac{g_{\rm N}}{2\pi}\Big{\{}\log(1+iv_{\rm F}t/a)+\sum_{k=1}^{\infty}(b_{1}b_{2})^{k}I_{+}(2kL)\] \[\qquad\qquad-\frac{1}{2}\sum_{k=0}^{\infty}b_{1}(b_{1}b_{2})^{k}I_{+}(2kL+2L_{1})-\frac{1}{2}\sum_{k=0}^{\infty}b_{2}(b_{1}b_{2})^{k}I_{+}(2kL+2L_{2})\Big{\}}, \tag{112}\] \[\tilde{G}_{c+}^{\theta\phi}(0,0,t)=-\frac{1}{4\pi}\Big{\{}\sum_{k=0}^{\infty}b_{1}(b_{1}b_{2})^{k}I_{-}(2kL+2L_{1})-\sum_{k=0}^{\infty}b_{2}(b_{1}b_{2})^{k}I_{-}(2kL+2L_{2})\Big{\}}, \tag{113}\] \[\tilde{G}_{c+}^{\phi\theta}(0,0,t)=\frac{1}{4\pi}\Big{\{}\sum_{k=0}^{\infty}b_{1}(b_{1}b_{2})^{k}I_{-}(2kL+2L_{1})-\sum_{k=0}^{\infty}b_{2}(b_{1}b_{2})^{k}I_{-}(2kL+2L_{2})\Big{\}}. \tag{114}\] For the other modes, i.e., \((j,\delta)=(c,-),(s,+),(s,-)\), the Green functions are easily obtained from the above results by setting \(g_{1}=g_{2}=g_{\rm N}=1\) (the two-lead case) or \(g_{1}=0\) and \(g_{2}=g_{\rm N}=1\) (the one-lead case). Next, we calculate the spatial derivatives of the Green functions for the channel \((j,\delta)=(c,+)\). For simplicity, we drop the subscripts assigning the mode hereafter. By solving the Fourier transform of Eqs. (106)-(109), we obtain \[G^{\phi\phi}(x,0,\omega) =G^{\phi\phi}(0,x,\omega)=\frac{1}{g_{\rm N}+g_{2}}\frac{1}{|\omega|}G_{+}(x), \tag{115}\] \[G^{\theta\theta}(x,0,\omega) =G^{\theta\theta}(0,x,\omega)=\frac{g_{\rm N}g_{2}}{g_{\rm N}+g_{2}}\frac{1}{|\omega|}G_{-}(x), \tag{116}\] \[G^{\phi\theta}(x,0,\omega) =\frac{g_{\rm N}}{g_{\rm N}+g_{2}}\frac{1}{\omega}G_{-}(x), \tag{117}\] \[G^{\phi\theta}(0,x,\omega) =-\frac{g_{2}}{g_{\rm N}+g_{2}}\frac{1}{\omega}G_{+}(x), \tag{118}\] \[G^{\theta\phi}(x,0,\omega) =\frac{g_{2}}{g_{\rm N}+g_{2}}\frac{1}{\omega}G_{+}(x), \tag{119}\] \[G^{\theta\phi}(0,x,\omega) =-\frac{g_{\rm N}}{g_{\rm N}+g_{2}}\frac{1}{\omega}G_{-}(x), \tag{120}\] \[G_{\pm}(x) =\frac{1\pm\gamma_{1}}{1-\gamma_{1}\gamma_{2}}e^{-L_{2}|\omega|/v_{\rm N}-(x-L_{2})|\omega|/v_{2}}, \tag{121}\] for \(x>L_{2}\). By using the same techniques as for the calculation of \(\partial_{\tau}G^{XY}(0,0,\tau)\), the spatial derivatives can be calculated. For example, we obtain \[\partial_{x}G^{\phi\phi}(x,0,t)=\partial_{x}G^{\phi\phi}(0,x,t)=-\frac{1}{2\pi(g_{\rm N}+g_{2})v_{2}}\] \[\times\sum_{k=0}^{\infty}(b_{1}b_{2})^{k}\Big{[}\frac{1}{\alpha_{k}(x)+t-i\tau_{0}}+\frac{1}{\alpha_{k}(x)-t+i\tau_{0}}+\frac{b_{1}}{\beta_{k}(x)+t-i\tau_{0}}+\frac{b_{1}}{\beta_{k}(x)-t+i\tau_{0}}\Big{]}, \tag{122}\] \[\alpha_{k}(x) =\frac{2kL+L_{2}}{v_{\rm N}}+\frac{x-L_{2}}{v_{2}}, \tag{123}\] \[\beta_{k}(x) =\alpha_{k}(x)+\frac{2L_{1}}{v_{\rm N}}, \tag{124}\] after the analytic continuation \(\tau=it+\tau_{0}\), where \(\tau_{0}=a/v_{\rm F}\) is a short-time cutoff. The Green functions obtained above can be related to the Keldysh Green functions defined as \[G_{(\eta_{1}\eta_{2})}^{XY}(x,x^{\prime},t)=\langle T_{K}\{X(x,t^{\eta_{1}})Y(x^{\prime},0^{\eta_{2}})\}\rangle, \tag{125}\] where \(\eta_{1}\), \(\eta_{2}\) (\(=\pm 1\)) represent the forward (\(+1\)) and backward (\(-1\)) contours. The Keldysh Green functions are expressed in a matrix form as \[G_{K}^{XY}(x,x^{\prime},t)=\left(\begin{array}{cc}G_{(++)}^{XY}(x,x^{\prime},t)&G_{(+-)}^{XY}(x,x^{\prime},t)\\ G_{(-+)}^{XY}(x,x^{\prime},t)&G_{(--)}^{XY}(x,x^{\prime},t)\end{array}\right)\] \[=\left(\begin{array}{cc}G^{XY}(x,x^{\prime},|t|)&G^{XY}(x^{\prime},x,-t)\\ G^{XY}(x,x^{\prime},t)&G^{XY}(x^{\prime},x,-|t|)\end{array}\right). 
\tag{126}\] The current and current noise are calculated by combining these expressions for \(G^{XY}_{(\eta_{1}\eta_{2})}(x,x^{\prime},t)\) with the results given in Ref. [46]. The Keldysh Green function of the bosonic field describing the STM tip is calculated as \[g_{\sigma(\eta_{1}\eta_{2})}(t_{1}-t_{2})=\langle T_{K}\{\tilde{\varphi}_{\sigma}(t_{1}^{\eta_{1}})\tilde{\varphi}_{\sigma}(t_{2}^{\eta_{2}})\}\rangle\] \[\qquad=-\log\Bigl{[}1+i(\eta_{1}+\eta_{2})\frac{v_{F}|t_{1}-t_{2}|}{2a}-i(\eta_{1}-\eta_{2})\frac{v_{F}(t_{1}-t_{2})}{2a}\Bigr{]}. \tag{63}\] ## Appendix B Current Profile The calculation of \(I(x,t)\) for arbitrary time and position is a rather long, tedious, but straightforward extension of that in Refs. [44] and [46]. We only show the final expressions as follows: \[I(x,t) =\frac{-8iev_{F}\Gamma^{2}}{\pi^{2}a^{2}}\sum_{l,l^{\prime}}p_{l}p_{l^{\prime}}^{*}e^{i(l-l^{\prime})\Omega t}\] \[\qquad\times\left[\frac{i\theta((l^{\prime}-l)\Omega)}{(g_{\rm N}+g_{2})v_{2}}\left(\frac{1+b_{1}e^{-i\frac{2(l-l^{\prime})\Omega L_{1}}{v_{\rm N}}}}{1-b_{1}b_{2}e^{-i\frac{2(l-l^{\prime})\Omega L}{v_{\rm N}}}}e^{-i\frac{(l-l^{\prime})\Omega L_{2}}{v_{\rm N}}}e^{-i\frac{(l-l^{\prime})\Omega}{v_{2}}(x-L_{2})}-{\rm c.c.}\right)X_{1,(ll^{\prime})}\right.\] \[\qquad\qquad\left.-\frac{i}{(g_{\rm N}+g_{2})v_{2}}\left(\frac{1+b_{1}e^{-i\frac{2|(l-l^{\prime})\Omega|L_{1}}{v_{\rm N}}}}{1-b_{1}b_{2}e^{-i\frac{2|(l-l^{\prime})\Omega|L}{v_{\rm N}}}}e^{-i\frac{|(l-l^{\prime})\Omega|L_{2}}{v_{\rm N}}}e^{-i\frac{|(l-l^{\prime})\Omega|}{v_{2}}(x-L_{2})}\right)X_{2,(ll^{\prime})}\right],\qquad(x>L_{2}), \tag{64}\] \[I(x,t) =\frac{-8iev_{F}\Gamma^{2}}{\pi^{2}a^{2}}\sum_{l,l^{\prime}}p_{l}p_{l^{\prime}}^{*}e^{i(l-l^{\prime})\Omega t}\] \[\qquad\times\left[\frac{i\theta((l^{\prime}-l)\Omega)}{2v_{F}}\left(\frac{1+b_{1}e^{-i\frac{2(l-l^{\prime})\Omega L_{1}}{v_{\rm N}}}}{1-b_{1}b_{2}e^{-i\frac{2(l-l^{\prime})\Omega L}{v_{\rm N}}}}\left(e^{-i\frac{(l-l^{\prime})\Omega}{v_{\rm N}}x}-b_{2}e^{i\frac{(l-l^{\prime})\Omega}{v_{\rm N}}(x-2L_{2})}\right)-{\rm c.c.}\right)X_{1,(ll^{\prime})}\right.\] \[\qquad\qquad\left.-\frac{i}{2v_{F}}\left(\frac{1+b_{1}e^{-i\frac{2|(l-l^{\prime})\Omega|L_{1}}{v_{\rm N}}}}{1-b_{1}b_{2}e^{-i\frac{2|(l-l^{\prime})\Omega|L}{v_{\rm N}}}}\left(e^{-i\frac{|(l-l^{\prime})\Omega|}{v_{\rm N}}x}-b_{2}e^{i\frac{|(l-l^{\prime})\Omega|}{v_{\rm N}}(x-2L_{2})}\right)\right)X_{2,(ll^{\prime})}\right],\qquad(L_{2}\geq x\geq 0), \tag{65}\] \[X_{1,(ll^{\prime})} =\int_{0}^{\infty}d\tau\,D(\tau)\sin\left[\left(\omega_{0}+\frac{l+l^{\prime}}{2}\Omega\right)\tau\right]\left(\frac{\sin\left(\frac{l-l^{\prime}}{2}\Omega\tau\right)e^{-iF(\tau)}}{1+iv_{F}\tau/a}-{\rm Im}\left[\frac{e^{i(l-l^{\prime})\Omega\tau/2}e^{-iF(\tau)}}{1+iv_{F}\tau/a}\right]\right), \tag{66}\] \[X_{2,(ll^{\prime})} =\int_{0}^{\infty}d\tau\,D(\tau)\sin\left[\left(\omega_{0}+\frac{l+l^{\prime}}{2}\Omega\right)\tau\right]\left(\frac{\sin\left(\frac{l-l^{\prime}}{2}\Omega\tau\right)e^{iF(\tau)}}{1-iv_{F}\tau/a}+{\rm Im}\left[\frac{e^{-i(l-l^{\prime})\Omega\tau/2}e^{-iF(\tau)}}{1+iv_{F}\tau/a}\right]\right), \tag{67}\] where \(\omega_{0}=q\Omega=eV_{dc}/\hbar\).
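The \(X_{1,(ll^{\prime})}\) and \(X_{2,(ll^{\prime})}\) integrals have the same structure as the time integrals of Eqs. (25)-(27) and can be evaluated by simple quadrature. A minimal sketch of Eq. (66), reusing the (truncated) helper `F_D()` and the constants `V_F`, `A` from the sketch following Eq. (29); the cutoff `t_max` and the grid are illustrative choices and should be converged in any serious use:

```python
import numpy as np

# Quadrature sketch of X_{1,(l l')}, Eq. (66); omega0 = q * Omega is the dc part.
def X1(l, lp, omega0, Omega, t_max=2000.0, n=200000):
    tau = np.linspace(1e-8, t_max, n)
    F, D = F_D(tau)                        # from the sketch after Eq. (29)
    z = 1.0 + 1j * V_F * tau / A
    term1 = np.sin((l - lp) * Omega * tau / 2) * np.exp(-1j * F) / z
    term2 = np.imag(np.exp(1j * (l - lp) * Omega * tau / 2) * np.exp(-1j * F) / z)
    integrand = D * np.sin((omega0 + (l + lp) * Omega / 2) * tau) * (term1 - term2)
    return np.trapz(integrand, tau)

print(X1(l=0, lp=1, omega0=0.5, Omega=0.05))
```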